Disclaimer: This post is the result of the joint work of Xuan Zou and myself for the final project of CS294-129: Designing, Visualizing and Understanding Deep Neural Networks at UC Berkeley.

Background and Problem Statement

When we listen to a song, we are usually listening to a rich combination of sounds with different properties, such as vocals, chords, melodies, or percussion. Usually, a composer or songwriter first decides on the main melody, and then fills in the accompaniment with chords and variations, to create a richer musical texture and to better fit the style the piece tries to convey.

This post by Andrej Karpathy has shown the incredible effectiveness of using RNNs to capture sequential data. So the natural question to ask is: can RNNs be used to create music? The answer is yes, as shown in Composing Music With Recurrent Neural Networks by Daniel Johnson (2015) and Magenta by Google Brain.

Here we believe that similar techniques can be used to generate accompaniment for music. However, because we have an input sequence for the melody and an output sequence for the generated accompaniment, we need a model that can map sequences to sequences.

left: architecture used for machine translation. right: architecture used for video captioning.

As seen in the image above, taken from Karpathy's aforementioned blog, there are two ways to map one sequence to another using RNNs: the left one is usually used for natural language translation, as described in this paper by Cho et al., and the right one for video captioning.

Primer on RNN and Sequence-to-Sequence

For readers not familiar with neural networks: we can think of a vanilla (fully connected) neural cell as a function that takes a vector in \(R^n\) and returns a number, i.e. \( y = f(x) \). Usually \( f \) is taken to be a linear function followed by a nonlinearity, either the sigmoid function \( x \mapsto \frac{1}{1+e^{-x}} \) or the ReLU \( x \mapsto \max(x, 0) \). A recurrent neural cell is then a vanilla neural cell with a saved internal state, so that the output depends not only on the input but also on the internal state. We can think of it as a function that acts on sequences \( (x_1, \dots, x_n) \), with an internal state sequence \( (s_1, \dots, s_n) \), returning \( (y_1, \dots, y_n) \) such that \( s_i = f(x_i, s_{i-1}) \) and \( y_i = g(s_i) \), with \( f \) usually chosen to be a linear function followed by tanh and \( g \) chosen as a linear function.
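To make this concrete, here is a tiny NumPy sketch of such a cell, with \( s_i = \tanh(W [x_i; s_{i-1}] + b) \) and \( y_i = V s_i \) (the toy dimensions and random weights are our own illustration):

import numpy as np

def rnn_forward(xs, W, b, V, s0):
    s, ys = s0, []
    for x in xs:                                        # one step per sequence element
        s = np.tanh(W @ np.concatenate([x, s]) + b)     # update the internal state
        ys.append(V @ s)                                # linear readout
    return ys

# toy dimensions: 3-dim inputs, 4-dim state, 2-dim outputs
rng = np.random.RandomState(0)
W, b, V = rng.randn(4, 7), rng.randn(4), rng.randn(2, 4)
ys = rnn_forward([rng.randn(3) for _ in range(5)], W, b, V, np.zeros(4))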

So, in the picture on the left, the first part collapses the input sequence into a vector, and then uses that vector to generate a corresponding output sequence, which can have a different length. We can think of the hidden state as a latent space the sequence truly lives in, whose value captures the full meaning of the input sequence. The right picture, in contrast, says that the output at the first timestep corresponds to the first input: there is a tighter ordering between input and output, and their lengths must be the same. Since musical accompaniment, such as chords, usually changes every 4 or 8 bars rather than note-for-note with the melody, the input and output lengths generally differ, so we have chosen the first network.

Side note on music data format

Music is either stored as a full semantic representation, such as PDFs of sheet music, or as a fully expressed representation, such as raw audio waves. Usually we work with some format that lies between those extremes, like MP3 or MIDI.

Spectrum of music format

Wav

WAV files are how raw audio is recorded. We can read them directly into a NumPy array using SciPy's io module.

In [12]: import scipy.io.wavfile
In [13]: x = scipy.io.wavfile.read('./1980s-Casio-Piano-C5.wav')
In [14]: x
Out[14]: (44100, array([ 13,   0,   4, ..., -36, -27, -49], dtype=int16))

Here the first element of the tuple is the sample rate (the number of amplitude samples per second), and the second element is an array of the amplitudes over time.
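For example, the sample rate lets us compute the clip's duration (a small sketch using the same file):

import scipy.io.wavfile

rate, amplitudes = scipy.io.wavfile.read('./1980s-Casio-Piano-C5.wav')
print(len(amplitudes) / float(rate))   # duration of the clip in seconds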

MIDI

MIDI sits very close to sheet music on the music format spectrum, as it provides the pitches of the individual notes played; however, it drops the semantic information about measures and annotations.

To access the information in MIDI files, we used the python-midi library.

Then we can use it to see what a MIDI file looks like:

    import midi
    x = midi.read_midifile('./totalchange2.mid')
    print x[0][:10]

# Output:
#    midi.Track(\
#      [midi.PortEvent(tick=0, data=[0]),
#       midi.TrackNameEvent(tick=0, text='STRING MELODY', data=[83, 84, 82, 73, 78, 71, 32, 77, 69, 76, 79, 68, 89]),
#       midi.ProgramChangeEvent(tick=0, channel=1, data=[48]),
#       midi.ControlChangeEvent(tick=0, channel=1, data=[7, 40]),
#       midi.NoteOnEvent(tick=1760, channel=1, data=[70, 123]),
#       midi.NoteOnEvent(tick=0, channel=1, data=[82, 123]),
#       midi.NoteOnEvent(tick=80, channel=1, data=[82, 0]),
#       midi.NoteOnEvent(tick=0, channel=1, data=[70, 0]),
#       midi.NoteOnEvent(tick=0, channel=1, data=[67, 123]),
#       midi.NoteOnEvent(tick=0, channel=1, data=[79, 123])])

Above, x[0] gives the first track of the MIDI file and [:10] gives the first 10 events of the track. There are all sorts of events; here we only care about NoteOnEvent and the corresponding NoteOffEvent (not shown). The first number in the 'data' field is the key code, which maps one-to-one to the pitch of the note.
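For example, a small sketch that collects the pitches of all notes turned on in the first track (a NoteOnEvent with velocity 0 acts as a note-off, so we skip those):

import midi

pattern = midi.read_midifile('./totalchange2.mid')
pitches = [ev.data[0] for ev in pattern[0]
           if isinstance(ev, midi.NoteOnEvent) and ev.data[1] > 0]
print(pitches[:4])   # [70, 82, 67, 79], matching the events above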

Besides the aforementioned works on music generation, there are also a few works on automatic accompaniment generation.

Lichtenwalter et al. [2008] used sliding-window sequential learning techniques to learn musical style for automatic music generation; Andrej (http://zx.rs/) has presented a Markov-chain-based model for music generation; and Chen et al. evaluated chord generation as a classification problem using simple models.

Base model

Originally we thought of using the WAV format for training, as one of the original goals was to allow a user to sing or hum into the system and get accompaniment out directly, without needing to write any musical notation. Curiously, all the related works mentioned above use either sheet music or MIDI as input, so we needed to create our own baseline instead of just referring to the results above.

For the baseline model we used a fully connected neural network to map a vector of melody amplitudes directly to the corresponding vector of accompaniment amplitudes. We found a nice data set in Cambridge Music Technology's multi-track library, a data set used to train people in music mixing.

For preprocessing, we identify the melody track through track names and merge all the non-melody tracks into one. Then we chop the amplitude vectors into fixed-length chunks (say, every 500 values) and train the network with an L2 loss, making it learn the mapping between those two vectors.
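A hedged sketch of this baseline against the Keras 2 API (the hidden layer size, optimizer, and random stand-in data are our assumptions; the original used Keras on Theano):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

CHUNK = 500
model = Sequential([
    Dense(1024, activation='relu', input_dim=CHUNK),
    Dense(CHUNK),                               # linear output: predicted amplitudes
])
model.compile(optimizer='adam', loss='mse')     # mse is the L2 loss

# melody / accomp: arrays of shape (num_chunks, CHUNK) from the chopping step;
# random stand-ins here so the sketch runs on its own
melody = np.random.randn(64, CHUNK)
accomp = np.random.randn(64, CHUNK)
model.fit(melody, accomp, epochs=5)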

Initially, we thought the model would not converge at all, as the musical structure in raw WAV amplitudes is very subtle. However, the model did eventually converge.

The results contain a lot of random noise, though we could still find some "musicalness" in them.

Current Model

After playing around with the base model, we realized that in order to use a sequence-based model, we have to make sense of the WAV files as a sequence of "characters" or "words". In other words, we need to know the individual notes, or pitches, of the music. A standard way to get notes from amplitude-space data is to use the Fourier transform to take the signal into frequency space; we can then look up in a table which musical note each frequency corresponds to.
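For illustration, a hedged sketch of that pipeline (the window size and the A4 = 440 Hz reference are our assumptions):

import numpy as np

def dominant_note(window, rate):
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / rate)
    k = spectrum[1:].argmax() + 1          # strongest frequency, skipping the DC bin
    # MIDI note number: 69 is A4 = 440 Hz, 12 semitones per octave
    return int(round(69 + 12 * np.log2(freqs[k] / 440.0)))

# e.g. dominant_note(amplitudes[:4096], 44100) on the WAV data read earlier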

However, with MIDI data we get that for free. In fact, it turns out that MIDI files are more widely available than raw WAV, probably because people usually don't listen to MIDI for fun, so sharing them doesn't hurt the interests of people trying to sell music. Also, music in WAV form usually contains features that we don't care about for accompaniment generation, such as the sung lyrics, which MIDI doesn't have.

MIDI data can be viewed as a list of tracks, and each track has a TrackNameEvent indicating the instrument the track is played with. We used a simple heuristic, matching keywords in the instrument name, to separate the tracks into 4 categories: "melody", "percussion", "guitar/piano-like", or "string-like", according to the following:

CLASSES = (
    ('percussion',
     ['drums', 'drum', 'snare', 'shaker']),
    ('guitarlike',
     ['bass', 'guitar', 'gtr', 'banjo', 'piano', 'keyboard', 'harp']),
    ('stringslike',
     ['trumpet', 'organ', 'flute', 'sax',
      'polysynth', 'whistle', 'cello',
      'strings', 'violin']),
    ('melody',
     ['words', 'melody', 'choir', 'voice',
      'lead', 'melodie', 'solo']),
)
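A track name is then assigned to the first category whose keyword list matches it; a minimal sketch (classify_track is our name for this helper, not necessarily the original):

def classify_track(track_name):
    name = track_name.lower()
    for category, keywords in CLASSES:
        if any(kw in name for kw in keywords):
            return category
    return 'unknown'

print(classify_track('STRING MELODY'))   # 'melody'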

Then we trained separate networks for melody vs. percussion and melody vs. guitar-like. The intuition is that the two have very different behaviors even for the same melody, so it could be easier to learn the simpler patterns separately than to learn everything at once. This is similar in spirit to "curriculum learning".

To represent each MIDI file as a matrix for training, we could try to treat the events as "words" in our alphabet and embed them directly as vectors, similar to the word2vec embeddings used in the original seq2seq work for translation. But since we only care about the pitches of the notes, we used a binary vector encoding of the notes instead. We restrict the pitch range to 78 distinct notes, about six octaves (each octave has 12 distinct semitones). The result is a \(t \times n\) matrix, where \( t \) indexes the time tick and \( n \) indexes which notes are on at that tick (see the fragment on the left).
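A hedged sketch of this conversion for a single track (PITCH_LO and STEP are our assumed constants, and we rely on python-midi's make_ticks_abs to turn delta ticks into absolute ones):

import numpy as np
import midi

PITCH_LO, N_PITCHES, STEP = 24, 78, 80    # lowest pitch kept, range size, tick quantum

def track_to_roll(pattern, track_idx):
    pattern.make_ticks_abs()               # convert delta ticks to absolute ticks
    track = pattern[track_idx]
    end = max(ev.tick for ev in track) // STEP + 1
    roll = np.zeros((end, N_PITCHES), dtype=np.int8)
    on = {}                                # pitch -> tick when it was turned on
    for ev in track:
        if isinstance(ev, midi.NoteOnEvent) and ev.data[1] > 0:
            on[ev.data[0]] = ev.tick
        elif isinstance(ev, (midi.NoteOnEvent, midi.NoteOffEvent)):
            # NoteOn with velocity 0 also counts as a note-off
            start = on.pop(ev.data[0], None)
            if start is not None and PITCH_LO <= ev.data[0] < PITCH_LO + N_PITCHES:
                roll[start // STEP : ev.tick // STEP + 1, ev.data[0] - PITCH_LO] = 1
    return roll

roll = track_to_roll(midi.read_midifile('./totalchange2.mid'), 0)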

After this preparation, we are ready for training. The model consists of 50 LSTM units for encoding and 50 for decoding.

seq2seq diagram from Cho et al.

The sequence-to-sequence network takes fixed-length sequences as input, so we need to reshape our \( n \times 78 \) matrix into \( m \times l \times 78 \), with \( m \times l = n \), where we have chosen \( l = 100 \); in other words, we treat every 100 time ticks as one sequence.
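Concretely (a small sketch; piano_roll stands for the \( n \times 78 \) matrix from above, faked here with random data, and we drop the tail so the matrix reshapes cleanly):

import numpy as np

piano_roll = np.random.randint(0, 2, size=(1234, 78))   # stand-in for real data
l = 100
m = piano_roll.shape[0] // l
batches = piano_roll[:m * l].reshape(m, l, 78)          # shape (m, 100, 78)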

At each mini-batch, the LSTM on the encoding side encodes the given melody into an internal state, which is then fed to the decoder network, which should produce an accompaniment. We then compute the loss as follows: \[ \text{loss} = -\sum_i \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right] \]

In other words, we treat the generated vector as per-note probabilities that each note is on, and we compare the generated notes with the original ones by computing the binary cross-entropy on each note and summing over all of them. Note that we cannot just use categorical cross-entropy because there can be several 1's in the ground truth.
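Putting the pieces together, a hedged Keras sketch of the model (the RepeatVector-based encoder-decoder is our reconstruction of the architecture described, not necessarily the exact original code):

from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

L_STEPS, N_NOTES = 100, 78
model = Sequential([
    LSTM(50, input_shape=(L_STEPS, N_NOTES)),  # encoder: melody -> state vector
    RepeatVector(L_STEPS),                     # feed that state at every decode step
    LSTM(50, return_sequences=True),           # decoder
    TimeDistributed(Dense(N_NOTES, activation='sigmoid')),  # per-note on-probability
])
# binary cross-entropy per note, as described above (Keras averages instead of
# summing, which only rescales the gradient)
model.compile(optimizer='adam', loss='binary_crossentropy')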

After training for a few hours, here is one of the generated samples (converted from MIDI to MP3):

original melody:

generated accompaniment:

combined together:

Here we combined the generated accompaniment back with the melody. We can observe that the accompaniment is relatively simple, with lots of repeated notes. For percussion that is actually fine, though for chords we might like more variation. But overall, the model gets the beats right.

Tools

For this project we used Keras on Theano to construct the seq2seq network. A similar network can be built in TensorFlow as well, as detailed in this tutorial.

Lessons Learned and Future improvements

  1. Generating music is pretty hard.
  2. MIDI works much better than WAV, and is easier to get (it is also much smaller in file size).
  3. Our way of distinguishing melody from accompaniment could be better than just keyword matching on track names.
  4. By using binary cross-entropy on each note, we are treating each note being on as an independent event. This is not actually true: there can be several notes on at the same time because of a chord, but it is unlikely to have 30 notes on at once. One idea is to add a penalty when too many notes are on, but that could make the loss non-differentiable.
  5. Another potential improvement is to learn the probability that a note is on given that some other note is on. In other words, build a graphical model with each note being a node, and edges representing the probability that two notes are on at the same time. We can use the MIDI files to estimate the weights on the edges, then use Monte Carlo methods such as Gibbs sampling to generate the accompaniment. We can combine this with the original network, so that the seq2seq model outputs a single note (which lets us use categorical cross-entropy as the loss), and then use sampling to get a chord (see the sketch after this list).
  6. Sometimes part of an accompaniment track actually belongs to the melody. For example, a guitar track mostly contains chords, but there can also be an occasional solo, and the solo part should belong to the melody. In the current setting, the guitar solo usually appears when the melody is quiet, so the model learns a large bias vector in its weights because it needs to map the zero vector (the silent melody) to a rich and varied accompaniment (the guitar solo).
  7. We can try to incorporate domain knowledge from music theory. This is the approach of the Magenta library. It used conformity to music theory, such as "stay in key" and "don't repeat notes too much", to compute a reward, and used deep Q-learning to train the model. This idea could potentially solve the problem that our generated music is too "simple" and repetitive.
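Regarding idea 5, here is a rough sketch of how it could look (every modeling choice below, from the co-occurrence estimate to the sparsity penalty, is an illustrative assumption rather than something we implemented):

import numpy as np

def fit_pairwise(rolls, eps=1e-3):
    """Estimate note co-occurrence frequencies from stacked piano rolls."""
    X = np.vstack(rolls).astype(float)          # shape (ticks, 78)
    return (X.T @ X) / len(X) + eps             # eps avoids log(0) below

def gibbs_chord(co, seed_note, n_sweeps=50, rng=np.random):
    n = co.shape[0]
    state = np.zeros(n)
    state[seed_note] = 1                        # condition on the seq2seq output
    p = np.diag(co)
    bias = np.log(p) - np.log1p(-p)             # per-note base log-odds
    W = np.log(co)                              # crude pairwise affinity
    for _ in range(n_sweeps):
        for i in range(n):
            if i == seed_note:
                continue
            # resample note i given all the others; -4.0 is an assumed
            # sparsity penalty discouraging too many simultaneous notes
            logit = bias[i] + W[i] @ state - 4.0
            state[i] = rng.rand() < 1.0 / (1.0 + np.exp(-logit))
    return state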

Side note: Team contribution

Xuan and I did most of the work together in person. We both contributed equally to finding sources, programming, and write-ups, though Xuan did more preprocessing and I did more model programming.

References:

Ilya Sutskever, Oriol Vinyals, Quoc V. Le. Sequence to Sequence Learning with Neural Networks, 2014. arXiv:1409.3215 [cs.CL]

Daniel Johnson. Composing Music With Recurrent Neural Networks. http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/

Douglas Eck (Google Brain). Magenta. https://magenta.tensorflow.org/welcome-to-magenta

Andrej Karpathy. The Unreasonable Effectiveness of Recurrent Neural Networks, 2015. http://karpathy.github.io/2015/05/21/rnn-effectiveness/

Primer on Dependency Injection

In a system constructed in an object-oriented fashion, we usually have two types of objects: data objects, which store the data, and service objects, which manipulate the data. For example, a database-backed application usually has some object that talks to the database; that is a service object.

Say, we have 3 service objects

class ServiceA(object):
    def do_work(self):
        pass

class ServiceB(object):
    def do_work(self):
        pass

class ServiceC(object):
    def do_work(self):
        pass

Now say that ServiceB needs to use ServiceA; it needs to get at ServiceA somehow. One of the antipatterns people used is to make ServiceA a singleton; then you have something like this:

class ServiceA(object):
    _instance = None

    def do_work(self):
        pass

    @classmethod
    def get_instance(cls):
        # lazily create and cache the shared instance
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

class ServiceB(object):
    def do_work(self):
        ServiceA.get_instance().do_work()

class ServiceC(object):
    def do_work(self):
        pass

Dependency Injection basically says: don't do that shit!! Using a singleton makes testing ServiceB a pain, and makes it impossible for ServiceB to work with another service similar to A. If ServiceB needs ServiceA, it should ask for it in its constructor, like this:

class ServiceA(object):
    _instance = None

    def do_work(self):
        pass

    @classmethod
    def get_instance(cls):
        # as before; kept only for backwards compatibility
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

class ServiceB(object):
    def __init__(self, service_a):
        self.service_a = service_a

    def do_work(self):
        self.service_a.do_work()

class ServiceC(object):
    def do_work(self):
        pass

This (pretty long) Google talk explains this idea very well.

Sometimes the term Dependency Injection refers to a dependency injection framework, like Spring or Guice for Java; all those frameworks do is save you from typing out the constructors of the services. Here we will only talk about it as the idea of asking for dependencies explicitly, usually in constructors.

Dependency injection in web frameworks

Usually we instantiate service objects in the program's entry point, like the main function. However, in most web frameworks there is no such entry point.

Here is an extremely simple WSGI app written with Bottle.

import bottle
app = bottle.Bottle()
@app.get('/')
def index():
    return 'hello world'

if __name__ == '__main__':
    bottle.run(app)

Now, index needs to use ServiceA, ServiceB, and ServiceC. Where do you put them? The usual approaches are globals, decorators, or closures.

As globals

import bottle
app = bottle.Bottle()

a = ServiceA()
b = ServiceB(service_a=a)
c = ServiceC(b, a)

@app.get('/')
def index():
    # do stuff with a,b,c
    return 'hello world'

if __name__ == '__main__':
    bottle.run(app)

Note that here a, b, c are effectively singletons, but that does not violate the principles of dependency injection, because ServiceB does not read ServiceA from a global but from its member variable. That leaves us free to pass in a different object for ServiceA when needed.

The advantage of this approach is its simplicity. We can also move all the instantiation into a config.py file and have every other file import just the services it needs. However, index is now really hard to test. We cannot test it independently of ServiceA, ServiceB, and ServiceC, and we cannot mock those services out without monkey patching. We can somewhat mitigate that by moving most of the functionality into some service, letting the URL handler just forward the call, and leaving those handlers untested.

Decorators

import bottle
app = bottle.Bottle()

a = ServiceA()
b = ServiceB(service_a=a)
c = ServiceC(b, a)

@app.get('/')
@uses_service(a,b,c)
def index(a, b, c):
    # do stuff with a,b,c
    return 'hello world'

if __name__ == '__main__':
    bottle.run(app)

We can write a custom decorator to pass the dependent services as parameters to the function that needs them. This makes things look a little nicer and gives the impression that the index function is not reading globals anymore. However, we cannot directly call index with custom a, b, and c, because the decorator call replaces the old function with a wrapped one. So it's really the same.
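The uses_service decorator itself is not shown above; here is a minimal sketch of what it could look like:

import functools

def uses_service(*services):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # prepend the configured services to whatever bottle passes in
            return fn(*(services + args), **kwargs)
        return wrapper
    return decorator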

Of course we can test the unwrapped version by not using decorator syntax, like this

def index(a, b, c):
    # do stuff with a,b,c
    return 'hello world'
app.get('/')(uses_service(a,b,c)(index))

but this is plain ugly. Also, doing this for every url handler could be a pain!

Using closures

import bottle

def make_wsgi_app(a, b, c):
  app = bottle.Bottle()

  @app.get('/')
  @uses_service(a,b,c)
  def index(a, b, c):
      # do stuff with a,b,c
      return 'hello world'
  return app

if __name__ == '__main__':
    a = ServiceA()
    b = ServiceB(service_a=a)
    c = ServiceC(b, a)
    bottle.run(make_wsgi_app(a, b, c))

This is my favorite. It effectively puts the service instantiation at the point of entry, and it allows testing the WSGI app by passing in mocked-out versions of services a, b, or c. Though, because make_wsgi_app returns a WSGI app instead of a plain function object, we need to test index through it; the webtest package is a great tool for that.
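For example, a minimal sketch of such a test (the fake service class is our own stand-in):

from webtest import TestApp

class FakeService(object):
    def do_work(self):
        return 'fake work'

def test_index():
    wsgi = make_wsgi_app(FakeService(), FakeService(), FakeService())
    assert TestApp(wsgi).get('/').text == 'hello world'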

Using this method, you can also pass dependencies to a whole group of URL handlers that share similar dependencies.

TL;DR: The trick is to use LaTeX in HTML. (Keep reading if you don't know what I'm talking about.)

Here I bring you a trick for writing mathematics with cool formulas like this: $$ \sum_{n=0}^{\infty} \int_1^{10} \frac{x^2+1}{\sin x}dx $$ or this: $$ \frac{x_1 + x_2 + \cdots + x_n}{n} \geq \sqrt[n]{x_1 x_2 \cdots x_n} $$

The only things you need to do this are:

  • A decent web browser, i.e. Chrome or Firefox.
  • An internet connection.
  • A text editor.

First step: open your favorite text editor. Careful, that is not Word. It's not Word. Word is not a text editor!! If you use Word this won't work! If you don't have a text editor, use Notepad for the moment and get yourself a more decent one later. (I recommend Atom or Sublime Text.)

and copy in the following:

<html>
<body>
Habla loco!!
</body>
</html>

Save it in a file called hola.html; the file should now show your browser's icon, and when you open it, it will open in your browser and you will see "Habla loco!!" there. In fact, anything you write inside <body></body> that doesn't contain <> will show up verbatim, and that is where we will write the formulas. Careful: if you are on Windows, Notepad may assume you are saving with a .txt extension and save hola.html.txt instead of hola.html; if it does that, it won't work, you have to save it as .html.

Well, next, to add formulas, first copy and paste the following just above <body>:

<head>
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
</head>

This lets you insert commands for typesetting formulas, called LaTeX (please, no condom jokes, it stopped being funny after the ten-thousandth time). LaTeX commands are super intuitive; the main ones are: \frac{top}{bottom} makes a fraction, _ adds subscripts, ^ adds exponents, and \sum makes the summation symbol. With those you can already write reasonably complicated formulas. If you need more symbols, look at the Art of Problem Solving page. Finally, you have to put your formula between \( \) for inline formulas, or \[ \] for formulas on their own line. For example, this:

<html>
<head>
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
</head>
<body>
Here comes a hefty formula:
\(  \frac{x_1 + x_2 + ... + x_n}{n} \geq \sqrt[n]{x_1x_2...x_n} \)

And here an even heftier one:
\[ \sum_{n=0}^{\infty} \int_1^{10} \frac{x^2+1}{\sin x}dx \]

</body>
</html>

produces this when you open it in the browser:

Here comes a hefty formula: \( \frac{x_1 + x_2 + ... + x_n}{n} \geq \sqrt[n]{x_1x_2...x_n} \)

And here an even heftier one: \[ \sum_{n=0}^{\infty} \int_1^{10} \frac{x^2+1}{\sin x}dx \]

This is, in fact, the typical way to embed formulas in web pages, but nothing stops us from using it on the desktop. Later, if you want to install LaTeX properly, you can follow the instructions here: https://www.artofproblemsolving.com/wiki/index.php?title=LaTeX:Downloads.

A long, long time ago, in a place far, far away, there was a town isolated from all the other towns. The town has \( n \) single men and \( n \) single women of marrying age. Marriage is very important for the stability of the town's society, so people ask: is there a way to marry off the youngsters so that the matching is stable, that is, so that it never happens that the husband of one woman and the wife of another man would prefer to run away and be together?

Formally, we assume every man has a secret list ranking the women in order of his preference, and likewise every woman ranks the men by her preference. Each person's ranking can be different, since everyone has their own individuality, just like in real life. And we assume everyone ends up married in the end (not so much like real life, ouch). What people don't want to happen is that there are two couples, say (Goku, Chi-Chi) and (Vegeta, Bulma), where Goku prefers Bulma over his wife Chi-Chi and Bulma prefers Goku over her husband Vegeta.

The good news is that, in 1962, two mathematicians (Gale and Shapley) proved that, as long as the numbers of men and women are equal, there is always a way to make all the marriages stable. In fact, they formulated an algorithm for pairing people up that reaches a stable matching.

And the algorithm goes more or less like this (a sketch in code follows the list):

  1. Every man goes and proposes to the most preferred woman on his list, in order.
  2. If his preferred woman is single, she accepts and they pair up.
  3. If she is not single and prefers her current boyfriend (that is, he is higher on her list), she turns him down.
  4. Otherwise, if she prefers the new man, she breaks up with her current boyfriend and pairs up with the new one.
  5. The rejected man goes back, looks at the next woman on his list, and continues.
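Here is a minimal runnable sketch of the procedure (the Python function, its data layout, and the toy preference lists are our own illustration):

def stable_matching(men_prefs, women_prefs):
    # rank[w][m]: position of m in w's list (lower = more preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free_men = list(men_prefs)                 # everyone starts single
    next_choice = {m: 0 for m in men_prefs}    # next woman each man proposes to
    fiance = {}                                # woman -> current partner

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in fiance:                    # rule 2: she is single
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:  # rule 4: she trades up
            free_men.append(fiance[w])
            fiance[w] = m
        else:                                  # rule 3: she turns him down
            free_men.append(m)
    return fiance

print(stable_matching(
    {'Goku': ['Bulma', 'Chi-Chi'], 'Vegeta': ['Bulma', 'Chi-Chi']},
    {'Chi-Chi': ['Goku', 'Vegeta'], 'Bulma': ['Vegeta', 'Goku']},
))   # {'Bulma': 'Vegeta', 'Chi-Chi': 'Goku'}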

The punchline is that this process eventually ends with everyone paired up with someone. Then, after a year or two of dating, they get married, and there you have your stable marriages.

And why the hell does this work? Well, the intuition is the following. First, note that although men pair up and can later be dumped, a woman never becomes single again once paired up; that is, the number of single women only ever decreases, so the process has to end. When the process ends, if there were two couples, say (Goku, Chi-Chi) and (Vegeta, Bulma), then, since Goku prefers Bulma, it means Goku proposed to Bulma before proposing to his current wife. And since Bulma ended up with someone else, Vegeta, it means Bulma rejected Goku at some point. So she cannot prefer Goku over Vegeta.

Well, once we know how to obtain social stability for the town, another question we can ask is: does the described process favor the men or the women more?

Apparently the women, since they don't have to take any initiative; once paired up, a woman is never rejected, and if she changes boyfriends it is for a better one. Meanwhile the men do all the work and can end up heartbroken several times before finding the final one. But the result says otherwise.

The process may favor the women, but the result favors the men. In fact, the matching obtained in the way described above is called a "male-optimal" matching, and its definition is: each man ends up with the best woman willing to be with him, and each woman ends up with the worst man she is willing to accept (where "best" and "worst" are measured on each person's own scale). One way to see this is the special case where the men's tastes are all very different and their top choices are all distinct. In that case the algorithm ends in the first round: each man proposes to his top choice and is accepted, and that's it, the optimum for the men.

Besides the male-optimal matching, there is also a female-optimal one, and the way to obtain it is simply to swap the roles of men and women. Between those two there are stable matchings that are optimal for some, but not all, men and women of one gender. I don't know of algorithms to obtain those yet.

In conclusion, mathematics shows that:

  1. If women want to land the best man they can get, they have to be active rather than passive.
  2. If you are a married woman, there is no reason to be jealous, since you already know you are the most preferred woman among those who would have him. The men, on the other hand, should indeed watch out.

Of course, the above conclusions assume that people's preferences never change and that the initial population never changes, which is not always true.

This theorem is the one I always use to show that math is cool. I invite everyone who knows English to watch this video on YouTube. She explains it better than I do XD.

TL;DR: If you have built pages with HTML templates, you can use the same thing for XML.

If you want to submit issued invoices into SRI's system, there are two ways: either you type the data in manually using the Java application downloadable from SRI's web pages, or you generate files in some format that it can understand and import into the aforementioned application.

Among those, I got a request to generate XML files compatible with the "Anexo Transaccional Simplificado" (ATS) declaration.

The ATS technical specification can be found here.

Basically, it requires an XML file that looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<iva>
  <TipoIDInformante>R</TipoIDInformante>
  <IdInformante></IdInformante> <!-- RUC -->
  <razonSocial></razonSocial> <!-- company name -->
  <totalVentas></totalVentas>
  <!-- more fields omitted -->
  <compras>
    <!-- a list of purchases -->
  </compras>
  <ventas>
    <!-- one of these per customer -->
    <detalleVentas>
      <idCliente>123</idCliente>
      <valorRetIva>0.00</valorRetIva>
      <valorRetRenta>0.00</valorRetRenta>
      <!-- more fields omitted -->
    </detalleVentas>
  </ventas>
  <ventasEstablecimiento>
    <ventaEst>
      <codEstab>001</codEstab>
      <ventasEstab></ventasEstab>
    </ventaEst>
  </ventasEstablecimiento>

  <anulados>
    <!-- one of these per voided invoice -->
    <detalleAnulados>
      <tipoComprobante>01</tipoComprobante>
      <autorizacion>1111897538</autorizacion>
      <!-- more fields omitted -->
    </detalleAnulados>
  </anulados>
</iva>

Since I had already written a REST service that returns JSON, I assumed that returning XML instead of JSON would not be very different, and would consist of steps like: 1. build objects with data from the database; 2. convert the objects into a dictionary (dict); 3. convert the dict into XML (previously into JSON, which is simply a call to json.dumps).

I googled "python generate xml" and landed on the lxml library. I realized that having a dictionary doesn't help worth a damn; the XML has to be generated from scratch.

I planned to write a base class that would generate XML, like I did for dict with SerializableMixin here, and then create four classes, for ventasEstablecimiento, detalleVentas, detalleAnulados, and iva (which contains the others). But I got lazy for a few days…

When I came back to the problem, while looking at the web pages I was building with HTML, it hit me: HTML is XML, HTML is XML!! HTML is XML!!! (repeat three times for the important things). And having spent years writing templates that generate HTML, why not use the same thing for XML!!

In the end, I wrote a jinja2 template here. It is basically a copy-and-paste of the format above.
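To illustrate the idea, here is a trimmed-down sketch (the field selection and variable names are ours, not the full ATS format): you render XML with a jinja2 template exactly as you would render HTML.

from jinja2 import Template

ATS_TEMPLATE = Template("""\
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<iva>
  <TipoIDInformante>R</TipoIDInformante>
  <IdInformante>{{ ruc }}</IdInformante>
  <razonSocial>{{ company_name }}</razonSocial>
  <ventas>
  {%- for v in sales %}
    <detalleVentas>
      <idCliente>{{ v.client_id }}</idCliente>
      <valorRetIva>{{ '%.2f' | format(v.ret_iva) }}</valorRetIva>
    </detalleVentas>
  {%- endfor %}
  </ventas>
</iva>
""")

print(ATS_TEMPLATE.render(
    ruc='1234567890001',
    company_name='ACME S.A.',
    sales=[{'client_id': '123', 'ret_iva': 0.0}],
))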