You will have to address the varying sequence length one way or another. You will likely have to perform some padding (e.g. using zeros to make all sequences equal to a maximum sequence length).
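As a minimal sketch of the zero-padding idea (the function name and the choice of left-aligned padding here are my own, not from any framework):

```python
import numpy as np

def pad_to_length(sequences, max_len):
    """Zero-pad (or trim) each variable-length sequence to max_len."""
    out = np.zeros((len(sequences), max_len))
    for i, seq in enumerate(sequences):
        seq = np.asarray(seq)[:max_len]  # trim anything longer than max_len
        out[i, :len(seq)] = seq          # left-align; zeros fill the remainder
    return out

batch = pad_to_length([[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]], max_len=4)
print(batch.shape)  # (3, 4)
```

Keras also ships a ready-made helper for this (`pad_sequences`), which additionally lets you choose pre- vs post-padding.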
Other approaches, used e.g. in NLP to make training more efficient, splice series together (sentences in NLP) with a clear break/splitter between them (full stops/periods in NLP).
Using a convolutional network sort of makes sense to me in your situation, predicting a binary output. As the convolutions measure correlations in the input space, I can imagine the success of the model will be highly dependent on the nature of the problem. For some intuition on conv nets used for sequences, have a look at this great introductory article. If your six sequences are inter-related, the convolutions can pick up on those cross-correlations. If they are not related at all, I would probably first try Recurrent Neural Networks (RNNs), such as the LSTM you mentioned.
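To make the cross-correlation point concrete, here is a toy NumPy sketch of what a single Conv1D-style filter does over a multi-channel sequence (the shapes and names are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((20, 6))      # (sequence_length, num_sequences)
kernel = rng.standard_normal((3, 6))  # kernel_size=3, spanning all 6 channels

# "Valid" convolution: every output step mixes all six channels within a
# 3-step window, which is how cross-correlations between sequences are
# picked up by the filter.
out = np.array([np.sum(x[t:t + 3] * kernel) for t in range(20 - 3 + 1)])
print(out.shape)  # (18,)
```

If the channels carry no joint signal, those mixed windows give the filter nothing useful to learn, which is why unrelated sequences may be better served by an RNN.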
Getting your head around the dimensions of a multi-variate LSTM can be daunting at first, but once you have addressed the issue of varying sequence length, it becomes a lot more manageable.
I don't know which framework you are using, but as an example, in Keras/TensorFlow the dimensions for your problem would be something like:
(batch_size, sequence_length, num_sequences)
batch_size can be set to None to give flexibility around your available hardware. sequence_length is where you need to decide on a length to use/create via padding/trimming etc. num_sequences = 6 :-)
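Putting those dimensions together, a minimal Keras/TensorFlow sketch could look like this (the `sequence_length` of 50 and the layer sizes are arbitrary choices for illustration):

```python
import numpy as np
import tensorflow as tf

# LSTM input shape is (sequence_length, num_sequences); batch_size is left
# out of input_shape, so it stays flexible (effectively None).
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(50, 6)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
])

batch = np.zeros((4, 50, 6), dtype="float32")  # (batch_size, seq_len, channels)
preds = model(batch)
print(preds.shape)  # (4, 1): one probability per sample in the batch
```

The sigmoid output layer matches the binary prediction you described; you would train this with a binary cross-entropy loss.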
If helpful, check out these threads, where I explained that stuff in more detail.