Connection between stabilizer generators and parity check matrices in the Steane code


I am working through Mike and Ike (Nielsen and Chuang) for self-study, and I am reading about stabilizer codes in Chapter 10. I am an electrical engineer with some background in classical information theory, but I am by no means an expert in algebraic coding theory. My abstract algebra is essentially little more than what is in the appendix.

I think I fully understand the Calderbank-Shor-Steane construction, where two classical linear codes are used to construct a quantum code. The Steane code is built using $C_1$ (the code for qubit flips) as the [7,4,3] Hamming code and $C_2$ (the code for phase flips) as the same code. The parity check matrix of the [7,4,3] code is:

$$\begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix}$$
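For concreteness, the codewords that this matrix defines can be enumerated directly; the following is a minimal sketch (plain Python with numpy, variable names are my own) listing all 7-bit strings annihilated by the matrix mod 2 and confirming that there are 16 of them with minimum weight 3:

```python
import itertools
import numpy as np

# Parity check matrix of the [7,4,3] Hamming code, as written above.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# A 7-bit string x is a codeword iff H x = 0 (mod 2).
codewords = [x for x in itertools.product([0, 1], repeat=7)
             if not (H @ np.array(x) % 2).any()]

print(len(codewords))                            # 16 codewords
print(min(sum(x) for x in codewords if any(x)))  # minimum nonzero weight 3
```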

The stabilizer generators for the Steane code can be written as follows:

Name   Operator
g1     IIIXXXX
g2     IXXIIXX
g3     XIXIXIX
g4     IIIZZZZ
g5     IZZIIZZ
g6     ZIZIZIZ
where, for the sake of my sanity, IIIXXXX $= I \otimes I \otimes I \otimes X \otimes X \otimes X \otimes X$, and so on.

It is pointed out in the book that the Xs and Zs are in the same positions as the 1s in the original parity check matrix. Exercise 10.32 asks you to verify that the codewords of the Steane code are stabilized by this set. I could obviously just plug everything in and check by hand. However, it is stated that, with the observation about the similarities between the parity check matrix and the generators, the exercise is "self-evident".

I have seen this fact noted elsewhere (http://www-bcf.usc.edu/~tbrun/Course/lecture21.pdf), but I am missing some sort of (probably obvious) intuition. I think I am missing some further connection between the classical codewords and the quantum codes, beyond the way they are used to index the basis elements in the construction of the code (i.e. Section 10.4.2).

Travis C Cuvelier

Answers:


There are a few conventions and intuitions at play here, which it might be helpful to spell out:

  • Sign bits versus {0,1} bits

    The first step is to perform what is sometimes called the "great notational shift", and to think of bits (even classical bits) as being encoded in signs. This is a productive thing to do if what you care about most is the parities of bit strings, because bit flips and sign flips then act in essentially the same way. We map 0 to +1 and 1 to −1, so that, for example, the bit sequence (0, 0, 1, 0, 1) would be represented by the sign sequence (+1, +1, −1, +1, −1).

    Parities of bit sequences then correspond to products of sign sequences. For example, just as we would recognise $0 \oplus 0 \oplus 1 \oplus 0 \oplus 1 = 0$ as a parity computation, we can recognise $(+1)(+1)(-1)(+1)(-1) = +1$ as representing the same parity computation using the sign convention.

    Exercise. Compute the 'parity' of (−1, −1, +1, −1) and of (+1, −1, +1, +1). Are they the same?
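    If it helps to see this mechanically, here is a minimal sketch in plain Python (the function names are my own) comparing the two conventions:

```python
from functools import reduce

def parity_xor(bits):
    """Parity in the {0,1} convention: the XOR of all the bits."""
    return reduce(lambda a, b: a ^ b, bits, 0)

def parity_sign(bits):
    """The same parity in the sign convention: map 0 -> +1, 1 -> -1, then multiply."""
    return reduce(lambda a, b: a * b, [(-1) ** b for b in bits], 1)

bits = (0, 0, 1, 0, 1)
print(parity_xor(bits))   # 0  (even parity)
print(parity_sign(bits))  # 1  (the same parity, written as a sign: +1)
```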

  • Parity checks using sign bits

    In the {0,1} bit convention, parity checks have a nice representation as the dot product of two boolean vectors, so that we can realise complicated parity computations as linear transformations. By switching to sign bits, we inevitably lose the connection to linear algebra at the level of notation, because we take products rather than sums. At the level of computation, since it is only a change of notation, we do not really have to worry too much. But at the purely mathematical level, we now have to think a little about what we are doing with parity check matrices.

    When we use sign bits, we can still represent a "parity check matrix" as a matrix of 0s and 1s, rather than of signs ±1. Why? One answer is that a row vector describing a parity check on the bits is of a different type than the bit sequences themselves: it describes a function on the data, not the data itself. The array of 0s and 1s now simply requires a different interpretation: instead of linear coefficients in a sum, they correspond to exponents in a product. If we have sign bits $(s_1, s_2, \ldots, s_n) \in \{-1,+1\}^n$, and we want to compute a parity check given by a row vector $(b_1, b_2, \ldots, b_n) \in \{0,1\}^n$, that parity check is computed by

    $$(s_1)^{b_1} (s_2)^{b_2} \cdots (s_n)^{b_n} \in \{-1,+1\},$$
    where we recall that $s^0 = 1$ for all $s$. As with {0,1}-bits, you can think of the row $(b_1, b_2, \ldots, b_n)$ as just representing a 'mask' which determines which bits $s_j$ make a non-trivial contribution to the parity computation.

    Exercise. Compute the result of the parity check (0,1,0,1,0,1,0) on (+1,-1,-1,-1,-1,+1,-1).
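    The same computation, written as code (again only a sketch, and the helper name is mine), with the row vector acting as a mask of exponents:

```python
def sign_parity_check(signs, mask):
    """Product of s_j ** b_j: positions where the mask is 0 contribute
    s_j ** 0 = 1, i.e. they are ignored by the check."""
    result = 1
    for s, b in zip(signs, mask):
        result *= s ** b
    return result

# Example (deliberately not the exercise above): check positions 1, 2 and 4.
print(sign_parity_check([+1, -1, -1, +1], [1, 1, 0, 1]))  # (+1)(-1)(1)(+1) = -1
```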

  • Eigenvalues as parities.

    The reason why we would want to encode bits in signs in quantum information theory is because of the way that information is stored in quantum states — or more to the point, the way that we can describe accessing that information. Specifically, we may talk a lot about the standard basis, but the reason why it is meaningful is because we can extract that information by measurement of an observable.

    This observable could just be the projector $|1\rangle\langle 1|$, where $|0\rangle$ has eigenvalue 0 and $|1\rangle$ has eigenvalue 1, but it is often helpful to prefer to describe things in terms of the Pauli matrices. In this case, we would talk about the standard basis as the eigenbasis of the Z operator, in which case we have $|0\rangle$ as the +1-eigenvector of Z and $|1\rangle$ as the −1-eigenvector of Z.

    So: we have the emergence of sign-bits (in this case, eigenvalues) as representing the information stored in a qubit. And better still, we can do this in a way which is not specific to the standard basis: we can talk about information stored in the 'conjugate' basis, just by considering whether the state is an eigenstate of X, and what eigenvalue it has. But more than this, we can talk about the eigenvalues of a multi-qubit Pauli operator as encoding parities of multiple bits — the tensor product $Z \otimes Z$ represents a way of accessing the product of the sign-bits, that is to say the parity, of two qubits in the standard basis. In this sense, the eigenvalue of a state with respect to a multi-qubit Pauli operator — if that eigenvalue is defined (i.e. in the case that the state is an eigenvector of the Pauli operator) — is in effect the outcome of a parity calculation of information stored in some choice of basis for each of the qubits.

    Exercise. What is the parity of the state $|11\rangle$ with respect to $Z \otimes Z$? Does this state have a well-defined parity with respect to $X \otimes X$?

    Exercise. What is the parity of the state $|+-\rangle$ with respect to $X \otimes X$? Does this state have a well-defined parity with respect to $Z \otimes Z$?

    Exercise. What is the parity of $|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ with respect to $Z \otimes Z$ and $X \otimes X$?
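    If you want to experiment numerically (a minimal numpy sketch, deliberately using states other than those in the exercises), the ±1 parity of an eigenstate is just the expectation value of the corresponding Pauli operator:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

ZZ, XX = np.kron(Z, Z), np.kron(X, X)

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
plus = (ket0 + ket1) / np.sqrt(2)

psi_01 = np.kron(ket0, ket1)   # |01>: odd parity in the standard basis
psi_pp = np.kron(plus, plus)   # |++>: even parity in the conjugate basis

print(psi_01 @ ZZ @ psi_01)    # -1.0, an odd Z-parity
print(psi_pp @ XX @ psi_pp)    # +1.0, an even X-parity

# |01> is not an eigenvector of X(x)X, so it has no well-defined X-parity.
print(np.allclose(XX @ psi_01, psi_01), np.allclose(XX @ psi_01, -psi_01))  # False False
```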

  • Stabiliser generators as parity checks.

    We are now in a position to appreciate the role of stabiliser generators as being analogous to a parity check matrix. Consider the case of the 7-qubit CSS code, with generators

                Tensor factors
    Generator   1   2   3   4   5   6   7
    g1                      X   X   X   X
    g2              X   X           X   X
    g3          X       X       X       X
    g4                      Z   Z   Z   Z
    g5              Z   Z           Z   Z
    g6          Z       Z       Z       Z
    I've omitted the identity tensor factors above, as one might sometimes omit the 0s from a {0,1} matrix, and for the same reason: in a given stabiliser operator, the identity matrix corresponds to a tensor factor which is not included in the 'mask' of qubits for which we are computing the parity. For each generator, we are only interested in those tensor factors which are being acted on somehow, because those contribute to the parity outcome.

    Now, the 'codewords' (the encoded standard basis states) of the 7-qubit CSS code are given by

    $$|0_L\rangle \propto |0000000\rangle + |0001111\rangle + |0110011\rangle + |0111100\rangle + |1010101\rangle + |1011010\rangle + |1100110\rangle + |1101001\rangle = \sum_{y \in C} |y\rangle,$$
    $$|1_L\rangle \propto |1111111\rangle + |1110000\rangle + |1001100\rangle + |1000011\rangle + |0101010\rangle + |0100101\rangle + |0011001\rangle + |0010110\rangle = \sum_{y \in C} |y \oplus 1111111\rangle,$$
    where $C$ is the classical code spanned by the bit strings 0001111, 0110011, and 1010101, i.e. the rows of the parity check matrix above. Notably, these bit-strings correspond to the positions of the X operators in the generators g1, g2, and g3. Because adding any row of the parity check matrix to an element of $C$ yields another element of $C$, applying g1, g2, or g3 simply permutes the terms of these superpositions: the standard-basis components of $|0_L\rangle$ and $|1_L\rangle$ will just be shuffled around.

    The generators g4, g5, and g6 above are all describing the parities of information encoded in standard basis states. The encoded basis states you are given are superpositions of codewords drawn from a linear code, and those codewords all have even parity with respect to the parity-check matrix from that code. As g4 through g6 just describe those same parity checks, it follows that the eigenvalue of the encoded basis states is +1 (corresponding to even parity).

    This is the way in which

    'with the observation about the similarities between the parity check matrix and the generator the exercise is "self evident"'

    — because the stabilisers either manifestly permute the standard basis terms in the two 'codewords', or manifestly are testing parity properties which by construction the codewords will have.
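    If you would rather see the "self-evident" step checked by machine, here is a short numpy sketch (my own construction, following the table and codewords above): each generator is built by placing X or Z at the positions of the 1s in a row of the parity check matrix, and every generator is verified to leave the two encoded basis states unchanged.

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

H_rows = [(0, 0, 0, 1, 1, 1, 1), (0, 1, 1, 0, 0, 1, 1), (1, 0, 1, 0, 1, 0, 1)]

def pauli_string(single, mask):
    """Tensor product with `single` at the 1-positions of `mask`, identity elsewhere."""
    return reduce(np.kron, [single if b else I for b in mask])

generators = [pauli_string(X, m) for m in H_rows] + [pauli_string(Z, m) for m in H_rows]

def basis_state(bits):
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

def flip(w):
    return ''.join('1' if c == '0' else '0' for c in w)

codewords = ['0000000', '0001111', '0110011', '0111100',
             '1010101', '1011010', '1100110', '1101001']
zero_L = sum(basis_state(w) for w in codewords)          # unnormalised |0_L>
one_L = sum(basis_state(flip(w)) for w in codewords)     # unnormalised |1_L>

for g in generators:
    assert np.allclose(g @ zero_L, zero_L)
    assert np.allclose(g @ one_L, one_L)
print("all six generators stabilise |0_L> and |1_L>")
```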

  • Moving beyond codewords

    The list of generators in the table you provide represents the first steps in a powerful technique, known as the stabiliser formalism, in which states are described using no more or less than the parity properties which are known to hold of them.

    Some states, such as standard basis states, conjugate basis states, and the perfectly entangled states $|\Phi^+\rangle \propto |00\rangle + |11\rangle$ and $|\Psi^-\rangle \propto |01\rangle - |10\rangle$ can be completely characterised by their parity properties. (The state $|\Phi^+\rangle$ is the only one which is a +1-eigenvector of $X \otimes X$ and $Z \otimes Z$; the state $|\Psi^-\rangle$ is the only one which is a −1-eigenvector of both these operators.) These are known as stabiliser states, and one can consider how they are affected by unitary transformations and measurements by tracking how the parity properties themselves transform. For instance, a state which is stabilised by $X \otimes X$ before applying a Hadamard on qubit 1, will be stabilised by $Z \otimes X$ afterwards, because $(H \otimes I)(X \otimes X)(H \otimes I) = Z \otimes X$. Rather than transform the state, we transform the parity property which we know to hold of that state.

    You can use this also to characterise how subspaces characterised by these parity properties will transform. For instance, given an unknown state in the 7-qubit CSS code, I don't know enough about the state to tell you what state you will get if you apply Hadamards on all of the qubits, but I can tell you that it is stabilised by the generators $g_j' = (H^{\otimes 7})\, g_j\, (H^{\otimes 7})$, which consist of

                Tensor factors
    Generator   1   2   3   4   5   6   7
    g1'                     Z   Z   Z   Z
    g2'             Z   Z           Z   Z
    g3'         Z       Z       Z       Z
    g4'                     X   X   X   X
    g5'             X   X           X   X
    g6'         X       X       X       X
    This is just a permutation of the generators of the 7-qubit CSS code, so I can conclude that the result is also a state in that same code.

    There is one thing about the stabiliser formalism which might seem mysterious at first: you aren't really dealing with information about the states that tells you anything about how they expand as superpositions of the standard basis. You're just dealing abstractly with the generators. And in fact, this is the point: you don't really want to spend your life writing out exponentially long superpositions all day, do you? What you really want are tools to allow you to reason about quantum states which require you to write things out as linear combinations as rarely as possible, because any time you write something as a linear combination, you are (a) making a lot of work for yourself, and (b) preferring some basis in a way which might prevent you from noticing some useful property which you can access using a different basis.

    Still: it is sometimes useful to reason about 'encoded states' in error correcting codes — for instance, in order to see what effect an operation such as $H^{\otimes 7}$ might have on the codespace of the 7-qubit code. What should one do instead of writing out superpositions?

    The answer is to describe these states in terms of observables — in terms of parity properties — to fix those states. For instance, just as $|0\rangle$ is the +1-eigenstate of Z, we can characterise the logical state $|0_L\rangle$ of the 7-qubit CSS code as the +1-eigenstate of

    $$Z_L = Z \otimes Z \otimes Z \otimes Z \otimes Z \otimes Z \otimes Z$$
    and similarly, $|1_L\rangle$ as the −1-eigenstate of $Z_L$. (It is important that $Z_L = Z^{\otimes 7}$ commutes with the generators $\{g_1, \ldots, g_6\}$, so that it is possible to be a +1-eigenstate of $Z_L$ at the same time as having the parity properties described by those generators.) This also allows us to move swiftly beyond the standard basis: using the fact that $X^{\otimes 7}$ anticommutes with $Z^{\otimes 7}$ the same way that X anticommutes with Z, and also that $X^{\otimes 7}$ commutes with the generators $g_i$, we can describe $|+_L\rangle$ as being the +1-eigenstate of
    $$X_L = X \otimes X \otimes X \otimes X \otimes X \otimes X \otimes X,$$
    and similarly, $|-_L\rangle$ as the −1-eigenstate of $X_L$. We may say that the encoded standard basis is, in particular, encoded in the parities of all of the qubits with respect to Z operators; and the encoded 'conjugate' basis is encoded in the parities of all of the qubits with respect to X operators.

    By fixing a notion of encoded operators, and using this to indirectly represent encoded states, we may observe that

    $$(H^{\otimes 7})\, X_L\, (H^{\otimes 7}) = Z_L, \qquad (H^{\otimes 7})\, Z_L\, (H^{\otimes 7}) = X_L,$$
    which is the same relation as obtains between X and Z with respect to conjugation by Hadamards; which allows us to conclude that for this encoding of information in the 7-qubit CSS code, $H^{\otimes 7}$ not only preserves the codespace but is an encoded Hadamard operation.
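    All of the claims in this item can also be checked at the level of Pauli strings alone, without writing down any state vector. A minimal sketch (the encoding is mine: each operator is a string over {I, X, Z}, conjugation by Hadamards on every qubit swaps X and Z letter-wise, and two Pauli strings commute iff they collide on an even number of positions):

```python
H_rows = [(0, 0, 0, 1, 1, 1, 1), (0, 1, 1, 0, 0, 1, 1), (1, 0, 1, 0, 1, 0, 1)]

def string_from_mask(letter, mask):
    return ''.join(letter if b else 'I' for b in mask)

generators = [string_from_mask('X', m) for m in H_rows] + \
             [string_from_mask('Z', m) for m in H_rows]

def conjugate_by_hadamards(p):
    """H on every qubit swaps X and Z letter-wise; I is unchanged."""
    return p.translate(str.maketrans('XZ', 'ZX'))

def commute(p, q):
    """Two Pauli strings commute iff the number of positions holding
    two different non-identity Paulis is even."""
    return sum(a != 'I' and b != 'I' and a != b for a, b in zip(p, q)) % 2 == 0

# Conjugating the whole generating set by H on every qubit just relabels the generators.
assert sorted(map(conjugate_by_hadamards, generators)) == sorted(generators)

Z_L, X_L = 'Z' * 7, 'X' * 7
assert all(commute(Z_L, g) and commute(X_L, g) for g in generators)
assert not commute(Z_L, X_L)
assert conjugate_by_hadamards(X_L) == Z_L and conjugate_by_hadamards(Z_L) == X_L
print("stabiliser-level checks pass")
```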

Thus we see that the idea of observables as a way of describing information about quantum states in the form of sign bits — and in particular tensor products as a way of representing information about parities of bits — plays a central role in describing how the CSS code generators represent parity checks, and also in how we can describe properties of error correcting codes without reference to basis states.

Niel de Beaudrap
After having read this I still don't get how it is obvious that X4X5X6X7 (I take this example) stabilizes the code space. In your answer you seemed to use the fact that you know what the code space looks like, noticing that X4X5X6X7 applied on $|0_L\rangle$ gives $|1_L\rangle$. What perturbs me is that if we talked about Z4Z5Z6Z7 operators, I would directly see the link with the parity check matrix, as their eigenvalues give the parity of bits 4 to 7, exactly like the first line of the parity check matrix. But the X operators are not diagonal in the 0/1 basis. So I don't get...
StarBucK
While it isn't ideal to fuss with the state-vectors for the code-words, from the expansion above we can see that X4X5X6X7 permutes the standard basis components of $|0_L\rangle$, and similarly for the standard-basis components of $|1_L\rangle$. While this picture involves non-trivial transformations of parts of the state, the overall effect is stabilisation. To see how to view the X stabilisers as parity checks, the way you do with Z-stabilisers, maybe you could consider the effect of the X stabilisers on $|+_L\rangle$ and $|-_L\rangle$, expressed in the conjugate basis.
Niel de Beaudrap
Thank you for your answer. OK, so to be sure: do you agree that if we don't look at the basis of the code space but only at the parity check matrix and the generators, the only thing we can directly understand is that the Z generators can be read off from the parity check matrix? But for the X generators, even if it appears that they follow a similar structure, it is not obvious to understand why without further calculation? Because in Nielsen & Chuang the way it is presented is as if it were obvious. So I wondered if I missed something?
StarBucK

One way to construct the codeword is to project onto the +1 eigenspace of the generators,

$$|C_1\rangle = \frac{1}{2^6}\left(\prod_{i=1}^{6}(I + g_i)\right)|0000000\rangle.$$
Concentrate, to start with, on the first 3 generators
$$(I+g_1)(I+g_2)(I+g_3).$$
If you expand this out, you'll see that it creates all the terms in the group $\{I, g_1, g_2, g_3, g_1g_2, g_1g_3, g_2g_3, g_1g_2g_3\}$. Identifying these with binary strings, the action of multiplying two terms (since $X^2 = I$) is just like addition modulo 2. So, contained within the code are all of the words generated by the parity check matrix (and this is a group, with group operation of addition modulo 2).

Now, if you multiply by one of the X stabilizers, that's like doing the addition modulo 2 on the corresponding bit strings. But, because we've already generated the group, by definition every group element is mapped to another (unique) group element. In other words, if I do

$$g_1 \times \{I, g_1, g_2, g_3, g_1g_2, g_1g_3, g_2g_3, g_1g_2g_3\} = \{g_1, I, g_1g_2, g_1g_3, g_2, g_3, g_1g_2g_3, g_2g_3\},$$
I get back the set I started with (using $g_1^2 = I$, and the commutation of the stabilizers), and therefore I'm projecting onto the same state. Hence, the state is stabilized by $g_1$ to $g_3$.
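A small sketch of the same argument at the level of bit strings (the encoding is mine: each X-type generator is represented by its mask, and multiplying operators corresponds to XOR-ing masks):

```python
from itertools import combinations

masks = [0b0001111, 0b0110011, 0b1010101]   # g1, g2, g3 as 7-bit masks

# Expanding (I+g1)(I+g2)(I+g3) gives one term per subset of {g1, g2, g3};
# a product of X-type operators corresponds to the XOR of their masks.
group = set()
for r in range(len(masks) + 1):
    for subset in combinations(masks, r):
        term = 0
        for m in subset:
            term ^= m
        group.add(term)

print(sorted(format(t, '07b') for t in group))  # the 8 bit strings appearing in |C1>

# Multiplying every element by g1 (XOR with its mask) permutes the same set.
assert {t ^ masks[0] for t in group} == group
```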

You can effectively make the same argument for g4 to g6. I prefer to think about first applying a Hadamard to every qubit, so the Xs are changed to Zs and vice versa. The set of stabilizers are unchanged, so the code is unchanged, but the Z stabilizer is mapped to an X stabilizer, about which we have already argued.

DaftWullie

The following perhaps doesn't answer your question exactly, but instead aims to provide some context to help it become as "self-evident" as your sources claim.

The Z operator has eigenstates $|0\rangle$ (with eigenvalue +1) and $|1\rangle$ (with eigenvalue −1). The $Z \otimes Z$ operator on two qubits therefore has eigenstates

$$|00\rangle, \quad |01\rangle, \quad |10\rangle, \quad |11\rangle.$$

The eigenvalues of these depend on the parity of the bit strings. For example, with $|00\rangle$ we multiply the +1 eigenvalues of the individual Z operators to get +1. For $|11\rangle$ we multiply the −1 eigenvalues together and also get +1 for $Z \otimes Z$. So these two even-parity bit strings have eigenvalue +1, just as does their superposition. For the two odd-parity states ($|01\rangle$ and $|10\rangle$) we have to multiply a +1 with a −1, and get a −1 eigenvalue for $Z \otimes Z$.

Note also that superpositions of bit strings with a fixed parity (such as some $\alpha|00\rangle + \beta|11\rangle$) are also eigenstates, and have the eigenvalue associated with their parity. So measuring $Z \otimes Z$ would not collapse such a superposition.

This analysis remains valid as we move to larger numbers of qubits. So if we want to know the parity of qubits 1, 3, 5 and 7 (to pick a relevant example), we could use the operator $Z \otimes I \otimes Z \otimes I \otimes Z \otimes I \otimes Z$. If we measure this and get the outcome +1, we know that this subset of qubits is in a state represented by an even-parity bit string, or a superposition of such strings. If we get −1, we know that it is an odd-parity state.
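For example (a minimal numpy sketch, with names of my own choosing), the operator $Z \otimes I \otimes Z \otimes I \otimes Z \otimes I \otimes Z$ assigns ±1 according to the parity of qubits 1, 3, 5 and 7, and a superposition of strings with the same parity on those qubits stays an eigenstate, so the measurement does not collapse it:

```python
import numpy as np
from functools import reduce

I = np.eye(2)
Z = np.array([[1., 0.], [0., -1.]])

# Z on qubits 1, 3, 5, 7 and identity elsewhere: a parity check on that subset.
check = reduce(np.kron, [Z, I, Z, I, Z, I, Z])

def basis_state(bits):
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

even_a = basis_state('1010101')   # qubits 1,3,5,7 read 1,1,1,1: even parity
even_b = basis_state('0011001')   # qubits 1,3,5,7 read 0,1,0,1: even parity
odd    = basis_state('1000000')   # qubits 1,3,5,7 read 1,0,0,0: odd parity

print(even_a @ check @ even_a)    # +1.0
print(odd @ check @ odd)          # -1.0

# A superposition of two even-parity strings is still a +1 eigenstate.
psi = 0.6 * even_a + 0.8 * even_b
print(np.allclose(check @ psi, psi))   # True
```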

This allows us to write the [7,4,3] Hamming code using the notation of quantum stabilizer codes. Each parity check is turned into a stabilizer generator which has $I$ on every qubit not involved in the check, and $Z$ on every qubit that is. The resulting code will protect against errors that anticommute with $Z$ (and so have the effect of flipping bits).

Of course, qubits do not restrict us to working only in the Z basis. We could encode our qubits for a classical Hamming code in the $|+\rangle$ and $|-\rangle$ states instead. These are the eigenstates of X, so you just need to replace Z with X in everything I've said to see how this kind of code works. It would protect against errors that anticommute with X (and so have the effect of flipping phases).

The magic of the Steane code, of course, is that it does both at the same time and protects against everything.

James Wootton