I give you a list of $n$ bitvectors of width $k$. Your goal is to return two bitvectors from the list that have no 1s in common, or to report that no such pair exists.
For example, some inputs have exactly one solution, others have no solution at all, and any list that contains the all-zero bitvector together with at least one other element has a trivial solution.
Here is a slightly harder example with no solution (each row is a bitvector; black squares are 1s and white squares are 0s):
■ ■ ■ ■ □ □ □ □ □ □ □ □ □
■ □ □ □ ■ ■ ■ □ □ □ □ □ □
■ □ □ □ □ □ □ ■ ■ ■ □ □ □
■ □ □ □ □ □ □ □ □ □ ■ ■ ■
□ ■ □ □ □ ■ □ □ □ ■ ■ □ □
□ ■ □ □ ■ □ □ □ ■ □ □ □ ■
□ ■ □ □ □ □ ■ ■ □ □ □ ■ □ <-- All row pairs share a black square
□ □ ■ □ □ □ ■ □ ■ □ ■ □ □
□ □ ■ □ □ ■ □ ■ □ □ □ □ ■
□ □ ■ □ ■ □ □ □ □ ■ □ ■ □
□ □ □ ■ ■ □ □ ■ □ □ ■ □ □
□ □ □ ■ □ □ ■ □ □ ■ □ □ ■
□ □ □ ■ □ ■ □ □ ■ □ □ ■ □
How quickly can we find two non-overlapping bitvectors, or determine that no such pair exists?
The naive algorithm, where you simply compare every possible pair, is $O(n^2 k)$. Is it possible to do better?
Answers:
Warmup: random bitvectors
As a warmup, we can start with the case where each bitvector is chosen uniformly at random. It then turns out that the problem can be solved in $O(n^{1.6} \min(k, \lg n))$ time (more precisely, the $1.6$ can be replaced by $\lg 3$).
We will consider the following two-set variant of the problem:
Given sets $S, T \subseteq \{0,1\}^k$ of bitvectors, determine whether there exists a non-overlapping pair $s \in S$, $t \in T$.
The basic technique for solving this is divide-and-conquer. Here is an $O(n^{1.6} k)$-time divide-and-conquer algorithm:
Split $S$ and $T$ based upon the first bit position. In other words, form $S_0 = \{s \in S : s_0 = 0\}$, $S_1 = \{s \in S : s_0 = 1\}$, $T_0 = \{t \in T : t_0 = 0\}$, $T_1 = \{t \in T : t_0 = 1\}$.
Now recursively look for a non-overlapping pair from $S_0, T_0$, from $S_0, T_1$, and from $S_1, T_0$. If any recursive call finds a non-overlapping pair, output it; otherwise output "No non-overlapping pair exists".
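The following is a minimal sketch of this recursion in Python, assuming bitvectors are represented as integers whose $i$-th bit is position $i$; the function and variable names are mine, not part of the original answer.

```python
def find_disjoint_pair(S, T, k, i=0):
    """Return (s, t) with s in S, t in T and s & t == 0, or None if no such pair exists."""
    if not S or not T:
        return None
    # Base case: once one set holds a single vector (or all bit positions have
    # been used), fall back to a direct scan.
    if len(S) == 1 or len(T) == 1 or i >= k:
        for s in S:
            for t in T:
                if s & t == 0:
                    return s, t
        return None

    bit = 1 << i
    S0 = [s for s in S if not s & bit]
    S1 = [s for s in S if s & bit]
    T0 = [t for t in T if not t & bit]
    T1 = [t for t in T if t & bit]

    # The S1 x T1 branch is skipped: every such pair shares bit i.
    return (find_disjoint_pair(S0, T0, k, i + 1)
            or find_disjoint_pair(S0, T1, k, i + 1)
            or find_disjoint_pair(S1, T0, k, i + 1))
```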
Since all the bitvectors were chosen at random, we can expect $|S_b| \approx |S|/2$ and $|T_b| \approx |T|/2$. Thus we have three recursive calls, and we have reduced the size of the problem by a factor of two (both sets shrink by a factor of two). After $\lg \min(|S|, |T|)$ splits, one of the two sets is reduced to size 1, and the problem can be solved in linear time. We get a recurrence relation along the lines of $T(n) = 3T(n/2) + O(nk)$, whose solution is $T(n) = O(n^{1.6} k)$. Accounting for the running time more precisely in the two-set case, we see that the running time is $O(\min(|S|,|T|)^{0.6} \max(|S|,|T|)\, k)$.
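To make the step from the recurrence to the exponent explicit (the $1.6$ above is really $\lg 3 \approx 1.585$, rounded up):
$$T(n) = 3\,T(n/2) + O(nk) \;\Longrightarrow\; T(n) = O\bigl(n^{\lg 3}\,k\bigr) \subseteq O\bigl(n^{1.6}\,k\bigr),$$
since each level of the recursion tree does $3/2$ times as much total work as the level above it, and there are about $\lg n$ levels, so the bottom level dominates with total work about $nk \cdot (3/2)^{\lg n} = n^{\lg 3}\,k$.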
This can be improved further, by noting that if $k \ge 2.5 \lg n + 100$, then the probability that a non-overlapping pair exists is exponentially small. In particular, if $x, y$ are two random vectors, the probability that they are non-overlapping is $(3/4)^k$. If $|S| = |T| = n$, there are $n^2$ such pairs, so by a union bound, the probability that a non-overlapping pair exists is at most $n^2 (3/4)^k$. When $k \ge 2.5 \lg n + 100$, this is $\le 1/2^{100}$. So, as a pre-processing step, if $k \ge 2.5 \lg n + 100$, we can immediately return "No non-overlapping pair exists" (the probability that this is incorrect is negligible); otherwise, we run the algorithm above.
Thus we obtain a running time of $O(n^{1.6} \min(k, \lg n))$ (or $O(\min(|S|,|T|)^{0.6} \max(|S|,|T|) \min(k, \lg n))$ for the two-set variant proposed above), for the special case where the bitvectors are chosen uniformly at random.
Of course, this is not a worst-case analysis. Random bitvectors are considerably easier than the worst case -- but let's treat it as a warmup, to get some ideas that perhaps we can apply to the general case.
Lessons from the warmup
We can learn a few lessons from the warmup above. First, divide-and-conquer (splitting on a bit position) seems helpful. Second, you want to split on a bit position with as many 1s in that position as possible; the more 0s there are, the less reduction in subproblem size you get.
Third, this suggests that the problem gets harder as the density of 1s gets smaller -- if there are very few 1s among the bitvectors (they are mostly 0s), the problem looks quite hard, as each split reduces the size of the subproblems only a little. So, define the density $\Delta$ to be the fraction of bits that are 1 (i.e., out of all $nk$ bits), and the density $\Delta(i)$ of bit position $i$ to be the fraction of bitvectors that have a 1 at position $i$.
Handling very low density
As a next step, we might wonder what happens if the density is extremely small. It turns out that if the density in every bit position is smaller than $1/\sqrt{k}$, we're guaranteed that a non-overlapping pair exists: there is a (non-constructive) existence argument showing that some non-overlapping pair must exist. This doesn't help us find it, but at least we know it exists.
Why is this the case? Let's say that a pair of bitvectors $x, y$ is covered by bit position $i$ if $x_i = y_i = 1$. Note that every pair of overlapping bitvectors must be covered by some bit position. Now, if we fix a particular bit position $i$, the number of pairs that can be covered by that bit position is at most $(n\Delta(i))^2 < n^2/k$. Summing across all $k$ of the bit positions, we find that the total number of pairs that are covered by some bit position is $< n^2$. This means there must exist some pair that's not covered by any bit position, which implies that this pair is non-overlapping. So if the density is sufficiently low in every bit position, then a non-overlapping pair surely exists.
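Written out, the counting step from the previous paragraph (using $\Delta(i) < 1/\sqrt{k}$ for every position $i$) is
$$\sum_{i=1}^{k} \#\{\text{pairs covered by position } i\} \;\le\; \sum_{i=1}^{k} \bigl(n\,\Delta(i)\bigr)^{2} \;<\; \sum_{i=1}^{k} \frac{n^{2}}{k} \;=\; n^{2}.$$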
However, I'm at a loss to identify a fast algorithm to find such a non-overlapping pair in this regime, even though one is guaranteed to exist. I don't immediately see any techniques that would yield a running time with a sub-quadratic dependence on $n$. So, this is a nice special case to focus on, if you want to spend some time thinking about this problem.
Towards a general-case algorithm
In the general case, a natural heuristic seems to be: pick the bit position $i$ with the largest number of 1s (i.e., with the highest density), and split on it. In other words:
Find a bit position $i$ that maximizes $\Delta(i)$.
Split $S$ and $T$ based upon bit position $i$. In other words, form $S_0 = \{s \in S : s_i = 0\}$, $S_1 = \{s \in S : s_i = 1\}$, $T_0 = \{t \in T : t_i = 0\}$, $T_1 = \{t \in T : t_i = 1\}$.
Now recursively look for a non-overlapping pair from $S_0, T_0$, from $S_0, T_1$, and from $S_1, T_0$. If any recursive call finds a non-overlapping pair, output it; otherwise output "No non-overlapping pair exists".
The challenge is to analyze its performance in the worst case.
Let's assume that as a pre-processing step we first compute the density of every bit position. Also, if $\Delta(i) < 1/\sqrt{k}$ for every $i$, assume that the pre-processing step outputs "A non-overlapping pair exists" (I realize that this doesn't exhibit an example of a non-overlapping pair, but let's set that aside as a separate challenge). All this can be done in $O(nk)$ time. The density information can be maintained efficiently as we do recursive calls; it won't be the dominant contributor to running time.
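Here is a rough sketch of this pre-processing step and of the pivot choice described above, in the same integer-bitmask representation as the earlier sketch; the names are mine and purely illustrative.

```python
import math

def bit_densities(vectors, k):
    """Delta(i) for each bit position i, with vectors given as int bitmasks."""
    n = len(vectors)
    counts = [0] * k
    for v in vectors:
        for i in range(k):
            counts[i] += (v >> i) & 1
    return [c / n for c in counts]

def choose_pivot(vectors, k):
    """Pre-processing: if every density is below 1/sqrt(k), a non-overlapping
    pair is guaranteed to exist (existence only, no witness); otherwise return
    the densest bit position to split on."""
    d = bit_densities(vectors, k)
    if max(d) < 1 / math.sqrt(k):
        return None  # report "A non-overlapping pair exists"
    return max(range(k), key=lambda i: d[i])
```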
What will the running time of this procedure be? I'm not sure, but here are a few observations that might help. Each level of recursion reduces the problem size by about $n/\sqrt{k}$ bitvectors (e.g., from $n$ bitvectors to $n - n/\sqrt{k}$ bitvectors). Therefore, the recursion can only go about $\sqrt{k}$ levels deep. However, I'm not immediately sure how to count the number of leaves in the recursion tree (there are a lot fewer than $3^{\sqrt{k}}$ leaves), so I'm not sure what running time this should lead to.
Faster solution when $n \approx k$, using matrix multiplication
Suppose that $n = k$. Our goal is to do better than an $O(n^2 k) = O(n^3)$ running time.
We can think of the bitvectors and bit positions as nodes in a graph. There is an edge between a bitvector node and a bit-position node when the bitvector has a 1 in that position. The resulting graph is bipartite (with the bitvector-representing nodes on one side and the bit-position-representing nodes on the other), and has $n + k = 2n$ nodes.
Given the adjacency matrix $M$ of a graph, we can tell if there is a two-hop path between two vertices by squaring $M$ and checking if the resulting matrix has an "edge" between those two vertices (i.e. the edge's entry in the squared matrix is non-zero). For our purposes, a zero entry in the squared adjacency matrix corresponds to a non-overlapping pair of bitvectors (i.e. a solution). A lack of any zeroes means there's no solution.
Squaring an $n \times n$ matrix can be done in $O(n^\omega)$ time, where $\omega$ is known to be under $2.373$ and conjectured to be $2$.
So the algorithm is:
Build the $(n+k) \times (n+k)$ adjacency matrix $M$ of the bipartite graph.
Square it.
Look for a zero entry of $M^2$ between two distinct bitvector nodes.
If one exists, output the corresponding pair of bitvectors; otherwise output that no solution exists.
The most expensive step is squaring the adjacency matrix. If $n = k$ then the overall algorithm takes $O((n+k)^\omega) = O(n^\omega)$ time, which is better than the naive $O(n^3)$ time.
This solution is also faster when $k$ grows not-too-much-slower and not-too-much-faster than $n$. As long as $k \in \Omega(n^{\omega-2})$ and $k \in O(n^{2/(\omega-1)})$, then $(n+k)^\omega$ beats $n^2 k$. For $\omega \approx 2.373$ that translates (asymptotically) to roughly $n^{0.373} \le k \le n^{1.457}$. If $\omega$ tends to $2$, the bounds widen towards $n^\epsilon \le k \le n^{2-\epsilon}$.
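As a quick sanity check of the squared-matrix criterion (not part of the original answer), here is a small NumPy sketch. Instead of squaring the full $(n+k) \times (n+k)$ adjacency matrix, it computes $A A^T$ for the $n \times k$ bitvector matrix $A$, which is exactly the bitvector-vs-bitvector block of that square; the asymptotic claims above of course rely on a fast matrix-multiplication algorithm, not on NumPy's routine.

```python
import numpy as np

def disjoint_pair_by_matmul(A):
    """A is an n x k 0/1 matrix; return (i, j) with rows i and j non-overlapping, or None."""
    overlap = A @ A.T                  # overlap[i, j] = number of positions where both rows have a 1
    for i, j in np.argwhere(overlap == 0):
        if i < j:                      # skip the diagonal and mirrored duplicates
            return int(i), int(j)
    return None

# Example: the all-zero row pairs with any other row.
A = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])
print(disjoint_pair_by_matmul(A))      # (0, 2)
```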
This is equivalent to finding a bitvector which is a subset of the complement of another vector; i.e., its 1s occur only where 0s occur in the other.
If $k$ (or the number of 1s) is small, you can get $O(n 2^k)$ time by simply generating all the subsets of the complement of each bitvector and putting them in a trie (using backtracking). If a bitvector is found in the trie (we can check each one before its complement-subset insertion), then we have a non-overlapping pair.
If the number of 1s or 0s is bounded by an even smaller number than $k$, then that bound can replace $k$ in the exponent. The subset-indexing can be done on either each vector or its complement, as long as probing uses the opposite.
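A compact sketch of this complement-subset idea, using a hash set of integer masks in place of the trie (my simplification; the per-vector cost is still up to $2^k$ subset enumerations, matching the bound above):

```python
def find_nonoverlapping(vectors, k):
    """Return a pair (v, w) of input vectors with v & w == 0, or None."""
    full = (1 << k) - 1
    seen = {}                  # mask -> an earlier vector whose complement contains this mask
    for w in vectors:
        if w in seen:          # w is a subset of some earlier vector's complement
            return seen[w], w
        comp = full & ~w
        sub = comp             # enumerate all subsets of the complement of w
        while True:
            seen.setdefault(sub, w)
            if sub == 0:
                break
            sub = (sub - 1) & comp
    return None
```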
There's also a scheme for superset-finding in a trie that stores each vector only once, but does bit-skipping during probes, for what I believe is similar aggregate complexity; i.e., it has $O(k)$ insertion but $O(2^k)$ searches.
Represent the bitvectors as an $n \times k$ matrix $M$. Take $i$ and $j$ between $1$ and $n$. Rows $i$ and $j$ are non-overlapping exactly when $(MM^T)_{ij} = \sum_{l=1}^{k} M_{il} M_{jl} = 0$, so it suffices to compute the product $MM^T$ and look for a zero entry off the diagonal.
Complexity
Using naive multiplication, this requires $O(n^2 k)$ arithmetic operations. If $n = k$, it takes $O(n^{2.37})$ operations using the utterly impractical Coppersmith-Winograd algorithm, or $O(n^{2.8})$ using the Strassen algorithm. If $k = O(n^{0.302})$, then the problem may be solved using $n^{2+o(1)}$ operations.