Yes. I will do that.

You are right about no. 2. But that would mean unicast packets are flooded, or we have to wait for the ARP timeout and for hosts to re-ARP.
I will look into this with more detail later.

On Mon, May 21, 2018 at 10:27 AM Josh Bailey <> wrote:
OK. Let's take care of the first change first - keen to see that. Thanks for working on this.

I don't follow why the switch originated probe is necessary - if the correct action is taken, to disable a down stacking link (like we do for LACP), then the packet can just be flooded via a different stacking port per the existing algorithm?

Maybe if you take care of 1 first, you can check via one of the existing integration tests and show that this is not handled (or is handled)?

On Mon, May 21, 2018 at 10:07 AM Truong Huu Trung <> wrote:
Yes, it is a small change to implement the probing.

Say this topo:

                  /---- S2 ----\
H1 --- (root) S1                S4
                  \---- S3 ----/
S3 learns H1 and calculates unicast rules by querying all DPs to find the origin of H1.
When the S1-S3 link goes down, S3 needs to know where H1 is in order to recalculate its unicast rules.
But this won't work if S1 and S3 are controlled by different Faucet instances. For now, all stacked switches have to be controlled by the same Faucet.

If we assign each switch a unique MAC, then during broadcast (which is when we learn hosts) we rewrite the broadcast destination MAC with the MAC of the origin switch.
We can compute static unicast rules based on these MACs to make sure the flooded packets are delivered to all stacked switches.
That should work similarly to QinQ.
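To make the rewriting idea concrete, here is a hedged sketch (the per-switch MACs and function names below are made up for illustration, not real Faucet config or code): the edge switch replaces the broadcast destination MAC with its own unique MAC, and every stack switch has static rules that match those MACs, restore broadcast semantics on non-stack ports, and forward along static unicast rules otherwise.

```python
# Hedged sketch of destination-MAC overloading; SWITCH_MAC is a
# hypothetical assignment of one unique MAC per stack switch.
BROADCAST_MAC = 'ff:ff:ff:ff:ff:ff'

SWITCH_MAC = {
    'S1': '0e:00:00:00:00:01',
    'S2': '0e:00:00:00:00:02',
    'S3': '0e:00:00:00:00:03',
    'S4': '0e:00:00:00:00:04',
}
MAC_SWITCH = {mac: sw for sw, mac in SWITCH_MAC.items()}


def ingress_rewrite(dst_mac, origin_switch):
    """On the origin (edge) switch, replace the broadcast MAC with the
    switch's unique MAC before flooding onto stack links."""
    if dst_mac == BROADCAST_MAC:
        return SWITCH_MAC[origin_switch]
    return dst_mac


def stack_switch_action(dst_mac):
    """Static rule on every stack switch: a per-switch MAC means
    'flood to non-stack ports as a broadcast'; anything else is
    handled by the normal unicast rules."""
    origin = MAC_SWITCH.get(dst_mac)
    if origin is not None:
        return ('flood_non_stack_ports', origin)
    return ('normal_unicast', None)
```

A packet from H1 entering at S1 would have its broadcast MAC rewritten to S1's unique MAC, and S2/S3/S4 would match that MAC and re-flood it locally.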

On Mon, May 21, 2018 at 9:52 AM Josh Bailey <> wrote:
Thanks. For 1, the current code already sends DP ID/port ID in the probe. Do you propose to use this to implement down-after-count? If so I imagine this should be a relatively small change - keen to see that.

For 2 - I don't follow how the mapping works. Would you be able to summarize how the mapping is used with a simple topology (say, one root switch and one non-root switch)? ie. packet comes in on edge, packet is compared with something, etc.

QinQ would be very convenient, unfortunately there is not widespread support for it. In the case where you replace the destination address, how do you account for different kinds of broadcasts/multicasts/a flooded unicast?

On Mon, May 21, 2018 at 8:05 AM Truong Huu Trung <> wrote:
I'm proposing two protocols to make the stack failure tolerant. Currently, a root switch failure or a link going down can break broadcast and unicast in the stack.

1. probing to discover a physical link or Valve failure:
Periodically send probe packets via all physical stack links. Missing e.g. 3 consecutive probes implies the link or the peer is down. With multiple stack links between two switches, all links being down means the peer is down.
This can be implemented by extending the current LLDP beacon. By including dp_id and port_id in the probe, the receiver can compare them with the config file to determine whether miscabling has happened.
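A minimal sketch of the down-after-count idea, assuming a per-port miss counter and a threshold of 3 missed probes (the class and method names are illustrative, not existing Faucet code):

```python
# Hypothetical down-after-count tracker; not existing Faucet code.
PROBE_MISS_THRESHOLD = 3  # consecutive missed probes before declaring down


class StackPortMonitor:
    """Tracks probe replies on physical stack links."""

    def __init__(self, ports):
        # port -> count of consecutive missed probes
        self.missed = {port: 0 for port in ports}

    def probe_received(self, port, dp_id, port_id, expected):
        """Reset the miss counter; cross-check the probe's (dp_id,
        port_id) against the config to detect miscabling."""
        self.missed[port] = 0
        return (dp_id, port_id) == expected

    def probe_interval_expired(self, port):
        """Called once per probe interval with no reply seen on port.
        Returns True when the link should be declared down."""
        self.missed[port] += 1
        return self.missed[port] >= PROBE_MISS_THRESHOLD

    def peer_down(self, ports_to_peer):
        """With multiple links to one peer, the peer is down only when
        every link to it is down."""
        return all(self.missed[port] >= PROBE_MISS_THRESHOLD
                   for port in ports_to_peer)
```

The miscabling check falls out of the same probe: the received (dp_id, port_id) simply has to match what the config file says should be on the other end of that port.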

2. state propagation.
Two pieces of state are important for recalculating broadcast and unicast rules after a failure: the mapping between a MAC and a switch, and link state.
When calculating unicast rules, we need host-to-switch mappings. There are two possible ways to get them: tagging (QinQ) or overloading the destination MAC (i.e. replacing the broadcast MAC with the unique MAC of the origin switch; switches will have rules to match these MACs and broadcast the packets to non-stack ports as normal).
For link state, some kind of gossip protocol can be used.
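As an illustration of how the propagated state could be consumed (a hedged sketch; next_hop and the link-state representation are hypothetical, not Faucet code), each switch could recompute the unicast next hop toward a learned host's switch from the gossip-learned set of up links:

```python
from collections import deque


def next_hop(local_sw, dst_sw, up_links):
    """BFS over currently-up stack links (as learned via gossip) to
    find the neighbour local_sw should forward to in order to reach
    dst_sw; returns None if dst_sw is unreachable."""
    if local_sw == dst_sw:
        return local_sw
    adj = {}
    for a, b in up_links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    prev = {local_sw: None}
    queue = deque([local_sw])
    while queue:
        sw = queue.popleft()
        if sw == dst_sw:
            # Walk back to the first hop out of local_sw.
            while prev[sw] != local_sw:
                sw = prev[sw]
            return sw
        for nbr in adj.get(sw, ()):
            if nbr not in prev:
                prev[nbr] = sw
                queue.append(nbr)
    return None
```

In the earlier topology, when S3 knows H1 is behind S1 and gossip says the S1-S3 link is down, this recomputation would redirect S3's unicast rules for H1 via S4 instead of waiting for re-learning.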

I've implemented part of this, but I really want to hear your comments to make it usable.

Faucet-dev mailing list