I have done some reading; it looks like 802.1ad is an amendment to 802.1Q, whereas I had assumed it was a separate protocol. tl;dr: 0x8100 means the tag carries a CFI bit, so please set it to 0.

On 9 October 2014 at 2:02:35 pm, Ewen McNeill (nznog@ewen.mcneill.gen.nz) wrote:
On 9/10/14 13:26, Nathan Ward wrote: 
> Key bit is “802.1ad”, not 802.1q. Using 0x88a8 vs 0x8100/0x9100 is 
> signalling that you’re using 802.1ad vs. stacked 802.1q, so should set 
> this bit appropriate to the tag type. 

For those playing along at home: 
- ethertype = 0x8100 means 802.1Q or 802.1aq or some pre-standard tag 

- ethertype = 0x88a8 means 802.1ad (standardised double tagging) 

- ethertype = 0x9100, 0x9200, ... is pre-standard double tagging (looks 
like 0x9100 was the most common of those alternatives). 
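To make the mapping above concrete, here's a minimal sketch in Python (the dict and function names are mine, purely illustrative):

```python
# Sketch: map VLAN TPID (ethertype) values to the tag flavour they signal.
# Names here are illustrative, not from any standard or library.
TPID_NAMES = {
    0x8100: "802.1Q tag (C-tag, or pre-standard stacked tag)",
    0x88A8: "802.1ad S-tag (standardised double tagging)",
    0x9100: "pre-standard double tagging",
    0x9200: "pre-standard double tagging",
}

def classify_tpid(tpid):
    """Return a human-readable description of a TPID, if recognised."""
    return TPID_NAMES.get(tpid, "not a recognised VLAN TPID")
```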

See, eg, 

AFAICT the bit in that position in 802.1Q (0x8100) used to mean CFI (eg, 
802.1Q-2003) and was changed by 802.1Q-2005 to mean DEI; the bit in that 
position in 802.1ad (0x88a8) seems to have always meant DEI. If the bit 
is clear (0), then the difference between 802.1Q-2003 and 802.1Q-2005 
probably doesn't matter here; if it is set, it obviously does. Joy. 

This is not strictly true: 802.1ad updates 802.1Q rather than replacing it.

It’s perhaps unfortunate that the article on Wikipedia is worded the way it is, as though CFI is deprecated. CFI is still valid in CVLAN tags, which 802.1ad specifies as having TPID 0x8100. DEI is only valid in SVLAN frames, which have TPID 0x88a8.

A distinct Ethertype has been allocated (Table 9-1) for use in the TPID field (9.4) of each tag type so they can be distinguished from each other, and from other protocols.

The semantics and structure of the S-TAG is identical to that of the C-TAG, with the exception that bit 5 in octet 1, the Drop Eligible Indicator (DEI) bit, does not convey a CFI (9.6). The priority and drop_eligible parameters are conveyed in the 3-bit PCP field and the DEI bit as specified in 6.7.3.

If you’re sending 0x8100, that bit is CFI, so make it 0.
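The TCI layout described above (3-bit PCP, then the CFI/DEI bit, then the 12-bit VID) can be sketched like this; helper names are mine, not from the standard:

```python
# Sketch of the 16-bit TCI layout shared by C-tags and S-tags:
#   bits 15-13: PCP (priority), bit 12: CFI (C-tag) / DEI (S-tag),
#   bits 11-0: VID.
def parse_tci(tci):
    """Split a 16-bit TCI into (pcp, cfi_or_dei, vid)."""
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

def make_ctag_tci(pcp, vid):
    """Build a C-tag TCI with the CFI bit forced to 0, per the advice above."""
    return ((pcp & 0x7) << 13) | (vid & 0xFFF)
```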

> I’m with Don on this one - the frame type bits signal how to interpret 
> the following bits, you can’t just swap them around. 

Sadly it looks to be even less clear than that. Using 0x88a8 (as Nathan 
suggests) appears to clear up the confusion. But using 0x8100 appears 
to require asking "umm, which 802.1Q/802.1aq/pre-standard double-tagging 
thing was the sender following" if the CFI/DEI bit was set. 

So, eg, forcibly clearing that bit as you receive it into your 0x8100 
network (and, eg, applying your own QoS), upgrading everything to speak 
only modern 802.1Q (ie assuming the bit is always DEI), or using 0x88a8 
(802.1ad) seem to be the only reasonably compatible options to avoid 
equipment confusion. 
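The first option (clear the bit at ingress) could look something like this; a sketch under my own naming, not anyone's actual edge config:

```python
# Sketch (my names): normalise an incoming tag at the network edge by
# clearing the ambiguous CFI/DEI bit on 0x8100 frames, while leaving
# 0x88a8 S-tags untouched (there the bit is unambiguously DEI).
def normalise_tci(tpid, tci):
    """Return the TCI to forward for a given (tpid, tci) pair."""
    if tpid == 0x8100:
        return tci & ~(1 << 12) & 0xFFFF  # force the CFI/DEI bit to 0
    return tci
```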

Definitely looks like something that would benefit from being more 
clearly documented by Chorus as a technical item that RSPs who choose 
0x8100 as their ethertype and use this particular product should watch 
out for. 

My view remains the same - LFCs should implement 802.1ad as written, and use CFI for 0x8100 frames and DEI for 0x88a8 frames.


PS: It appears one can get actual IEEE 802.1 standards as PDFs here: 


if one is willing to give up an email address to IEEE (I haven't 
actually tried to give them an email address though). 

Already done :-)

You can also get them from:



Nathan Ward