How are you planning to handle Layer 2 tunneling for clients?

Just to start this off, I have no intention of using the tunneling on the PoP DNs. I don’t want to run into a scaling issue down the road, and I’m planning on a large part of my radio mix being MDUs. I had considered using the tunnel and running PPPoE over it to simplify deployment, requiring the client to do PPPoE on their device. I’m not sure whether we can do GRE tunnels from non-PoP DNs/CNs to a separate GRE endpoint (like a MikroTik router, etc.), so that’s a little up in the air. Plus, I’m not the biggest fan of PPPoE.

I’m leaning towards a demarc router at each site (MDU or single customer) doing EoIP over IPv6 to the core, and just using cnWave as a meshed backhaul network, essentially. I’m not sure I like this model because I don’t really want MikroTik routers all over the place. I would much rather have a direct connection at single-customer sites or a single switch at MDUs. Ideally, SRv6 to VLAN tagging so I can just do port VLANs, but SRv6 is down the road.

How are others building out to handle this?

Better question is: Do you need L2 tunneling?

MDUs in our network get a bridged connection to either an L3 switch or a router, plus an L2 managed switch for the Ethernet hand-off, and we route client data. We originally used PPPoE and have been moving to a more IPoE approach. If you need client separation/isolation, use QinQ and route the provider VLAN.
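Purely as an illustration of the QinQ idea (nothing vendor-specific, and all interface names, VLAN IDs, and addresses below are made up), a Linux-based router might stack a per-customer inner tag on an 802.1ad provider tag and route there:

```python
#!/usr/bin/env python3
"""Hedged sketch of the QinQ approach on a Linux-based router.
Interface names, VLAN IDs, and addressing are placeholders, not taken
from anyone's real network."""
import subprocess

UPLINK = "eth0"   # port toward the MDU switch
S_VLAN = 100      # provider (outer, 802.1ad) tag carried across the network
C_VLAN = 201      # one customer's inner tag, for per-client isolation

def ip(*args):
    subprocess.run(["ip", *args], check=True)

# Outer "provider" VLAN (802.1ad / QinQ S-tag)
ip("link", "add", "link", UPLINK, "name", f"{UPLINK}.{S_VLAN}",
   "type", "vlan", "proto", "802.1ad", "id", str(S_VLAN))

# One customer's inner VLAN stacked on the provider VLAN; the routed
# IPv4 gateway for that customer lives here.
inner = f"{UPLINK}.{S_VLAN}.{C_VLAN}"
ip("link", "add", "link", f"{UPLINK}.{S_VLAN}", "name", inner,
   "type", "vlan", "id", str(C_VLAN))
ip("addr", "add", "100.64.201.1/24", "dev", inner)
for dev in (f"{UPLINK}.{S_VLAN}", inner):
    ip("link", "set", dev, "up")
```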

Have you looked at Positron? We have been looking at it as a solution for some of the older MDUs that would otherwise need cables run up the outside (which the building owners have made very clear is not happening). It’s designed to be fed from a fiber connection, but it will accept any Ethernet-based connection.

I specifically need IPv4 to the customer, both at the MDU and for the single customer. As I understand it, cnWave doesn’t handle IPv4, so we need to tunnel back to the core to get that. Am I wrong?

You are wrong.
cnWave passes IPv4 and IPv6. It’s a bridge. What matters is what’s on each end of the bridge.

Whatever the infrastructure of the client’s MDU (unknown on-prem cable types and all), once you place your router/L3 switch on site, you can decide what passes to and from where, and in which IP scheme, or both!
If you’re an IPv6-only setup, then provide the client with IPv6 and a router that does private IPv4-to-IPv6 translation. Otherwise, you can provide a routed IPv4 address to the client at the client’s router: set the cnWave data VLAN to something you’re not using on your end, and place a route in your tower router pointing that traffic at that VLAN, so you can use the management VLAN to manage the link like any other PtP link. The client side can simply be configured so that all ingress packets get tagged onto the data VLAN and all egress packets get the tag removed.
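For what it’s worth, here is a minimal sketch of that routed-IPv4 model, assuming the tower router is a Linux box and using made-up interface names, VLAN ID, and addresses (a /30 link net to the client router, with the customer subnet routed behind it):

```python
#!/usr/bin/env python3
"""Minimal sketch of the tower-router side of the routed IPv4 model.
Assumes a Linux-based tower router; names, VLAN ID, and addresses
below are placeholders."""
import subprocess

UPLINK = "eth0"                      # port facing the PoP DN / cnWave link
DATA_VLAN = 300                      # cnWave data VLAN (something unused)
LINK_NET = "100.64.0.1/30"           # our side of the link to the client router
CLIENT_ROUTER = "100.64.0.2"         # client router on the far side of the link
CUSTOMER_SUBNET = "100.64.10.0/24"   # routed IPv4 handed to the customer

def ip(*args):
    subprocess.run(["ip", *args], check=True)

# Tagged subinterface for the cnWave data VLAN; the management VLAN stays
# free for managing the link like any other PtP radio.
data_if = f"{UPLINK}.{DATA_VLAN}"
ip("link", "add", "link", UPLINK, "name", data_if,
   "type", "vlan", "id", str(DATA_VLAN))
ip("addr", "add", LINK_NET, "dev", data_if)
ip("link", "set", data_if, "up")

# Route the customer's subnet at the client router over the data VLAN.
ip("route", "add", CUSTOMER_SUBNET, "via", CLIENT_ROUTER, "dev", data_if)
```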

I’m looking right at Cambium’s slides; they explicitly say this is IPv6 only, Layer 3, i.e. not a bridge.

cnWave 60GHz can be configured as an L2 bridge and will support IPv4.

It does this through a very simple configuration option that sets up a tunnel, which can be terminated either at the PoP DN or, in the latest beta software, at an external concentrator.

You’ll find a number of knowledge base articles showing how to set this up here:

I wouldn’t call those tunnels ‘native’ IPv4 support, and the CPU limitations steer me away from that.

Questions:

1. Can the tunnels be configured from the customer-side port to a separate device? I.e., can I do a GRE tunnel from a CN to a MikroTik at the head-end?

2. Can the L2 bridge do a VLAN? I.e., GRE bridged to a VLAN on the LAN port? Kind of goes with #1.

3. Can multiple tunnels/bridges be configured? I.e., a dedicated tunnel for each customer back to the core?

4. Can the tunnel be terminated at the PoP DN into a VLAN?

Yes. In the 1.0.1 beta, we added an option for an external tunnel concentrator. This Knowledge Base article has more details:


To any router supporting standard IPv6 L2 GRE, i.e., protocol 0x6558 (transparent Ethernet bridging). I think MikroTik supports proprietary EoIPv6 tunnels, not standard IPv6 GRE.
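If it helps anyone, a rough sketch of what terminating one of those standard tunnels on a plain Linux box could look like, using iproute2’s ip6gretap type (GRE over IPv6 carrying Ethernet frames, i.e. 0x6558); the addresses and names are placeholders, not from a real deployment:

```python
#!/usr/bin/env python3
"""Minimal sketch: terminate one L2 GRE (ip6gretap) tunnel on a Linux box
and bridge it toward the head-end.  Addresses and names are placeholders."""
import subprocess

LOCAL_V6 = "2001:db8::1"        # concentrator's IPv6 address
REMOTE_V6 = "2001:db8:100::10"  # the CN/DN's IPv6 address
TUNNEL = "gretap-cn1"
BRIDGE = "br-customers"

def ip(*args):
    subprocess.run(["ip", *args], check=True)

# ip6gretap = GRE over IPv6 carrying Ethernet frames (protocol 0x6558)
ip("link", "add", TUNNEL, "type", "ip6gretap",
   "local", LOCAL_V6, "remote", REMOTE_V6)

# Bridge the tunnel so the customer's L2 traffic pops out at the head-end
ip("link", "add", BRIDGE, "type", "bridge")
ip("link", "set", TUNNEL, "master", BRIDGE)
ip("link", "set", TUNNEL, "up")
ip("link", "set", BRIDGE, "up")
```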


GRE can carry the VLAN traffic. Not sure I am answering the question.


As described in the user guide:

The L2 bridge employs Ethernet over GRE (EoGRE) to carry customer traffic across the Terragraph network.
When the L2 bridge is enabled, all CNs and DNs automatically create an EoGRE tunnel to their PoP node, and the PoP node creates a tunnel back to each of those CNs/DNs. The tunnel can carry both IPv4 and IPv6 customer traffic between the CN and the PoP; IPv6 over the tunnel can optionally be disabled from the UI.


All tunnels at the PoP get bridged. In Release 1.1, we will introduce VLAN trunk ports.


Great response, thank you!

Confirming MikroTik doesn’t support 0x6558; just posting for anyone else diving into this. What they actually support is the standard (or near-standard, this is MikroTik after all) Layer 3 GRE, 0x0800. Their EoIP is essentially 0x6558, but in their own proprietary version.

Are you going to expose the tunnel config via an API or anything I can query? Let’s say I drop in a Linux box as the external concentrator; it sure would be nice to build those GRE tunnels automatically by querying cnMaestro or the E2E controller or something.

Yes, cnMaestro has an API to query the topology. The topology contains the list of nodes and their IPv6 addresses, so a Linux box could build the tunnels automatically using this API.
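A hedged sketch of what that automation might look like from the Linux box. The endpoint path, auth header, and JSON field names here are placeholders to be checked against the cnMaestro API documentation; the point is just: pull the topology, read each node’s IPv6 address, and emit one ip6gretap tunnel per node:

```python
#!/usr/bin/env python3
"""Sketch of the 'Linux box builds the tunnels itself' idea.
The endpoint path, auth header, and JSON field names are placeholders;
check the cnMaestro API documentation for the real ones."""
import requests

CNMAESTRO = "https://cnmaestro.example.com"   # placeholder host
TOKEN = "REPLACE_ME"                          # placeholder API token
LOCAL_V6 = "2001:db8::1"                      # concentrator's IPv6 address

resp = requests.get(
    f"{CNMAESTRO}/api/placeholder/topology",  # hypothetical endpoint path
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

for node in resp.json().get("nodes", []):     # field names assumed
    name = node.get("name", "node")
    remote_v6 = node.get("ipv6")
    if not remote_v6:
        continue
    # Print the commands for review rather than running them blindly;
    # they could just as easily be handed to subprocess.run().
    # (Keep names short: Linux limits interface names to 15 characters.)
    print(f"ip link add gt-{name} type ip6gretap "
          f"local {LOCAL_V6} remote {remote_v6}")
    print(f"ip link set gt-{name} master br-customers up")
```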


What are the chances Cambium will offer some sort of concentrator hardware? Something that can do these GRE tunnels but also handle SRv6, etc., so we have a complete solution from Cambium for this.

Ideally, the customer side looks like Layer 2, the head-end looks like Layer 2, and we don’t have to concern ourselves with any of the Terragraph peculiarities.

Maybe a small appliance that can terminate these tunnels and also has a 10G switch to handle the PoP DN connections, etc.
