Just to start this off, I have no intention of using the tunneling on the PoP DNs. I don’t want to run into a scaling issue down the road, and I’m planning on a large part of my radio mix being MDUs. I had considered using the tunnel and running PPPoE over it to simplify deployment, requiring the client to do PPPoE on their device. I’m not sure whether we can run GRE tunnels from non-PoP DNs/CNs to a separate GRE endpoint (like a Mikrotik router), so that’s a little up in the air. Plus I’m not the biggest fan of PPPoE.
I’m leaning towards a demarc router at each site (MDU or single customer) doing EoIP over IPv6 to the core, essentially using cnWave as a meshed backhaul network. I’m not sure I like this model because I don’t really want Mikrotik routers all over the place. I would much rather have a direct connection on single units or a single switch on MDUs. Ideally, SRv6 to VLAN tagging so I can just do port VLANs, but SRv6 is down the road.
MDUs in our network get a bridged connection to either an L3 switch or a router, plus an L2 managed switch for the Ethernet hand-off, and we route client data. We originally used PPPoE and have been moving to a more IPoE method. If you need client separation/isolation, use QinQ and route the provider VLAN.
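For anyone unfamiliar with the QinQ approach, here is a minimal sketch on a Linux box (interface names, VLAN IDs, and the address are placeholders, not anyone’s actual config): the outer S-tag identifies the provider/MDU, the inner C-tag the customer, and you route the provider VLAN as usual.

```shell
# Placeholder names/IDs; adjust to your hardware and addressing plan.
# Outer (provider) tag uses 802.1ad; inner (customer) tag uses 802.1Q.
ip link add link eth0 name eth0.100 type vlan proto 802.1ad id 100       # S-VLAN 100
ip link add link eth0.100 name eth0.100.10 type vlan proto 802.1q id 10  # C-VLAN 10
ip link set eth0.100 up
ip link set eth0.100.10 up
# Route the provider VLAN: address the inner VLAN and route like any other subnet.
ip addr add 100.64.10.1/24 dev eth0.100.10
```

Each customer then lands on their own C-VLAN, which gives you the per-client isolation without PPPoE.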
Have you looked at Positron? We have been looking at it as a solution for some of the older MDUs that would otherwise need cables run up the outside (which building owners have been very vocal about not allowing). It’s designed to be fed from a fiber connection but will accept any Ethernet-based connection.
You are wrong.
cnWave passes IPv4 and IPv6. It’s a bridge; what matters is what’s on each end of the bridge.
Whatever the infrastructure of the client’s MDU (unknown on-prem cable types and all), once you place your router/L3 switch on site, you can decide what passes to and from where, and in which IP scheme, or both!
If you’re an IPv6-only setup, then provide the client with IPv6 and a router that does private-IPv4-to-IPv6 translation. Otherwise, you can provide a routed IPv4 address to the client at the client’s router. Set the cnWave data VLAN to something you’re not using on your end and place a route in your tower router sending that traffic to that VLAN, so you can use the management VLAN to manage the link like any other PtP link. The client side can simply be configured so that all ingress packets get tagged with the data VLAN and all egress packets have the tag removed.
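The tag-on-ingress/untag-on-egress behavior described above is just an access port. As a sketch, assuming a Linux-based demarc device (interface names and the VLAN ID are placeholders; Mikrotik bridge VLAN filtering can express the same thing):

```shell
# Placeholder names/IDs; bridge with VLAN filtering enabled.
ip link add br0 type bridge vlan_filtering 1
ip link set eth0 master br0   # uplink toward the cnWave radio (tagged)
ip link set eth1 master br0   # customer-facing port (untagged)
# Trunk the data VLAN toward the radio:
bridge vlan add dev eth0 vid 100
# Access port toward the customer: ingress is tagged 100, egress is untagged.
bridge vlan add dev eth1 vid 100 pvid untagged
bridge vlan del dev eth1 vid 1   # drop the default VLAN 1 membership
ip link set br0 up; ip link set eth0 up; ip link set eth1 up
```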
Yes. In 1.0.1 beta, we added an option for external tunnel concentrator. This Knowledge Base article has more details:
To any router supporting standard IPv6 L2 GRE, i.e., protocol 0x6558 (Transparent Ethernet Bridging). I think Mikrotik supports proprietary EoIPv6 tunnels, not standard IPv6 GRE.
GRE can carry the VLAN traffic. Not sure I am answering the question.
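For the external-concentrator side, a standard IPv6 EoGRE (0x6558) endpoint can be sketched on Linux with `ip6gretap` (all addresses and interface names below are placeholders):

```shell
# local = concentrator's tunnel source address, remote = the PoP/CN tunnel address.
ip link add tg-gre0 type ip6gretap local 2001:db8::1 remote 2001:db8:1::10
ip link set tg-gre0 up
# Bridge the tunnel into the service network; VLAN tags ride inside the tunnel.
ip link add br-svc type bridge
ip link set tg-gre0 master br-svc
ip link set br-svc up
```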
As described in the user guide:
L2 bridge employs Ethernet over GRE (EoGRE) to carry the customer traffic across the Terragraph network.
When L2 bridge is enabled, all CNs and DNs will automatically create an EoGRE tunnel to their PoP node, and the PoP node will create a tunnel back to each of those CNs/DNs. The tunnel can carry both IPv4 and IPv6 customer traffic between CN and PoP; IPv6 over the tunnel can optionally be disabled from the UI.
All tunnels at the PoP get bridged. In Release 1.1, we will introduce VLAN trunk ports.
Confirming that Mikrotik doesn’t support 0x6558; just posting for anyone else diving into this. What they actually support is standard (or near-standard, this is Mikrotik after all) layer-3 GRE, 0x0800. Their EoIP is essentially 0x6558, but their own proprietary version of it.
Are you guys going to expose the tunnel config via an API or anything I can query? Say I drop in a Linux box as the external concentrator; it sure would be nice to build those GRE tunnels automatically by querying cnMaestro or the E2E controller or something.
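In case it helps anyone thinking along the same lines, here is roughly what that automation could look like. Everything in this sketch is an assumption: the endpoint path, port, and JSON field names are placeholders, so check the API reference for your controller release before using any of it.

```shell
# SKETCH ONLY: endpoint path and JSON fields are assumptions, not a documented API.
CTRL="http://e2e-controller:8080"
curl -s -X POST "$CTRL/api/getTopology" -d '{}' \
  | jq -r '.nodes[] | select(.pop_node == false) | .name' \
  | while read -r node; do
      # Look up each node's tunnel endpoint address here (placeholder step),
      # then create the matching ip6gretap interface on the concentrator.
      echo "would create ip6gretap tunnel for $node"
    done
```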