To configure the FireBrick as a PPPoE BRAS, you need to configure a
PPPoE link (similar to configuring a client) - either by defining a
ppp top-level object in the XML config or by using the
web user interface. PPPoE links can be created from the "Interface" category
icon under the section "PPPoE settings", where you can click to "Add" a new
link. The mode should be set to bras-l2tp.
When the FireBrick is acting as a server, you should also configure
the l2tp object to contain a suitable
incoming configuration. This is because the PPPoE
connections appear as if they've arrived via L2TP, so they have the same
options of local IP termination or relay via L2TP onwards to another LNS.
See Section 14.3.1 for more information about the
handling of incoming L2TP tunnels.
Since the BRAS uses these "virtual" L2TP connections, many of
the options in the ppp object are ignored when acting as a
server. Instead, you should configure the incoming PPPoE
connections using the L2TP settings.
The BRAS can be associated with an L2TP incoming section by matching the
ac-name of the BRAS ppp entry with the remote-hostname
of the incoming L2TP entry. Note that when ac-name is specified on the
BRAS, the client must also be configured with a matching ac-name.
To avoid this, the name element of the BRAS ppp entry can be used
instead: if no ac-name is specified, the name is matched against the
remote-hostname. This makes it possible to associate BRAS ppp and L2TP
incoming sections without needing to specify an ac-name on the client.
If a remote-hostname is not specified for an incoming
section, then that configuration will match with all remaining BRAS entries, so make sure
that specific L2TP incoming sections occur first in your config file.
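As an illustrative sketch of name-based matching (the name bras-a and the elided attributes are placeholders, not values from a real configuration):

```xml
<!-- BRAS entry with no ac-name: the name attribute is used for matching -->
<ppp port="PPP_PORT" mode="bras-l2tp" name="bras-a"/>
...
<l2tp>
	<!-- Matched against the BRAS entry above via remote-hostname -->
	<incoming remote-hostname="bras-a" ... />
	<!-- No remote-hostname: matches all remaining BRAS entries, so it must come last -->
	<incoming ... />
</l2tp>
```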
L2TP settings can be created from the web user interface by selecting
the "Tunnels" category icon and selecting "Edit L2TP settings". Select "Add
an incoming connection". You can specify various options such as
pppdns and the local-hostname (i.e.
the hostname reported by the BRAS).
A simple configuration using local authentication might contain:

<ppp port="PPP_PORT" mode="bras-l2tp" name="bras-example"/>
...
<l2tp>
	<incoming remote-hostname="bras-example" local-ppp-ip="..." pppdns1="...">
		<match username="..." password="..." remote-ppp-ip="..." />
		<match username="..." password="..." remote-ppp-ip="..." />
	</incoming>
</l2tp>

More complex configurations would typically use RADIUS to decide whether the session is accepted and what settings should be applied, or might relay sessions down an L2TP tunnel.
Just like for the PPPoE client, the BRAS mode supports baby jumbo frame negotiation to allow full 1500 byte MTU operation (as described earlier).
If an interface is configured to work in PPPoE BRAS mode, it can accept packets with an additional VLAN tag. This tag is passed as the NAS_PORT on RADIUS requests relating to the connection, and reply packets have the same VLAN tag added. If the interface is set up on VLAN 0 (untagged), the additional VLAN tag is only processed when no interface or ppp setting is configured for that specific VLAN.
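As a minimal sketch of this behaviour (the port and name here are hypothetical), a BRAS entry on an untagged interface that should accept per-subscriber VLAN tags needs no special attribute:

```xml
<!-- ppp on VLAN 0 (untagged); an incoming PPPoE packet may carry one
     extra VLAN tag, which is reported as the NAS_PORT in RADIUS requests -->
<ppp port="PPP_PORT" mode="bras-l2tp" name="bras-untagged"/>
```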
When an incoming PPPoE connection occurs, all we reliably know is the VLANs it came in on and the client MAC address. We therefore don't have the username or other information that we might use when steering other protocols (such as L2TP), so some slightly different approaches are taken.
The FireBrick balancing mechanisms for PPPoE act by suppressing PADI responses on units that do not want particular connections, as there isn't a general mechanism in the protocol to direct connections to a specific unit.
Several FireBricks can act as a receiving cluster, sharing information about relevant configuration and load-state by sending small UDP broadcast packets over a LAN.
The interface for this controlling LAN is configured via the port and vlan attributes of the ppp-balance tag in the configuration.
PPPoE connections may be assigned to a "balance group". These help to spread load more evenly around a pool of PPPoE servers, by suppressing responses to PADIs on units with higher load and allowing those with more capacity to take any new connections.
The metric used for load is the total rx and tx connection speed limit in bytes for all existing sessions on the unit, both L2TP and PPPoE, here referred to as the "committed capacity".
The capacity attribute sets a target maximum, which is used when determining the available capacity of the unit. This would usually be the amount of committed capacity you would expect to be able to handle without contention; this is often a multiple of the unit's actual capacity, as it is very unlikely for all of it to be used at once. If the unit is doing a lot of something else, e.g. routing, you may want to lower this to steer more of the load elsewhere.
PPPoE entries where the mode is bras-l2tp can be assigned to a group via the balance-group attribute. Different units receiving the same PADI should be configured so that the bras entries for the same PADI map to the same balance group.
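This can be sketched as a config fragment (the name, port, and group number are illustrative, not taken from a real configuration); each unit in the cluster would carry an entry mapping the same PADIs to the same group:

```xml
<!-- Same balance-group on every unit that receives these PADIs -->
<ppp port="PPP_PORT" mode="bras-l2tp" name="bras-example" balance-group="1"/>
```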
For units with the same priority we prefer the one with the highest available capacity.
Lower priority units will only accept connections when they have capacity and there are no higher priority options with remaining capacity.
The equivalence-threshold parameter determines how much more capacity another peer must have before this unit stops accepting new connections. This allows several similarly loaded units to be accepting simultaneously, which can help in failover cases or with large bursts of new connections.
To operate optimally, the networks should be configured so that when all members of the group can see each other on the broadcast interface, they are also receiving the same PADIs. Members of a group that do not appear to be receiving as many PADIs as their peers will be suppressed, providing a backstop if a misconfiguration occurs between units or switches. This mechanism also kicks in if a network partition occurs in which the PADIs are not delivered to all units but the load-sharing packets are; however, this may take some time, unlike "normal" partition detection, which activates in less than a second.
It is also possible to direct all connections for a few specific client MACs onto a given unit. This mechanism overrides the balance groups and is only intended for temporary diversion of a small number of test connections.
A list of MACs may be supplied in the macs attribute of the ppp-balance tag.
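Putting the ppp-balance options together, a hedged sketch (the port and VLAN values, the elided capacity and threshold figures, and the MAC are all placeholders; check the value formats your software version expects):

```xml
<!-- Control LAN for load-sharing broadcasts, target capacity,
     and a temporary MAC override for test connections -->
<ppp-balance port="LAN_PORT" vlan="999" capacity="..."
	equivalence-threshold="..." macs="00:11:22:33:44:55"/>
```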