WireGuard on Brume, any real life figures?

I am considering the Brume as I have a 500 Mbps connection and the 750S-Ext is too slow to carry my VPN traffic in this day and age. I see the Brume claims a VPN connection speed of up to 280 Mbps, while the 750S-Ext claims up to 68 and I only get 25 to 50 Mbps in practice. Given that gap between claimed and real figures, I am interested in seeing how the Brume fares in real-life conditions.

I really appreciate any response including real-life results here. Please also mention your VPN provider (if you host your own, just say so, or name your hosting provider if possible). I am using GCE and Cloudflare Warp personally.

And what VPN protocol are you talking about here? OpenVPN? Wireguard?

OpenVPN is much slower because the processors used in GL routers usually don’t have crypto engines built in to do hardware-accelerated encryption, so everything is done in software. WireGuard, on the other hand, was built from the ground up to be fast, so its theoretical figures are much better, as are its real-world ones.


WireGuard.


@jeffsf @sfx2000 Could you guys give him a real-world WireGuard test on your Brumes? I don’t have mine yet; I will add my info here at a later date.


Don’t have much to share - as I don’t use commercial VPN providers.

Recent trip I had brume hosting WG server at home, but that connection is a 300/15 connection, over customer cable modem.

Brume WG end-point was behind a pfSense SG-2440, so no concern there - SG-2440 has no issues with 300/15 connections.

Should note that the 15 Mbps uplink is really the limit here with a self-hosted WireGuard or OpenVPN server…
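As a reference point, a self-hosted setup like this boils down to a short WireGuard config. The addresses, port, and key placeholders below are illustrative assumptions, not the actual setup described above:

```
# /etc/wireguard/wg0.conf on the home endpoint -- illustrative sketch
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820            # forward this UDP port on the upstream firewall
PrivateKey = <server-private-key>

[Peer]
# the travelling client (e.g. a USB150)
PublicKey  = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

And note the uplink cap applies regardless of hardware: download speed at the remote client is bounded by the home connection’s 15 Mbps upload rate.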

The client end-point was a USB150 acting as a WG client, so that’s a bit of a limit as well. The hotel WiFi wasn’t that great, and very crowded at that… the USB150 was seeing at best around 500 Kbps.

That being said - 50Mbps on Brume is certainly doable on both OVPN and WG, I have no doubt there


@jeffsf @sfx2000 You guys can also do some LAN server to client throughput tests, to see what the maximum is.

Time is the fire in which we all burn - don’t have much time to set up a LAN side test harness, as I’m busy with $$$DayJob right now…

Below are the results of some benchmark throughput testing with WireGuard, routing, and NAT on my GL-AR750S (OpenWrt master firmware from August, 2019) and Brume (as-delivered GL.iNet firmware) using flent and the multi-stream tcp_8down, tcp_8up, and RRUL tests.

These tests all run at least 8 concurrent TCP streams and are much more aggressive than single-stream iperf or netperf testing.
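For anyone wanting to reproduce these runs, the flent invocations look roughly like the following. The server address and run length are placeholders, not the values used above, and the far end needs netperf’s netserver running:

```shell
# Run from a test PC behind the router under test; 192.168.8.100 is a
# hypothetical host running netserver on the far side of the tunnel.
flent tcp_8down -H 192.168.8.100 -l 60 -t "brume-wg" -o tcp_8down.png
flent tcp_8up   -H 192.168.8.100 -l 60 -t "brume-wg" -o tcp_8up.png
flent rrul      -H 192.168.8.100 -l 60 -t "brume-wg" -o rrul.png
```

Each run also saves a `.flent.gz` data file, so plots and summary statistics can be regenerated later without re-running the test.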

SQM, when used, is “piece of cake” (the piece_of_cake.qos script with the cake qdisc). The search looks for the highest throughput where the eight streams are “within 1%” of each other.
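For context, “piece of cake” refers to the piece_of_cake.qos script shipped with OpenWrt’s sqm-scripts package. A minimal /etc/config/sqm looks roughly like this; the interface name and shaped rates here are illustrative, not the values used in the tests above:

```
config queue 'eth1'
	option enabled '1'
	option interface 'eth1'          # the WAN-facing interface
	option download '160000'         # kbit/s, set a bit below line rate
	option upload '160000'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
```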

Latency is significantly less when either device isn’t being pushed to its limits.

Packet loss and latency may impact throughput in “real-world” situations. I don’t have a link at home that is sufficient to reliably measure at these rates.

More details on the testing at

(The Brume results are not posted there as I have not yet built and flashed comparable Brume firmware from OpenWrt master or the GL.iNet sources.)

I also have not tested performance under Ubuntu, which may be different due to the compiler optimizations for OpenWrt (typically for size) and that of Ubuntu (typically for performance).

Key: each cell gives throughput in Mbit/s with ping time in ms in parentheses, e.g. 290 (8) means 290 Mbit/s throughput at 8 ms ping.

WireGuard, Routing/NAT

| Device | 8 Dn | 8 Up | RRUL | 8 Dn / SQM | 8 Up / SQM | RRUL / SQM |
| --- | --- | --- | --- | --- | --- | --- |
| GL-AR750S (flow offload disabled) | 80 (33) | 75 (34) | 75 (37) | 41 (9) | 50 (9) | 46 (10) |
| GL-MV1000 (flow offload disabled) | 260 (9) | 260 (8) | 220 (12) | 161 (8) | 163 (5) | 156 (6) |
| GL-MV1000 (flow offload, as-shipped) | 290 (8) | 260 (8) | 270 (9) | 168 (8) | 196 (6) | 176 (6) |

Numbers look reasonable - and agree that flent is a better tool for testing routers compared to iperf/netperf, and gives better feedback.

Thought that SW flow-offload and SQM/Cake were still a bit incompatible, but my head has been deep in $DayJob for the last couple of weeks.

Would be nice to see Brume on master at some point - I do feel it’s one of the better 3720 examples out there that are publicly available - espressoBIN does have its challenges :wink:

I did sync up to GL-Inet, and checked out 19.07, but right now it fails to build due to dependency issues that need further review.

Apologize for the uninformed questions:

This test of the Brume suggests that, if you had a link much faster than 260-290 Mbps (e.g. 1 Gbps), the fastest the Brume hardware could route, firewall, and run WG is still around 260-290 Mbps?

Do you have any tests where the brume is not running a VPN, i.e. it is acting simply as an edge router on a very fast connection? Should the mvebu a53 target on your OpenVPN forum post be representative?

The Brume and newly released Brume-W both can do 800+ Mbps from WAN to LAN.
I can’t find old posts with this figure, but it has been asked in the Brume-W kickstarter comments:

https://www.kickstarter.com/projects/glinet/brume-w-pocket-sized-wireless-gateway-for-edge-computing/comments

Brume can do 800+ Mbps on straight NAT from WAN to LAN, last time I checked (I was an early Brume tester)

Convexa-B (B1300) is similar, but lower performance on WG/OVPN as it’s CPU bound compared to the Brume (4× Cortex-A7 vs 2× Cortex-A53)

You were too busy with your day job back then :smiley:

Still too busy with DayJob, LOL…

I did a project on 3720 - it’s a surprising performer… not as good as the Armada 38x, but good enough.

GL-Inet did pick up more than a few fixes compared to the Marvell reference platform with Brume

sfx

I recently bought a GL.iNet Brume router and tested the speed.

I used the TorGuard VPN provider with a UK location:

I have a 500 Mbps internet connection; through the Brume I’m getting 300 Mbps on a wired connection and 272-279 Mbps on wireless easily.

Then I installed OpenWrt on a Raspberry Pi 4 with two Ethernet interfaces (one as LAN connecting my Apple AP, the other as WAN connecting to my ISP), configured WireGuard with TorGuard, and I’m getting 497 Mbps wired and 479 Mbps wireless. So I’m happy to keep the Raspberry Pi 4 :slight_smile:
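For anyone wanting to reproduce a setup like this, the OpenWrt side of a WireGuard client to a commercial provider reduces to UCI config along these lines. The keys, endpoint, and tunnel addresses below are placeholders, not TorGuard’s actual values:

```
# /etc/config/network (fragment) -- illustrative WireGuard client setup
config interface 'wg0'
	option proto 'wireguard'
	option private_key '<client-private-key>'
	list addresses '10.13.0.2/24'

config wireguard_wg0
	option public_key '<provider-public-key>'
	option endpoint_host 'uk.example-vpn.net'
	option endpoint_port '51820'
	option route_allowed_ips '1'
	option persistent_keepalive '25'
	list allowed_ips '0.0.0.0/0'
```

With `route_allowed_ips` set and `allowed_ips` of `0.0.0.0/0`, all traffic is routed through the tunnel; the `wireguard_wg0` section defines the provider as a peer of the `wg0` interface.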

I’m using CAT 8 cables for router and PC connection.

I hope this helps.

Qamar


So a Pi is still faster? I’m on LTE, so I can’t get speeds that fast anyway; a Pi should be fine for me then?