<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://2guystek.tv/feed.xml" rel="self" type="application/atom+xml" /><link href="https://2guystek.tv/" rel="alternate" type="text/html" hreflang="en" /><updated>2026-04-14T17:45:36+00:00</updated><id>https://2guystek.tv/feed.xml</id><subtitle>Welcome to 2GuysTek!</subtitle><author><name>2GT_BK</name></author><entry><title type="html">MikroTik Manager Is Now Open Source — You Asked, Here It Is</title><link href="https://2guystek.tv/homelab/networking/infrastructure/2026/04/13/mikrotik-manager-open-source-release.html" rel="alternate" type="text/html" title="MikroTik Manager Is Now Open Source — You Asked, Here It Is" /><published>2026-04-13T00:00:00+00:00</published><updated>2026-04-13T00:00:00+00:00</updated><id>https://2guystek.tv/homelab/networking/infrastructure/2026/04/13/mikrotik-manager-open-source-release</id><content type="html" xml:base="https://2guystek.tv/homelab/networking/infrastructure/2026/04/13/mikrotik-manager-open-source-release.html"><![CDATA[<p>I said I probably wouldn’t release it publicly.</p>

<p>You all had other plans.</p>

<hr />

<h2 id="you-made-me-do-this">You Made Me Do This</h2>

<p>A few days ago I published a post (and video) about a project I vibe-coded over a handful of nights using Claude Opus inside VS Code — a unified MikroTik management platform. Something like UniFi or Meraki, but for MikroTik hardware. I fully expected the response to be split between people who were excited about the concept and people who were ready to tar and feather me for writing production code with AI. And honestly, it was. But what I did <em>not</em> expect was <strong>over 70 comments</strong> asking me to open source and release the project.</p>

<p>Seventy. I’ve been doing this long enough to know that the community doesn’t usually agree on anything in that kind of volume or with that kind of consistency. So I listened.</p>

<p><strong><a href="https://github.com/2GT-Media-Group-LLC/mikrotik-manager">MikroTik Manager is now live on GitHub.</a></strong></p>

<p>Before we get into the how-to, I want to set some expectations up front, because I’d rather be honest with you than have you feeling burned later.</p>

<hr />

<h2 id="setting-expectations-this-is-a-beta-release">Setting Expectations: This Is a Beta Release</h2>

<p>This is version <strong>0.10.0 Beta</strong>. It works. I’ve been running it in my own homelab and I use it regularly. But it is beta software, and it was built by one person with an AI assistant over a few evenings — not by a team of engineers who went through a formal development and QA process.</p>

<p><strong>Updates will happen on my schedule.</strong> I’m a dad, I work in IT full-time, and I make videos on the side. When I feel like adding features or fixing bugs, I will. I’m not committing to a roadmap, a release cadence, or an SLA of any kind. If that’s a dealbreaker for you, I totally get it — but I wanted to be upfront rather than ghost a community that’s shown this much interest.</p>

<p>What I <em>will</em> say is this: contributions are welcome. If you’re a developer and you want to add something, fix something, or improve something, the door is open. More on that later.</p>

<p>With that said — let’s talk about what this thing actually does, because I think you’re going to like it.</p>

<hr />

<h2 id="what-is-mikrotik-manager">What Is MikroTik Manager?</h2>

<p>MikroTik Manager is a <strong>self-hosted, full-stack network management platform</strong> for MikroTik devices. It gives you a single web interface — a real one, with a modern UI — to monitor, configure, and manage your entire MikroTik infrastructure: routers, switches, and wireless access points.</p>

<p>If you’ve been relying on Winbox, the built-in web admin (which is basically Winbox in a browser), or raw SSH to manage your MikroTik gear, this is meant to be a significant quality-of-life upgrade. It communicates with your devices via the <strong>RouterOS API</strong> and SSH, so no agents, no special firmware, and no changes to your existing network are required beyond enabling the API service on each device you want to manage.</p>
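<p>Enabling the API takes a couple of commands on each device, from a Winbox terminal or an SSH session. These are standard RouterOS service commands; the optional <code class="language-plaintext highlighter-rouge">address</code> restriction is worth setting if your management host lives on a known subnet:</p>

```
/ip service set api disabled=no
/ip service print
# optional: only allow the management subnet to reach the API
/ip service set api address=192.168.88.0/24
```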

<hr />

<h2 id="features-the-full-breakdown">Features: The Full Breakdown</h2>

<p>Let me walk you through everything the platform does right now, section by section.</p>

<h3 id="dashboard">Dashboard</h3>

<p>The dashboard is your command center. At a glance you get:</p>

<ul>
  <li><strong>Live KPI cards</strong> — total devices in the system, online vs. offline count, total connected wireless clients, and active alert count</li>
  <li><strong>Device type distribution chart</strong> — a quick visual breakdown of routers vs. switches vs. APs in your environment</li>
  <li><strong>Historical client count graph</strong> — track connected client counts over time with adjustable ranges from 1 hour out to 30 days</li>
  <li><strong>Firmware update notifications</strong> — if any of your devices have a newer firmware version available, it shows up here with per-device detail</li>
</ul>

<p>This is the view I have open most of the time. It gives you a fast health check of the whole environment without having to dig into individual devices.</p>

<h3 id="device-management">Device Management</h3>

<p>Adding a device is straightforward: give it a name, IP address, credentials, and API port, and the platform takes it from there. From that point on, it polls each device automatically for status, model, firmware version, and RouterOS version.</p>

<p>Beyond the basics, each device supports:</p>
<ul>
  <li><strong>Per-device notes</strong> — document whatever you want about that piece of gear</li>
  <li><strong>Rack location</strong> — record the rack name, slot, and physical address</li>
  <li><strong>Map integration</strong> — enter a physical address and the platform generates a map pin for that device, which also populates a global map on the dashboard showing where all your gear lives</li>
  <li><strong>Credential encryption at rest</strong> — device passwords are encrypted in the database, not stored in plain text</li>
</ul>

<p>The device list gives you a quick view of all your hardware: names, IPs, model, firmware, status, last-seen timestamps, and controls to refresh or remove a device.</p>

<h3 id="per-device-views">Per-Device Views</h3>

<p>Each device has its own detail page with multiple tabs. Here’s what each one covers.</p>

<h4 id="overview">Overview</h4>

<p>The overview tab is the first thing you land on for any device. You get:</p>
<ul>
  <li>Cards for current CPU load, memory usage, uptime, and OS version</li>
  <li>A full system info card covering everything from the model and firmware version down to the device type, API port, and when it was last contacted</li>
  <li>The physical details card for rack and location info</li>
  <li>An in-line map from the physical address you entered</li>
  <li>Buttons to open the device’s native web admin, launch a <strong>draggable in-browser SSH terminal</strong> directly to the device, and force an immediate configuration sync</li>
</ul>

<p>That SSH terminal is one of my favorite features. You’re already in the management platform — being able to drop to a terminal on the device without opening a separate SSH client is a genuine quality-of-life improvement.</p>

<h4 id="ports-switches">Ports (Switches)</h4>

<p>This tab is where I spent the most time during the build, and it shows. The platform has no hard-coded knowledge of MikroTik hardware models. When you add a switch, it pulls port data from the device and <strong>dynamically builds a visual port diagram</strong> that represents the actual physical hardware layout. Port states are color-coded: red for offline, green for online, blue for selected.</p>

<p>Below the diagram is a full port list showing name, status, speed, MTU, default VLAN, MAC address, comments, and a per-port reload control.</p>

<p>Selecting a port drops you into:</p>
<ul>
  <li><strong>Throughput and packet graphs</strong> with 1, 3, 6, 12, and 24-hour time range selectors</li>
  <li>Detailed port info: state, rate, duplex, auto-negotiation, RX/TX flow control</li>
  <li><strong>Transceiver details</strong> — for SFP+ ports, it identifies the connected cable type and reports all available optic information from the transceiver</li>
</ul>

<p>Port configuration is done directly in the interface — enable/disable, comment, MTU, PoE settings (where supported), link settings, and VLAN assignment. <strong>Select multiple ports at once</strong> to configure LAGs and LACP trunks in a single operation.</p>

<h4 id="vlans-switches">VLANs (Switches)</h4>

<p>A clean list of all configured VLANs, their IDs, names, associated bridges, and a tagged/untagged port breakdown per VLAN. Each row has edit and delete actions inline. The <strong>Add VLAN</strong> button opens a creation dialog to define a VLAN ID, associate it to a bridge, and assign tagged or untagged ports — no CLI required.</p>

<h4 id="routing-routers">Routing (Routers)</h4>

<p>For devices in router mode, this tab surfaces full routing management:</p>
<ul>
  <li><strong>Route table</strong> — view and manage static routes</li>
  <li><strong>OSPF</strong> — configuration and management</li>
  <li><strong>BGP</strong> — configuration and management</li>
  <li><strong>Route Filters and Route Tables</strong></li>
</ul>

<h4 id="firewall">Firewall</h4>

<p>MikroTik’s firewall capability is powerful, but navigating it in Winbox is not fun. This tab surfaces all configured firewall rules in a readable format and lets you create and manage rules through the UI.</p>

<h4 id="config">Config</h4>

<p>Your one-stop shop for device-level settings:</p>
<ul>
  <li>Device name</li>
  <li>Date, time, timezone, and NTP server</li>
  <li>DNS servers</li>
  <li>Management IP configuration</li>
  <li><strong>Built-in firmware update checker</strong> — click the button, find out if there’s a new version, and if there is, you can install it and reboot the device directly from the platform</li>
</ul>

<h4 id="hardware">Hardware</h4>

<p>Rich telemetry for the physical device:</p>
<ul>
  <li><strong>CPU and memory usage graphs</strong> with 6h, 12h, 24h, and 7-day ranges</li>
  <li><strong>Internal storage</strong> details and utilization</li>
  <li><strong>Temperature readings</strong> from all available sensors, with <strong>Fahrenheit/Celsius toggle</strong> for my friends outside the US</li>
  <li><strong>Fan status and RPM</strong> for all internal fans</li>
  <li><strong>Power supply status</strong></li>
  <li><strong>Voltage monitoring</strong> for barrel-powered and 2-wire devices</li>
</ul>

<h4 id="tools">Tools</h4>

<p>I wanted the platform to actually be useful for troubleshooting, so these diagnostics are built in:</p>
<ul>
  <li><strong>Reboot</strong> the device</li>
  <li><strong>Ping</strong> any address from any interface on the device</li>
  <li><strong>Traceroute</strong> from the device</li>
  <li><strong>IP Range Scan</strong> with optional reverse DNS lookups</li>
  <li><strong>Wake-on-LAN</strong> — transmit a WOL packet from the switch to any MAC address on your network</li>
</ul>
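<p>If you’ve never looked at what a WOL packet actually is, it’s charmingly simple: six <code class="language-plaintext highlighter-rouge">0xFF</code> bytes followed by the target MAC repeated sixteen times, sent as a UDP broadcast. Here’s a host-side Python sketch for the curious (the platform instead has the switch transmit it for you, which is the clever part, since the switch is always on the right L2 segment):</p>

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16  # 102 bytes total

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet; UDP port 9 (discard) is the usual convention."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```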

<hr />

<h3 id="wireless-management">Wireless Management</h3>

<p>The wireless section is one of the more fully-featured parts of the platform, especially if you’re running MikroTik APs alongside your switches.</p>

<p><strong>Per-AP SSID management</strong> — Create, edit, enable/disable, and delete wireless interfaces on individual APs directly from the UI. No Winbox, no SSH.</p>

<p><strong>Bulk SSID deployment</strong> — This is the one that will save you real time if you have more than a couple of APs. Push an SSID configuration to all managed APs simultaneously with a single action. Want every AP to have a guest SSID with consistent settings? Do it once instead of once per device.</p>

<p><strong>Security profile management</strong> — WPA2/WPA3, PSK, and EAP configurations managed from the platform.</p>

<p><strong>Hardware radio information</strong> — Band filtering and hardware radio details for both the RouterOS 7 <code class="language-plaintext highlighter-rouge">wifi</code> package and the legacy <code class="language-plaintext highlighter-rouge">wlan</code> package, so it works with older deployments too.</p>

<p><strong>Spectral scans</strong> — Schedule or trigger on-demand spectral scans per radio to see what’s going on in the RF environment around your APs.</p>

<p><strong>AP scans</strong> — Schedule or run on-demand scans for nearby access points. Useful for surveying the wireless landscape and identifying interference sources.</p>

<p><strong>Real-time radio monitoring</strong> — Live radio status as you’d expect.</p>

<p><strong>Wireless client tracking</strong> — All wireless clients with vendor lookup via OUI database. Know what’s connected and who made it.</p>
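<p>Vendor lookup from a MAC is simpler than it sounds: the first three bytes of every MAC address are an IEEE-assigned OUI. A toy Python version of the idea (the platform presumably ships the full OUI database; the prefixes below are from the public IEEE registry as I recall them, so double-check before relying on them):</p>

```python
# Tiny slice of an OUI table keyed by the first three MAC bytes.
# (Prefixes from the public IEEE registry as I recall them; verify before use.)
OUI_VENDORS = {
    "4C:5E:0C": "Routerboard.com (MikroTik)",
    "DC:2C:6E": "Routerboard.com (MikroTik)",
    "B8:27:EB": "Raspberry Pi Foundation",
}

def vendor_for_mac(mac: str) -> str:
    """Look up the vendor from the first three bytes (the OUI) of a MAC."""
    prefix = mac.upper().replace("-", ":")[:8]
    return OUI_VENDORS.get(prefix, "Unknown vendor")
```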

<hr />

<h3 id="network-services">Network Services</h3>

<p>This is where the platform goes from “nice device manager” to “real infrastructure tool.” Four core network services are managed from a unified interface, and each one supports <strong>multi-device management with conflict detection</strong> — meaning you can manage the same service across multiple MikroTik devices without stepping on yourself.</p>

<table>
  <thead>
    <tr>
      <th>Service</th>
      <th>What You Can Do</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>DHCP</strong></td>
      <td>IPv4 and IPv6 servers, address pools, static leases, live lease table</td>
    </tr>
    <tr>
      <td><strong>DNS</strong></td>
      <td>Upstream servers, static records (A/AAAA/CNAME/MX/NS/PTR/TXT/SRV), cache flush, DNS-over-HTTPS</td>
    </tr>
    <tr>
      <td><strong>NTP</strong></td>
      <td>Server (broadcast/manycast) and client (unicast/multicast) configuration, sync status</td>
    </tr>
    <tr>
      <td><strong>WireGuard</strong></td>
      <td>Interface management, peer configuration, public key display, RX/TX statistics</td>
    </tr>
  </tbody>
</table>

<p>The WireGuard management in particular is something I’ve wanted in a homelab tool for a long time. Having it alongside DHCP, DNS, and NTP in one place is genuinely useful.</p>

<hr />

<h3 id="network-topology">Network Topology</h3>

<p>The topology view builds an <strong>auto-discovered network map</strong> of your infrastructure using LLDP, CDP, and MNDP neighbor data pulled from all connected devices. The result is an interactive node graph with device type icons and protocol-priority link deduplication — meaning if two devices report the same link via multiple protocols, you see one clean connection, not three overlapping ones.</p>
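<p>The deduplication idea is easy to sketch: treat each link as an undirected device pair and keep only the report from the highest-priority protocol. A minimal Python illustration (the priority ordering here is my assumption, not necessarily the platform’s):</p>

```python
# Protocol-priority link deduplication: the same physical link may be
# reported by LLDP, CDP, and MNDP; keep one report per undirected pair.
PRIORITY = {"lldp": 0, "cdp": 1, "mndp": 2}  # lower number wins (assumed order)

def dedupe_links(links):
    """links: iterable of (device_a, device_b, protocol) tuples."""
    best = {}
    for a, b, proto in links:
        key = frozenset((a, b))  # undirected: (a, b) is the same link as (b, a)
        rank = PRIORITY.get(proto, len(PRIORITY))
        if key not in best or rank < PRIORITY.get(best[key][2], len(PRIORITY)):
            best[key] = (a, b, proto)
    return list(best.values())
```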

<p>It’s one of those features that’s genuinely impressive to look at the first time your topology renders correctly.</p>

<hr />

<h3 id="client-tracking">Client Tracking</h3>

<p>The clients section aggregates all known network clients seen across every device in the system into a single view. You can filter by device, client type, or active status, and search by MAC address, IP, or hostname. Each client has a detail page with connection history and vendor identification from the OUI database.</p>

<p>A historical client count metric gives you trend data over time — useful for spotting unusual spikes in connected devices.</p>

<hr />

<h3 id="backups">Backups</h3>

<p>Trigger RouterOS configuration backups on demand via SSH for any connected device. Backup files are stored within the platform and can be downloaded or deleted from the UI. Simple, but the kind of thing that matters when something goes wrong at 2am.</p>
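<p>For reference, these are the standard RouterOS commands for a binary backup and a plain-text config export. I haven’t confirmed the exact commands the platform issues over SSH, but they’ll be along these lines (the file names are just examples):</p>

```
/system backup save name=mm-backup
/export file=mm-export
```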

<hr />

<h3 id="alerts">Alerts</h3>

<p>Configurable alert rules with cooldown periods so you’re not getting hammered with repeat notifications:</p>

<ul>
  <li>Device online / offline events</li>
  <li>High CPU or memory usage (with configurable thresholds)</li>
  <li>SSL certificate expiry warnings</li>
  <li>Firmware update available</li>
  <li>RouterOS log errors and warnings</li>
  <li>New device discovered on the network</li>
</ul>

<hr />

<h3 id="global-search">Global Search</h3>

<p>A universal search bar in the top navigation lets you search across devices, clients, and events from anywhere in the platform. Type an IP, a MAC address, a hostname, or a device name and it returns relevant results instantly.</p>

<hr />

<h3 id="user-management-and-access-control">User Management and Access Control</h3>

<p>Multi-user with role-based access control:</p>

<ul>
  <li><strong>Admin</strong> — full access to everything, including user management</li>
  <li><strong>Operator</strong> — read/write access to devices and network features, no user admin</li>
  <li><strong>Viewer</strong> — read-only access</li>
</ul>

<p>JWT authentication with secure session handling. Admin-only user creation and role assignment. Default credentials on first run are <code class="language-plaintext highlighter-rouge">admin</code> / <code class="language-plaintext highlighter-rouge">admin</code> — <strong>change these immediately</strong>.</p>
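<p>For context on why the <code class="language-plaintext highlighter-rouge">JWT_SECRET</code> you set during deployment matters so much: anyone holding the signing secret can mint a valid admin token. The backend will use a proper library for this, but the HS256 mechanics are compact enough to sketch with the Python standard library (illustration only, not the platform’s code):</p>

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """header.payload.signature, signed with HMAC-SHA256 (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False
    expected = b64url(hmac.new(secret.encode(), f"{header}.{body}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

<p>The takeaway: token validity is purely a function of the secret, which is why a weak or default <code class="language-plaintext highlighter-rouge">JWT_SECRET</code> amounts to a full compromise.</p>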

<hr />

<h3 id="tls--https">TLS / HTTPS</h3>

<p>The platform generates a <strong>self-signed certificate automatically on first run</strong>, so you’re on HTTPS out of the box. If you have a real certificate (from Let’s Encrypt or your internal CA), you can upload it through the Settings UI. nginx handles TLS termination and automatic HTTP→HTTPS redirect.</p>

<hr />

<h2 id="the-tech-stack">The Tech Stack</h2>

<p>Here’s what’s running under the hood, for those who want to know what they’re deploying:</p>

<table>
  <thead>
    <tr>
      <th>Layer</th>
      <th>Technology</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Frontend</strong></td>
      <td>React 18, TypeScript, Vite, Tailwind CSS</td>
    </tr>
    <tr>
      <td><strong>State / Data</strong></td>
      <td>TanStack Query v5, React Router v6, Zustand</td>
    </tr>
    <tr>
      <td><strong>Charts</strong></td>
      <td>Recharts</td>
    </tr>
    <tr>
      <td><strong>Topology</strong></td>
      <td>@xyflow/react</td>
    </tr>
    <tr>
      <td><strong>Maps</strong></td>
      <td>Leaflet</td>
    </tr>
    <tr>
      <td><strong>Terminal</strong></td>
      <td>xterm.js</td>
    </tr>
    <tr>
      <td><strong>Backend</strong></td>
      <td>Node.js, Express, TypeScript</td>
    </tr>
    <tr>
      <td><strong>Primary DB</strong></td>
      <td>PostgreSQL 15</td>
    </tr>
    <tr>
      <td><strong>Time-series DB</strong></td>
      <td>InfluxDB 2.7</td>
    </tr>
    <tr>
      <td><strong>Cache / Queue</strong></td>
      <td>Redis 7, BullMQ</td>
    </tr>
    <tr>
      <td><strong>Real-time</strong></td>
      <td>Socket.IO</td>
    </tr>
    <tr>
      <td><strong>Device comms</strong></td>
      <td>RouterOS API (port 8728), SSH2</td>
    </tr>
    <tr>
      <td><strong>Proxy</strong></td>
      <td>nginx (TLS termination, static file serving)</td>
    </tr>
    <tr>
      <td><strong>Container</strong></td>
      <td>Docker Compose</td>
    </tr>
  </tbody>
</table>

<p>Everything ships as a Docker Compose stack. PostgreSQL for relational data, InfluxDB for the time-series metrics that power all the graphs, Redis for caching and the job queue, and nginx in front of everything. It’s a real stack — not a SQLite file and a Python script.</p>
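<p>One note on the RouterOS API, since it’s the heart of the whole thing: it’s a simple length-prefixed “word” protocol over TCP. The backend will be using a client library rather than hand-rolling frames, but the framing itself, as MikroTik documents it, fits in a few lines of Python if you ever want to poke at a device directly (a sketch of the wire format only; no networking here):</p>

```python
def encode_word(word: str) -> bytes:
    """Length-prefix one API word, per MikroTik's variable-length scheme."""
    data = word.encode()
    n = len(data)
    if n < 0x80:
        prefix = n.to_bytes(1, "big")
    elif n < 0x4000:
        prefix = (n | 0x8000).to_bytes(2, "big")
    elif n < 0x200000:
        prefix = (n | 0xC00000).to_bytes(3, "big")
    elif n < 0x10000000:
        prefix = (n | 0xE0000000).to_bytes(4, "big")
    else:
        prefix = b"\xf0" + n.to_bytes(4, "big")
    return prefix + data

def encode_sentence(*words: str) -> bytes:
    """An API sentence is its words followed by a zero-length terminator word."""
    return b"".join(encode_word(w) for w in words) + b"\x00"
```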

<hr />

<h2 id="requirements">Requirements</h2>

<p>Before you deploy:</p>

<ul>
  <li><strong>Docker and Docker Compose v2+</strong> on the host you’re deploying to</li>
  <li><strong>MikroTik devices running RouterOS 6.x or 7.x</strong> with the API service enabled</li>
  <li><strong>Network access</strong> from the host running MikroTik Manager to your devices on port <strong>8728</strong> (or your configured API port)</li>
</ul>

<p>That’s it. No special networking, no agents on the MikroTik side, just the RouterOS API service turned on.</p>
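<p>If a device stubbornly shows as offline after you add it, rule out plain TCP reachability before anything else. Here’s a quick check you can run from the Docker host with stock Python (a hypothetical helper, not something shipped with the platform):</p>

```python
import socket

def api_reachable(host: str, port: int = 8728, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the RouterOS API port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

<p>If this returns <code class="language-plaintext highlighter-rouge">False</code>, check the device’s <code class="language-plaintext highlighter-rouge">/ip service</code> settings and any firewall rules between the host and the device.</p>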

<hr />

<h2 id="quick-start">Quick Start</h2>

<h3 id="1-clone-the-repository">1. Clone the Repository</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/2GT-Media-Group-LLC/mikrotik-manager.git
<span class="nb">cd </span>mikrotik-manager
</code></pre></div></div>

<h3 id="2-configure-your-environment">2. Configure Your Environment</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cp</span> .env.example .env
</code></pre></div></div>

<p>Open <code class="language-plaintext highlighter-rouge">.env</code> and at minimum change these two values — and I mean it, actually change them:</p>

<pre><code class="language-env">JWT_SECRET=your_long_random_jwt_secret_here
ENCRYPTION_KEY=your_32_character_encryption_key
</code></pre>

<p>The <code class="language-plaintext highlighter-rouge">JWT_SECRET</code> signs your authentication tokens. The <code class="language-plaintext highlighter-rouge">ENCRYPTION_KEY</code> is the key used to encrypt device credentials at rest in the database. If you leave these as the defaults and your box is ever compromised, you’ve given away the keys to your network. Use a password manager to generate proper random strings.</p>
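<p>Any machine with Python 3 can generate suitable values. One assumption in the snippet below: that the app accepts an arbitrary 32-character string for the encryption key; the comments in <code class="language-plaintext highlighter-rouge">.env.example</code> are the authority on the exact format it expects.</p>

```python
import secrets

jwt_secret = secrets.token_urlsafe(48)   # 48 random bytes -> 64 URL-safe chars
encryption_key = secrets.token_hex(16)   # 16 random bytes -> exactly 32 hex chars

print(f"JWT_SECRET={jwt_secret}")
print(f"ENCRYPTION_KEY={encryption_key}")
```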

<p>The other defaults are fine for a local homelab deployment:</p>

<table>
  <thead>
    <tr>
      <th>Variable</th>
      <th>Default</th>
      <th>What It Does</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">JWT_SECRET</code></td>
      <td><em>(change this)</em></td>
      <td>Signs JWT tokens</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">ENCRYPTION_KEY</code></td>
      <td><em>(change this)</em></td>
      <td>Encrypts device passwords at rest</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">DB_PASSWORD</code></td>
      <td><code class="language-plaintext highlighter-rouge">mikrotik_secure_pw</code></td>
      <td>PostgreSQL password</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">INFLUXDB_TOKEN</code></td>
      <td><code class="language-plaintext highlighter-rouge">mytoken123456789</code></td>
      <td>InfluxDB admin token</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">HTTP_PORT</code></td>
      <td><code class="language-plaintext highlighter-rouge">80</code></td>
      <td>Host port for HTTP (redirects to HTTPS)</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">HTTPS_PORT</code></td>
      <td><code class="language-plaintext highlighter-rouge">443</code></td>
      <td>Host port for HTTPS</td>
    </tr>
  </tbody>
</table>

<p>Never commit your <code class="language-plaintext highlighter-rouge">.env</code> file to version control. The <code class="language-plaintext highlighter-rouge">.gitignore</code> already excludes it, but worth saying explicitly.</p>

<h3 id="3-start-the-stack">3. Start the Stack</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker compose up <span class="nt">-d</span>
</code></pre></div></div>

<p>On the first run, Docker will:</p>
<ul>
  <li>Build the React frontend into static files</li>
  <li>Build the TypeScript backend into a Node.js application</li>
  <li>Initialize PostgreSQL with the database schema</li>
  <li>Initialize InfluxDB</li>
  <li>Generate a self-signed TLS certificate</li>
</ul>

<p>This takes a few minutes the first time. Once it’s done:</p>

<h3 id="4-open-the-app">4. Open the App</h3>

<p>Navigate to <strong>https://localhost</strong> (or your server’s IP or hostname). Accept the self-signed certificate warning in your browser — or upload a real certificate in <strong>Settings → TLS Certificate</strong> if you want to skip that permanently.</p>

<h3 id="5-log-in">5. Log In</h3>

<table>
  <thead>
    <tr>
      <th>Username</th>
      <th>Password</th>
      <th>Role</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">admin</code></td>
      <td><code class="language-plaintext highlighter-rouge">admin</code></td>
      <td>Admin</td>
    </tr>
  </tbody>
</table>

<p>Go to <strong>Settings → Users</strong> and change the admin password before you do anything else.</p>

<h3 id="updating">Updating</h3>

<p>When a new version drops:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git pull
docker compose up <span class="nt">-d</span> <span class="nt">--build</span> backend nginx
</code></pre></div></div>

<p>Database migrations run automatically on backend startup, so there’s nothing else to do.</p>

<hr />

<h2 id="a-word-on-security">A Word on Security</h2>

<p>I know some of you are already typing the comment. <em>“This was vibecoded, how do you know it’s secure?”</em></p>

<p>Fair question. Here’s my honest answer: I don’t have a penetration test report for this. What I can tell you is that it was built with secure practices in mind — credentials encrypted at rest, JWT-based auth, HTTPS enforced, role-based access control. But this is beta software. <strong>Don’t expose it directly to the internet.</strong> Run it inside your homelab, behind a VPN, or behind an authenticated reverse proxy if you need remote access. The same rules you’d apply to any self-hosted management tool apply here.</p>

<p>If you find a security issue, open an issue on GitHub. I’d rather know about it.</p>

<hr />

<h2 id="contributing">Contributing</h2>

<p>The project is open source under the <strong>AGPLv3 license</strong>. If you want to contribute:</p>

<ol>
  <li>Fork the repository</li>
  <li>Create a feature branch: <code class="language-plaintext highlighter-rouge">git checkout -b feature/your-feature</code></li>
  <li>Commit your changes</li>
  <li>Open a pull request</li>
</ol>

<p>The one ask: <strong>open an issue before submitting a PR</strong> so we can discuss the approach first. Nothing worse than putting real time into a feature only to have it not fit the direction of the project.</p>

<p>If you’re not a developer but you find a bug, open an issue. That’s contributing too.</p>

<p>The <strong>AGPLv3</strong> license means the code is free to use, modify, and distribute. If you run a modified version as a hosted service, you’re required to make your modified source available to users of that service under the same license. I chose AGPLv3 specifically to make sure improvements stay in the open.</p>

<hr />

<h2 id="whats-next">What’s Next</h2>

<p>I’ll be honest with you: I don’t have a formal roadmap. The topology view needs more polish. There are probably bugs I haven’t hit yet. And I’ve got a list of features I’d still like to add when I find the time — but I’m not making promises about when that happens.</p>

<p>What I do know is that the foundation is solid. The stack is real, the features are genuinely useful, and it’s the tool I actually run in my own homelab. That’s the best endorsement I can give it.</p>

<p>If enough of you use it, find issues, and contribute fixes, it’ll get better faster. That’s the whole point of open source.</p>

<hr />

<p>Head over to <strong><a href="https://github.com/2GT-Media-Group-LLC/mikrotik-manager">GitHub</a></strong>, give it a star if you find it useful, and let me know in the comments how it goes in your environment. And if you missed the original post about how this whole thing came to be — including the part where I have a mild existential crisis about AI — <a href="/homelab/networking/infrastructure/2026/04/08/i-vibecoded-a-mikrotik-manager.html">check that one out first</a>.</p>

<hr />

<p>Thanks as always to everyone who’s supporting us through <strong>Patreon</strong> and the <strong>YouTube Membership</strong> program — none of this happens without you. Come hang out in the <strong>community Discord</strong> if you want to talk through your setup or just want to nerd out with like-minded homelabbers and engineers. We’ll see you on the next one!</p>]]></content><author><name>2GT_BK</name></author><category term="Homelab" /><category term="Networking" /><category term="Infrastructure" /><category term="Mikrotik" /><category term="OpenSource" /><category term="Docker" /><category term="Management" /><category term="SelfHosted" /><category term="TypeScript" /><summary type="html"><![CDATA[After 70+ comments asking me to release the MikroTik Manager I vibe-coded, I'm doing it. Here's a complete breakdown of every feature, the tech stack, and how to get it running in your homelab today.]]></summary></entry><entry><title type="html">I Vibe-Coded a Mikrotik Management Platform — And It Completely Changed How I Think About AI</title><link href="https://2guystek.tv/homelab/networking/infrastructure/2026/04/08/i-vibecoded-a-mikrotik-manager.html" rel="alternate" type="text/html" title="I Vibe-Coded a Mikrotik Management Platform — And It Completely Changed How I Think About AI" /><published>2026-04-08T00:00:00+00:00</published><updated>2026-04-08T00:00:00+00:00</updated><id>https://2guystek.tv/homelab/networking/infrastructure/2026/04/08/i-vibecoded-a-mikrotik-manager</id><content type="html" xml:base="https://2guystek.tv/homelab/networking/infrastructure/2026/04/08/i-vibecoded-a-mikrotik-manager.html"><![CDATA[<p><a href="https://youtu.be/NXUKyWIH90c">Watch the video on YouTube</a></p>

<p>In all the years I’ve been working on my homelab, working in IT, and making videos, I don’t think I’ve ever been as personally conflicted as I am right now. I’m equally 50% excited and stunned, and 50% uneasy and uncomfortable with what I’m about to show you. Stick around, friends — this one’s gonna be an interesting one.</p>

<hr />

<h2 id="introduction">Introduction</h2>

<p>Hey there homelabbers, self-hosters, IT-pros, and engineers. Rich here! A few days ago, I created something that didn’t exist. I did it on a whim just to see if I could, and what resulted inadvertently completely changed how I feel about homelabbing, what I think about the future, and how I feel about AI and the future of creating.</p>

<p>For those of you who are regular viewers, you’ve probably heard me talk a lot more about AI in the live show. Recently I put together a local AI stack running Ollama and Open WebUI, and while that’s been a fun side project, it really hadn’t moved the bar for me in terms of how I felt about using or creating with AI.</p>

<p>Then a few weeks back, <strong>PegaProx</strong> hit the scene, and I made a video about it because I was truly impressed by what that small team of three people built. The comments were split — plenty of people pushed back hard on vibecoding, argued about security concerns, and insisted the original Proxmox UI was good enough. But there was no shortage of positive reactions either. No matter how you feel about AI in software engineering, it’s hard to argue with what PegaProx produced, because what they made is objectively incredible.</p>

<p>That experience sent me down my own path.</p>

<hr />

<h2 id="the-problem-mikrotik-is-powerful-and-painful">The Problem: Mikrotik is Powerful and Painful</h2>

<p>I started looking around my homelab and seeing problems I could potentially solve. I landed on one that’s been bothering me for a while: <strong>Mikrotik</strong>.</p>

<p>I love my UniFi stack — the gear is good, it’s self-hosted, and the single pane of glass is great. But Ubiquiti’s higher-end switching is expensive, and their stock availability has been a recurring headache. My alternative is a <strong>Mikrotik switch with dual 100Gig ports and 8 25Gig ports</strong> — and that little switch costs less than $1,000. It runs my entire backbone between my Proxmox server and my TrueNAS storage server.</p>

<p>The fact that a mere mortal can have native 100Gig throughput between two servers, with 8 25Gig ports left over, for under a grand is absolutely mind-blowing. But there’s a massive catch.</p>

<p><strong>Mikrotik is kind of a household name in Europe</strong>, and the hardware and features you get for the price can’t be matched anywhere I know of. But if I’m being blunt: configuring and managing these devices is a huge pain in the ass. Your options are their <strong>Winbox software</strong>, the built-in web admin (which is effectively the same as Winbox), or straight SSH. None of these options are anywhere near user-friendly, and unless you’re patient and have strong Google-fu, you’re going to struggle.</p>

<hr />

<h2 id="the-idea-a-unifi-style-manager-for-mikrotik">The Idea: A UniFi-Style Manager for Mikrotik</h2>

<p>Punch-drunk off the possibilities I’d seen with PegaProx, I decided to see if I could vibe-code a singular management platform — something like UniFi or Meraki — for Mikrotik hardware. A single pane of glass that lets me configure, manage, and monitor at least 85% of the general functionality you’d expect from those other platforms.</p>

<p>After talking through AI project ideas with friends, doing some light research, and a fair bit of just winging it, I decided to build using <strong>Claude Opus 4.6 inside Visual Studio Code</strong>. I signed up for a $20/month Claude Pro subscription, integrated it into VS Code, and sat down to give it a shot.</p>

<h3 id="the-prompt">The Prompt</h3>

<p>The hardest part was articulating exactly what I wanted. Here’s the prompt I ultimately landed on:</p>

<blockquote>
  <p><em>“I have network devices made by a company called Mikrotik. I want to build a unified, singular management system that allows me to configure, manage, and monitor all aspects of Mikrotik devices. I need to be able to add the devices to the system, configure their addressing, ports, VLANs, and monitor port traffic and device health. I want the system to display a port diagram for the device with selectable ports for individual configuration, as well as be able to select multiple ports for mass editing of port configs.</em></p>

  <p><em>I want a dashboard landing page that shows overall device status and health, active clients, and aggregated alerts from the devices.</em></p>

  <p><em>I want the system to be multi-user, with the ability to assign users to roles like admin and read-only.</em></p>

  <p><em>Lastly, I want this deployed using Docker containers. The system you’re running on has Docker installed for you to test in, and I want the platform written using modern frameworks with a modern look and feel.</em></p>

  <p><em>Ask any questions you have.”</em></p>
</blockquote>

<p>Then I pressed enter.</p>

<hr />

<h2 id="the-build-34-nights-of-iteration">The Build: 3–4 Nights of Iteration</h2>

<p>Over the course of a few days, working on and off, I collaborated with Claude as it built and deployed the platform. I’d test, find bugs or design elements I didn’t like, Claude would fix them, and we’d keep going. The process was deeply iterative — I’d have a deployed feature I liked, realize I wanted something additional, tell Claude, it would implement it, and around and around we went.</p>

<p>I even went out and bought another cheap Mikrotik device, put it in router mode, and added it to the platform specifically so I could work with Claude to add router management features. All of this while writing video scripts, playing D&amp;D with friends, and watching Plex.</p>

<p>In the end, I had something that blew me away. Never in my wildest dreams could I have imagined the finished product I created.</p>

<hr />

<h2 id="mikrotik-manager-a-full-walkthrough">Mikrotik Manager: A Full Walkthrough</h2>

<h3 id="login-and-dashboard">Login and Dashboard</h3>

<p>The login screen features a living, animated network-like pattern that dynamically changes on every page load — inspired by Nutanix Prism. Dark mode is, of course, included, and it looks particularly sharp against the animation.</p>

<p>Once logged in, you land on the <strong>Dashboard</strong> with a familiar layout: collapsible left navigation and content on the right. At the top sits a <strong>universal search bar</strong> — type in IP addresses, device names, events, and the system returns matching results instantly.</p>

<p>The main dashboard area includes:</p>
<ul>
  <li><strong>Macro cards</strong> for total devices, active clients, alerts in the last 24 hours, and online vs. offline device counts</li>
  <li>A <strong>running graph</strong> of total connected clients detected across the network</li>
  <li>A <strong>device type breakdown pie chart</strong></li>
  <li>A <strong>map</strong> showing the physical locations of all added devices</li>
  <li>A <strong>recent events card</strong> showing aggregated logs from all devices, filterable by error, warning, and info level</li>
</ul>

<h3 id="device-management">Device Management</h3>

<p>The <strong>Devices</strong> section lists all added Mikrotik hardware with names, IP addresses, model, firmware version, status, last-seen timestamps, and refresh/delete controls. Adding a device is as simple as clicking <strong>Add Device</strong>, filling in the connection details, and saving.</p>

<p>Each device has its own detail view:</p>

<p><strong>Overview Tab</strong></p>
<ul>
  <li>Cards for current CPU load, memory usage, device uptime, and OS version</li>
  <li>A detailed system info card: name, IP, model, OS version, firmware version, device type, API port, date added, and last contact</li>
  <li>An <strong>editable physical details card</strong> for location, rack name, rack slot, and notes</li>
  <li>A <strong>map generated from the entered physical address</strong> — the same data that populates the main dashboard map</li>
  <li>Buttons to open the device’s web admin, launch a <strong>draggable in-browser SSH terminal</strong> directly to the device, and force an immediate configuration sync</li>
</ul>

<h3 id="ports-tab">Ports Tab</h3>

<p>This is the part I’m most proud of. The manager has no built-in knowledge of Mikrotik hardware models. When you first add a device, the system pulls port details from the switch and <strong>dynamically builds a visual port diagram</strong> representing the actual hardware. Port states are color-coded: red for offline, green for online, and blue when selected.</p>

<p>Below the diagram is a full port list showing name, status, speed, MTU, default VLAN, MAC address, comments, and a per-port reload button.</p>
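<p>To make the color-coding concrete, here’s a minimal sketch of that state-to-color mapping in Python. This is not the platform’s actual code: the <code class="language-plaintext highlighter-rouge">running</code> and <code class="language-plaintext highlighter-rouge">disabled</code> flags mirror what RouterOS reports for interfaces, but the structure, function names, and color strings are illustrative stand-ins.</p>

```python
# Illustrative sketch of the port-diagram color logic described above.
# Not the platform's code: `running`/`disabled` mirror RouterOS interface
# flags; the selection state and color names are stand-ins.

def port_color(running, disabled, selected):
    """Map a port's state to the diagram color used in the UI."""
    if selected:
        return "blue"   # currently selected for configuration
    if running and not disabled:
        return "green"  # link is up
    return "red"        # offline or administratively disabled

def build_diagram(interfaces, selected):
    """Build a diagram model from interface records pulled off the switch."""
    return [
        {"name": iface["name"],
         "color": port_color(iface.get("running", False),
                             iface.get("disabled", False),
                             iface["name"] in selected)}
        for iface in interfaces
    ]
```

<p>Because the diagram is driven entirely by whatever interface records the device returns, the same logic works for any model without a hardware database — which is exactly why the manager needs no built-in knowledge of Mikrotik hardware.</p>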

<p><strong>Selecting a port</strong> gives you:</p>
<ul>
  <li><strong>Throughput and packet graphs</strong> with 1, 3, 6, 12, and 24-hour time selectors</li>
  <li>Detailed port info: state, rate, duplex, auto-negotiation, RX/TX flow control</li>
  <li><strong>Transceiver details</strong> — for SFP ports it will identify the connected cable type and report all available optic information</li>
</ul>

<p><strong>Configuring a port</strong> is done directly in the interface — enable/disable, comment, MTU, PoE (if supported), link settings, and VLAN configuration. <strong>Select multiple ports</strong> together and you can configure LAGs and LACP trunks from the same view.</p>

<h3 id="vlans-tab">VLANs Tab</h3>

<p>A full list of configured VLANs, their IDs, names, associated bridges, and a tagged/untagged port breakdown per VLAN. Each row has edit and delete actions. The <strong>Add VLAN</strong> button opens a creation card to define a VLAN ID, associate it to a bridge, and assign tagged or untagged ports — all without touching the CLI.</p>

<h3 id="routing-tab">Routing Tab</h3>

<p>For Mikrotik devices running in router mode, this tab surfaces full routing functionality:</p>
<ul>
  <li><strong>Route table</strong> for managing static routes</li>
  <li><strong>OSPF</strong> configuration and management</li>
  <li><strong>BGP</strong> configuration and management</li>
  <li><strong>Route Filters</strong> and <strong>Route Tables</strong></li>
</ul>

<h3 id="firewall-tab">Firewall Tab</h3>

<p>All Mikrotik devices have built-in firewall capability. This tab lets you view, manage, and create firewall rules through the UI — no Winbox required.</p>

<h3 id="config-tab">Config Tab</h3>

<p>Your one-stop shop for device-level settings:</p>
<ul>
  <li>Device name editing</li>
  <li>Date, time, timezone, and NTP server configuration</li>
  <li>DNS server configuration</li>
  <li>Management IP address configuration</li>
  <li><strong>Built-in update checker</strong> — click <strong>Check for Update</strong>, and if a new firmware is available, you can install and reboot the device right from the platform</li>
</ul>

<h3 id="hardware-tab">Hardware Tab</h3>

<p>Rich hardware telemetry for the device:</p>
<ul>
  <li><strong>CPU and memory usage graphs</strong> with 6, 12, 24-hour, and 7-day time scales</li>
  <li><strong>Internal storage</strong> details and utilization</li>
  <li><strong>Temperature readings</strong> from all available sensors, with <strong>Fahrenheit/Celsius toggle</strong> for my metric friends outside the US</li>
  <li><strong>Fan status and RPM</strong> for all internal fans</li>
  <li><strong>Power supply status</strong></li>
  <li><strong>Voltage monitoring</strong> for devices powered via barrel connectors or 2-wire interfaces</li>
</ul>

<h3 id="tools-tab">Tools Tab</h3>

<p>I wanted the platform to be a real troubleshooting tool, so I surfaced the diagnostics already built into Mikrotik hardware:</p>
<ul>
  <li><strong>Reboot</strong> the device</li>
  <li><strong>Ping</strong> any address from any interface on the device</li>
  <li><strong>Traceroute</strong> from the device</li>
  <li><strong>IP Range Scan</strong> with optional reverse DNS lookups</li>
  <li><strong>Wake-on-LAN</strong> packet transmission from the switch</li>
</ul>
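<p>As a rough illustration of what an IP range scan with optional reverse DNS involves, here’s a stdlib-only Python sketch. This is not the platform’s code, and a real scanner would also probe reachability; this only enumerates the range and attempts PTR lookups.</p>

```python
import ipaddress
import socket

# Illustrative sketch of an IP range scan with optional reverse DNS.
# A real scanner would also probe reachability (ping/ARP); this only
# enumerates host addresses and attempts PTR lookups for each.
def scan_range(cidr, reverse_dns=False):
    results = []
    for ip in ipaddress.ip_network(cidr).hosts():
        entry = {"ip": str(ip), "hostname": None}
        if reverse_dns:
            try:
                entry["hostname"] = socket.gethostbyaddr(str(ip))[0]
            except OSError:
                pass  # no PTR record, or the lookup failed
        results.append(entry)
    return results
```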

<hr />

<h2 id="beyond-devices-the-rest-of-the-platform">Beyond Devices: The Rest of the Platform</h2>

<p><strong>Clients</strong> — All known network clients seen by any Mikrotik device in the system. Each entry is configurable with custom names, and reverse DNS names auto-populate if you have DNS configured on your network.</p>

<p><strong>Events</strong> — Aggregated logs from all connected hardware, searchable and filterable by topic, log level, and individual device. A <strong>clear button</strong> lets you wipe all stored events.</p>

<p><strong>Topology</strong> — Still a work in progress. I’m building a dynamic network topology map from LLDP data pulled from all connected devices. The foundations are there and it’s kinda-sorta working — I just need to spend more time with Claude ironing out interface mapping and data interpretation.</p>

<p><strong>Backups</strong> — Create configuration backups for any connected device, stored within the platform. Download, restore to the source device, or delete — all from the UI. This was actually the very first feature I built.</p>

<p><strong>Switches / Routers</strong> — Dedicated sections that aggregate all switch-type or router-type devices respectively, with macro cards linking back to individual device pages. Each section has its own <strong>Settings</strong> tab for platform-wide configurations like LLDP and SNMP that are applied globally to all devices of that type.</p>

<p><strong>Platform Settings</strong> — Control system-wide defaults:</p>
<ul>
  <li>Default theme</li>
  <li>Backend polling intervals</li>
  <li>MAC scan settings (with enable/disable toggle)</li>
  <li>Automatic reverse DNS lookup (with enable/disable toggle)</li>
  <li>Maximum event data retention in days</li>
  <li><strong>Users and Roles</strong> — manage user accounts, assign roles (Admin, Operator, Read-Only), create passwords</li>
  <li><strong>My Password</strong> — self-service password change for the current user</li>
</ul>
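<p>Retention settings like that usually boil down to pruning anything older than a cutoff. A minimal sketch of the idea in Python (the platform’s actual storage layer is its own; this is purely illustrative):</p>

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of event retention: drop anything older than the
# configured window. The platform's real storage layer is its own.
def prune_events(events, max_days, now=None):
    """Keep only events newer than the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_days)
    return [e for e in events if e["timestamp"] >= cutoff]
```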

<hr />

<h2 id="the-dilemma-the-excitement-wore-off">The Dilemma: The Excitement Wore Off</h2>

<p>You’d think building something like this and seeing how well it turned out would fill me with only pure joy. It didn’t — not entirely.</p>

<p>After the excitement wore off, it left me genuinely conflicted. For more than a few nights, I’d be lying in bed thinking of the next cool feature to add, only for a wave of existential dread to follow. I spent real time trying to figure out what those feelings were, and the best I can describe them is <strong>fear</strong>.</p>

<p>Anthropic — the company behind Claude, the AI I used to build this — published a report on the current and projected effects of AI on the labor market. It doesn’t paint a rosy picture for skilled technology workers in the long run. And we’re all seeing the headlines: large tech companies announcing layoffs while simultaneously pouring those headcount savings directly into AI programs. It’s hard not to feel like AI is a slow-moving threat, growing quietly until it’s turned loose on all of us as a weapon of corporate cost-cutting.</p>

<p>I don’t have the answers. There’s nothing I can personally do to control the future — only the decisions I make. Here’s what I’ve come to accept: <strong>it doesn’t matter if you don’t like AI, or think vibecoding is trash.</strong> These tools are coming, faster than anything I’ve seen move in my adult life. For me, the only real option is to embrace them as quickly as possible.</p>

<hr />

<h2 id="the-realization-im-no-longer-just-a-consumer">The Realization: I’m No Longer Just a Consumer</h2>

<p>Building the Mikrotik Manager hit me with something that hadn’t occurred to me before. All these years of homelabbing, I’ve been a <strong>consumer</strong>. If something didn’t exist, I couldn’t create it — I’m not a software engineer, so everything I’ve run has been the fruit of someone else’s labor. That was just the way it was.</p>

<p>But not anymore. Now I can build the things I actually want.</p>

<p>The analogy that keeps coming back to me is <strong>3D printing</strong>. Before I got into 3D printing, if I needed something, I’d go to Amazon and hope someone else had already made it. With a printer, if I need a specific part or have a crazy idea, I design it, print it, realize my measurements were off, redesign it, print it again, and eventually I have exactly what I needed. No compromises, no waiting.</p>

<p>Creating with AI is exactly that. The accessibility barrier is lower, the iteration cycle is faster, and now my homelab can run things I built myself, customized exactly to my needs. I no longer have to wait for the next cool open source project to drop. I can create that project.</p>

<hr />

<h2 id="but-is-it-secure">“But Is It Secure?”</h2>

<p>I can already hear a significant portion of you screaming at the screen: <em>“Your vibecoded apps aren’t secure!”</em></p>

<p>You might be right. But here’s the point I think everyone is missing:</p>

<p>I wouldn’t 3D print a structural load-bearing component, bolt it onto an aircraft, and fly in it without extensive testing. Likewise, I wouldn’t deploy a vibecoded app into my production day job or expose it to the internet without significant vulnerability and security testing first.</p>

<p>I’m not claiming everything AI generates is production-ready out of the box. I’m saying that <strong>you need to put in the effort to validate that what you’re creating aligns with its intended use case.</strong> That’s the same thing we’ve been doing with software and hardware for years. Nothing about that responsibility changes just because AI helped write the code.</p>

<hr />

<h2 id="whats-next-for-mikrotik-manager">What’s Next for Mikrotik Manager</h2>

<p>I have no current plans to release it publicly. If there’s enough interest in the community, <strong>open source and free</strong> is the only way I’d do it. But I’m realistic about my bandwidth — being a dad, an engineer in my day job, and a YouTuber doesn’t leave a lot of room for maintaining a public GitHub repository.</p>

<p>That said, never say never.</p>

<hr />

<h2 id="final-thoughts">Final Thoughts</h2>

<p>I opened this by telling you I was 50% excited and 50% uneasy. Honestly? I’m still sitting right there. I don’t think that feeling fully goes away, and I’m not sure it should. But what I do know is that for the first time, I can see a path to make my ideas real in a way I didn’t have before. For my homelab, that means more things made by me, built exactly the way I want them.</p>

<p>Whatever the future looks like for all of us in tech, leaning into these tools feels like the right response.</p>

<p>I’d genuinely love to know where you stand on all of this — what you think about AI in software engineering, what it means for your career, your homelab, and your future. Drop it in the comments. This is a conversation worth having.</p>

<hr />

<p>Thanks for reading, folks, and thank you to the fine people who support us through <strong>Patreon</strong> and the <strong>YouTube Membership</strong> program. If you’d like to support what we do here, consider checking those out. Join our <strong>community Discord</strong> and chat with me and like-minded homelabbers, geeks, and nerds — and as always, we’ll see you on the next one!</p>]]></content><author><name>2GT_BK</name></author><category term="Homelab" /><category term="Networking" /><category term="Infrastructure" /><category term="Mikrotik" /><category term="AI" /><category term="VibeCoding" /><category term="Claude" /><category term="Docker" /><category term="Management" /><summary type="html"><![CDATA[I spent a few nights chatting with Claude Opus and accidentally built a full Mikrotik unified management platform. Here's what I made, how I made it, and the existential crisis that came with it.]]></summary></entry><entry><title type="html">Kasm Workspaces: Enterprise-Grade Remote Desktops for Your Homelab (and Your Day Job)</title><link href="https://2guystek.tv/infrastructure/homelab/2026/04/03/kasm-workspaces-review.html" rel="alternate" type="text/html" title="Kasm Workspaces: Enterprise-Grade Remote Desktops for Your Homelab (and Your Day Job)" /><published>2026-04-03T00:00:00+00:00</published><updated>2026-04-03T00:00:00+00:00</updated><id>https://2guystek.tv/infrastructure/homelab/2026/04/03/kasm-workspaces-review</id><content type="html" xml:base="https://2guystek.tv/infrastructure/homelab/2026/04/03/kasm-workspaces-review.html"><![CDATA[<p><a href="https://youtu.be/33Q_POCQcNk">Watch the video on YouTube</a></p>

<p>Most people hear “remote workspaces” or “VDI” and immediately assume I’m about to talk about some giant enterprise stack that costs a fortune, requires a ton of hardware to run, and has absolutely nothing to do with the homelab. But that is not the case this time. Kasm Workspaces is one of those platforms that kind of breaks that mold, because once you start looking at what it can actually do, there are some very real use cases here for homelabbers and businesses alike. So, let’s dig in!</p>

<hr />

<h2 id="what-is-kasm-workspaces">What Is Kasm Workspaces?</h2>

<p>How many times have you wanted to look at something that might be a little bit sus — like a link or a website that gives you that not-so-safe feeling — or wanted to quickly spin up a Linux desktop to test something without having to spend a ton of time building a VM you’re just going to throw away after? Or maybe you want a way to RDP into Windows machines as quickly as possible from any system with a web browser?</p>

<p>Well, <strong>Kasm Workspaces</strong> is the answer. Fully featured and completely free to run in your homelab, and enterprise-ready for your day job as well.</p>

<p>Let’s get the formalities out of the way first — this video is sponsored by <strong><a href="https://kasm.com/">Kasm Technologies</a></strong>, the company behind Kasm Workspaces, which is awesome for me because I’ve been using Kasm Workspaces in my homelab for a long time now, and I love working with companies that actually use their software at home!</p>

<p>In this post, I’m going to walk you through what Kasm Workspaces is, show you how to spin it up in your homelab, do a general walkthrough of the UI, its features, and configs, and then we’ll deploy our first workspace to show you how easy it is to use. But first, let’s take a quick look at Kasm Technologies, the company, and its background.</p>

<hr />

<h2 id="kasm-technologies-the-background">Kasm Technologies: The Background</h2>

<p>Kasm Technologies was founded in 2017, and their origin story is a little different from your typical software startup. The founding team came out of US Federal and DoD cybersecurity work, and the core problem they were solving was: how do you give people secure, isolated workspaces without the whole thing becoming an expensive, brittle mess? They figured that out for some of the most paranoid security environments on the planet, and then in 2020 they asked: why should only government agencies get access to this?</p>

<p>So they spun up the commercial entity, made it free for individuals and the homelab community, and built out enterprise tiers for businesses that need the full feature set. They’ve never taken outside funding either, which tells you something about how they’re building this. It’s engineers who built something for real-world security needs and decided to make it available to everyone.</p>

<hr />

<h2 id="the-sweet-spot-homelab-and-enterprise-same-technology">The Sweet Spot: Homelab and Enterprise, Same Technology</h2>

<p>Kasm Workspaces hits a really interesting sweet spot. In the homelab, it gives you an easy way to spin up clean, isolated browsers, apps, and desktops completely on demand — great for testing, sandboxing, or just keeping your main system clean. But here’s the thing: those same capabilities don’t stop being useful when you walk into the office. Secure remote access, streamed apps and desktops, browser isolation that protects endpoints, all of it delivered through nothing more than a web browser. The homelab use case and the enterprise use case are literally the same technology. You’re just scaling the stakes.</p>

<p>I’ve been using it in my homelab in three different ways:</p>

<ul>
  <li><strong>Disposable browsers</strong> — I use their disposable browsers as a way to test links and websites that I don’t feel comfortable visiting on my main PC. If you get what I mean.</li>
  <li><strong>On-demand Linux distros</strong> — I regularly spin up different Linux distributions to play around without having to go through the effort of building a VM, installing the distro, and so on. That immediacy of being able to, with a few clicks, start a Debian distro — for example — is great because I don’t have to do a bunch of work just to check a thing and then throw it away.</li>
  <li><strong>Browser-based RDP</strong> — This may upset you, so be prepared to clutch your pearls. I use it to access my Windows VMs and desktops via RDP. Accessing those systems through my browser, again with just a few clicks, is so much easier and streamlined than starting the Windows app and going that route.</li>
</ul>

<hr />

<h2 id="hardware-requirements">Hardware Requirements</h2>

<p>Let’s talk about the hardware requirements for running Kasm Workspaces in your homelab. The minimum requirements below are a solid starting point, but remember: the more concurrent sessions you plan to run, the more resources you’re going to need.</p>

<p>Kasm can be installed on physical hardware or within a VM and supports modern versions of <strong>Ubuntu, Debian, and Red Hat Enterprise Linux (RHEL)</strong>, on both <strong>x86_64 and ARM64</strong> architectures. Kasm does <strong>not</strong> support installation in an LXC container or on Windows via WSL/WSL2.</p>

<table>
  <thead>
    <tr>
      <th>Resource</th>
      <th>Minimum</th>
      <th>Recommended</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>CPU</td>
      <td>2 cores</td>
      <td>4+ cores</td>
    </tr>
    <tr>
      <td>RAM</td>
      <td>4 GB</td>
      <td>8–16 GB</td>
    </tr>
    <tr>
      <td>Storage</td>
      <td>50 GB</td>
      <td>100 GB+</td>
    </tr>
  </tbody>
</table>

<p>The recommended column above is what I’d suggest for a solid homelab deployment. Kasm doesn’t publish official recommended specs beyond the minimums, but those numbers will get you off to a great start.</p>

<p>One more thing worth noting: you can likely install Kasm on most any Linux distribution as long as <strong>bash, openssl, Docker CE, and Docker Compose</strong> are already installed. So give it a shot if Ubuntu, Debian, or RHEL aren’t your thing.</p>
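<p>If you do go off the beaten path, a quick pre-flight check for those prerequisites is easy to script. Here’s an illustrative Python sketch using <code class="language-plaintext highlighter-rouge">shutil.which</code>; note it only checks that the <code class="language-plaintext highlighter-rouge">docker</code> binary is on the PATH, so verify the Compose plugin separately with <code class="language-plaintext highlighter-rouge">docker compose version</code>.</p>

```python
import shutil

# Pre-flight check before running the Kasm installer: verify the tools the
# docs call out (bash, openssl, Docker CE) are on PATH. Docker Compose v2
# ships as a docker subcommand, so check it separately with
# `docker compose version`.
def check_prereqs(tools=("bash", "openssl", "docker")):
    """Return a dict mapping each tool name to whether it is on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

missing = [t for t, ok in check_prereqs().items() if not ok]
if missing:
    print("Missing prerequisites:", ", ".join(missing))
else:
    print("All prerequisites found")
```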

<hr />

<h2 id="installing-kasm-workspaces">Installing Kasm Workspaces</h2>

<p>I’m installing into an Ubuntu Server 24.04 VM that I’ve already created, following my own recommendations for hardware provisioning: 8 cores, 16GB of RAM, and 100GB of storage. Let’s get started.</p>

<p>The first thing I like to do before installing anything into a fresh VM is make sure it’s up to date on patches. Once that’s done, the Kasm install is incredibly easy. You can find the install command in <a href="https://docs.kasm.com/docs/latest/install/single_server_install">Kasm’s documentation</a>, but here’s what it looks like:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd</span> /tmp
curl <span class="nt">-O</span> https://kasm-static-content.s3.amazonaws.com/kasm_release_1.18.1.tar.gz
<span class="nb">tar</span> <span class="nt">-xf</span> kasm_release_1.18.1.tar.gz
<span class="nb">sudo </span>bash kasm_release/install.sh
</code></pre></div></div>

<p>The command will <code class="language-plaintext highlighter-rouge">cd</code> into the temp directory, download the current Kasm release archive, untar it, and then execute the install script.</p>

<p>You’ll need a user with sudo privileges, as you’ll be prompted for your password before the actual installation begins. Before the install starts, you’ll also be hit with the Kasm Workspaces EULA — answer <code class="language-plaintext highlighter-rouge">Y</code> and press enter.</p>

<p>As the process runs, the installer takes care of everything: installing all necessary services and daemons (like Docker CE), downloading all necessary containers, and so on. This can take a while to complete, so let it run.</p>

<p>Once finished, the installer outputs a list of user and service account usernames and passwords that were generated during the install. <strong>Copy these and save them somewhere safe</strong> — you’ll need them to log in for the first time. Don’t worry, the user account passwords can be changed from within the platform once you’re in.</p>

<hr />

<h2 id="first-login-and-admin-dashboard">First Login and Admin Dashboard</h2>

<p>Pop open your browser and enter <code class="language-plaintext highlighter-rouge">https://</code> followed by the IP address or hostname of your VM and press enter. By default, Kasm Workspaces is deployed with a self-signed certificate — head to <strong>Advanced</strong> and proceed to the site to bypass that screen.</p>

<p>Log in with the admin account (<code class="language-plaintext highlighter-rouge">admin@kasm.local</code>) and the generated password from the installer. Once in, you land on the <strong>Admin Dashboard</strong>, which gives you a top-level overview of the status and health of your Kasm Workspaces — cards for users and login status, created sessions, and current statistics and image usage. Fresh install, so it’s all empty for now.</p>

<hr />

<h2 id="workspaces-and-the-registry">Workspaces and the Registry</h2>

<p>Expand <strong>Workspaces</strong> on the left and select the <strong>Workspaces</strong> subsection. This is where all of your created workspaces can be managed. With a fresh install, it’s empty.</p>

<p>Over in <strong>Registry</strong> is where the good stuff is. Kasm Technologies has built what is essentially an app store: a catalog of ready-to-download-and-run workspaces that they continually maintain and expand. They have a ton of different Linux distributions — AlmaLinux, Alpine, Debian, Oracle, RHEL 9, and more — as well as dedicated apps you can run: everything from your favorite browser in a box to Visual Studio Code. Everything is sandboxed and contained, and adding a workspace is as easy as installing an app.</p>

<p>This one feature really brings it all together for the homelab. The ease with which you can install what you need and start using it is hands-down one of my favorite parts of Kasm Workspaces.</p>

<hr />

<h2 id="sessions-management">Sessions Management</h2>

<p>The <strong>Sessions</strong> section is where you manage session activity:</p>

<ul>
  <li><strong>Sessions</strong> — See active sessions currently running.</li>
  <li><strong>History</strong> — A historical list of previously run sessions.</li>
  <li><strong>Session Staging</strong> — Pre-provision workspace containers ahead of time instead of creating them only when a user clicks launch. Users can be handed a session from a ready-made pool instead of waiting for a fresh container to spin up.</li>
  <li><strong>Session Casting</strong> — Create a special URL that launches a Kasm session directly for a specific workspace. That link can be used for normal authenticated users or opened up for anonymous access if you want a fast, frictionless way to hand someone a ready-to-go Kasm environment.</li>
</ul>

<hr />

<h2 id="access-management">Access Management</h2>

<p><strong>Users</strong> — Create users, edit account information, assign them to groups, manage login-related details, and tie in things like file mappings and other per-user settings. This is also a good time to change your admin password from the randomly generated one created on install.</p>

<p><strong>Groups</strong> — Instead of configuring permissions and behavior one user at a time, you assign users to groups and apply policies, workspace access, and feature controls at the group level. By default, Kasm includes system groups like <strong>Administrators</strong> and <strong>All Users</strong>, with All Users acting as the baseline policy layer for everyone.</p>

<p><strong>Authentication</strong> — Kasm supports <strong>LDAP, SAML, OpenID, and physical tokens</strong>, giving you the ability to tie Kasm into your identity stack. This lets you control SSO, security requirements, and group-based access mapping from your existing IdP — a huge deal for enterprise deployments.</p>

<hr />

<h2 id="infrastructure">Infrastructure</h2>

<p><strong>Docker Agents</strong> — Where you manage the worker nodes that actually run user sessions. This is the infrastructure layer that determines where workspaces launch, how much CPU, memory, or GPU capacity is available, and how session workloads are distributed across your environment.</p>

<p><strong>Servers</strong> — Where you define and manage actual servers your users can connect to, whether that’s RDP, VNC, SSH, or KasmVNC-based systems. I have all of my Windows computers and VMs defined in my production instance so I can use Kasm to connect to them via RDP through the browser. It makes it trivial to connect without having to deal with RDP clients for different OSes.</p>

<p><strong>Server Enrollment Tokens</strong> — Kasm’s way of securely adding remote systems into the platform. Instead of manually trusting a server, you use a token to enroll it, which helps Kasm verify and manage that system before users can access it.</p>

<p><strong>Pools</strong> — Where you group and manage compute resources that sessions can launch on, whether that’s Docker agents, server pools, or autoscaled infrastructure. Kasm can tie into VM platforms like <strong>Nutanix, Proxmox, VMware, and OpenStack</strong>, as well as numerous cloud providers, so it can automatically spin up and manage virtual machines behind the scenes to scale workloads as needed.</p>

<p><strong>Managers</strong> — The orchestration layer that coordinates the platform behind the scenes, handling things like session placement, communication with agents, and overall control of how workspace resources get assigned. Essentially the control plane that ties the whole environment together.</p>

<p><strong>Deployment Zones</strong> — How you define where sessions should run based on location, infrastructure, or other operational requirements.</p>

<p><strong>Connection Proxies</strong> — Manages the proxy layer that brokers traffic between users and remote resources like RDP, VNC, or SSH targets.</p>

<p><strong>Egress Providers</strong> — How you control how a workspace accesses the internet. Instead of every session using whatever default route the host has, you can send traffic through a different gateway or exit point. Want a specific workspace to use a VPN connection to access the internet? Set up that connection, assign it to your workspace, and any internet browsing in that workspace will use only that connection. This is incredibly useful — if you want to test something from a different location in the world, set up your favorite VPN as an egress provider and launch a browser workspace configured to use it.</p>

<hr />

<h2 id="settings">Settings</h2>

<p><strong>Global Settings</strong> — Platform-wide defaults that affect the whole environment rather than just one user or group.</p>

<p><strong>Banners</strong> — Place notices inside user sessions so people can immediately see things like user identity, workspace context, or security and classification warnings while they’re working.</p>

<p><strong>Web Filters</strong> — Control what websites users can and cannot reach from browser-based workspaces using allow lists, block lists, category filtering, and safe search controls.</p>

<p><strong>Branding</strong> — Customize the look and feel of the platform to match your company’s branding. A paid feature aimed at businesses.</p>

<p><strong>Storage Providers</strong> — Configure backend storage that users and admins can map into workspace sessions: Google Drive, Dropbox, OneDrive, Nextcloud, S3, or custom volume-backed storage.</p>

<p><strong>API Keys</strong> — Generate credentials for scripts, integrations, and external systems to talk to the Kasm API without using a user login.</p>
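<p>As a quick illustration, here’s a minimal Python sketch of calling the developer API with one of these keys. The endpoint path and field names (<code>api_key</code>, <code>api_key_secret</code>, <code>/api/public/get_users</code>) are my recollection of Kasm’s API conventions — treat them as assumptions and confirm against the official API reference for your version.</p>

```python
import json
import urllib.request

# NOTE: the endpoint path and the "api_key"/"api_key_secret" field names
# are assumptions based on Kasm's developer API docs -- verify them against
# your deployment's API reference before relying on this.

def build_payload(api_key: str, api_key_secret: str, **extra) -> dict:
    """Every Kasm API call carries the key pair in the JSON request body."""
    payload = {"api_key": api_key, "api_key_secret": api_key_secret}
    payload.update(extra)
    return payload

def list_users(base_url: str, api_key: str, api_key_secret: str) -> dict:
    """POST to the (assumed) get_users endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        f"{base_url}/api/public/get_users",
        data=json.dumps(build_payload(api_key, api_key_secret)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

<p>The nice part of API keys over user logins is exactly what you see here: the credential lives in the request body of a script, not in a browser session, so it’s easy to rotate and scope independently.</p>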

<p><strong>Logging</strong> — All of your Kasm Workspace logs and events, filterable and searchable.</p>

<p><strong>System Info</strong> — The admin view of the platform’s health and identity, showing details about the deployment, installed version, licensing, and other status information.</p>

<hr />

<h2 id="deploying-your-first-workspace">Deploying Your First Workspace</h2>

<p>Alright, let’s run through a quick deploy of a workspace so you can see how fast and easy it is. From the Admin dashboard, expand <strong>Workspaces</strong> on the left and click on <strong>Registry</strong>.</p>

<p>The list of available workspaces is large. Let’s add a <strong>Chrome Browser</strong> workspace. Select it in the list and click <strong>Install</strong>. Kasm will download the workspace and prepare it for use in the background — swing over to the Workspaces tab to monitor the progress.</p>

<p>You’ll notice a small red alert triangle in the top right corner of the new workspace tile. Hovering over it shows a message that the image is still downloading. Give it a moment, refresh the page, and once the alert icon is gone, click the workspace tile to launch it.</p>

<p>Before it launches, you’re asked how you want to open the session: current tab, new tab, or new browser window. Click <strong>Launch Session</strong>.</p>

<p>Spawning new sessions is incredibly quick, and once it starts, you have a fresh Chrome browser in a sandboxed container — ready to test risky links, browse the web safely, or check out your favorite YouTuber’s latest videos. Not biased or anything.</p>

<p>Every session window also includes the <strong>Control Panel widget</strong> on the left side. Inside it, you can pass through a webcam, enable/disable sound and microphone, go fullscreen, copy/paste text, redirect printers, upload and download files, manage multiple displays, adjust stream quality, share your session, manage advanced settings, leave a workspace, log out of Kasm, or delete a session entirely.</p>

<hr />

<h2 id="autoscaling-beyond-the-homelab">Autoscaling: Beyond the Homelab</h2>

<p>For your homelab, a single Kasm deployment is likely all you’ll need to run a few concurrent sessions. But the instant you need to scale beyond that, Kasm has a great answer: <strong>autoscaling</strong>.</p>

<p>Kasm can fully integrate with a ton of different on-prem and cloud virtualization platforms. Recently, Kasm Technologies kicked off a partnership with <a href="https://www.nutanix.com/partners/technology-alliances/kasm-technologies">Nutanix</a> to help businesses running Nutanix get all the modern isolated, ephemeral, zero-trust workspaces Kasm brings — fully automated and orchestrated in their own virtualization stack.</p>

<p>Autoscaling integrations live in the <strong>Pools</strong> section under Infrastructure. In my own configuration, I have Kasm set up to start up to four separate servers as needed to handle demand. Docker Agents shows the spun-up worker nodes in real time, and in Nutanix Prism Central, those worker VMs appear automatically — named <code class="language-plaintext highlighter-rouge">kasm-dynamic-agent</code> with a unique identifier — cloned from a <code class="language-plaintext highlighter-rouge">kasm-ubuntu-template</code> VM. All of this happens automatically based on demand. It’s a great example of just how capable this platform is at scale.</p>
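<p>Conceptually, the autoscaler’s core decision is simple: compare session demand to worker capacity and clamp to a ceiling. Here’s a toy sketch in Python — the <code>sessions_per_agent</code> and <code>max_agents</code> knobs are illustrative stand-ins, not actual Kasm settings (the real policy is configured under Pools).</p>

```python
import math

def agents_needed(active_sessions: int, sessions_per_agent: int = 5,
                  max_agents: int = 4) -> int:
    """Return how many worker agents should be running for current demand.

    sessions_per_agent and max_agents are illustrative knobs, not Kasm
    configuration names -- the real autoscale policy lives in the Pools UI.
    """
    if active_sessions <= 0:
        return 0
    return min(max_agents, math.ceil(active_sessions / sessions_per_agent))
```

<p>With a ceiling of four, this mirrors my setup: demand past four agents’ worth of sessions simply queues rather than cloning a fifth VM.</p>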

<hr />

<h2 id="final-thoughts">Final Thoughts</h2>

<p>Alright, let me just say it plainly: <strong>Kasm Workspaces is fantastic</strong>, and that’s true regardless of the sponsorship.</p>

<p>Let’s start with the company. Kasm Technologies gives Kasm Workspaces away free, almost entirely unrestricted, to homelabbers. I’ve had conversations with them, and they not only care about their product — they’re homelab geeks to the core, and that shows. This is how you build a business. You share it with the geeks and nerds, engineers, and IT pros to use at home, and then they go straight into work and say “I know how to solve our VDI problem!” or “Here’s how we can let people test suspicious links without touching company assets!” and that’s meaningful.</p>

<p>I’ve been using Kasm for a long time. I use it to RDP into my Windows systems, spin up disposable browsers to safely check links and sites I don’t fully trust, and regularly spin up Linux distributions on demand without having to build and tear down VMs. It’s one of those indispensable tools that everyone should have running in their stack. If you aren’t using it, you should be.</p>

<p>One additional thing worth calling out: <strong>Kasm has one of the best knowledge bases I’ve seen for any software product</strong>. If you get stuck on something, or want to figure out how to configure Kasm to use your VPN as an egress provider, it’s all there and incredibly easy to understand. If you want to see how to set up autoscaling for your hypervisor of choice, this is where to go.</p>

<p>And then there’s the business side. Kasm is a mature platform that can solve real problems in a meaningful way. Its ability to integrate into your company’s IAM for single sign-on, its support for autoscaling worker nodes across practically every virtualization platform (on-prem or in the cloud, including Kubernetes), its browser isolation capabilities for protecting endpoints, and its ability to deliver all of it through nothing more than the web browser you already use — makes it a genuinely compelling platform for businesses of any size.</p>

<p>So the next time someone tells you that remote workspaces and VDI are just expensive enterprise headaches, send them this video. Because Kasm Workspaces is proof that it doesn’t have to be that way.</p>

<hr />

<p>Special thanks to <strong>Kasm Technologies</strong> for sponsoring this video and for building a platform that’s as at home in a homelab rack as it is in an enterprise data center. Head over to <a href="https://www.kasmweb.com/">kasmweb.com</a> to get started — it’s free for the homelab, and the enterprise tiers are there when your use case grows into them.</p>

<p>Thanks for watching!</p>]]></content><author><name>2GT_BK</name></author><category term="Infrastructure" /><category term="Homelab" /><category term="Kasm" /><category term="VDI" /><category term="RemoteDesktop" /><category term="Containers" /><category term="Security" /><category term="BrowserIsolation" /><category term="Sponsored" /><summary type="html"><![CDATA[A sponsored deep-dive into Kasm Workspaces — the free, open-platform VDI and browser isolation tool that's equally at home in your homelab and your enterprise stack.]]></summary></entry><entry><title type="html">OpenMetal.io: Your Hosted Private Cloud Alternative to AWS, Azure, and GCP</title><link href="https://2guystek.tv/cloud/infrastructure/2026/03/13/openmetal-hosted-private-cloud-review.html" rel="alternate" type="text/html" title="OpenMetal.io: Your Hosted Private Cloud Alternative to AWS, Azure, and GCP" /><published>2026-03-13T00:00:00+00:00</published><updated>2026-03-13T00:00:00+00:00</updated><id>https://2guystek.tv/cloud/infrastructure/2026/03/13/openmetal-hosted-private-cloud-review</id><content type="html" xml:base="https://2guystek.tv/cloud/infrastructure/2026/03/13/openmetal-hosted-private-cloud-review.html"><![CDATA[<p><a href="https://youtu.be/8CLw9tEs8oc">Watch the full video on YouTube</a></p>

<p>We talk a lot on this channel about on-premises virtualization and building your own private clouds. But there’s a whole other side of the world that doesn’t live in a rack in your company’s server room — it lives in public cloud and hosted private clouds.</p>

<p>Late last year, I had the opportunity to meet the company <strong>OpenMetal.io</strong>, whose entire focus is on helping organizations regain control of their cloud infrastructure, especially when public cloud costs start drifting toward the unaffordable and the trade-offs of a closed-source stack start to look more like risks. OpenMetal delivers dedicated hosted private clouds built on OpenStack and Ceph, designed for teams that want performance consistency, architectural clarity, and cost boundaries they can actually plan around.</p>

<p>So, in this video, we’re going to take a look at what that model looks like in practice, and how to recognize when your organization has reached the tipping point where elasticity stops being the advantage and predictability becomes the priority. Let’s get to it!</p>

<hr />

<h2 id="what-is-openmetal">What Is OpenMetal?</h2>

<p>Hey there homelabbers, self-hosters, IT-pros, and engineers. Rich here! When OpenMetal reached out to see if I’d be interested in checking out what life is like in a real hosted private cloud — where the servers, storage, and networks you’re running on are dedicated to only you, and not shared with other tenants like in a public cloud — I couldn’t turn it down!</p>

<p>When I think about the “cloud,” I immediately think of being one of many users, all sharing the same hardware. But with OpenMetal, dedicated means dedicated to only you, down to the root-level of your environment.</p>

<p>Before we get into the good stuff, let’s get the formalities out of the way. <strong>OpenMetal.io is sponsoring this video</strong>, and I have to say, I’m excited they have, because it’s been a really eye-opening experience — not only to see what they’ve created, but also to learn how their hosted private cloud solves a lot of problems companies in public clouds face.</p>

<p>OpenMetal’s Infrastructure-as-a-Service platform delivers a hosted private cloud built on <strong>OpenStack and Ceph</strong>, running on dedicated, single-tenant servers. They provide all of the cloud-native components you’d expect — compute, networking, block storage, APIs, and automation — with a focus on:</p>

<ul>
  <li><strong>Fixed capacity</strong> and predictable performance</li>
  <li><strong>Transparent costs</strong> with no per-resource billing surprises</li>
  <li><strong>Full root access</strong> to workloads, cluster configuration, and networking</li>
  <li><strong>Hardware visibility</strong> — you know exactly what’s running where and how it’s performing</li>
</ul>

<p>Unlike a traditional hyperscaler where every resource you deploy has an individual cost, OpenMetal provides your own private cloud on a hardware stack that isn’t shared with anyone. This approach simplifies budgeting, planning, and forecasting of your cloud spend. It’s infrastructure for businesses with steady workloads who want all the benefits of being in the cloud, but with ownership, consistency, and the ability to actually plan.</p>

<p>OpenMetal launched its IaaS platform in 2022 and has datacenters across three continents — North America, Europe, and Asia — with key regions including Los Angeles, Ashburn, Amsterdam, and Singapore.</p>

<hr />

<h2 id="the-inflection-point-when-public-cloud-stops-making-sense">The Inflection Point: When Public Cloud Stops Making Sense</h2>

<p>There’s an inflection point that almost every growing engineering team hits with their cloud infrastructure. In the beginning, the public cloud makes perfect sense. You move fast, you don’t think about hardware, you spin things up, experiment, and iterate. But after that point, things change.</p>

<p>Workloads stabilize, usage and production move towards a steady-state, and you realize you’re paying for elasticity you’re not really using. Then your management starts asking harder questions about where your workloads are running, what they’re actually consuming, and how predictable the bill is going to be.</p>

<p>That’s really the problem OpenMetal was built to solve. It’s not that AWS, Azure, or GCP are bad. Quite the opposite — they’re incredible for the early growth and experimentation phase. But there’s a stage at which predictability, visibility, and cost boundaries matter more than raw elasticity.</p>

<p>Here’s a good example: in the public cloud, every single resource you deploy has a cost associated with it. Each VM, virtual disk, network interface, and each gigabyte of data sent and received. Something goes sideways, and all of a sudden you have a massive unexpected bill you didn’t account for. In contrast, in a hosted private cloud, you’re buying defined capacity on dedicated hardware, so your costs are going to be the same regardless of utilization in your stack. For those with more steady-state infrastructure, it’s financially smarter to repatriate those workloads out of the public cloud and into a hosted private cloud.</p>

<h3 id="the-openstack-advantage">The OpenStack Advantage</h3>

<p>Then there’s the broader shift happening in the closed-source ecosystem — like we saw with the acquisition of VMware by Broadcom. For years, many teams built their operational muscle memory around VMware-style infrastructure. Those recent shifts have prompted a lot of organizations to re-evaluate their long-term strategy and rethink what platform stability actually means.</p>

<p>Moving towards a more open-source platform like OpenStack gives you the flexibility to move your workloads anywhere you please. OpenStack supports common VM image formats such as <strong>VMDK, QCOW2, and more</strong>, so you can migrate your workloads instead of rebuilding them.</p>

<p>Building on a widely adopted, API-driven, community-supported foundation where no single vendor controls your roadmap means greater certainty, future stability, and flexibility for your workloads — with no rug-pulls.</p>

<hr />

<h2 id="openmetals-three-offerings">OpenMetal’s Three Offerings</h2>

<p>OpenMetal has three offerings, letting companies choose the model that best serves their business.</p>

<h3 id="1-hosted-private-cloud">1. Hosted Private Cloud</h3>

<p>The flagship offering — and the one we’re going to dig into here — is their <strong>Hosted Private Cloud</strong>. Built on top of OpenStack and Ceph on dedicated servers that only serve your workloads, you pay for the service you want, you decide how to size your VMs, and you control workloads while OpenMetal takes complete care of the physical infrastructure.</p>

<p>OpenMetal has a <a href="https://openmetal.io/cloud-deployment-calculator/">Private Cloud Deployment Pricing Calculator</a> to help you choose various flavors of the cloud, attach compute and storage, and see what your costs can look like.</p>

<h3 id="2-bare-metal">2. Bare Metal</h3>

<p>OpenMetal’s bare metal offering gives you raw, dedicated hardware with full access down to BIOS/IPMI, predictable pricing, high-bandwidth networking, and a big list of modern server hardware options — everything from lots of RAM and multi-core CPUs to NVMe-heavy storage configurations.</p>

<p>It’s built for heavy workloads like virtualization clusters, big data, high-performance computing, or anything that needs consistent I/O and no noisy-neighbor interference. OpenMetal also offers dedicated GPU infrastructure you can deploy standalone or integrate with your private cloud.</p>

<h3 id="3-ceph-storage-clusters">3. Ceph Storage Clusters</h3>

<p>OpenMetal’s storage clusters are standalone Ceph storage built for performance and scale — not an abstracted blob store or S3 bucket with hidden limits. You get a distributed storage cluster with block, object, and file capabilities that scales horizontally and keeps data replicated and highly available. You can tune performance and redundancy, add capacity on demand, and connect it back to your private cloud or bare metal environment over high-speed networking.</p>

<h3 id="building-blocks-not-silos">Building Blocks, Not Silos</h3>

<p>The best part is these aren’t siloed products — they’re building blocks. You can start with Hosted Private Cloud, add Storage Clusters when you need more capacity, and drop in Bare Metal for things like edge services, specialized appliances, or heavy compute — all interconnected over OpenMetal private networking so it feels like one unified infrastructure footprint.</p>

<hr />

<h2 id="diving-into-openmetals-horizon-interface">Diving Into OpenMetal’s Horizon Interface</h2>

<p>One of the big things that has always annoyed me about hyperscalers is how disconnected we’ve become from what’s actually happening behind the scenes. As an infrastructure architect in my day job, it’s always eaten at me that, as customers, we’ve given up the control of choosing what hardware and platforms our cloud workloads run on.</p>

<p>Up until I met with OpenMetal, I thought that’s just the way it is. Turns out, that’s not the case — and that’s a big value add. When you know exactly what hardware your workloads are running on and how your networking is built, you can actually model performance and capacity. You’re not guessing how many hidden metered services are attached to your architecture; you’re running inside a defined infrastructure boundary.</p>

<p>Since OpenMetal’s hosted private cloud is built on OpenStack, they give you direct access to <strong>Horizon</strong> — OpenStack’s native management interface. You can also manage your environment through the OpenStack APIs, CLI tools, SDKs, and infrastructure-as-code workflows. For those familiar with OpenStack, you’re going to feel right at home — other than some simple branding, you’re getting direct access to Horizon with no abstracted UI to mess with.</p>

<p>Let’s run through the key sections.</p>

<h3 id="compute">Compute</h3>

<p><strong>API Access</strong> — This is where you work with OpenStack from the outside using CLI tools, SDKs, or automation. It lists the service endpoints for your project and gives you the option to download your OpenStack RC file so you can authenticate from the command line or scripts.</p>

<p><strong>Overview</strong> — Your project’s status dashboard. It gives you a concise summary of how many instances you’re running, how much vCPU, RAM, and storage you’re consuming, and how that usage compares to your quotas.</p>

<p><strong>Instances</strong> — The primary view you’ll live in. Here you manage virtual machines — their status, flavor, IP addresses, and power state. From this page you handle day-to-day lifecycle tasks: launching new instances, starting and stopping them, rebooting, connecting to the console, creating snapshots, and deleting workloads you no longer need.</p>

<p><strong>Images</strong> — Your catalog of boot sources for new instances, including base OS templates, golden templates, and snapshots you’ve turned into reusable images. This is also where you’d manage workload migrations from other platforms. If you’re moving off VMware or another platform, you can import converted images, turn them into templates, and redeploy your workloads inside your private cloud in a controlled, phased way.</p>

<p>OpenMetal has a solid list of pre-made images to start building from right out of the gate — CentOS Stream, Rocky, Debian, Fedora CoreOS, and Ubuntu. You can also create your own images and import a variety of image formats.</p>

<p><strong>Key Pairs</strong> — Where you manage SSH keys used to log in to your instances without passwords. Create a new key pair to generate a private key for download, or import an existing public key. When you launch an instance, you attach one of these key pairs so the public key is injected into the guest.</p>

<p><strong>Server Groups</strong> — Where you define placement policies for related instances. Create affinity and anti-affinity rules that govern where the scheduler places instances based on what groups you place them in. For people who’ve been using on-premises virtualization, affinity is one of those things that is incredibly important, and I’m glad to see that feature exists here.</p>
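<p>To make the affinity idea concrete, here’s a toy placement function in Python. This is not Nova’s actual scheduler code — just a sketch of how the two policies constrain host selection for members of a group.</p>

```python
def pick_host(hosts, group_members, policy):
    """Pick a host for a new instance belonging to a server group.

    hosts maps host name -> list of instance names already placed on it.
    "anti-affinity" avoids hosts that already run a group member;
    "affinity" requires one (or accepts any host if the group is empty).
    A toy model of the Nova scheduler's ServerGroup filters, not real code.
    """
    def has_member(instances):
        return any(i in group_members for i in instances)

    for host, instances in hosts.items():
        if policy == "anti-affinity" and not has_member(instances):
            return host
        if policy == "affinity" and (has_member(instances) or not group_members):
            return host
    return None  # no host satisfies the policy; scheduling would fail
```

<p>That last line is the important property: with a hard anti-affinity rule, running out of distinct hosts fails the launch rather than quietly co-locating your replicas.</p>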

<h3 id="volumes">Volumes</h3>

<p><strong>Volumes</strong> — Where you manage block storage for your deployed instances. Create, resize, and delete volumes, view which instances they’re attached to, and create snapshots for backup or cloning purposes.</p>

<p><strong>Backups</strong> — Where you manage block storage backups for your volumes. These backups are stored in a separate Ceph pool. Ceph defaults to a replication factor of 3, so there are always three copies of the data, giving you redundancy in case of node failure.</p>

<p><strong>Snapshots</strong> — Point-in-time copies of your volumes. View, edit, and delete existing snapshots, use them as a source to create new volumes, or launch new instances from them. If you’ve spent any time in virtualization, you know the value of snapshots as a means of short-term recovery when testing updates or system changes.</p>

<p><strong>Groups</strong> — Manage logical collections of related volumes so you can operate on them as a unit. Useful for applications that use multiple volumes and need consistent handling, such as a database and its log volume. Create group snapshots to capture consistent point-in-time copies across all volumes at once.</p>

<h3 id="containers-kubernetes">Containers (Kubernetes)</h3>

<p>OpenMetal’s OpenStack platform also features container infrastructure, which means you can bring your Kubernetes right into your hosted private cloud and manage everything in one place.</p>

<p><strong>Clusters</strong> — Manage your container orchestration clusters. Create new clusters based on predefined templates, scale worker nodes up or down, and download kubeconfig credentials.</p>

<p><strong>Cluster Templates</strong> — Define the blueprint for how container clusters are built. OpenMetal provides a ready-to-use Kubernetes cluster template so you can spin up clusters quickly. This is native OpenStack container integration — not a custom OpenMetal layer. OpenMetal publishes <a href="https://openmetal.io/docs/manuals/kubernetes-guides">Kubernetes guides on their documentation site</a> for those who want to dig deeper.</p>

<h3 id="networking">Networking</h3>

<p>The networking section is where the “private cloud” part gets real. You’re not just clicking buttons — you’re designing networks the same way you would on-prem: segmentation, isolation, routing, and controlled exposure, but in software.</p>

<p><strong>Network Topology</strong> — A visual map of how your OpenStack networking is wired. It shows routers, networks, and subnets along with connected instances and floating IPs. You can quickly see how traffic flows in and out, and the graph tab creates a clear network diagram that helps you get the full picture. Love it.</p>

<p><strong>Networks</strong> — Manage virtual networks available to your project. Create and edit networks and subnets, choose whether they are shared or project-scoped, and control IP ranges, gateways, and DHCP settings. Want a typical setup with a public-facing network for load balancers and a private application network behind it? You can model that cleanly.</p>

<p><strong>Routers</strong> — Manage Layer 3 routing for your project’s networks. Create and configure virtual routers, attach internal subnets as interfaces, and connect those routers to an external network for north-south traffic. These act as a simple NAT so that internal networks can access the Internet without burning through public IP addresses.</p>

<p><strong>Security Groups</strong> — Virtual firewall rules that control traffic to and from your instances and ports. Rules are enforced at the virtual NIC level, letting you define clear, reusable access policies for different workload types. OpenMetal includes a few pre-created security groups right out of the gate as great examples.</p>
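<p>Conceptually, a security group is a default-deny rule list matched against each connection’s source, protocol, and port. Here’s a simplified Python model of that evaluation — real OpenStack rules carry more fields (direction, remote groups, IPv6), so treat this as a sketch of the idea, not the implementation.</p>

```python
import ipaddress

def allowed(rules, src_ip, port, proto="tcp"):
    """Return True if any rule admits the traffic.

    Each rule is (cidr, proto, port_min, port_max) -- a simplified model of
    OpenStack security group rules, which are default-deny for ingress.
    """
    ip = ipaddress.ip_address(src_ip)
    for cidr, r_proto, lo, hi in rules:
        if r_proto == proto and lo <= port <= hi \
                and ip in ipaddress.ip_network(cidr):
            return True
    return False  # nothing matched: drop
```

<p>Because rules attach at the virtual NIC, the same reusable list (say, “web tier: 443 from anywhere, 22 from the management subnet”) can be applied to every instance of that workload type.</p>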

<p><strong>Load Balancers</strong> — Configure and manage L4 and L7 load balancing for your applications. Create load balancers, define listeners on specific ports and protocols, and build pools of member instances. Perfect for exposing a service behind a single virtual IP and distributing traffic across multiple instances.</p>
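<p>At its simplest, a pool just hands successive connections to successive members. Here’s a round-robin sketch in Python — OpenStack’s load-balancing service (Octavia) supports this and other algorithms, and this toy class is only meant to show the distribution behavior, not Octavia’s API.</p>

```python
import itertools

class RoundRobinPool:
    """Toy model of an L4 listener distributing connections across members.

    Not Octavia code -- just the round-robin behavior a pool exhibits when
    you put several member instances behind one virtual IP.
    """
    def __init__(self, members):
        self._cycle = itertools.cycle(members)

    def next_member(self):
        """Return the member that should receive the next connection."""
        return next(self._cycle)
```
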

<p><strong>Floating IPs</strong> — Manage public IP addresses that can be mapped to instances or ports. Allocate new floating IPs from an external network, associate them with specific instances or ports, and release them when no longer needed.</p>

<p>OpenStack also includes <strong>VLAN trunking</strong> for carrying multiple segmented networks over a single interface, <strong>Network QoS</strong> for shaping traffic policies, and built-in <strong>VPN support</strong> for creating secure IPsec tunnels between your cloud networks and external environments.</p>

<p>The depth and flexibility in networking is something I have to admit I was pretty impressed by. In hyperscalers, every layer of networking feels like a separate product with a separate billable line item. In OpenMetal’s private cloud, the tooling is part of the platform. And on the bandwidth side, OpenMetal’s outbound traffic is typically billed using a <strong>95th percentile model</strong> rather than per-gigabyte micro-metering — making bandwidth costs far more predictable as you scale.</p>
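<p>The 95th percentile model is easy to reason about: collect periodic throughput samples over the billing period, throw away the top 5%, and bill at the highest remaining sample. A small Python sketch of that calculation — billing-grade implementations vary in sampling interval and rounding, so the exact method here is illustrative:</p>

```python
import math

def percentile_95(samples_mbps):
    """Bill at the 95th percentile of throughput samples.

    Sort the samples (typically one per 5 minutes), discard the top 5%,
    and charge for the highest remaining value. Short bursts above the
    billed rate are effectively free.
    """
    ordered = sorted(samples_mbps)
    index = math.ceil(0.95 * len(ordered)) - 1
    return ordered[index]
```

<p>So if your link idles at 100 Mbps and spikes to 900 Mbps for a handful of sample windows, you’re still billed at 100 — the opposite of per-gigabyte metering, where every spike shows up on the invoice.</p>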

<h3 id="orchestration--object-store">Orchestration &amp; Object Store</h3>

<p><strong>Orchestration (Heat)</strong> — Manage infrastructure as code. Work with templates that define instances, networks, volumes, and other resources as a single stack. Launch, monitor, update, and delete those stacks — and inspect individual resources and events when something goes wrong.</p>
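<p>HOT templates are normally written in YAML, but since JSON is valid YAML you can sketch a minimal stack as a Python dict and serialize it. The <code>OS::Nova::Server</code> resource type and <code>heat_template_version</code> key are standard Heat; the flavor, image, and network names below are placeholders you’d swap for your own.</p>

```python
import json

# Minimal HOT-format stack defining one server. "m1.small",
# "ubuntu-22.04", and "private-net" are placeholder names -- substitute
# the flavors, images, and networks that exist in your project.
stack_template = {
    "heat_template_version": "2018-08-31",
    "description": "Single-server demo stack",
    "resources": {
        "web01": {
            "type": "OS::Nova::Server",
            "properties": {
                "flavor": "m1.small",
                "image": "ubuntu-22.04",
                "networks": [{"network": "private-net"}],
            },
        }
    },
}

# Serialize for submission to Heat (via Horizon, CLI, or API).
hot_json = json.dumps(stack_template, indent=2)
```

<p>The payoff is that the whole stack — servers, volumes, networks — becomes one versionable artifact you launch, update, and delete as a unit.</p>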

<p><strong>Object Store (Swift)</strong> — Work with object-based storage instead of block volumes. Create and manage containers (like buckets), upload, download, and organize objects, and control access by making containers private or public. Ideal for durable, scalable storage for files, backups, logs, or application assets.</p>

<h3 id="admin--identity">Admin &amp; Identity</h3>

<p><strong>Admin</strong> — The control plane for the OpenStack cloud as a whole rather than a single project. Watch overall usage, manage global resources, work with hypervisors and host aggregates, define shared flavors and images, and review quotas and usage across projects.</p>

<p><strong>Identity</strong> — Control who can access the cloud and what they’re allowed to do. Manage domains, projects, users, and groups, and assign roles that define permissions for each. Work with application credentials and, in some deployments, federation or external identity sources.</p>

<p><strong>Workflow (Mistral)</strong> — Manage multi-step operational task automation. Define workflows for common or complex operational procedures — provisioning, maintenance actions, or integrations that touch several OpenStack services — and run them in a controlled and auditable way.</p>

<hr />

<h2 id="final-thoughts">Final Thoughts</h2>

<p>I gotta say, I’m really impressed with what OpenMetal has built here. The way they’ve combined ease of deployment and the performance of their hardware with the power of OpenStack makes for a very compelling offering.</p>

<p>The public cloud feels like it’s getting less and less reliable as time goes on, while getting more and more expensive to operate in — which feels like it’s moving in the wrong direction. OpenMetal’s real value is giving you back the control, flexibility, and freedom that we all gave up by moving into the public cloud. You get your workloads in the cloud and the flexibility that provides, without giving up infrastructure primitives like root access and network segmentation, while gaining predictable performance on dedicated hardware. To me, that’s the perfect cloud scenario.</p>

<p>Then there’s the cost structure. I know from personal experience that every single thing in the public cloud has a charge. The VM costs money, the network interface for the VM costs money, the disk costs money, the virtual network you connect the VM to costs money — and that’s just one workload, not counting ingress and egress fees, VPN connections, and everything else. Hosted private cloud is the answer to that. You’re operating inside defined infrastructure capacity on dedicated hardware. Performance is consistent, budgeting is predictable, and scaling decisions are deliberate, not reactive to billing alerts.</p>

<p>As an engineer whose heart is in infrastructure, this is the closest you can come to having the best of both worlds. You control every part of how you provision and deploy your private cloud. You understand exactly what hardware your environment runs on, so you can model capacity and make decisions with a lot more confidence. And on top of that, you get all of the benefits of being in the cloud, engineering support from OpenMetal when you need it, and no worries about the physical hardware.</p>

<p>And finally, being based on OpenStack means you have all of the API-driven Infrastructure as Code functionality, automation, true multi-tenancy, and — the biggest value — a standard open platform with no vendor lock-in.</p>

<hr />

<h2 id="closing">Closing</h2>

<p>Special thanks to <strong>OpenMetal</strong> for giving me the opportunity to really dig into their hosted private cloud platform and see the possibilities for myself. It’s really changed my views on how the cloud can actually benefit your business!</p>

<p>If you’re a business that’s heavily invested in the cloud and looking for how you can take back control of costs, avoid public cloud lock-in, and build the private cloud that works best for your business, head over to <a href="https://openmetal.io/">openmetal.io</a> and get started today! They have also provided a generous offer just for my channel — link and details in the description below.</p>

<p>Thanks for watching!</p>]]></content><author><name>2GT_BK</name></author><category term="Cloud" /><category term="Infrastructure" /><category term="OpenMetal" /><category term="OpenStack" /><category term="PrivateCloud" /><category term="Ceph" /><category term="Sponsored" /><summary type="html"><![CDATA[A sponsored deep-dive into OpenMetal's hosted private cloud platform built on OpenStack and Ceph — and why it's a compelling alternative to AWS, Azure, and GCP for businesses with steady-state workloads.]]></summary></entry><entry><title type="html">Is Proxmox Datacenter Manager Dead? PegaProx May Have Just Killed It</title><link href="https://2guystek.tv/virtualization/2026/03/12/pegaprox-kills-pdm.html" rel="alternate" type="text/html" title="Is Proxmox Datacenter Manager Dead? PegaProx May Have Just Killed It" /><published>2026-03-12T00:00:00+00:00</published><updated>2026-03-12T00:00:00+00:00</updated><id>https://2guystek.tv/virtualization/2026/03/12/pegaprox-kills-pdm</id><content type="html" xml:base="https://2guystek.tv/virtualization/2026/03/12/pegaprox-kills-pdm.html"><![CDATA[<h2 id="watch-the-full-video">Watch the Full Video</h2>

<p><a href="https://youtu.be/qnq0Y9mJgXA">Watch the full video on YouTube</a></p>

<hr />

<h2 id="introduction">Introduction</h2>

<p>Proxmox Datacenter Manager might already be dead, and an open source project may have just killed it. Buckle up, friends, cause this is going to be wild!</p>

<p>Hey there homelabbers, self-hosters, IT-pros, and engineers. Rich here! I get a lot of emails from vendors, software makers, and people who are hoping I’ll create a video about their product, and as much as I appreciate the emails, I don’t usually do so because, well, it’s either not in my niche or it’s not something I want to put my name next to, if you get what I mean. Earlier this week I got an email from the creators of an open source project called <strong>PegaProx</strong> introducing me to their work.</p>

<p>I did what I normally do: I read the email, then went to take a cursory look at their site. What I found truly blew me away. PegaProx may have single-handedly not only killed PDM, but also changed the game for what a modern PVE GUI should be. Enough hyperbole. Let’s dig into PegaProx and its features, see if this is something you should be running if you’re using Proxmox, and at the end I’ll show you how to deploy it. Let’s get to it!</p>

<hr />

<h2 id="what-is-pegaprox">What is PegaProx?</h2>

<p>PegaProx is an open-source, web-based management platform for Proxmox environments that gives you a single pane of glass for multiple clusters, nodes, and workloads. It provides:</p>

<ul>
  <li><strong>Centralized Monitoring</strong>: Overview of all your clusters and workloads from one dashboard</li>
  <li><strong>VM and Container Management</strong>: Complete lifecycle management of your virtual machines and LXC containers</li>
  <li><strong>Automated Load Balancing</strong>: Distribute workloads across your cluster automatically</li>
  <li><strong>Cross-Cluster Migrations</strong>: Move workloads between clusters seamlessly</li>
  <li><strong>High Availability Orchestration</strong>: Advanced HA functionality beyond standard PVE capabilities</li>
  <li><strong>Role-Based Access Control</strong>: Fine-grained permissions management through a unified dashboard</li>
</ul>

<p>All this is designed to reduce operational complexity and give administrators better visibility and control over distributed infrastructure.</p>

<h3 id="pegaprox-vs-proxmox-datacenter-manager">PegaProx vs Proxmox Datacenter Manager</h3>

<p>If you thought this sounds a lot like what Proxmox Datacenter Manager does, you’d be right. But unlike PDM with its limited feature set, PegaProx can do practically everything a Proxmox user would want to do and more, with the exception of PBS visibility.</p>

<p><strong>Key Advantages Over PDM:</strong></p>

<ul>
  <li>Create and customize VMs and containers directly in the interface</li>
  <li>Manage PVE host configurations</li>
  <li>Set up automated load balancing across your cluster</li>
  <li>Carry out cross-cluster migrations</li>
  <li>Beautiful, modern user interface</li>
  <li>No “No Valid Subscription” warnings (it’s open source!)</li>
</ul>

<hr />

<h2 id="pegaprox-feature-walkthrough">PegaProx Feature Walkthrough</h2>

<h3 id="dashboard-and-overview">Dashboard and Overview</h3>

<p>This is the login screen for PegaProx. Once logged in, we land on the <strong>All Clusters Overview</strong> page. Which, can we just take a moment to take in the UI here? Seriously, beautiful!</p>

<p>This page is full of cards that provide you with macro details about all of the hosts and clusters you’ve added to PegaProx, including:</p>
<ul>
  <li>All clusters added</li>
  <li>Top resource consumers in a scrollable list</li>
</ul>

<p>You can also sort this information by name, health, nodes, VMs, CPU, and RAM.</p>

<h3 id="cluster-overview">Cluster Overview</h3>

<p>Clicking on a cluster lands you on the overview page, with:</p>
<ul>
  <li>Cards for each node in your cluster in the middle left</li>
  <li>Overall cluster health and last migrations cards on the right</li>
</ul>

<p>Clicking ‘show all’ in the card gives you the full details of the current state of your node’s resource usage.</p>

<h3 id="resources-tab">Resources Tab</h3>

<p>The Resources tab is where we get a complete look at all of the workloads running. Both VMs and LXC containers are listed here, and while the cards view is nice, PegaProx provides helpful filters:</p>

<ul>
  <li><strong>State Filters</strong>: All, Active, Stopped</li>
  <li><strong>Type Filters</strong>: VMs, LXC containers</li>
  <li><strong>View Options</strong>: Cards view, list view, and the awesome compact view</li>
</ul>

<p>In the compact view, once you select one of your workloads, you get:</p>
<ul>
  <li>Macro details (CPU, RAM, Disk, Uptime)</li>
  <li>Quick actions (shutdown, reboot, console access)</li>
  <li>Configuration modification - all configurable options available in PVE are accessible here</li>
</ul>

<p>You can:</p>
<ul>
  <li>Change hardware and disks</li>
  <li>Manage network settings</li>
  <li>Manage snapshots, backups, and replication</li>
  <li>View history and change options</li>
  <li>Access historical graphs</li>
  <li>Trigger migrations between nodes or across clusters</li>
  <li>Clone, force reset, or delete workloads</li>
  <li>Create new VMs and containers with the same fields as PVE</li>
</ul>

<h3 id="snapshot-management">Snapshot Management</h3>

<p>The <strong>snapshot overview</strong> is something that PVE has been missing forever. The snapshot view shows you all of the snapshots in your cluster. This is one of those quality of life features that exist in enterprise virtualization platforms to make admins’ lives easier and help you clean up wasted space due to forgotten snapshots.</p>

<h3 id="datacenter-management">Datacenter Management</h3>

<h4 id="summary">Summary</h4>
<p>The summary page gives you macro information about the datacenter including:</p>
<ul>
  <li>Cluster quorum status</li>
  <li>Node state</li>
  <li>Guests and container state</li>
  <li>Macro resource usage</li>
</ul>

<h4 id="cluster-tab">Cluster Tab</h4>
<p>Details about your cluster configuration and status.</p>

<h4 id="options">Options</h4>
<p>View and edit all of your configured datacenter options.</p>

<h4 id="storage">Storage</h4>
<p>Detailed view of all storage configured, including:</p>
<ul>
  <li>Name, type, content, and path</li>
  <li>Connected nodes</li>
  <li>Shared storage status</li>
  <li>Storage operations (add, modify, delete)</li>
  <li>Multipath redundancy configuration with a single click</li>
</ul>

<h4 id="sdn-software-defined-networking">SDN (Software-Defined Networking)</h4>
<p>Manage all your software-defined networking:</p>
<ul>
  <li>Add, change, and remove zones and VNets</li>
  <li>Apply settings to all nodes in your cluster</li>
</ul>

<h4 id="backups">Backups</h4>
<ul>
  <li>Manage configured backups</li>
  <li>Create new backup jobs</li>
  <li>Delete backup jobs</li>
</ul>

<h4 id="replication">Replication</h4>
<p>Same functionality as backups but for replication jobs.</p>

<h4 id="proxmox-native-ha">Proxmox Native HA</h4>
<ul>
  <li>Manage and control your PVE HA functionality</li>
  <li>Add and remove resources to HA</li>
  <li>Create HA groups</li>
</ul>

<h4 id="cpu-compatibility">CPU Compatibility</h4>
<p>Change the default CPU compatibility mode for your datacenter.</p>

<h4 id="firewall">Firewall</h4>
<p>Create and manage firewall rules for your datacenter.</p>

<h3 id="datastore-management">Datastore Management</h3>

<p>The <strong>Datastore</strong> tab allows you to browse your provisioned datastores and see what’s held within them:</p>
<ul>
  <li>VM and container disks</li>
  <li>Backups</li>
  <li>ISOs</li>
</ul>

<h4 id="storage-balancing-experimental">Storage Balancing (Experimental)</h4>
<p>Enable automated balancing of your workloads across storage for better performance. (<em>Use at your own risk!</em>)</p>

<h3 id="automation-features">Automation Features</h3>

<p>The <strong>Automation</strong> tab allows you to:</p>
<ul>
  <li>Configure scheduled actions</li>
  <li>View tags and labels</li>
  <li>Create email alerts when thresholds are exceeded</li>
  <li>Create affinity and anti-affinity rules for workloads</li>
  <li>Configure and run custom scripts on cluster nodes</li>
</ul>

<h3 id="reports">Reports</h3>

<p>Get usage information over time, selectable by hour, day, and week, to see historically who the heavy-hitters are.</p>

<h3 id="settings">Settings</h3>

<p>Configure:</p>
<ul>
  <li>Workload balancing for your cluster</li>
  <li>Node updates</li>
  <li>Rolling cluster updates</li>
  <li>And more</li>
</ul>

<hr />

<h2 id="ui-and-design-philosophy">UI and Design Philosophy</h2>

<p>I’ve said this before, and I’ll say it again - I’m not a fan of the PVE UI. I think it’s dated, it makes the settings you actually want hard to find, and it’s downright ugly. In contrast, PegaProx has kinda blown me away with its complete reimagining of what the Proxmox user experience could be.</p>

<h3 id="important-notes">Important Notes</h3>

<p>I don’t mean to paint PegaProx as a production-ready replacement for all your PVE and PDM activities. At the time this video was made, PegaProx is considered <strong>beta</strong>, and some things may behave differently by the time it reaches a production release.</p>

<p><strong>Known Issues:</strong></p>
<ul>
  <li>Some UI elements may appear in German even with English selected</li>
  <li>Still missing PBS (Proxmox Backup Server) visibility</li>
</ul>

<p><strong>Additional Features Not Covered:</strong></p>
<ul>
  <li>Multiple user-selectable themes</li>
  <li>Extensive security and user management</li>
  <li>AD domain integration</li>
  <li>OIDC for single sign-on</li>
  <li>Two-factor authentication</li>
  <li>Granular permissions for users and groups</li>
  <li>Tenant creation for logical segmentation</li>
</ul>

<hr />

<h2 id="how-to-deploy-pegaprox">How to Deploy PegaProx</h2>

<h3 id="step-1-create-the-container-in-pve">Step 1: Create the Container in PVE</h3>

<ol>
  <li>Log into your PVE host and click the <strong>Create CT</strong> button in the top right corner</li>
  <li>Give the container a hostname (e.g., “PegaProx”) as an FQDN</li>
  <li>Set a root account password</li>
  <li>Select your preferred Linux template (Ubuntu 24.04 recommended)</li>
  <li><strong>Disk Space</strong>: Default 8GB is fine, but 16GB provides room for OS updates</li>
  <li><strong>CPU Cores</strong>: Minimum 1 core for small homelab, but 4 cores recommended</li>
  <li><strong>RAM</strong>: Minimum 1GB required, but 4GB recommended</li>
  <li><strong>Network</strong>: Set a <strong>static IP address</strong> (don’t use DHCP for servers)</li>
  <li><strong>DNS</strong>: Configure as needed or use host settings</li>
  <li>Check <strong>“Start after created”</strong> to have the container ready</li>
  <li>Click <strong>Finish</strong></li>
</ol>
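<p>For the keyboard-inclined, the GUI steps above map onto a single <code class="language-plaintext highlighter-rouge">pct create</code> call on the PVE host. This is a sketch with assumed values: VMID <code class="language-plaintext highlighter-rouge">105</code>, the Ubuntu 24.04 template on <code class="language-plaintext highlighter-rouge">local</code> storage, a <code class="language-plaintext highlighter-rouge">local-lvm</code> root disk, and example addresses. Adjust all of them for your environment.</p>

```shell
# Assemble the equivalent `pct create` command using the sizing from the steps
# above (4 cores, 4 GB RAM, 16 GB disk, static IP). VMID, template path,
# storage, bridge, and addresses are assumptions -- substitute your own.
CMD="pct create 105 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
  --hostname pegaprox \
  --cores 4 --memory 4096 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1 \
  --start 1"

# Printed for review rather than executed; run it on the PVE host once it looks right.
echo "$CMD"
```

<p>You’d still set the root password afterwards (for example with <code class="language-plaintext highlighter-rouge">pct exec 105 -- passwd</code>), or pass <code class="language-plaintext highlighter-rouge">--password</code> at create time as the GUI does.</p>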

<h3 id="step-2-prepare-the-container">Step 2: Prepare the Container</h3>

<ol>
  <li>Open the container console in PVE</li>
<li>Log in as root with the password you set</li>
  <li>Install curl: <code class="language-plaintext highlighter-rouge">apt install curl -y</code></li>
  <li>Clear the console: <code class="language-plaintext highlighter-rouge">clear</code></li>
</ol>

<h3 id="step-3-download-and-run-the-install-script">Step 3: Download and Run the Install Script</h3>

<ol>
<li>Visit <a href="https://pegaprox.com/">pegaprox.com</a></li>
  <li>Click any of the 3 download buttons</li>
  <li>Copy the install script link</li>
  <li>In the container console, paste and run the install command</li>
  <li>Wait for supporting packages to install</li>
</ol>

<h3 id="step-4-configure-installation">Step 4: Configure Installation</h3>

<p>When the installer prompts you (its step 6), choose your port configuration:</p>
<ul>
  <li>Option 1: Default ports</li>
  <li><strong>Option 2: Professional setup</strong> (standard SSL ports) - Recommended</li>
  <li>Option 3: Custom ports</li>
</ul>

<p>The installation will complete and provide you with the web interface URL.</p>

<h3 id="step-5-access-and-login">Step 5: Access and Login</h3>

<ol>
  <li>Copy the web interface URL from the installation completion message</li>
  <li>Open it in a browser</li>
  <li>Accept the self-signed certificate warning (you can add a valid cert later)</li>
  <li>Login with:
    <ul>
      <li><strong>Username</strong>: <code class="language-plaintext highlighter-rouge">pegaprox</code></li>
      <li><strong>Password</strong>: <code class="language-plaintext highlighter-rouge">admin</code></li>
    </ul>
  </li>
  <li>Change the default password when prompted</li>
</ol>

<h3 id="step-6-add-your-proxmox-cluster">Step 6: Add Your Proxmox Cluster</h3>

<ol>
  <li>Click <strong>“Add Cluster”</strong> in the top right</li>
  <li>Enter a cluster name (e.g., “2GT-PVE”)</li>
  <li>Enter the IP address or DNS name of your Proxmox cluster/host</li>
  <li>Enter login credentials (root@pam recommended with root password)</li>
  <li>Click <strong>“Add Cluster”</strong></li>
</ol>

<p>You can now drill down into your cluster’s usage, health, and manage all your VMs and containers!</p>

<h3 id="sizing-recommendations">Sizing Recommendations</h3>

<p>I’m using generous resources in this walkthrough, but PegaProx has a comprehensive docs site with minimum sizing requirements and prerequisites. <strong>Highly recommend checking their documentation</strong> to properly size PegaProx for your environment.</p>

<hr />

<h2 id="final-thoughts">Final Thoughts</h2>

<p><strong>Holy shit.</strong> PegaProx effectively replaces both PVE and PDM’s user interfaces, and does it in style! I can’t say enough nice things about what PegaProx is building here.</p>

<h3 id="message-to-proxmox-server-solutions">Message to Proxmox Server Solutions</h3>

<p>Proxmox Server Solutions, listen up… This is the GUI I want for PVE, PDM, and PBS. Reach out to these people, pay them a ton of money, and adopt this as your next-generation UI, <strong>seriously</strong>.</p>

<h3 id="pegaprox-as-a-pdm-replacement">PegaProx as a PDM Replacement</h3>

<p>PegaProx looks like it effectively kills PDM, as it exists today, entirely. There’s no need to use PDM when PegaProx does everything it does, and does it so much more beautifully. But calling it a PDM replacement doesn’t really do it justice. <strong>PegaProx kills the PVE GUI as well</strong>, since practically every single thing you’d do there, you can do here.</p>

<h3 id="real-world-impact">Real-World Impact</h3>

<p>I’ve done test deploys of Proxmox in business settings, and one of the biggest issues I have to overcome with non-technical people who have to administer PVE is the UI. The dated interface is one of the biggest barriers to entry, and after seeing PegaProx, I realize my previous suggestions for a UI redesign didn’t go nearly far enough.</p>

<h3 id="current-limitations">Current Limitations</h3>

<p><strong>Eye candy aside</strong>, PegaProx brings you cluster load balancing as a standard feature - something that is <strong>still not available in PVE</strong> after all these years - and gives you PDM features like cross-cluster migration as well.</p>

<p><strong>Missing Features:</strong></p>
<ul>
  <li>PBS (Proxmox Backup Server) support</li>
</ul>

<p><strong>Beta Considerations:</strong></p>
<ul>
  <li>Some features may break</li>
  <li>Continue to monitor for updates</li>
</ul>

<h3 id="bottom-line">Bottom Line</h3>

<p>It’s <strong>open source and free to use</strong>, and you won’t see any more ‘No Valid Subscription’ warnings! Spin up a container, deploy PegaProx right now, kick the tires on it, use it, find bugs, and report your findings to the community.</p>

<hr />

<h2 id="closing">Closing</h2>

<p>Thanks for watching this video, folks, and thank you to the fine people who support us through Patreon and the YouTube Membership program. If you’d like to support what we do here, consider checking those out. Join our community Discord and chat with me and like-minded homelabbers, geeks, and nerds, and as always, we’ll see you on the next one!</p>]]></content><author><name>2GT_BK</name></author><category term="Virtualization" /><category term="Proxmox" /><category term="PegaProx" /><category term="PDM" /><category term="Virtualization" /><category term="Homelab" /><summary type="html"><![CDATA[Exploring how PegaProx, an open-source management platform, is revolutionizing Proxmox environments and potentially replacing PDM entirely.]]></summary></entry><entry><title type="html">VergeOS Installation Walkthrough: From ISO to Your First VM</title><link href="https://2guystek.tv/virtualization/infrastructure/homelab/2026/02/17/vergeos-installation-walkthrough.html" rel="alternate" type="text/html" title="VergeOS Installation Walkthrough: From ISO to Your First VM" /><published>2026-02-17T00:00:00+00:00</published><updated>2026-02-17T00:00:00+00:00</updated><id>https://2guystek.tv/virtualization/infrastructure/homelab/2026/02/17/vergeos-installation-walkthrough</id><content type="html" xml:base="https://2guystek.tv/virtualization/infrastructure/homelab/2026/02/17/vergeos-installation-walkthrough.html"><![CDATA[<p><a href="https://youtu.be/El-11ydk-Jo">Watch the full video on YouTube</a></p>

<p>Back in October 2024, I introduced you to <strong>VergeOS</strong> as a compelling alternative to VMware, covering its architecture, user interface, and overall user experience. This time, we’re going to walk through the installation of VergeOS — step by step — from burning the ISO and installing the OS all the way to spinning up your first virtual machine. Let’s get to it!</p>

<hr />

<p>Hey there, homelabbers, self-hosters, IT-Pros, and Engineers. Rich here! It had always been my plan to take you through the installation process, start to finish, for VergeOS so you could try it out for yourself. Thanks to this video’s sponsor, <strong>Verge.IO</strong> — the makers of VergeOS — I’m able to bring this to you right now!</p>

<p>Before we get into the step-by-step how-to, let’s get a quick recap of VergeOS out of the way first.</p>

<hr />

<h2 id="what-is-vergeos">What Is VergeOS?</h2>

<p>VergeOS is essentially a data-center operating system that collapses compute, storage, networking, backup, and even DR into one unified platform. Unlike traditional HCI or the classic VMware stack, VergeOS is one unified codebase with one UI and one lifecycle — and is not a cobbled-together pile of different products. That’s what makes it special: it’s a true private cloud operating system, which massively simplifies operations and slashes costs. And in a post-VMware world, that actually matters.</p>

<p>VergeOS is compelling because it lets you:</p>

<ul>
  <li>Modernize your private cloud</li>
  <li>Reuse existing hardware</li>
  <li>Migrate off VMware at your own pace</li>
  <li>Run the whole data center like a single system instead of babysitting a stack of products</li>
</ul>

<p>This install how-to is going to walk you through every part of the installation and setup process, so by the end you should have VergeOS completely set up and ready to use. Our first stop on the installation journey is hardware requirements.</p>

<hr />

<h2 id="hardware-requirements-for-vergeos">Hardware Requirements for VergeOS</h2>

<p>Let’s get the minimum and recommended hardware requirements out of the way first.</p>

<h3 id="cpu">CPU</h3>

<p>VergeOS requires a <strong>64-bit x86 CPU</strong> with a minimum clock speed of <strong>2.7 GHz</strong>. Both AMD and Intel CPUs are supported, and hardware virtualization must be enabled in the BIOS.</p>

<h3 id="memory">Memory</h3>

<p>VergeOS requires a <strong>minimum of 16GB of RAM per node</strong> dedicated to VergeOS itself. You’ll need an additional <strong>1.5GB of RAM for each terabyte of raw vSAN storage</strong>, and on top of the base requirements, you’ll also need additional RAM for your virtual workloads.</p>
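<p>Those sizing rules reduce to simple arithmetic: 16 GB base, plus 1.5 GB per TB of raw vSAN, plus whatever your guests need. A quick sketch with assumed example figures (20 TB raw vSAN, 64 GB reserved for workloads):</p>

```shell
# Per-node RAM estimate from the VergeOS requirements above:
#   16 GB base + 1.5 GB per TB of raw vSAN + RAM reserved for guest workloads.
# vsan_tb and guest_gb are example figures -- plug in your own.
vsan_tb=20
guest_gb=64
needed_gb=$(awk -v t="$vsan_tb" -v g="$guest_gb" 'BEGIN { printf "%.0f", 16 + 1.5*t + g }')
echo "Plan for at least ${needed_gb} GB of RAM per node"
```

<p>For this example that works out to 110 GB per node, which is why VergeOS nodes tend to be RAM-heavy.</p>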

<h3 id="storage">Storage</h3>

<p>VergeOS requires <strong>NVMe or SSD disks</strong> for metadata and workload storage, with NVMe direct-attached storage recommended for optimal performance. For systems with storage controllers, Verge.io recommends a dedicated HBA configured in IT mode or JBOD mode for direct disk access. Mechanical disks are only recommended for use in an archive tier and never for primary storage.</p>

<h3 id="networking">Networking</h3>

<ul>
  <li><strong>Minimum 1 GbE per node</strong> for management UI access and guest VM external network access</li>
  <li><strong>Minimum 1 x 10 GbE per node</strong> for the internal core network used for storage replication and synchronization</li>
</ul>

<p>One of the nice things about VergeOS is that it’s designed to run on vanilla x86 hardware and doesn’t require a specific vendor’s HCL — meaning you can often deploy VergeOS on servers you already own and it’ll work great.</p>

<p>For those who don’t have hardware available, <strong>Verge.io offers a free test-drive lab</strong> where you can explore VergeOS without needing physical hardware or committing to an install: <a href="https://www.verge.io/the-ultimate-test-drive/">verge.io/the-ultimate-test-drive/</a></p>

<p>Alright, prereqs out of the way. Before we kick this off, make sure you have a <strong>USB stick of at least 8GB</strong> in size. Let’s get to it!</p>

<hr />

<h2 id="step-1-download-the-vergeos-iso">Step 1: Download the VergeOS ISO</h2>

<p>Open your browser and head over to <a href="https://updates.vergeos.com/download/">updates.vergeos.com/download/</a> and press enter. The download should start automatically. Once the download is complete, we’ll move on to creating the bootable USB stick.</p>

<hr />

<h2 id="step-2-create-a-bootable-usb-stick-with-rufus">Step 2: Create a Bootable USB Stick with Rufus</h2>

<p>There are a variety of tools for building bootable USB sticks from ISOs. On Windows, my go-to is the free tool <strong>Rufus</strong>. Head over to <a href="https://rufus.ie/en/">rufus.ie</a> and click Download at the top, then select the top link in the list to start the download.</p>

<p>Once Rufus is downloaded:</p>

<ol>
  <li>Open Rufus and plug in your 8GB or larger USB stick</li>
  <li>Click the <strong>Select</strong> button and choose the VergeOS ISO you just downloaded, then click <strong>Open</strong></li>
  <li>Click <strong>Start</strong> to begin</li>
  <li>When prompted, select <strong>“Write in DD image mode”</strong> and click <strong>OK</strong></li>
  <li>Rufus will warn you that all data on the USB stick will be wiped — if you’re good with that, click <strong>OK</strong></li>
</ol>

<p>Sit back and let Rufus build the bootable USB. This can take some time to complete. Once it’s done, click <strong>Close</strong> and we’re ready to install!</p>

<hr />

<h2 id="step-3-install-vergeos">Step 3: Install VergeOS</h2>

<p>Now it’s time to install VergeOS on hardware. Booting from a USB stick is typically straightforward, though the exact method depends on your system’s manufacturer. I’ll be using a Supermicro server in this walkthrough, but the general process is the same across most systems.</p>

<p>Insert the bootable USB stick, power on the host, and tell the system to boot from the USB. Once the initial boot completes, you’ll see the <strong>VergeOS boot loader screen</strong>. Wait out the 10-second countdown or press Enter to continue. The installer will take a moment to start up depending on your hardware.</p>

<h3 id="node-type">Node Type</h3>

<p>The first step is choosing the type of node you’re deploying. Regardless of whether you’re deploying a single system or a multi-node cluster, <strong>the first system must be a controller</strong>. In a multi-node cluster, your first two nodes must both be controllers for redundancy and fault tolerance. Press Enter to select <strong>Controller</strong>.</p>

<h3 id="new-install-or-join-existing">New Install or Join Existing?</h3>

<p>You’re asked if this is a new install or if you’re joining an existing system. Since we’re deploying a new cluster, leave this on <strong>Yes</strong> and press Enter.</p>

<h3 id="timezone-configuration">Timezone Configuration</h3>

<ol>
  <li><strong>Region</strong> — Select your region (e.g., America) and press Enter</li>
  <li><strong>Timezone</strong> — Scroll to your timezone (e.g., Los_Angeles) and press Enter</li>
  <li><strong>NTP Servers</strong> — VergeOS offers official NTP servers by default, which work for most setups. If you have dedicated NTP servers on your network, enter them here. Otherwise, leave the defaults and press Enter</li>
  <li><strong>Date</strong> — Verify the year, month, and day are correct and press Enter</li>
  <li><strong>Time</strong> — The system expects 24-hour clock format. Adjust if necessary and press Enter</li>
</ol>

<h3 id="cluster-name-and-admin-account">Cluster Name and Admin Account</h3>

<ul>
  <li><strong>Cluster Name</strong> — This is the name of the cluster itself, not an individual node. Choose accordingly (e.g., <code class="language-plaintext highlighter-rouge">2gt-cluster</code>) and press Enter</li>
  <li><strong>Admin Username</strong> — VergeOS defaults to <code class="language-plaintext highlighter-rouge">admin</code>, which is fine for most setups. Press Enter to accept</li>
  <li><strong>Admin Password</strong> — Set and confirm your admin password, then press Enter</li>
  <li><strong>Admin Email</strong> — Enter an email address for the admin user and press Enter</li>
</ul>

<h3 id="network-configuration">Network Configuration</h3>

<p>Your host needs at least two network interfaces. The first configuration is for the <strong>core network</strong>, used for inter-node storage replication and cluster health management.</p>

<p>On my system, the first NIC is connected to my LAN and the second is on a dedicated layer-2 VLAN for the core network. To select the correct interface:</p>

<ol>
  <li>Press <strong>Spacebar</strong> to deselect the first NIC</li>
  <li>Arrow down to the second NIC and press <strong>Spacebar</strong> to select it</li>
  <li>Press <strong>Enter</strong> to continue</li>
</ol>

<p>Next, configure the physical switch settings for the core network interface. I recommend giving switches descriptive names. Press Enter, clear <strong>Switch 1</strong>, type <strong>Core Network</strong>, and press Enter. Arrow to <strong>Finish</strong> and press Enter.</p>

<p>VergeOS will check the interface for an existing core network, then move on to the remaining network interface. Select it and similarly rename <strong>Switch 2</strong> to <strong>LAN</strong>. Then, arrow down to the <strong>Core-Network</strong> field, press Enter to edit it, clear <strong>yes</strong>, type <strong>no</strong>, and press Enter. Arrow to <strong>Finish</strong> and press Enter.</p>

<h3 id="external-network">External Network</h3>

<p>VergeOS will now ask which physical network provides external access to the UI, LAN, and WAN. Since we named our switch <strong>LAN</strong>, VergeOS can easily identify the right one — leave it set to <strong>LAN</strong> and press Enter.</p>

<p>Now configure the external network interface:</p>

<ul>
  <li><strong>VLAN ID</strong> — If your external connection doesn’t use a tagged VLAN, leave this blank and press Enter (most home/lab setups fall here)</li>
  <li><strong>IP Address</strong> — I strongly recommend a static IP. Enter your address in CIDR notation (e.g., <code class="language-plaintext highlighter-rouge">172.24.1.50/24</code>) and press Enter</li>
  <li><strong>Default Gateway</strong> — Enter or confirm your default gateway (e.g., <code class="language-plaintext highlighter-rouge">172.24.1.1</code>) and press Enter</li>
  <li><strong>DNS Servers</strong> — Verify and add any additional DNS servers for redundancy, then press Enter</li>
  <li><strong>IPMI</strong> — If your server has IPMI and you’d like to configure it here, do so. Otherwise, press Enter to skip</li>
</ul>

<h3 id="disk-configuration">Disk Configuration</h3>

<ul>
  <li><strong>Disk Encryption</strong> — Choose whether to enable disk-level encryption for vSAN storage based on your security requirements. We’ll leave this set to <strong>No</strong> and press Enter</li>
  <li><strong>Storage Layout</strong> — VergeOS automatically builds storage tiers based on the disks in your system. You can accept the default configuration (which is what I’ll do) or customize. Press Enter to continue</li>
  <li><strong>Manual Tier Assignment</strong> — If you want to reorganize disks into different tier groups, say Yes here. I’m happy with the VergeOS suggestions, so I’ll arrow to <strong>No</strong> and press Enter</li>
  <li><strong>Over-Provisioning Tier</strong> — Select a storage tier to use for over-provisioning and failover in the event of memory overcommitment. I’ll select my Tier 3 (largest capacity) and press Enter</li>
  <li><strong>Over-Provisioning Per Drive</strong> — Define the storage space per drive to allocate. The default of 4GB per drive is fine, so press Enter</li>
  <li><strong>Confirm Total</strong> — Review the total over-provisioning space and press Enter to accept</li>
</ul>
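<p>The over-provisioning total the installer asks you to confirm is just the per-drive allocation multiplied by the number of drives in the tier. A quick sanity check, assuming 8 drives at the default 4 GB each (both are example figures):</p>

```shell
# Total over-provisioning space = drives in the selected tier x per-drive allocation.
# drives and per_drive_gb are assumed example values -- use your own counts.
drives=8
per_drive_gb=4
total_gb=$(( drives * per_drive_gb ))
echo "${total_gb} GB total over-provisioning"
```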

<p>VergeOS will now format the drives, prepare them for vSAN, and complete the vSAN implementation. This can take a while depending on how many disks you have, so give it time. Once vSAN is built, VergeOS will install and prepare the OS packages — also potentially time-consuming.</p>

<h3 id="final-steps">Final Steps</h3>

<p>When installation completes, you’ll be asked if you want to register the EFI boot partitions with your system’s BIOS. Leave this on <strong>Yes</strong> and press Enter.</p>

<p><strong>Congratulations!</strong> You’ve completed the initial installation of VergeOS. Remove the bootable USB stick and press Enter to reboot. After the system restarts and VergeOS starts up, you’ll land on the VergeOS console screen. Great work!</p>

<hr />

<h2 id="step-4-access-the-management-ui">Step 4: Access the Management UI</h2>

<p>Pop open a browser and navigate to the IP address you configured during setup. For me, that’s <code class="language-plaintext highlighter-rouge">172.24.1.50</code>.</p>

<p>Fresh installs use a self-signed certificate, so your browser will alert you. Click <strong>Advanced</strong>, then <strong>Proceed to the website</strong>. You’ll land on the VergeOS login page. Enter the admin username and password you set during installation and click <strong>Sign In</strong>.</p>

<p>Welcome to the VergeOS web management UI! By default it’s in Light Mode — head to the top right corner, click the sun icon, and select the <strong>Dark Theme</strong>. Much better.</p>

<h3 id="ui-overview">UI Overview</h3>

<p>The VergeOS UI is broken into three main sections:</p>

<ul>
  <li><strong>Top navigation</strong> — Organizes the major sections of the platform</li>
  <li><strong>Main content window</strong> — The primary working area</li>
  <li><strong>Left sub-menu</strong> — Subcategories, options, and configurations for the current section</li>
</ul>

<p>The <strong>main dashboard</strong> is your high-level overview of the entire VergeOS cluster, broken into easy-to-understand cards covering VMs, networks, nodes, alerts, storage health and performance, and logs.</p>

<p>Here’s a quick tour of the major sections:</p>

<ul>
  <li><strong>Virtual Machines</strong> — Build, run, and operate workloads end-to-end. Spin up VMs from ISOs or templates, assign CPU, RAM, and storage tiers, attach networks, and manage lifecycle actions like snapshots, cloning, live migration, and HA.</li>
  <li><strong>Files</strong> — Upload data including ISOs, manage downloaded marketplace images, create and share data hosted in the cluster.</li>
  <li><strong>Tenants</strong> — True multi-tenancy with completely isolated logical tenants — their own users, quotas, networks, and workloads. Ideal for MSPs, service providers, or teams needing separated environments.</li>
  <li><strong>NAS</strong> — Leverage cluster storage for shared network storage — Windows file shares, NFS shares, and more, just like a dedicated NAS appliance.</li>
  <li><strong>Networks</strong> — Define VLANs, bridges, and network segments and attach them to VMs, tenants, and services.</li>
  <li><strong>Backup / DR</strong> — Configure snapshots and replication for VMs and storage. Platform-level, agent-free recovery and availability focused on replication to another VergeOS system.</li>
  <li><strong>Infrastructure</strong> — Manage the underlying cluster: nodes, disks, networks, capacity, and hardware health.</li>
  <li><strong>Import/Export</strong> — Move workloads in and out of VergeOS. Import from external platforms or export existing VMs as portable images.</li>
  <li><strong>Repositories</strong> — Store install media and images like ISOs and VM templates. Add catalogs or manage third-party repos from the marketplace.</li>
  <li><strong>AI</strong> <em>(New in v26)</em> — Private AI features built into the platform. Pick a model, start chat sessions, and use an OpenAI-compatible API so existing tools can point at your VergeOS system.</li>
  <li><strong>Logs</strong> — All platform logging combined and searchable in one place.</li>
  <li><strong>System</strong> — Configure the platform itself: users and roles, licensing, updates, time settings, notifications, certificates, and system-wide behavior.</li>
</ul>

<p>VergeOS has a lot of depth, and I highly recommend digging into their public docs for any features you want to explore further. VergeIO has also added an AI assistant right inside the platform that can answer real questions like <em>“How can I configure my VM to access the Internet?”</em> — quick, accurate, and helpful.</p>
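<p>Because the AI endpoint is OpenAI-compatible, existing tooling only needs a base-URL swap to point at your VergeOS system. Here’s a minimal sketch of the request shape that format expects (the model name and endpoint below are placeholders, not VergeOS specifics):</p>

```python
import json

# Sketch: the payload an OpenAI-compatible chat-completions endpoint expects.
# "your-model" and the endpoint URL are placeholders for your own system.
def chat_request(model: str, prompt: str) -> dict:
    """Build a chat-completions payload in the OpenAI-compatible format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = chat_request("your-model", "How can I configure my VM to access the Internet?")
print(json.dumps(payload, indent=2))
```

<p>Any client that speaks this format — an SDK, a chat frontend, a script — should work once it’s pointed at the VergeOS base URL instead of a public provider.</p>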

<hr />

<h2 id="step-5-license-the-cluster">Step 5: License the Cluster</h2>

<p>Before VergeOS will allow you to start virtual machines, you need to license the cluster. I’m using a trial license here, but the process is very similar for production licenses as well.</p>

<ol>
  <li>Head to <strong>System</strong> → <strong>Settings</strong></li>
  <li>On the left, select <strong>Updates Settings</strong></li>
  <li>Enter the username and password supplied with your license in the <strong>User</strong> and <strong>Password</strong> fields</li>
  <li>Click <strong>Submit</strong> to apply the license</li>
  <li>Verify by heading to <strong>System</strong> → <strong>Updates</strong> — you should see confirmation that your credentials were accepted and access to the update server has been granted</li>
  <li>Head back to <strong>Settings</strong> and in the License card, confirm your system is licensed and valid</li>
</ol>

<p>Right on! Let’s build our first VM!</p>

<hr />

<h2 id="step-6-create-your-first-vm">Step 6: Create Your First VM</h2>

<p>Head over to the <strong>Virtual Machines</strong> section from the main dashboard, then click <strong>New VM</strong> on the left.</p>

<p>VergeOS offers multiple VM creation methods:</p>

<ul>
  <li><strong>New VM Wizard / Advanced / Clone</strong> — Start from scratch</li>
  <li><strong>Import</strong> — From a media image, shared object, or volume</li>
  <li><strong>Catalogs / Recipes</strong> — Pre-prepared templates (my favorite)</li>
</ul>

<p><strong>Catalogs are the fastest and easiest approach.</strong> Templates are organized into sections including:</p>

<ul>
  <li><strong>Applications</strong> — Ready-to-go deployments for Docker, Grafana, Kubernetes K3S, LAMP stack, and OpenVPN-AS</li>
  <li><strong>Operating Systems Marketplace</strong> — A large list of ready-to-deploy Linux distributions</li>
</ul>

<p>Let’s deploy an <strong>Ubuntu Server 24.04</strong> VM using a marketplace template. Select it from the list and click <strong>Next</strong>.</p>

<h3 id="vm-configuration">VM Configuration</h3>

<ul>
  <li><strong>VM Name</strong> — Enter a name for the VM inside VergeOS (e.g., <code class="language-plaintext highlighter-rouge">Ubuntu Server VM</code>)</li>
  <li><strong>CPU Cores</strong> — 4 cores is plenty for our purposes</li>
  <li><strong>RAM</strong> — I’ll bump this up to 8GB</li>
  <li><strong>Hostname</strong> — The actual hostname of the VM. I’ll match the VM name for simplicity</li>
</ul>

<p><strong>User Configuration:</strong>
Create an admin user and set a password for it.</p>

<p><strong>Network:</strong>
Leave set to DHCP and select <strong>External</strong> for the network to give the VM direct access to your LAN.</p>

<p><strong>Drives:</strong>
Set the OS disk size (50GB is fine) and select your storage tier. I’ll leave it on <strong>Tier 1</strong> for the fastest disks.</p>

<p>Click <strong>Submit</strong> to kick off the VM build.</p>

<p>You’ll see the disk being initialized in the <strong>Drives</strong> section. Once complete, head up and click the <strong>Play button</strong> in the top-left of the content window to power on the VM and confirm.</p>

<p>The dashboard will start populating with CPU, RAM, disk, and network throughput stats as the VM comes to life. Click the <strong>Console</strong> button in the upper-left to open the VM console in a new browser tab.</p>

<p>Once the VM boots, log in with the user credentials you set. Verify the IP address looks correct for your LAN, then run a quick ping to confirm internet access.</p>

<p><strong>Congratulations on creating your first VM in VergeOS!</strong></p>

<hr />

<h2 id="closing">Closing</h2>

<p>Thanks for watching this video, folks, and thank you to everyone who supports the channel through <strong>Patreon</strong> and the <strong>YouTube Membership</strong> program. If you’d like to support what we do here, consider checking those out. Join our community <strong>Discord</strong> to chat with me and like-minded homelabbers, geeks, and nerds — and as always, we’ll see you on the next one!</p>]]></content><author><name>2GT_BK</name></author><category term="Virtualization" /><category term="Infrastructure" /><category term="Homelab" /><category term="VergeOS" /><category term="VMware" /><category term="Virtualization" /><category term="HCI" /><category term="PrivateCloud" /><category term="Installation" /><category term="Sponsored" /><summary type="html"><![CDATA[A complete step-by-step VergeOS installation guide — from burning the ISO and installing the OS to licensing the cluster and spinning up your first virtual machine.]]></summary></entry><entry><title type="html">Ubiquiti EFG vs UDM Pro Max: Is It Really Worth Double the Cost?</title><link href="https://2guystek.tv/homelab/networking/2026/02/10/ubiquiti-efg-vs-udm-pro-max.html" rel="alternate" type="text/html" title="Ubiquiti EFG vs UDM Pro Max: Is It Really Worth Double the Cost?" /><published>2026-02-10T00:00:00+00:00</published><updated>2026-02-10T00:00:00+00:00</updated><id>https://2guystek.tv/homelab/networking/2026/02/10/ubiquiti-efg-vs-udm-pro-max</id><content type="html" xml:base="https://2guystek.tv/homelab/networking/2026/02/10/ubiquiti-efg-vs-udm-pro-max.html"><![CDATA[<p><a href="https://youtu.be/BvZWUHGZqMk">Watch the video on YouTube</a></p>

<p>Hey there homelabbers, self-hosters, IT-pros, and engineers. Rich here! Just recently, I left pfSense for UniFi. It was a tough decision, but the right one. In a previous video, I landed on and purchased the UDM Pro Max as a replacement for my homebrewed pfSense firewall. Late last year, an opportunity came up to get my hands on an EFG at a great price, and of course, I jumped on it.</p>

<p>Now that I have it, I thought it’d be a great time to compare these two top-end products against each other. Since I don’t answer to Ubiquiti, I’ll give you my honest opinions on whether the EFG really is worth over twice the cost of the UDM Pro Max. Let’s get to testing!</p>

<h2 id="hardware--specifications-comparison">Hardware &amp; Specifications Comparison</h2>

<h3 id="cpu-performance">CPU Performance</h3>

<ul>
  <li><strong>UDM Pro Max</strong>: Quad-core ARM Cortex-A57 CPU @ 2GHz</li>
  <li><strong>EFG</strong>: 18-core Marvell/Cavium ThunderX2 ARM v8.2 CPU @ 2GHz</li>
</ul>

<h3 id="memory">Memory</h3>

<ul>
  <li><strong>UDM Pro Max</strong>: 8GB RAM</li>
  <li><strong>EFG</strong>: 16GB RAM</li>
</ul>

<h3 id="storage">Storage</h3>

<ul>
  <li><strong>UDM Pro Max</strong>: 128GB integrated SSD, dedicated eMMC, and two 3.5” hot-swappable drive bays for additional UniFi applications</li>
  <li><strong>EFG</strong>: No internal storage specifications (does not support running applications other than Network)</li>
</ul>

<h3 id="connectivity">Connectivity</h3>

<ul>
  <li><strong>UDM Pro Max</strong>: 8× 1-gig RJ45, 1× 2.5-gig RJ45, 2× 10-gig SFP+</li>
  <li><strong>EFG</strong>: 2× 2.5-gig RJ45, 2× 10-gig SFP+, 2× 25-gig SFP28</li>
</ul>

<p>Both systems support dynamic WAN assignment to any port.</p>

<h3 id="idsips-throughput">IDS/IPS Throughput</h3>

<ul>
  <li><strong>UDM Pro Max</strong>: 5 Gbps</li>
  <li><strong>EFG</strong>: 12.5 Gbps</li>
</ul>

<h3 id="redundancy--failover">Redundancy &amp; Failover</h3>

<ul>
  <li><strong>UDM Pro Max</strong>: Shadow Mode, gateway failover, DC Power Backup connection</li>
  <li><strong>EFG</strong>: Shadow Mode, gateway failover, dual hot-swappable PSUs</li>
</ul>

<h3 id="unifi-application-support">UniFi Application Support</h3>

<ul>
  <li><strong>UDM Pro Max</strong>: Network, Protect, Access, Talk, Connect</li>
  <li><strong>EFG</strong>: Network only</li>
</ul>

<h2 id="understanding-the-target-market">Understanding the Target Market</h2>

<p>The UDM Pro Max is the highest-end all-in-one gateway and controller Ubiquiti has to offer. The additional support for Protect, Access, Talk, and Connect applications, combined with good throughput and plenty of connectivity, make it a reasonable choice for a medium-sized business looking to take advantage of all of Ubiquiti’s offerings.</p>

<p>The EFG, however, is really targeting the enterprise segment with a much more powerful CPU, more RAM, significantly higher throughput, and enterprise features like built-in SSL inspection and hardware redundancy. It lacks the extra application support you get with the Pro Max—an interesting decision on Ubiquiti’s part, as I’m sure this system could support it.</p>

<h2 id="the-reality-of-throughput-claims">The Reality of Throughput Claims</h2>

<p>In my previous video, I mentioned choosing the UDM Pro Max because I have a 5-gig Internet connection and wanted to fully utilize it. Fast forward to today, and I’ve noticed some things about that 5-gig throughput rating that have made me question whether the UDM Pro Max can actually move 5 gigabits per second through itself.</p>

<p>I’ve been testing the UDM Pro Max in production, and I have the receipts to prove that it can’t actually do 5-gig—either in firewall packet filtering or in inter-VLAN routing. Now that I have the EFG for comparison, this is the perfect time to shed light on the realities of actual throughput on these systems.</p>

<h2 id="network-architecture-router-on-a-stick">Network Architecture: Router on a Stick</h2>

<p>To understand why throughput performance matters for my setup, it’s important to discuss my network configuration: a design commonly called a “Router on a Stick.”</p>

<p>The concept is simple: my firewall acts as both edge protection for the network and as a router that passes packets between VLANs. This allows me to:</p>

<ul>
  <li>Control access North to South (to and from the Internet)</li>
  <li>Control access East to West (between VLANs on my network)</li>
</ul>

<p>This design allows me to place traffic rules around network traffic flows between different VLANs. For example, I want my IoT network to access the Internet, but I also want to reach into my IoT network from my server network so my smart home services can access those IoT devices. By passing everything through the firewall, I can create rules that control these flows exactly how I want them.</p>
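<p>The flows described above can be sketched as a tiny first-match rule table. This is a conceptual model only — UniFi rules are built in the GUI, and the VLAN names here are made up:</p>

```python
# Conceptual model of East-West rules in a router-on-a-stick design.
# VLAN names and rules are illustrative, not UniFi syntax.
RULES = [
    ("iot",     "internet", "allow"),  # IoT devices may reach the Internet
    ("servers", "iot",      "allow"),  # smart-home services may reach IoT
    ("iot",     "servers",  "deny"),   # IoT may NOT initiate into servers
]

def evaluate(src: str, dst: str, default: str = "deny") -> str:
    """Return the action of the first matching rule, else the default."""
    for rule_src, rule_dst, action in RULES:
        if (src, dst) == (rule_src, rule_dst):
            return action
    return default

print(evaluate("iot", "internet"))  # allow
print(evaluate("iot", "servers"))   # deny
```

<p>The first-match, default-deny behavior mirrors how most firewalls evaluate rule lists top to bottom — which is exactly why forcing all inter-VLAN traffic through the gateway gives you this level of control.</p>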

<h2 id="performance-testing">Performance Testing</h2>

<h3 id="internet-throughput-udm-pro-max">Internet Throughput: UDM Pro Max</h3>

<p>Starting with firewall throughput to the Internet through the UDM Pro Max:</p>

<p><strong>Result</strong>: 3.79 Gbps download / 4.71 Gbps upload</p>

<p>These aren’t bad numbers, but they’re not the 5 Gbps Internet speed I’m paying for.</p>
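<p>To put numbers like that in perspective, here’s a quick way to express a measured result as a percentage of the rated line, using the figures above:</p>

```python
def pct_of_rated(measured_gbps: float, rated_gbps: float) -> float:
    """Measured throughput as a percentage of the rated figure."""
    return round(measured_gbps / rated_gbps * 100, 1)

# UDM Pro Max against a 5 Gbps connection
print(pct_of_rated(3.79, 5.0))  # 75.8 (% of line rate, download)
print(pct_of_rated(4.71, 5.0))  # 94.2 (% of line rate, upload)
```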

<h3 id="internet-throughput-efg-fresh-installation">Internet Throughput: EFG (Fresh Installation)</h3>

<p>Testing the EFG before deploying it into my network with clients:</p>

<p><strong>Result</strong>: 5.29 Gbps download / 5.51 Gbps upload</p>

<p>Clearly much better, but was this because there were no clients connected?</p>

<h3 id="internet-throughput-efg-production-with-clients">Internet Throughput: EFG (Production with Clients)</h3>

<p>After importing my configuration and putting the EFG in production with clients:</p>

<p><strong>Result</strong>: 5.08 Gbps download / 5.41 Gbps upload</p>

<p>Despite production traffic and connected clients, the EFG maintains strong throughput. The difference between these units is clearly significant.</p>

<p>I want to reiterate that I ran these tests repeatedly, and the results were consistently similar each time. The EFG is demonstrably capable of handling more throughput to the Internet than the UDM Pro Max.</p>
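<p>If you want to judge run-to-run consistency the same way, the mean and spread across samples tell the story. The sample values below are illustrative, clustered around the EFG’s reported download result:</p>

```python
from statistics import mean, pstdev

def summarize(samples_gbps: list[float]) -> tuple[float, float]:
    """Return (mean, population std-dev) for a set of throughput runs."""
    return round(mean(samples_gbps), 2), round(pstdev(samples_gbps), 2)

# Hypothetical repeated download runs near the reported 5.08 Gbps
runs = [5.08, 5.12, 5.01, 5.10]
avg, spread = summarize(runs)
print(avg, spread)
```

<p>A spread of a few hundredths of a gigabit across runs is what “consistently similar” looks like; a large spread would suggest the test itself, not the gateway, was the variable.</p>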

<h3 id="inter-vlan-routing-throughput">Inter-VLAN Routing Throughput</h3>

<p>Using iperf3 to test routing traffic between a VM on my server VLAN and a VM on my client VLAN (both connected via 10-gigabit connections):</p>

<p><strong>UDM Pro Max Result</strong>: 3.02 Gbps</p>

<p>Considering the 10-gigabit connections, that’s not great.</p>

<p><strong>EFG Result</strong>: 5.06 Gbps</p>

<p>Over 2 gigabits per second faster than the UDM Pro Max—a significant improvement.</p>
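<p>For anyone reproducing this, iperf3 can emit machine-readable results with its <code class="language-plaintext highlighter-rouge">-J</code> flag, which makes extracting the throughput figure trivial. A minimal sketch, fed a trimmed sample shaped like real iperf3 JSON output:</p>

```python
import json

def gbps_from_iperf3(raw_json: str) -> float:
    """Extract receiver-side throughput in Gbps from `iperf3 -J` output."""
    result = json.loads(raw_json)
    bps = result["end"]["sum_received"]["bits_per_second"]
    return round(bps / 1e9, 2)

# Trimmed sample mimicking the structure of real `iperf3 -c <server> -J` output
sample = '{"end": {"sum_received": {"bits_per_second": 3020000000.0}}}'
print(gbps_from_iperf3(sample))  # 3.02
```

<p>Real iperf3 JSON carries far more detail (per-interval results, retransmits, CPU utilization), but the receiver-side <code class="language-plaintext highlighter-rouge">bits_per_second</code> field is the headline number used here.</p>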

<h2 id="analysis">Analysis</h2>

<p>There’s clearly a performance difference between the two units, which I’d expect given the hardware differences and cost disparity. Fundamentally, both units appear to suffer from the same issue pfSense has: routing is handled by the CPU, which ties performance to clock speed and thread count.</p>

<p>I had hoped that with the EFG, Ubiquiti might have added an ASIC or dedicated silicon for routing tasks, but it doesn’t appear they did.</p>

<h2 id="honest-assessment">Honest Assessment</h2>

<h3 id="the-udm-pro-max">The UDM Pro Max</h3>

<p>At $599, the UDM Pro Max’s ability to be your all-in-one controller for all UniFi products is its big selling point. For small or medium businesses, this price tier is reasonable. Having dual redundant storage for video and additional SSD storage for Ubiquiti apps makes it a solid choice on the higher end.</p>

<p>For homelabbers with slower Internet speeds, you can get a UDM Pro for $379—perfect if you have 1Gig Internet or less.</p>

<p>However, I don’t think Ubiquiti is being honest about the unit’s 5-Gig throughput capability. I’ve never been able to fully utilize my 5-gig Internet connection, and before comparing with the EFG, I thought my ISP was the bottleneck. Turns out my ISP wasn’t the problem; the UDM Pro Max was.</p>

<p>Since that performance rating was a key selling point for choosing the UDM Pro Max over the regular UDM Pro, I’m frankly a little upset about this. Combined with its lackluster inter-VLAN routing capability, it’s left me feeling somewhat cheated.</p>

<h3 id="the-enterprise-fortress-gateway">The Enterprise Fortress Gateway</h3>

<p>At $1,999, the EFG’s price is on another planet entirely. Ubiquiti is clearly targeting a different market segment, one I’m not entirely certain they understand how to sell to yet—and I think that shows.</p>

<p>Yes, it has server-class silicon. Yes, it has 25-gig connectivity. Yes, it’s clearly able to handle faster Internet connections. Yes, it can move more bits than the UDM Pro Max. But its inter-VLAN routing is only incrementally better and nowhere near even 10 gigabit performance.</p>

<p>It does have enterprise-level features like SSL inspection (typically only seen in enterprise firewalls) and redundant power supplies, but outside of that, I’m not really seeing anything that makes it enterprise-grade.</p>

<h3 id="the-real-story">The Real Story</h3>

<p>If you’re buying either of these systems, you’re probably not doing so because you want the best hardware performance possible. You’re buying them for the <strong>UniFi user experience</strong>. And that’s not nothing! Most of us buy into their hardware stack because of the dashboard, tight integration, and that sweet, sweet single pane of glass.</p>

<p>The biggest takeaway here is that the more you spend on their hardware, the more performance is available. But know that while you might be connected at 10-gig or 25-gig, your actual throughput will be significantly less.</p>

<h2 id="closing">Closing</h2>

<p>Thanks for reading, and thanks to everyone who supports this work through Patreon and YouTube Memberships. If you’d like to support what we do, consider checking those out. Join our community Discord and chat with fellow homelabbers, geeks, and nerds.</p>

<p>See you on the next one!</p>]]></content><author><name>2GT_BK</name></author><category term="Homelab" /><category term="Networking" /><category term="ubiquiti" /><category term="firewall" /><category term="efg" /><category term="udm-pro-max" /><category term="networking" /><summary type="html"><![CDATA[A comprehensive performance comparison between the Ubiquiti Enterprise Fortress Gateway and UDM Pro Max to determine if the $1,999 flagship is worth double the cost of the $599 alternative.]]></summary></entry><entry><title type="html">Veeam v13: Native Linux Appliance Review &amp;amp; Proxmox Backup Walkthrough</title><link href="https://2guystek.tv/backup/virtualization/proxmox/veeam/2026/01/12/veeam-v13-linux-appliance-review.html" rel="alternate" type="text/html" title="Veeam v13: Native Linux Appliance Review &amp;amp; Proxmox Backup Walkthrough" /><published>2026-01-12T00:00:00+00:00</published><updated>2026-01-12T00:00:00+00:00</updated><id>https://2guystek.tv/backup/virtualization/proxmox/veeam/2026/01/12/veeam-v13-linux-appliance-review</id><content type="html" xml:base="https://2guystek.tv/backup/virtualization/proxmox/veeam/2026/01/12/veeam-v13-linux-appliance-review.html"><![CDATA[<p><a href="https://youtu.be/IUsywSK9Miw">Watch the video on YouTube</a></p>

<p>When Veeam began expanding beyond just VMware and Hyper-V, I was ecstatic. Backup is the long pole in the tent for enterprise virtualization, and that expansion finally gave us real freedom in where we could run — and move — our workloads.</p>

<p>But there was still a catch. Running Veeam meant running Windows. And for many organizations, that was a hard stop.</p>

<p>On November 19th, 2025, that finally changed: Veeam version 13 removed the Windows requirement entirely. Or did it?</p>

<p>Let’s dig in and take a closer look.</p>

<h2 id="understanding-veeam-v13">Understanding Veeam v13</h2>

<p>Hey there homelabbers, self-hosters, IT-pros, and engineers! Rich here. Now, before we get into feature lists or release notes, I want to level-set what Veeam v13 actually represents. This isn’t about flashy UI changes or minor performance tweaks. This is about architecture.</p>

<p>For the first time, Veeam Backup &amp; Replication no longer requires a Windows Server to run. You can now deploy Veeam as a Linux-based appliance, dramatically reducing overhead, complexity, and license costs — especially in environments that have already moved away from Windows wherever possible.</p>

<p>And that matters a lot right now. Because as organizations continue shifting away from VMware toward platforms like Proxmox, XCP-ng, HyperCore, and VergeOS, the backup layer needs to evolve right alongside them.</p>

<p>So in this article, we’re going to look at what’s new in Veeam v13, how to deploy the new Linux Backup and Replication appliance, build out a simple backup job for Proxmox users, and just as importantly, where the gaps still are.</p>

<h2 id="headline-features-of-veeam-v13">Headline Features of Veeam v13</h2>

<p>Let’s get the headline features of Veeam v13 out of the way first.</p>

<p><strong>First</strong>, the biggest feature of v13 is the new <strong>Linux-based Veeam Software Appliance</strong>. This hardened Linux appliance means you no longer need to have a dedicated Windows Server to run VB&amp;R. The appliance handles security patching and hardening updates automatically as well.</p>

<p><strong>Second</strong>, alongside the Linux appliance, version 13 also brings a <strong>new next-generation WebUI</strong>. This browser-based management UI aims to reduce platform dependencies, adds modern filtering and search, brings an accessibility-ready design, and a built-in dashboard.</p>

<p><strong>Third</strong>, Veeam now supports <strong>active/passive backup-server clustering</strong> to keep backup management available through outages and disasters. This feature only applies to the Linux appliance version and brings redundancy to critical backup operations that were unavailable to the Windows-only version.</p>

<p><strong>Fourth</strong>, since we’re focused on Proxmox here, version 13 adds <strong>application-aware processing support</strong> for backing up Windows VMs that run MSSQL, Oracle, and PostgreSQL, and it introduces malware detection for Proxmox VMs as well.</p>

<p><strong>Fifth</strong>, v13 brings <strong>native support for Scale Computing HyperCore</strong>. Customers running HyperCore can now back up and restore VMs via Veeam without needing an agent, and v13 includes vTPM support for HyperCore as well.</p>

<h3 id="additional-features">Additional Features</h3>

<p>There are a lot of additional features in v13 that are nearly too numerous to really cover in detail, including:</p>

<ul>
  <li>Full support for vSphere 9.0</li>
  <li>Universal CDP</li>
  <li>Improvements to agent-based backups</li>
  <li>Support for LTO-10 (36 terabytes per tape!)</li>
  <li>Immutability in GCP</li>
  <li>Instant recovery to Microsoft Azure</li>
  <li>And so many more</li>
</ul>

<h2 id="licensing-options">Licensing Options</h2>

<p>Before we deploy, let’s talk licensing. I’m going to be using the Veeam NFR license, which gives me a total of 20 workloads for backup. If you qualify for the NFR for your homelab, you should get it. But for those who don’t, Veeam still provides a free license for homelabs that’s good for 10 workloads only.</p>

<h2 id="minimum--recommended-requirements">Minimum &amp; Recommended Requirements</h2>

<p>Before we install v13, we should probably get the minimum and recommended requirements out of the way first, right?</p>

<ul>
  <li><strong>CPU</strong>: Multi-core x86-64 processor (Recommended: 8-16 cores)</li>
  <li><strong>Memory</strong>: 8GB RAM minimum (Recommended: 16GB + 500MB per concurrent job)</li>
  <li><strong>Storage</strong>: 120GB OS disk minimum (Recommended: 240GB)</li>
  <li><strong>Backup Storage</strong>: Additional local disk of at least 120GB required</li>
  <li><strong>Network</strong>: 1Gb Ethernet minimum (Recommended: higher throughput for large environments)</li>
</ul>
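<p>Note that the RAM recommendation scales with concurrency, so it’s worth doing the arithmetic for your own job count. A quick sketch of the “16GB plus 500MB per concurrent job” rule:</p>

```python
def recommended_ram_gb(concurrent_jobs: int, base_gb: int = 16,
                       per_job_mb: int = 500) -> float:
    """Veeam's recommended RAM: base plus per-concurrent-job overhead."""
    return round(base_gb + concurrent_jobs * per_job_mb / 1024, 1)

print(recommended_ram_gb(8))  # 19.9 -- eight concurrent jobs needs ~20 GB
```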

<h2 id="deploying-veeam-v13-linux-appliance-in-proxmox">Deploying Veeam v13 Linux Appliance in Proxmox</h2>

<p>Alright, we’re going to be deploying this in Proxmox, as I mentioned earlier. I’ve already downloaded v13 and uploaded the installation ISO to my ISO storage in PVE, so let’s build the VM.</p>

<p>Head over to the top right corner and click “create VM”. We need to give our VM a name — I’ll call mine “Veeam” — and hit next.</p>

<h3 id="system-configuration">System Configuration</h3>

<ol>
  <li><strong>OS Tab</strong>: Select your boot ISO for installation and hit next</li>
  <li><strong>System Tab</strong>: Change BIOS to UEFI from default, select a location to store your UEFI storage, and hit next</li>
  <li><strong>Disks Tab</strong>: We need to create two physical disks based on the software requirements
    <ul>
      <li>OS disk: 240GB (as recommended by Veeam)</li>
      <li>Backup storage disk: 1TB</li>
    </ul>
  </li>
  <li><strong>CPU Tab</strong>: Provision 16 cores (as recommended by Veeam)</li>
  <li><strong>Memory Tab</strong>: Allocate 16GB RAM (as recommended)</li>
  <li><strong>Networking</strong>: Add the VM to your server VLAN with planned static IP and DNS</li>
  <li><strong>Summary</strong>: Select “Start after created” and hit finish</li>
</ol>
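<p>If you prefer the PVE shell, the same VM can be scripted with <code class="language-plaintext highlighter-rouge">qm</code>. This sketch only assembles the command; the VM ID, storage name, bridge, and ISO filename are assumptions you’d swap for your own:</p>

```python
# Sketch: assemble a `qm create` call matching the wizard settings above.
# VMID 200, storage "local-lvm", bridge "vmbr0", and the ISO name are
# hypothetical -- substitute your own values before running on a PVE host.
def build_qm_create(vmid: int = 200) -> list[str]:
    return [
        "qm", "create", str(vmid),
        "--name", "Veeam",
        "--bios", "ovmf",              # UEFI firmware instead of SeaBIOS
        "--efidisk0", "local-lvm:1",   # small disk for UEFI variables
        "--cores", "16",
        "--memory", "16384",           # 16 GB, expressed in MiB
        "--scsi0", "local-lvm:240",    # 240 GB OS disk
        "--scsi1", "local-lvm:1000",   # 1 TB backup-storage disk
        "--net0", "virtio,bridge=vmbr0",
        "--cdrom", "local:iso/veeam-v13.iso",
    ]

print(" ".join(build_qm_create()))
```

<p>Running the printed command on a PVE node creates the VM in one shot; you’d still start it and walk the console installer as described below.</p>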

<h3 id="installation-process">Installation Process</h3>

<p>Now we’ll head over to the console tab for our freshly built VM to continue the installation. We want Veeam Backup &amp; Replication, so we’ll hit enter to kick that off, and we’ll choose “install - fresh install” and hit enter.</p>

<p>Now we wait while the installer starts up. This can take a bit depending on your hardware. We’ll get a warning stating the installer will irreversibly wipe any data on the drives attached to this VM. Yes to continue.</p>

<p>Now comes the best part of this installer — <strong>if you’ve met the minimum requirements for Veeam v13 all the way through, the installer completes the entire build automatically</strong>. Seriously, just sit back and relax. This will take time depending on your hardware, so chill out and let it finish.</p>

<p>Once the automated installation is complete, the VM will reboot, and we’ll get to configuring the basics:</p>

<ol>
  <li><strong>EULA</strong>: Read it or not — tab to Accept and press Enter</li>
  <li><strong>Hostname</strong>: Set to match your VM name (e.g., “veeam” as an FQDN), then tab to Next</li>
  <li><strong>Networking</strong>: Don’t leave it DHCP by default. Tab to “static” on the right, press Enter, fill in the addressing, then tab down to “apply”</li>
  <li><strong>Timezone &amp; NTP</strong>: Set your timezone (e.g., <code class="language-plaintext highlighter-rouge">America/Los_Angeles</code>) and run a sync, then tab to Next</li>
</ol>

<h3 id="security-configuration">Security Configuration</h3>

<p>Now let’s talk passwords. <strong>Veeam is serious about passwords for this hardened Linux appliance</strong>. They’ve aligned their password policy to the DISA STIG password ruleset, which means you’re going to be forced to set a very complex password for the veeamadmin account.</p>

<p>You may struggle to find a password it will accept. The requirements are strict: you can’t use more than four of the same character in a row, for example. Once you find one that works, you’ll move on.</p>

<p>After the password hurdle, Veeam requires you to set up <strong>MFA (Multi-Factor Authentication)</strong> for this account as well. While I appreciate the emphasis on security and agree it’s important, I do have bones to pick about this, which we’ll discuss later.</p>

<p>In any case, grab your MFA app of choice, scan the QR code, and enter the 6-digit PIN in your app, then tab down to OK.</p>

<p>Veeam v13 also has the concept of a <strong>Security Officer</strong> for approvals of admin actions in line with Zero Trust principles. If your organization requires this or actually has a designated security officer, you can set this up. Thankfully, you can choose to skip this function and use veeamadmin as the sole account. We’ll check “skip setting up Security Officer” and tab to next.</p>

<p>Finally, we’re shown the summary of our configuration. Tab to finish and press Enter. Veeam will now apply the settings, restart services, and start the web interfaces.</p>

<h3 id="the-installation-experience">The Installation Experience</h3>

<p>You know, I find it kinda funny that the actual v13 installation process is so magically hands-off in the first half of the deployment, and then you’re hit with the strict password requirements, forced to fumble through them repeatedly, and finally hit with mandatory MFA, completely erasing the efficiency gains from the first half of the install. But as an old English friend of mine used to say, “Into life a little rain must fall.”</p>

<p>The Veeam appliance provides two different webUIs that have very different functions and purposes: the <strong>Host management console</strong> (used to manage the VM itself, updates, configurations) and the <strong>Veeam Backup &amp; Replication webUI</strong> (for backup operations).</p>

<h2 id="host-management-console">Host Management Console</h2>

<p>You can find the URL to the host management console by looking at the actual console of the v13 Linux VM itself. Once you toss that URL into a browser, you’ll be greeted with a login. Since we opted to only have the veeamadmin account, that’s the account we’ll use. Enter your one-time PIN that you set up during the install.</p>

<p>The Host Management Console is refreshingly straightforward. Let’s get through this quickly:</p>

<ul>
  <li><strong>Overview</strong>: Details on Remote Access, Network configuration, and time settings</li>
  <li><strong>Network</strong>: Manage hostname, join Active Directory Domain, add DNS suffixes, modify the IP address of the Veeam host</li>
  <li><strong>Time</strong>: Change timezone and manage NTP for your host</li>
  <li><strong>Remote Access</strong>: Control access to local SSH and to the Host management GUI itself</li>
  <li><strong>Users and Roles</strong>: Create local users and manage their roles in a granular RBAC fashion; apply roles to domain-joined sources</li>
  <li><strong>Backup Infrastructure</strong>: Configure remote monitoring, configure high-availability, enable lockdown mode, and run configuration restore</li>
  <li><strong>Updates</strong>: Manage updates (requires valid license; more on this later)</li>
  <li><strong>Logs and Services</strong>: Status of running services, host configuration files, installed components, audit trail of events, and ability to create a support bundle</li>
</ul>

<p>Host management is incredibly straightforward, and outside of troubleshooting, you’ll spend very little time there.</p>

<h2 id="veeam-backup--replication-webui">Veeam Backup &amp; Replication WebUI</h2>

<p>Similar to the Host management console, you can find the URL for VBR on the console of the active v13 VM. Unlike the management console that uses port 10443, the VBR console uses the default SSL port of 443.</p>

<p>This site, like the last, has a self-signed certificate, so click through it to get to the login page.</p>

<p>Once logged in, we land on the overview page and are greeted with a rather crucial message:</p>

<blockquote>
  <p>“If you notice features missing, it likely hasn’t made its way into the Web UI just yet. For now, feel free to use the Windows-based console whenever you need to manage settings that aren’t available in the Web UI. We will iterate quickly and bring more features over to the new UI with every minor release, prioritizing workloads and features based on their actual usage.”</p>
</blockquote>

<h3 id="the-reality-of-the-webui">The Reality of the WebUI</h3>

<p>I’m going to jump in here and get to the point: <strong>the WebUI is not feature-complete to their full ‘fat’ client</strong>. Right now, you can really only manage VMware and Hyper-V backups from the WebUI. You need to download and install the full client, <strong>ON WINDOWS ONLY</strong>, to configure and back up Proxmox, Nutanix, Scale Computing, and any agent-based backup jobs.</p>

<p>The next popup alerts you that you don’t have a valid license installed in Veeam. Go ahead and take care of that by clicking “Install” on the left and double-clicking on your license file.</p>

<p>Veeam will import your license and start checking for updates. Once done, you land on the Overview page which provides detailed information like:</p>

<ul>
  <li>Resiliency Overview</li>
  <li>Threat hunting</li>
  <li>Infrastructure Health</li>
  <li>Protected Workloads</li>
  <li>Protection Overview</li>
  <li>Top Repositories</li>
</ul>

<p>Since this is a fresh install, it’s all looking very blank.</p>

<h3 id="webui-sections">WebUI Sections</h3>

<p><strong>Jobs</strong>: See your currently configured backup jobs and backup copy jobs. Keep in mind that currently only VMware and Hyper-V jobs will show in this view. You can add new jobs by clicking “Add” in the middle.</p>

<p><strong>Backups</strong>: See a list of restore points you can recover from — instant VM recovery, regular restore, guest file access, or deletion.</p>

<p><strong>Repositories</strong>: Create and manage your backup repositories. In Veeam language, a repository is a storage location for backups. You’ll have a single default repository auto-generated during install. You can also manage Scale-out repositories and configure Veeam’s Data Cloud Vault (Veeam’s cloud storage offering).</p>

<p><strong>Proxies</strong>: In Veeam, a Proxy is used to move data between VMs and the backup storage. Proxies increase concurrent backup capacity and distribute load across multiple systems. By default, the Linux system is also a VMware backup proxy. The current WebUI only supports VMware and Hyper-V proxy management.</p>

<p><strong>Managed Servers</strong>: See your deployed Veeam server. This section is where you’d add virtual hosts and Linux hosts. Currently, the WebUI only supports adding Hyper-V, VMware, and individual Linux and Windows hosts.</p>

<p><strong>Logs and Events</strong>: Full audit trail of backup job status, authorization events for change control requests, and any discovered malware events. It’s nice to have these easily accessible via the web.</p>

<p><strong>Veeam ONE</strong>: Veeam ONE is Veeam’s monitoring suite for their software. At this time, Veeam ONE still requires a Windows host to install, so it lags behind Veeam’s VB&amp;R track. You can integrate it with your Linux v13 deployment here.</p>

<h3 id="the-webui-limitation">The WebUI Limitation</h3>

<p><strong>I’m just going to come out and say it now: the Veeam WebUI is woefully lacking in feature completeness at this time.</strong> Now it makes sense why they hit us with that pop-up on first login. The sheer lack of any backup management support for anything BUT VMware and Hyper-V was a pretty huge letdown, frankly.</p>

<p>So what do we do now? We need to install the v13 fat client on a Windows system and back up Proxmox from there, since that’s currently the only way to do it.</p>

<h2 id="installing-the-windows-console">Installing the Windows Console</h2>

<p>To download the Windows v13 console, head over to configuration on the top right, then down to About, and then click on “Download Windows-based backup console.” Once that’s downloaded, run through the install.</p>

<p>The install takes some time to complete, but it’s basically a next, next, finish sort of installation.</p>

<h2 id="using-the-windows-client">Using the Windows Client</h2>

<p>Once you’ve installed the fat client and launched it, you’ll need to enter the IP address or hostname of your VBR Linux and click Connect.</p>

<p>Now enter the veeamadmin user and complex password, and click sign-in below.</p>

<p>For existing Veeam users, this is going to look and function essentially the same as version 12, so we won’t spend a ton of time going through all the nuances of the full client. But let’s do a quick once-over for completeness.</p>

<h3 id="interface-overview">Interface Overview</h3>

<p>When we log in, we first land on the <strong>Home</strong> section, which gets straight to business with backup jobs and their status. Obviously, this is a fresh install with no backup jobs set up yet.</p>

<p><strong>Inventory Tab</strong>: Shows discovered infrastructure and protected objects. Finally, we can see all of the virtual platforms that version 13 supports — from VMware and Hyper-V to Nutanix, Proxmox VE, and SC HyperCore.</p>

<p><strong>Backup Infrastructure Tab</strong>: Where you design, register, and manage the components that do the backup work. If the Inventory tab is what exists, the Backup Infrastructure tab is how backups happen.</p>

<p><strong>History</strong>: A history of the events that have occurred on your VBR host.</p>

<h2 id="adding-proxmox--creating-backup-jobs">Adding Proxmox &amp; Creating Backup Jobs</h2>

<p>Let’s get our Proxmox host added to Veeam. Head back to the Inventory tab, then to the Proxmox VE section. Click “Add Server” in the top left.</p>

<p>First, tell Veeam the DNS name or IP address of your Proxmox server. I’ll add “Proxinator” and hit next.</p>

<p>Veeam needs root credentials to interact with the Proxmox host. Click “add” on the far right, enter your root user and password, give it a description, and click OK. Note: starting in version 13, you can add non-root accounts that can do privilege escalation when needed.</p>
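<p>Veeam’s documentation spells out exactly what that non-root account needs; purely as a sketch (the username and the wide-open sudo policy here are my own placeholders, not anything Veeam mandates), a dedicated backup account on the PVE host could be created like this:</p>

```shell
# Run as root on the Proxmox host. "veeamsvc" is a placeholder name.
useradd --create-home --shell /bin/bash veeamsvc
passwd veeamsvc

# Allow sudo escalation; Veeam can then elevate with this account's
# password when it needs root. Tighten the command list to taste.
echo 'veeamsvc ALL=(ALL) ALL' > /etc/sudoers.d/veeamsvc
chmod 0440 /etc/sudoers.d/veeamsvc
```

<p>Check Veeam’s v13 documentation for the exact privileges required before relying on this in production.</p>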

<p>Click “Next” to continue. Veeam will call out and connect to your host. Since this is the first time, you’re asked if you wish to trust the SSH RSA key. Say yes to continue.</p>
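<p>If you’d rather verify that key out-of-band than blindly trust it, you can hash the host key from another machine and compare fingerprints (the hostname is a placeholder for your PVE host):</p>

```shell
# Fetch the Proxmox host's RSA key and print its SHA256 fingerprint;
# compare it with the fingerprint Veeam displays before accepting.
ssh-keyscan -t rsa proxinator.lan 2>/dev/null | ssh-keygen -lf -
```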

<p>Next, you’re asked if you’re willing to trust the SSL certificate for your Proxmox host. If you’re using a self-signed SSL cert (as I am), this message will appear. If you’re using a publicly trusted cert, it won’t. Click Continue.</p>
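<p>Likewise, you can check the certificate you’re being asked to trust against what the Proxmox host is actually serving (hostname is a placeholder; 8006 is the PVE default API port):</p>

```shell
# Print the SHA256 fingerprint of the cert served on the Proxmox API port...
echo | openssl s_client -connect proxinator.lan:8006 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256

# ...and compare it with the cert file on the host itself.
openssl x509 -in /etc/pve/local/pve-ssl.pem -noout -fingerprint -sha256
```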

<p>Now select a storage location to save snapshots in case the original VM’s storage doesn’t support snapshotting. You can leave this set as-is or manually select a storage location. Once done, click Apply.</p>

<p>Sit back and let Veeam complete the server add process. The next screen shows a summary, and finish completes the wizard.</p>

<h3 id="deploying-the-worker-node">Deploying the Worker Node</h3>

<p>Veeam requires a worker node to move data from the PVE host to the backup system. A worker node is a lightweight VM that will be added to your PVE host. Click Yes to kick this off.</p>

<ol>
  <li>Give your worker a name</li>
  <li>Choose which storage location on your PVE host this VM will live on</li>
  <li>Click next</li>
  <li>Choose a network to attach the worker VM to. Click “add,” select your PVE SDN network, leave the network config set to DHCP, and click OK</li>
  <li>Click Finish to kick off the deployment</li>
</ol>

<p>This can take a bit to complete. And boom, done!</p>

<p>For reference, this is essentially identical to how we added PVE to Veeam in v12, so no surprises at all.</p>

<h3 id="building-the-backup-job">Building the Backup Job</h3>

<p>Now we see the newly added “Proxinator” under the Proxmox VE section. If we click on it, we see the full list of virtual machine workloads running on the box.</p>

<blockquote>
  <p><strong>Important Note:</strong> Veeam v13 still does not support backing up LXC containers in Proxmox. PBS can back up LXC containers all day long, but Veeam still doesn’t have complete support for all workloads.</p>
</blockquote>

<p>Let’s build our backup job for Proxmox. Head over to Home, then at the top, click “Backup Job” and select “Virtual Machine.”</p>

<p>Give your backup job a name (e.g., “proxmox backup”) and head down to Next.</p>

<p>Now select the VMs you want to back up. Click “add” on the right, expand your PVE host, and select the VMs you wish to add. Once done, click OK. Everything looks good, so click Next.</p>

<p>Choose where you want to store these backups. We only have one backup repository, which was provisioned automatically during deployment. Next, decide how long to retain your backups — I’m a fan of 15 days, so I’ll enter 15 and then click Next.</p>

<p>Next is <strong>guest processing</strong>. V13 brings guest processing to PVE backups, which is a big deal. SQL administrators (and others) know how important it is for the backup system to reach into a VM and quiesce applications before a backup runs, so the data captured is valid and consistent. This is fantastic for PVE. If you don’t have any applications requiring application awareness, click Next.</p>
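<p>On Linux guests, this kind of quiescing typically boils down to pre-freeze/post-thaw hooks that run inside the VM just before and after the snapshot. As an illustrative sketch only (the application command is a placeholder; use whatever checkpoint mechanism your workload provides):</p>

```shell
#!/bin/sh
# pre-freeze.sh: illustrative quiesce hook run inside the guest just
# before the snapshot is taken.
set -e
sync                          # flush filesystem buffers to disk
# Placeholder: ask the application to checkpoint, e.g. for MySQL:
# mysql -e 'FLUSH TABLES;'
```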

<p>Now set up a schedule for the backup job to run automatically. I’ll set this to run nightly at 4 AM. The default retries are fine as-is, so click Apply.</p>

<p>The final page is just a summary of your settings. There’s a checkbox to kick off the first backup when you click Finish. Check that as well, then hit Finish to complete the wizard.</p>

<p>Now we can see the backup job was created and, because we checked the box, it’s running now.</p>

<p>This is all very much typical of the PVE experience in version 12 of Veeam Backup and Replication.</p>

<h2 id="final-thoughts-the-good-and-the-bad">Final Thoughts: The Good and the Bad</h2>

<p>Let’s get into the good and bad and final thoughts about Veeam v13 and its new native Linux appliance.</p>

<h3 id="the-good">The Good</h3>

<p><strong>Veeam is doing the right thing here.</strong> Divesting of their dependence on Windows is a smart thing to do, and I applaud them for making the effort. Version 13 really does raise the bar for Veeam as the premier enterprise backup platform of choice.</p>

<p><strong>Getting application awareness into the backup process for PVE backups is huge.</strong> SQL needs transaction log truncation to happen or the backups created are worthless. This opens the option to move workloads that require application awareness into PVE, which I’m all for.</p>

<p><strong>The biggest deal here is what Veeam is becoming with v13.</strong> Backups, yes, all day long. But I don’t think many people realize the add-on advantage of using Veeam and the freedom it gives you.</p>

<p>Let me explain: VMware blows up, people leave — for good reason — and they all land somewhere else, but their workloads are still in VMware. Every single alternative hypervisor on the market has an import tool or system to get your workloads OUT of VMware and into their platform, easy. But none of these alternative platforms import from any platform other than VMware. See the problem?</p>

<p>Where I think Veeam’s true future lies isn’t just backup — it’s also being a universal workload migration platform for virtualization. One of the big things we learned with the Broadcom disruption of VMware was that we’d all become complacent and trusted that our infrastructure platforms would stay the same. They didn’t. These days, the smart decision is to have a diverse virtualization plan. Maybe have Nutanix and Proxmox, or XCP-ng and VergeOS, but the point is: don’t have all your eggs in one basket.</p>

<p><strong>Veeam becomes the universal bridge between all of these different hypervisors, and ultimately gives you the freedom to move your workloads anywhere. And that’s a really big deal.</strong></p>

<h3 id="the-bad">The Bad</h3>

<p>First off: <strong>I don’t consider the webUI production-ready yet.</strong> At least not for anything but VMware and Hyper-V. I’m not sure if it was just easier for Veeam to knock out those two or if there was more customer pressure behind those platforms. Still, the tailwinds are not in VMware’s or Hyper-V’s favor.</p>

<p>Given all of the effort going into natively supporting Proxmox, Nutanix, SC HyperCore, and, in the near future, VergeOS and XCP-ng, I feel it would have been better to release the WebUI once it could support ALL of the current hypervisor platforms.</p>

<p><strong>Next, specifically a gripe for PVE: it’s not a full-featured backup solution for PVE until Veeam can support backing up and restoring LXC containers</strong>, which it still can’t. If you’re a PVE shop that has a lot of LXC containers and you’re looking for backup solutions, PBS is still your best bet — hands down. Veeam, do both, please.</p>

<p><strong>Finally, a minor gripe: I was very annoyed with Veeam’s presumption that I need this level of password security and mandatory MFA for the Linux appliance.</strong> I understand the reasons behind it, and I’m not bagging on increased security — I’m advocating for customer choice. Ask me if my environment requires enhanced security configurations and let me make that choice. This isn’t a problem with the Windows server version of Veeam, so don’t make it one for the Linux appliance.</p>

<h2 id="final-verdict">Final Verdict</h2>

<p>All of this said, <strong>Veeam is on the right track and they’re doing great things.</strong> I’ve said it a thousand times now, but backup really is the long pole in the tent. With Veeam’s native support for VMware, Hyper-V, Proxmox, and SC HyperCore — with even more coming online very soon — it really is becoming the backup and workload migration tool for business!</p>

<hr />

<p>And that’ll do it. A special thank you to a certain Discord member for the continual push to get this v13 article out. If you have feedback on anything I’ve said here, leave a comment below, subscribe to the channel, and join our free Discord.</p>

<p><strong>YouTube link</strong>: <a href="https://youtu.be/IUsywSK9Miw">https://youtu.be/IUsywSK9Miw</a></p>]]></content><author><name>2GT_BK</name></author><category term="Backup" /><category term="Virtualization" /><category term="Proxmox" /><category term="Veeam" /><category term="Linux Appliance" /><category term="Review" /><summary type="html"><![CDATA[Complete walkthrough of Veeam v13's new Linux appliance, deployment in Proxmox, and honest assessment of features and limitations.]]></summary></entry><entry><title type="html">Setting Up MicroCloud at Home is Easier Than You Think!</title><link href="https://2guystek.tv/2025/11/21/setting-up-microcloud-at-home-is-easier-than-you-think.html" rel="alternate" type="text/html" title="Setting Up MicroCloud at Home is Easier Than You Think!" /><published>2025-11-21T00:00:00+00:00</published><updated>2025-11-21T00:00:00+00:00</updated><id>https://2guystek.tv/2025/11/21/setting-up-microcloud-at-home-is-easier-than-you-think</id><content type="html" xml:base="https://2guystek.tv/2025/11/21/setting-up-microcloud-at-home-is-easier-than-you-think.html"><![CDATA[<p><a href="https://youtu.be/Pna4QINqo_Y">Watch the video: https://youtu.be/Pna4QINqo_Y</a></p>

<h1 id="canonical-microcloud-setup-and-walkthrough-summary">Canonical MicroCloud Setup and Walkthrough Summary</h1>

<h2 id="-overview">🧩 Overview</h2>
<ul>
  <li><strong>MicroCloud</strong> is a lightweight, self-hosted private cloud by Canonical (the makers of Ubuntu).</li>
  <li>It combines <strong>LXD</strong>, <strong>MicroCeph</strong>, and <strong>MicroOVN</strong> into one automated stack.</li>
  <li>Designed for <strong>edge computing</strong>, <strong>homelabs</strong>, and <strong>small enterprise environments</strong>.</li>
  <li>Provides high-availability compute, storage, and networking — without the complexity of OpenStack or Kubernetes.</li>
</ul>

<hr />

<h2 id="️-system-requirements">⚙️ System Requirements</h2>

<h3 id="minimum">Minimum</h3>
<ul>
  <li>8 GB RAM per node</li>
  <li>One local disk (no partitions)</li>
  <li>One network interface</li>
  <li>Ubuntu 22.04 LTS or newer</li>
</ul>

<h3 id="production-recommended">Production Recommended</h3>
<ul>
  <li>3 physical nodes for HA</li>
  <li>32 GB RAM per node</li>
  <li>3 disks per node (OS, local storage, distributed storage) — NVMe recommended</li>
  <li>2x 10Gb network interfaces per node (cluster + uplink)</li>
</ul>

<blockquote>
  <p>💡 For production: deploy on <strong>bare metal</strong>, separate networks for Ceph and OVN.</p>
</blockquote>

<hr />

<h2 id="-planning-the-deployment">🧠 Planning the Deployment</h2>
<ol>
  <li>Prepare a worksheet noting each node’s:
    <ul>
      <li><strong>IP address</strong></li>
      <li><strong>Network interface assignments</strong></li>
      <li><strong>Disk setup</strong></li>
    </ul>
  </li>
  <li>Configure your nodes’ base OS and networking before installing MicroCloud.</li>
</ol>

<hr />

<h2 id="-installation-steps">🚀 Installation Steps</h2>

<h3 id="pre-work">Pre-Work</h3>
<p>Your installation targets must have Ubuntu 22.04 LTS or newer installed and updated before you attempt to install MicroCloud.</p>
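<p>A quick way to confirm a target is ready:</p>

```shell
# Verify the Ubuntu release and bring the node fully up to date.
lsb_release -ds
sudo apt update && sudo apt full-upgrade -y
sudo reboot   # if a new kernel was installed
```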

<h3 id="1-install-microcloud-components">1. Install MicroCloud Components</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>snap <span class="nb">install </span>lxd microceph microovn microcloud <span class="nt">--cohort</span><span class="o">=</span><span class="s2">"+"</span>
</code></pre></div></div>

<h3 id="2-prevent-auto-updates">2. Prevent Auto-updates</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>snap refresh lxd microceph microovn microcloud <span class="nt">--hold</span>
</code></pre></div></div>

<h3 id="3-initialize-the-first-node">3. Initialize the First Node</h3>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>microcloud init
</code></pre></div></div>
<ul>
  <li>Configure internal network.</li>
  <li>Select local and distributed storage disks.</li>
  <li>Assign Ceph subnets (internal/public).</li>
  <li>Wait for initialization to finish.</li>
</ul>
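<p>Once init finishes, it’s worth sanity-checking each layer from the shell (these status subcommands exist in current snaps, though exact output varies with your setup):</p>

```shell
sudo microceph status    # Ceph cluster health and OSDs
sudo microovn status     # OVN services on this member
lxc storage list         # storage pools LXD created during init
lxc network list         # networks, including the OVN uplink
```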

<hr />

<h2 id="-expanding-the-cluster">🧩 Expanding the Cluster</h2>

<ol>
  <li>On first node:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>microcloud add
</code></pre></div>    </div>
  </li>
  <li>On each new node:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>microcloud <span class="nb">join</span>
</code></pre></div>    </div>
  </li>
  <li>Provide cluster passphrase, select network interfaces, and assign disks.</li>
  <li>Wait for nodes to sync and complete setup.</li>
</ol>
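<p>After the join completes, you can confirm that every member is present and online:</p>

```shell
# All members should show as ONLINE in the cluster view.
lxc cluster list
sudo microcloud status   # assumption: available in recent MicroCloud snaps
```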

<hr />

<h2 id="️-accessing-the-gui-lxd-ui">🖥️ Accessing the GUI (LXD UI)</h2>
<ul>
  <li>Access via: <code class="language-plaintext highlighter-rouge">https://&lt;node-IP&gt;:8443</code></li>
  <li>Accept self-signed certificate.</li>
  <li>Authenticate using <strong>certificate-based identity</strong>:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>lxc auth identity create tls/lxd-ui <span class="nt">--group</span> admins
</code></pre></div>    </div>
  </li>
  <li>Copy the generated token and paste it into the web UI.</li>
</ul>

<hr />

<h2 id="-lxd-ui-tour">🔍 LXD UI Tour</h2>

<ul>
  <li><strong>Dashboard Layout</strong><br />
Left navigation panel with detailed sections per service area.</li>
</ul>

<h3 id="core-sections">Core Sections</h3>
<ul>
  <li><strong>Instances:</strong> Manage containers and VMs.</li>
  <li><strong>Profiles:</strong> Define hardware and configuration templates.</li>
  <li><strong>Networks:</strong> Manage bridges, VLANs, and ACLs (which act as firewalls).</li>
  <li><strong>Storage:</strong> Manage pools, volumes, and S3-compatible buckets.</li>
  <li><strong>Images:</strong> Base OS templates for launching workloads.</li>
  <li><strong>Clustering:</strong> View nodes, groups, operations, and warnings.</li>
  <li><strong>Permissions:</strong> Manage identities, groups, and IDP integrations.</li>
  <li><strong>Settings:</strong> Control global options (including dark mode).</li>
</ul>

<hr />

<h2 id="-creating-your-first-instance">🧱 Creating Your First Instance</h2>

<ol>
  <li>Click <strong>“Create Instance.”</strong></li>
  <li>Name and optionally describe it.</li>
  <li>Choose an image (e.g., Ubuntu 24.04 LTS).</li>
  <li>Set as <strong>container</strong> or <strong>VM</strong>.</li>
  <li>Select desired cluster node or group.</li>
  <li>Apply a profile (e.g., custom disk size).</li>
  <li>Click <strong>“Create and Start.”</strong></li>
</ol>

<p>You can then:</p>
<ul>
  <li>Manage configuration (disks, network, GPU passthrough).</li>
  <li>Use <strong>Terminal</strong> or <strong>Console</strong> to access the instance.</li>
  <li>View logs and snapshots within the GUI.</li>
</ul>
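<p>The same flow works from the CLI if you prefer; for example (the instance name and target node are placeholders):</p>

```shell
# Launch an Ubuntu 24.04 VM on a specific cluster member.
lxc launch ubuntu:24.04 demo-vm --vm --target node2

lxc list demo-vm             # check state and addresses
lxc exec demo-vm -- bash     # shell in once the guest agent is up
```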

<hr />

<h2 id="-final-thoughts">💬 Final Thoughts</h2>

<h3 id="-pros">👍 Pros</h3>
<ul>
  <li>Easy to deploy — runs with a few commands.</li>
  <li>Intuitive, clean LXD UI.</li>
  <li>Huge catalog of Linux images (even Windows supported).</li>
  <li>Supports GPU, USB, PCI passthrough.</li>
  <li>Great for homelab and small-scale production environments.</li>
</ul>

<h3 id="-cons">👎 Cons</h3>
<ul>
  <li>Documentation clarity needs improvement.</li>
  <li>Lacks built-in monitoring — requires Prometheus/Grafana.</li>
  <li>GUI only supports certificate-based login (no basic user/password option).</li>
</ul>
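<p>On the monitoring gap: LXD does expose a built-in Prometheus metrics endpoint, so wiring up Prometheus/Grafana is mostly configuration. A minimal sketch (port 8444 is an arbitrary choice, and scraping the endpoint requires a metrics certificate; see the LXD docs):</p>

```shell
# Serve metrics on a separate port so the UI stays on 8443.
lxc config set core.metrics_address ':8444'

# Point Prometheus at (TLS with a metrics certificate is required):
#   https://<node-IP>:8444/1.0/metrics
```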

<hr />

<h2 id="-learn-more">🔗 Learn More</h2>
<p>Visit <a href="https://canonical.com/microcloud">Canonical MicroCloud</a> for details, documentation, and setup guides.</p>

<hr />

<p><strong>Summary:</strong><br />
Canonical’s MicroCloud is a simplified yet powerful private cloud platform. It provides automated setup for compute, storage, and networking using familiar Ubuntu tools, making it ideal for homelabbers and IT professionals seeking private cloud efficiency with minimal complexity.</p>]]></content><author><name>2GT_Rich</name></author><category term="microcloud" /><category term="canonical" /><category term="ubuntu" /><category term="private cloud" /><category term="homelab" /><category term="edge computing" /><category term="cluster computing" /><category term="virtualization" /><category term="containers" /><category term="LXD" /><category term="LXC" /><category term="Ceph" /><category term="OVN" /><category term="Sponsored" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Am I a TRAITOR to pfSense?!</title><link href="https://2guystek.tv/2025/11/03/Am-I-a-TRAITOR-to-pfSense.html" rel="alternate" type="text/html" title="Am I a TRAITOR to pfSense?!" /><published>2025-11-03T00:00:00+00:00</published><updated>2025-11-03T00:00:00+00:00</updated><id>https://2guystek.tv/2025/11/03/Am-I-a-TRAITOR-to-pfSense</id><content type="html" xml:base="https://2guystek.tv/2025/11/03/Am-I-a-TRAITOR-to-pfSense.html"><![CDATA[<p><a href="https://youtu.be/qG-Borg0wAY">Watch the video: https://youtu.be/qG-Borg0wAY</a></p>

<h1 id="am-i-a-traitor-to-pfsense-switching-to-the-unifi-udm-pro-max">Am I a TRAITOR to pfSense?! Switching to the UniFi UDM Pro Max</h1>

<p>For years, I’ve been one of the loudest pfSense advocates around. I’ve built firewalls, tested hardware, and shown you every trick I’ve learned along the way. But sometimes, even the most loyal of us have to stop, take a breath, and ask: is it time to move on?</p>

<p>That’s what this post is about — evaluating where pfSense stands today, and whether the UniFi UDM Pro Max has finally earned a place in my homelab rack.</p>

<hr />

<h2 id="the-backstory-my-long-term-relationship-with-pfsense">The Backstory: My Long-Term Relationship with pfSense</h2>

<p>If you’ve followed 2GuysTek for a while, you know I’ve been running pfSense for years. My trusty Sophos SG330, which I picked up off eBay and upgraded myself, has been my daily driver for over three and a half years.</p>

<p>It’s been <strong>solid, reliable, and powerful</strong> — but it’s also <strong>getting old</strong>. The SG330 came out in 2014 and it sounds like a jet engine under load. Between the age, the noise, and the growing list of newer alternatives, I decided it was time to evaluate what’s next.</p>

<p>Before jumping to conclusions, I started like I always do — with a <strong>requirements analysis</strong>.</p>

<hr />

<h2 id="defining-my-needs-and-my-wants">Defining My Needs (and My Wants)</h2>

<p>When it comes to network infrastructure, I break things down into two categories: <strong>Necessary</strong> and <strong>Nice-to-Have</strong>.</p>

<h3 id="-necessary">✅ Necessary</h3>
<ul>
  <li>5 Gbps Internet throughput for firewalling, packet filtering, and IDS/IPS</li>
  <li>10Gig network support with <strong>SFP+</strong> connectivity</li>
  <li>Must handle <strong>router-on-a-stick</strong> Layer 3 routing</li>
  <li>Needs to support <strong>IPsec</strong>, <strong>WireGuard</strong>, or similar site-to-site VPN</li>
  <li>Must be <strong>rack-mountable</strong></li>
  <li><strong>Quiet or fanless</strong> design</li>
</ul>

<h3 id="-nice-to-have">💡 Nice-to-Have</h3>
<ul>
  <li>A modern, easy-to-use UI</li>
  <li>DHCP &amp; DNS handling for multiple VLANs</li>
  <li>Built-in URL/ad filtering</li>
  <li>GeoIP blocking capabilities</li>
  <li>Native <strong>Tailscale VPN</strong> integration</li>
</ul>

<p>With that list in hand, I started exploring what the market had to offer.</p>

<hr />

<h2 id="pfsense-hardware-options-from-netgate">pfSense Hardware Options from Netgate</h2>

<p>I first looked at <strong>Netgate’s official hardware line</strong>. Prices range from:</p>
<ul>
  <li>$190 for the entry-level <strong>Netgate 1100</strong>,</li>
  <li>up to $3,600 for the powerhouse <strong>Netgate 8300</strong>.</li>
</ul>

<p>But once I factored in my need for <strong>10Gig SFP+</strong>, the realistic contenders narrowed to:</p>
<ul>
  <li><strong>Netgate 6100</strong> – $850</li>
  <li><strong>Netgate 8200</strong> – $1,500</li>
  <li><strong>Netgate 8300</strong> – $3,600</li>
</ul>

<p>The <strong>8200</strong> caught my eye immediately — it checks every box.</p>

<h3 id="netgate-8200-specs">Netgate 8200 Specs:</h3>
<ul>
  <li>8-core Intel Atom C3758R (2.4GHz)</li>
  <li>128GB NVMe M.2 storage</li>
  <li>16GB DDR4 RAM</li>
  <li>Dual 10Gig SFP+, four 2.5Gig, and two 1Gig combo ports</li>
  <li>Up to 18.6Gbps L3 forwarding and 3.24Gbps IPsec throughput</li>
</ul>

<p>It’s a beast. But it’s also expensive. So before committing, I wanted to see what “the dark side” had to offer.</p>

<hr />

<h2 id="the-challenger-ubiquiti-unifi-udm-pro-max">The Challenger: Ubiquiti UniFi UDM Pro Max</h2>

<p><strong>Ubiquiti</strong> has exploded with new offerings recently. Their <strong>UDM Pro Max</strong> immediately stood out — not just for specs, but for how it could simplify my life.</p>

<h3 id="udm-pro-max-specs">UDM Pro Max Specs:</h3>
<ul>
  <li>Quad-core ARM Cortex-A57 (2.0GHz)</li>
  <li>8GB RAM</li>
  <li>128GB SSD + 32GB eMMC</li>
  <li>Dual 10Gig SFP+, one 2.5Gig, and eight 1Gig ports</li>
  <li>Supports IPsec, OpenVPN, and UniFi’s own <strong>Site Magic VPN</strong></li>
  <li>Built-in UniFi Network and Protect controllers</li>
</ul>

<p>Ubiquiti claims up to <strong>5Gbps of IDS/IPS throughput</strong>, which perfectly fits my requirements — and at <strong>$600</strong>, it’s far more affordable than the pfSense options.</p>

<hr />

<h2 id="pfsense-vs-unifi-breaking-down-the-experience">pfSense vs UniFi: Breaking Down the Experience</h2>

<h3 id="user-interface">User Interface</h3>
<p>Both pfSense+ and UniFi have modern, clean UIs — but UniFi wins on polish and user experience. Its dashboard is gorgeous and intuitive.</p>

<h3 id="dhcp--dns">DHCP &amp; DNS</h3>
<p>Tie. Both systems handle this natively and effectively.</p>

<h3 id="dns-filtering">DNS Filtering</h3>
<p>pfSense wins here, hands down. <strong>pfBlockerNG</strong> is still the gold standard for free, configurable ad and spam blocking. UniFi’s built-in filtering works, but can be overly aggressive.</p>

<h3 id="geoip-blocking">GeoIP Blocking</h3>
<p>Both platforms handle it well.</p>

<h3 id="tailscale-vpn">Tailscale VPN</h3>
<p>pfSense supports Tailscale natively. UniFi doesn’t — though you can work around it with a small Proxmox VM running Tailscale.</p>
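<p>That workaround is lightweight in practice: a small VM acting as a Tailscale subnet router covers the gap. A sketch, with example subnets:</p>

```shell
# On a small Linux VM on your LAN:
curl -fsSL https://tailscale.com/install.sh | sh

# Enable IP forwarding so the VM can route for the advertised subnets.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise your local subnets (placeholders), then approve the routes
# in the Tailscale admin console.
sudo tailscale up --advertise-routes=192.168.10.0/24,192.168.20.0/24
```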

<p>Overall, both systems satisfy the core needs, but the <strong>price-to-feature ratio</strong> begins to tilt in Ubiquiti’s favor.</p>

<hr />

<h2 id="consulting-the-experts">Consulting the Experts</h2>

<p>Before making a decision, I reached out to <strong>Tom Lawrence</strong> — someone who’s already walked this path. His insights helped confirm my suspicions: UniFi’s innovation curve and user experience have outpaced pfSense in recent years.</p>

<p>If you want to see our full conversation, let me know in the comments or reach out — I’ll post the entire chat as a standalone video if there’s interest.</p>

<hr />

<h2 id="my-decision-joining-the-dark-side">My Decision: Joining the Dark Side</h2>

<p>After weighing everything, I made the switch.<br />
Here’s why:</p>

<ol>
  <li><strong>Unified Experience</strong> – All my production networking gear is already UniFi. Adding the UDM Pro Max completes the “single pane of glass.”</li>
  <li><strong>Cost</strong> – Even compared to the Netgate 6100, the UDM Pro Max is cheaper by around $150.</li>
  <li><strong>Innovation</strong> – Ubiquiti is moving faster in terms of features and software improvements.</li>
  <li><strong>No Licensing Fees</strong> – Once you buy it, you’re done. No annual pfSense+ renewals.</li>
</ol>

<p>Sure, there are trade-offs — like the lack of native Tailscale or pfBlockerNG-level filtering — but I can solve those with small workarounds.</p>

<hr />

<h2 id="migration-and-setup">Migration and Setup</h2>

<p>Setting up the UDM Pro Max was painless.</p>
<ul>
  <li>I racked it up, powered it on, and used the <strong>UniFi mobile app</strong> to onboard it.</li>
  <li>The setup flow felt <em>Apple-like</em> — simple and elegant.</li>
  <li>Firmware updated automatically, and within minutes, the system was live.</li>
</ul>

<p>Next came the migration from my self-hosted UniFi controller. After downloading and restoring my backup (and updating software versions to match), the import went smoothly. A quick shutdown of my old controller fixed some connection conflicts, and soon everything was talking perfectly.</p>

<p>Within about an hour, my network was fully migrated.</p>

<hr />

<h2 id="final-configuration-and-first-impressions">Final Configuration and First Impressions</h2>

<p>My <strong>UniFi dashboard</strong> now shows full visibility — LAN and WAN data unified.<br />
My devices, SSIDs, VLANs, and icons all imported perfectly. I also activated <strong>CyberSecure</strong> for regional blocking and added zone-based firewalling.</p>

<p>One quick pro tip: <strong>enable east-west traffic logging</strong> under Syslog settings — it’s off by default but essential for troubleshooting VLAN-to-VLAN communication.</p>

<p>After tuning everything, the system looked rock-solid.</p>

<hr />

<h2 id="final-thoughts-loyalty-vs-evolution">Final Thoughts: Loyalty vs Evolution</h2>

<p>Do I feel like a traitor? Maybe a little.<br />
But I remind myself that <strong>brand loyalty isn’t the goal — good engineering is</strong>. Evaluating alternatives and adapting when it makes sense is what any good technologist should do.</p>

<p>I still have a soft spot for pfSense. I hope Netgate continues to innovate because competition is healthy for all of us. But for now, the UDM Pro Max has earned its place in my rack.</p>

<p>And that’s the beauty of the homelab: we can experiment, evolve, and share what we learn with the community.</p>

<hr />

<h3 id="special-thanks">Special Thanks</h3>
<p>Huge thanks to <strong>Tom Lawrence</strong> for sharing his insights and helping me through the decision process.</p>

<p>If you’ve made the switch (or are thinking about it), drop a comment below — I’d love to hear your experience.</p>

<hr />

<p><em>Written by Rich Teslow — Founder of 2GuysTek, Senior Security Engineer, and lifelong homelabber.</em></p>]]></content><author><name>2GT_Rich</name></author><category term="pfSense" /><category term="Ubiquiti" /><category term="UniFi" /><category term="firewall" /><category term="networking" /><summary type="html"><![CDATA[]]></summary></entry></feed>