The bifurcation of the home lab
When I was coming up in tech, one of the things that made enterprise products more accessible was the relative ease with which you could experiment with them in a home lab: Windows Server, ESX/ESXi, and Linux were all very friendly to discovery on old desktops or scavenged server hardware. This hasn't necessarily changed: all of these products are still perfectly usable in a home lab. What has changed, however, is the enterprise story around them.
Pre-cloud, when you ran these systems at home there wasn't a whole tier of functionality that you were unable to access. If you wanted to take three old laptops and make a basic domain you absolutely could, and there was no higher-tier product that enterprise customers had access to. To paraphrase Warhol, a domain was a domain, and no amount of money could get you a better domain. Now, however, the industry players that determine the evolution of these products have converged on a future where the "ideal" management layer (and increasingly large amounts of new functionality) is gated behind cloud subscriptions and enterprise agreements that are fundamentally out of reach for people trying to run these technologies at home. What used to require getting your hands on some minimum level of hardware now requires committed spend (or, spoiler alert, qualified hardware), often along with an LLC or other business entity.
I want to interject here and clarify that this is not a universal state of affairs: many of the increasingly fundamental, next-gen technologies like Docker and Kubernetes are very friendly to home lab environments. If you really put in the effort and developed your technical skill, you could build something roughly as capable as EKS at home (for some definition of capable, of course). There really aren't core features or capabilities gated behind cloud spend, and a dedicated practitioner can learn most of the stack on extremely accessible hardware. That isn't to say that EKS/AKS/GKE don't add value or that learning Kubernetes is easy, just that you don't miss out on critical features if you learn the platform on your own.
The contrast with technologies like Azure Local is stark. Microsoft never really invested the effort they should have on the UX side of things, so the management picture for a Hyper-V cluster is roughly the same as it was a decade or more ago: janky GUI tools or PowerShell, with incomplete feature sets between the two. Their solution to this problem has been Azure Local: fundamentally Hyper-V, but with the nice Azure tooling you're used to. Unlike AWS Outposts, it can also be a significant cost savings over running workloads in Azure. Unfortunately, though, while you can download Azure Local images and get them installed on your hardware (even clustered and running workloads), you can't onboard the result into Azure without running into two critical issues: cost and compatibility.
Microsoft, when faced with the question of whether to invest in Azure or in core Windows capabilities, has chosen Azure every time. The reasons why aren't a mystery: recurring revenue, better visibility into customer workloads, more touchpoints for support to use, and so on. A product like Azure Local occupies a weird space in this hierarchy: fundamentally Windows, but the value over traditional Windows Server Hyper-V clusters is in the management and lifecycle simplicity. To achieve this and make it supportable, Microsoft dramatically limited the scope of hardware they were willing to support. Functionally, you need to purchase qualified solutions from a partner if you want to deploy something you could actually use for anything beyond toy workloads. Their home lab "solution" for Azure Local is literally to deploy VMs on your hardware and enroll those VMs as an Azure Local deployment.
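For the curious, that nested route looks roughly like the sketch below: carve VMs out of an existing Hyper-V host, expose virtualization extensions to them, and treat those VMs as your "physical" Azure Local nodes. This is a minimal, hypothetical example; the names, paths, and sizes are all illustrative.

```powershell
# Hypothetical nested "node" for an Azure Local lab; names, paths, and sizes are illustrative.
New-VM -Name "AzLocalNode1" -Generation 2 -MemoryStartupBytes 32GB `
    -NewVHDPath "C:\VMs\AzLocalNode1\os.vhdx" -NewVHDSizeBytes 128GB `
    -SwitchName "LabSwitch"

# Nested virtualization: let the guest run Hyper-V itself.
Set-VMProcessor -VMName "AzLocalNode1" -Count 8 -ExposeVirtualizationExtensions $true

# MAC address spoofing so traffic from the nested guests can leave the virtual NIC.
Set-VMNetworkAdapter -VMName "AzLocalNode1" -MacAddressSpoofing On
```

Repeat per node, install the Azure Local OS inside, and you technically have a cluster; the point stands that nothing about this resembles the hardware story you'd actually deploy on.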
Azure Local was supposed to be the culmination of the story Hyper-V had been building toward for years, demonstrating Microsoft's continued commitment to on-prem workloads by significantly reducing the guesswork involved in using things like Storage Spaces Direct, RDMA, GPU passthrough, etc. These features are available in some form in Windows Server, but using them is not always straightforward. There isn't even a GUI wizard to deploy a SET team (Switch Embedded Teaming, the modern vSwitch configuration Microsoft recommends), so if you aren't familiar with the command line you might never notice that there's a preferred way to configure your cluster beyond how it was done in 2008. That's still better than features like hot patching or the improvements to Software Defined Networking, which aren't being backported to traditional Hyper-V at all, but it is thin gruel all the same. Azure Local built from VMs gives you the bare minimum exposure to the stack required to claim that a lab solution exists at all, but it is grossly inadequate for actual learning.
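For reference, the entire "preferred way" boils down to a couple of cmdlets; the switch and adapter names here are hypothetical:

```powershell
# Create a vSwitch with Switch Embedded Teaming across two physical NICs
# (names are illustrative; check Get-NetAdapter for yours).
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true

# Optionally adjust the team's load-balancing algorithm.
Set-VMSwitchTeam -Name "SETswitch" -LoadBalancingAlgorithm HyperVPort
```

Two lines, but if you've only ever lived in Hyper-V Manager you would never know they exist, which is rather the point: none of this is discoverable from the GUI, and a VM-based lab only makes the exposure thinner.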
But let's set that aside and accept the reality of those limitations. If you build an Azure Local cluster out of VMs, like Microsoft wants you to, it does not exempt you from the subscription costs. You're paying (as of time of publication) $10/core/mo after the first 30 days for the privilege of managing, via Azure, an environment that can't do any real work. Now, is this expensive for a professional who is probably already making decent money? Not in isolation, but compared to what someone outside the industry pays to learn Kubernetes it looks towering. A minimum viable Kubernetes cluster for real learning might be as small as three Raspberry Pis, a low one-time cost that often drops to free if you're willing to take eBay'd hardware or lifecycled enterprise desktops. Azure Local, even as a toy deployment, is $10/core/mo forever. There's no trick like the old Windows Server evaluation reset, so unless you're willing to tear down and rebuild your environment every month, you are locked into that ongoing cost.
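To put numbers on that with a deliberately modest hypothetical: a nested lab exposing 16 cores across its nodes runs 16 × $10 = $160/mo, or $1,920 a year, every year, for an environment that exists purely to be learned from. The Raspberry Pi cluster is a one-time outlay in the low hundreds of dollars at worst.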
Azure Local is just a particularly galling example of a larger trend. Almost every platform you might expect to see in enterprise-grade IT shops has a split like this now. Broadcom infamously killed the free VMware license before bringing it back, but it is locked at version 8. The free tier never included core features like vMotion, vCenter, or distributed virtual switches (being targeted at single hosts rather than clusters), but you could at least use the 60-day Enterprise Plus evaluation period to experiment. There is now free VCF licensing, which is a genuine improvement, but you have to have passed a professional-level Admin or Architect exam to qualify, an almost sarcastic requirement: the hands-on experience those exams demand is exactly what the free license is supposed to enable. You can probably rattle off a handful of other examples, including those where the "free" tier was simply eliminated or severely restricted: Veeam, Jira and Confluence (yes, I know they have a free cloud plan for small teams, but killing the self-hosted Server tier remains a personal grievance), a lot of the enterprise-grade firewall appliances, and so on.
Why does this matter? IT has had a pipeline problem for as long as it has existed. The space moves too quickly for decades-old traditions of apprenticeship to take root, and until relatively recently that gap was filled by a pathway for curious and motivated independent learners. The path I took, learning on the side while in desktop roles and climbing the ladder through systems administration into systems architecture, is less realistic by the year. Now that every organization is a hybrid environment, the gap between what you need to learn and the tools you realistically have to learn it with is too wide for many companies to take a risk on internal promotions. Home labs used to be a way to overcome a lack of direct experience, but they're increasingly irrelevant to a lot of what systems administration is becoming.
This is compounded by the challenge of the junior role. Junior roles have always been... well, not controversial, but certainly contentious (juniors as a rule don't know anything and are at best a net-neutral force until they've had months of seasoning), and the pressure to get rid of that rung of the ladder has always been intense. "Why do we train someone for a year or so only to see them jump ship with the skills we invested in teaching them?" has never been an easy question to answer, because "if no one does it we won't have anyone to replace the senior staff with" doesn't pencil out. Home labs used to be a way to square that circle a bit: do some of the learning on your own time so you can demonstrate a minimal level of competence and overcome the omnipresent bias against people new to the industry or role. The issue is only getting worse with the advent of a class of tools that can replace, at much lower cost, a great deal of the labor that junior employees, even good ones, could realistically be expected to contribute. If you can't learn this stuff on your own and employers are increasingly reluctant to teach you at all, how are you supposed to break into the industry?
The kid standing up a dedicated server or trying to get a DOS game working could parlay those skills into a real career, and that is both more and less true of the space today. If they want to learn DevOps and Kubernetes, the world is largely their oyster, and the same is true if they can devote a bit of spending money to learning (parts of) the cloud world. Traditional, bread-and-butter infrastructure, though? That is increasingly out of reach, gated behind price tiers that preclude real learning unless you have an employer to sponsor you, are willing to devote outsized resources to the effort, or get very lucky. Vendors have been selling us the future for a couple of decades now, but traditional infrastructure stubbornly remains a very immediate need for the vast majority of medium to large businesses. Our challenge is that there has never been less collective interest in building the next generation of expertise. We are creating a world where the people who will replace us may never have touched fundamental layers of the stack.