A subtle but seismic shift is underway in the modern workplace, quietly redrawing lines of authority and reshaping organizational hierarchies. If your boss seems perpetually swamped and slower than usual to respond, it may not be mere inefficiency: a new analysis suggests she is simply overseeing a dramatically expanded flock of “underlings.” This phenomenon, driven by a relentless pursuit of efficiency and, increasingly, by Artificial Intelligence, is producing what some are calling the “Great Flattening,” a quiet revolution thinning the ranks of middle management with astonishing speed. The implications extend far beyond cost-cutting, threatening to redefine work, mentorship, and productivity in ways that may prove a punchline for some but a far less humorous reality for others.
The Great Flattening: Managers Under Siege
The data is stark: people managers now oversee roughly twice as many workers as they did five years ago. A report by Gusto, analyzing 8,500 small businesses, reveals that the ratio of individual contributors per manager has soared from a little over three in 2019 to nearly six today. This trend, according to Gusto senior economist Nich Tremper, is “happening broadly across the economy,” particularly in small companies, where attrition leads to existing managers simply taking on an “expanded scope” rather than being replaced. One can only pity these beleaguered souls during performance review season.
This “Great Flattening” is not confined to small enterprises. Big Tech, often the harbinger of future workplace trends, has been shedding middle managers for several years. Microsoft, for instance, has explicitly stated that reducing management layers is a key goal in its ongoing layoffs, even as it ramps up its AI strategy. Amazon CEO Andy Jassy has announced similar efforts, and Google reportedly cut vice president and manager roles by 10% last year. Meta has been engaged in its own “flattening” since 2023’s “year of efficiency.”
Ostensibly, this headcount reduction is a cost-cutting measure, particularly as companies pour immense sums into AI development and implementation. The precise role of AI in the managerial culling, however, remains ambiguous. It is not clear that AI is directly replacing the work managers do. Rather, the technology appears to be freeing up managers’ time as their direct reports increasingly turn to AI for help instead of their human supervisors, a subtle but significant shift in how problems get solved and information flows within organizations. Supervisors themselves are also reportedly using AI to automate aspects of management, though exactly how remains opaque.
The Unforeseen Consequences: Productivity, Mentorship, and the Human Element
While the allure of a “flattened” hierarchy—promising agility, reduced bureaucracy, and cost savings—is undeniable, the “Great Flattening” carries with it a significant risk of backfiring. Gusto’s own research reveals a counterintuitive truth: industries with more managers actually tend to have higher worker productivity. This suggests that the value of human oversight, guidance, and support may be far more complex than a simple headcount reduction implies.
The potential for negative consequences is particularly acute for junior employees. These nascent professionals often rely heavily on the training, mentorship, and close relationship that a dedicated manager provides. In a world with fewer managers overseeing larger teams, the quality and availability of such crucial developmental support could diminish significantly. The very human element of guidance, career development, and interpersonal problem-solving—aspects that AI, despite its burgeoning capabilities, is still ill-suited to replicate—may become casualties of this relentless pursuit of efficiency.
Nich Tremper’s observation that “Middle manager is almost a cultural joke in a lot of ways” underscores a societal tendency to dismiss the value of this crucial organizational layer. However, the potential consequences of “getting rid of them all” might not be so funny. The “Algorithmic Architect” of AI is indeed reshaping the workplace, but whether its designs lead to a more efficient and productive future, or simply a leaner, lonelier, and ultimately less effective one, remains to be seen. The ongoing experiment in flattening hierarchies, driven by technological advancement and fiscal imperative, demands careful observation, lest the pursuit of efficiency inadvertently dismantle the very human capital upon which organizational success ultimately rests.

The Digital Arachnids: AI Crawlers and the Threat to the Open Web
Beyond the internal restructuring of corporations, the pervasive influence of AI is now creating a new, more existential threat to the very ecosystem of the internet: the proliferation of AI crawlers. These “digital arachnids,” as one co-founder describes them, are automated software programs deployed by AI companies to “siphon information for AI programs,” crawling over websites with a voracious appetite.
The problem, as articulated by a growing number of organizations, including Wikipedia, Reddit, news publishers, and cultural institutions, is twofold. First, unlike traditional crawlers such as Google’s, which generally offer a mutually beneficial exchange by driving traffic back to websites, AI crawlers grab information for “training” or chatbot replies, and many websites doubt they will ever benefit in return. Second, these AI crawlers often act like “unpredictable greedy jerks,” hammering websites with excessive traffic and costs they simply “can’t bear” while delivering little in the way of habitual readers or income.
Toshit Panigrahi, CEO of TollBit (a company that helps websites track AI crawlers), provided a stark example: a large sports website was visited 13 million times in a month, not by humans, but by AI crawlers. This staggering figure contrasts sharply with the mere 600 actual humans drawn to the site as a result of the AI activity. Similarly, Wikipedia reported that a “huge surge of visits from AI crawlers forced the site to spend more money and scramble to remain online for users,” with the “sheer amount of traffic generated by crawlers causing a strain on the underlying infrastructure.”
This has ignited a growing “fight between AI crawlers and the people who hate them.” Websites are employing aggressive technology to block or confuse these crawlers, and some AI companies have even begun to agree to pay for AI activity (e.g., The Washington Post’s partnership with OpenAI). Cloudflare, a major web traffic management service, now offers automatic blocking or limiting of AI crawlers, and companies like TollBit are enabling websites to erect “AI-only paywalls.”
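For publishers handling this themselves rather than through a service like Cloudflare, the usual first line of defense is user-agent filtering: refusing requests whose User-Agent header matches a known AI-crawler token. The Python sketch below is purely illustrative, not any vendor’s actual implementation; the token list reflects commonly published crawler names but is an assumption here and far from exhaustive, and real deployments also verify IP ranges, since user agents are trivially spoofed.

```python
# Minimal sketch of user-agent-based AI-crawler filtering.
# Token list is illustrative and non-exhaustive; each AI vendor
# documents its own crawler's user-agent string.
AI_CRAWLER_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "PerplexityBot", "Bytespider")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent header matches a known AI-crawler token."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

def handle_request(headers: dict) -> int:
    """Return an HTTP status code: 403 for AI crawlers, 200 otherwise."""
    if is_ai_crawler(headers.get("User-Agent", "")):
        return 403  # or 402 Payment Required, in the AI-paywall model
    return 200

if __name__ == "__main__":
    print(handle_request({"User-Agent": "Mozilla/5.0 (compatible; GPTBot/1.2)"}))  # 403
    print(handle_request({"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64)"}))  # 200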
While some AI backers argue that websites may regret blocking crawlers and should experiment with new monetization models, the immediate concern for many online information and entertainment providers is survival. The fundamental question remains whether there is “room for both AI and the websites that you rely on,” and whether a mutually beneficial solution can be found to sustain the publishers who are the lifeblood of the internet’s content. The battle between AI’s voracious data hunger and the websites struggling to survive its impact is a critical, unfolding chapter in the future of the digital commons.
The Malicious Mimicry: AI-Driven Impersonation and the Arms Race of Deception
Beyond the transformative and sometimes disruptive impacts of AI in the workplace and on the digital ecosystem, a more sinister application of the technology is rapidly gaining ground: AI-driven impersonation. The State Department has issued stark warnings to U.S. diplomats about attempts to impersonate Secretary of State Marco Rubio and potentially other high-level officials using sophisticated AI technology. This alarming development, first reported by The Washington Post, came after an impostor posing as Rubio attempted to contact at least three foreign ministers, a U.S. senator, and a governor via text, Signal, and voicemail.
This is not an isolated incident. A similar deepfake attempt was revealed in May, targeting President Donald Trump’s chief of staff, Susie Wiles, with calls and texts from someone who seemed to have gained access to her personal cellphone contacts, and whose voice may have been AI-generated. The FBI has also publicly warned about a “malicious” campaign involving text messages and AI-generated voice messages that purport to come from senior U.S. government officials, aiming to deceive other government officials, business executives, and prominent figures. Secretary Rubio himself was previously targeted this spring by a bogus video claiming he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service, a false claim that Ukraine’s government later had to rebut.
While the hoaxes involving Rubio were initially deemed “not very sophisticated,” officials still considered it “prudent” to advise all employees and foreign governments, recognizing the increasing efforts by foreign actors to compromise information security. The core cyber threat is not direct infiltration of department systems, but rather the risk that “information shared with a third party could be exposed if targeted individuals are compromised.”
Experts warn that the misuse of AI for deception is likely to grow exponentially as the technology improves and becomes more widely available. Siwei Lyu, a professor and computer scientist at the University at Buffalo, notes a significant increase in deepfakes portraying celebrities, politicians, and business leaders. Just a few years ago, these fakes contained easily detectable flaws like “inhuman voices or mistakes like extra fingers,” but the realism and quality of AI-generated fakes are now so advanced that it is “much harder for a human to spot.” Lyu describes this as an “arms race,” where “the generators are getting the upper hand.”
Potential solutions being explored include criminal penalties for misuse and improved media literacy to equip individuals to identify deepfakes. A parallel industry is also emerging, developing apps and AI systems specifically designed to spot these phonies. However, the escalating sophistication of deceptive AI tools means that the battle against malicious mimicry is an ongoing and increasingly challenging one, demanding constant vigilance and innovation to safeguard against the erosion of trust and the potential for widespread manipulation.
The AI Interviewer: The Dehumanization of Hiring
Beyond the more overt impacts of AI, the technology is now quietly yet profoundly transforming the deeply human process of job interviewing. Job seekers across the country are increasingly encountering “faceless voices and avatars backed by A.I.” in their interviews, part of a wave of “agentic A.I.” in which agents are directed to act autonomously, generating real-time conversation and building on a candidate’s responses.
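Mechanically, these interviewers are simpler than they sound: a language model is handed an interviewer persona and the running transcript, and each candidate answer is appended before the model generates its next question. The sketch below illustrates that loop using the OpenAI chat API; the model name, prompt wording, and run_interview helper are assumptions for illustration, not how Ribbon AI, Talently, or Apriora actually work, and commercial systems layer on voice, avatars, and scoring.

```python
# Minimal sketch of an "agentic" interview loop: the model keeps the full
# transcript, so each new question builds on the candidate's last answer.
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def run_interview(role: str, num_questions: int = 3) -> list[dict]:
    """Run a short text-based screening interview and return the transcript."""
    messages = [{
        "role": "system",
        "content": (
            f"You are a screening interviewer for a {role} position. "
            "Ask one question at a time, and base each follow-up on the "
            "candidate's previous answer."
        ),
    }]
    for _ in range(num_questions):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=messages,
        )
        question = reply.choices[0].message.content
        print(f"\nInterviewer: {question}")
        answer = input("Candidate: ")  # the live answer feeds the next turn
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": answer})
    return messages

if __name__ == "__main__":
    run_interview("marketing professional")
```

The design choice that makes the conversation feel adaptive is simply that the whole transcript is resent on every turn; there is no deeper “understanding” of the candidate than that accumulating context window.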

While aspects of job searches, such as resume screening and scheduling, have long been automated, the interview itself was considered the last bastion of the human touch. Now, AI is encroaching upon even this domain, making the often frustrating and ego-busting task of finding a job even more impersonal. Jennifer Dunn, a 54-year-old marketing professional, described her experience with a virtual AI recruiter named Alex as “hollow,” noting that Alex could not answer most of her questions about the job, leading her to hang up before finishing. Charles Whitley, a recent computer science graduate, found AI interviewers “very dehumanizing,” recalling one AI voice that tried to seem more human by adding “ums” and “uhs,” which he described as “some horror-movie-type stuff.”
This trend, which gained significant traction last year, is partly driven by tech startups like Ribbon AI, Talently, and Apriora, which develop robot interviewers to help employers screen more candidates and reduce the load on human recruiters. Arsham Ghahramani, CEO of Ribbon AI, paradoxically argues that this is a “much more humanizing experience because we’re asking questions that are really tailored to you.” Propel Impact, a nonprofit, used Ribbon AI to screen 500 applicants for a fellowship program, a significant increase from the 150 interviewed by humans the previous year, demonstrating the efficiency gains.
However, career experts like Sam DeMase of ZipRecruiter caution that humans cannot ultimately be taken out of the hiring process, as AI may contain bias and cannot be trusted to fully evaluate a candidate’s experience, skills, and fit. The experience of Emily Robertson-Yeingst, 57, who was interviewed by an AI named Eve for almost an hour but never heard back from a human or AI, left her feeling “used” and wondering if she was “just some sort of experiment.” While some, like college student James Gu, found AI interviewers less stressful and felt “freer to ‘yap’” to the AI, others, like Jennifer Dunn, prefer not to interview with AI again, finding that it “isn’t something that feels real.” This emerging trend highlights a growing tension between the pursuit of efficiency in hiring and the deeply human need for connection and transparency in the job search.
