2 posts tagged with "AI"

Stack Overflow's Demise

5 min read
Joseph HE
Software Engineer

The Quiet Demise of Stack Overflow: More Than Just an AI Story

Remember Stack Overflow? For over a decade, it was the undisputed digital cathedral for developers, the first tab you opened when a coding problem stumped you. It was the collective brain of the programming world, a place where answers were forged through community wisdom and rigorous peer review.

But new data and a compelling analysis suggest this titan of tech support is quietly, perhaps even rapidly, fading into irrelevance. And while large language models (LLMs) like ChatGPT have undeniably played a role in its recent struggles, a deeper dive reveals a more complex truth: Stack Overflow was already on a downward spiral, a trajectory set by its own internal decisions and culture, long before AI became a mainstream threat.

The Numbers Don't Lie: A Dwindling Community

The most glaring evidence of Stack Overflow's decline is the dramatic drop in question volume. A chilling graph highlights a significant decrease, starting as early as 2014, and then accelerating sharply after the launch of ChatGPT.

The data is stark: "the volume of questions posed has almost dried up." In fact, the monthly question count is now "as low as at Stack Overflow's launch in 2009." As one observer put it, it is "so crazy" to see fewer questions today than when they first started programming. This isn't just a dip; it's a plunge.
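
Anyone can sanity-check this trend: the public Stack Exchange API exposes question counts directly. Below is a minimal sketch, assuming Node 18+ for the global `fetch` and the API's built-in `total` filter; the months compared are illustrative, not the exact data behind the graph.

```typescript
// Minimal sketch: count Stack Overflow questions asked in a given month
// via the public Stack Exchange API. The built-in "total" filter makes
// the API return only a count. Unauthenticated requests are rate-limited,
// so this is for spot checks, not bulk scraping.
const API = "https://api.stackexchange.com/2.3/questions";

async function questionsInMonth(year: number, month: number): Promise<number> {
  // The API expects Unix timestamps in seconds (UTC); month is 1-12.
  const from = Math.floor(Date.UTC(year, month - 1, 1) / 1000);
  const to = Math.floor(Date.UTC(year, month, 1) / 1000);
  const url = `${API}?site=stackoverflow&fromdate=${from}&todate=${to}&filter=total`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Stack Exchange API returned ${res.status}`);
  const body = (await res.json()) as { total: number };
  return body.total;
}

async function main() {
  // Compare a month near the 2014 peak with a recent month.
  const peak = await questionsInMonth(2014, 3);
  const recent = await questionsInMonth(2024, 3);
  console.log(`2014-03: ${peak} questions; 2024-03: ${recent} questions`);
}

main().catch(console.error);
```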

ChatGPT: The Accelerator, Not the Sole Cause

There's no denying the immediate impact of LLMs. As soon as ChatGPT burst onto the scene, Stack Overflow's question volume plummeted. Why? Because tools like ChatGPT offer swift, polite, and eerily accurate answers. They're trained on vast datasets, "including potentially the content of Stack Overflow," providing similar quality but with a far more agreeable user experience. Unlike Stack Overflow's moderators, "ChatGPT is polite and answers all questions." It's the ultimate low-friction, high-reward information source for many developers.

The Self-Inflicted Wounds: Culture and Missed Opportunities

But let's be clear: ChatGPT wasn't the primary cause of the initial rot. The analysis strongly argues that Stack Overflow committed fundamental strategic and cultural errors well before AI entered the picture.

1. A Culture of "Toxic Gatekeeping": The site's moderation culture is described as overtly "toxic" and a breeding ground for "gatekeeping." Moderators were often perceived as aggressive, quick to close legitimate questions, even those offering valuable insights or aiding understanding. One user lamented, "Stack Overflow was a product people generally didn't like, it was more that they just had to be there." Another insightfully noted, "I stopped asking questions at that time because the site felt unwelcoming." This unwelcoming atmosphere, ironically, appears to have coincided with the start of the decline. In 2014, when "Stack Overflow significantly improved moderator efficiency," questions began to drop. More efficient moderation, it seems, meant more questions closed, alienating a large segment of its user base.

2. A Glaring Lack of Innovation (Integration is King): Perhaps the most staggering oversight was Stack Overflow's failure to innovate where it mattered most: direct integration. The document highlights a crucial missed opportunity: why did Stack Overflow never develop an official plugin for popular Integrated Development Environments (IDEs) like VS Code?

As the author points out, "They should have had this Stack Overflow plugin from, like, 2017, 2018. Why wouldn't they do that?" Developers live in their IDEs, and instant access to Stack Overflow's vast knowledge base directly within their workflow would have been invaluable. "Integration is king," and Stack Overflow simply failed to build the bridges necessary to stay relevant in the evolving developer ecosystem.
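
To make the missed opportunity concrete, here is a hypothetical sketch of roughly what such a plugin could have looked like, using the standard VS Code extension API. The `stackoverflow.search` command id and the usual package.json command contribution are assumptions for illustration; Stack Overflow never shipped anything like this.

```typescript
// Hypothetical sketch of an official Stack Overflow plugin for VS Code:
// a command that takes the current selection (say, an error message)
// and opens a Stack Overflow search for it in the browser.
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  const disposable = vscode.commands.registerCommand(
    "stackoverflow.search", // hypothetical command id, declared in package.json
    () => {
      const editor = vscode.window.activeTextEditor;
      if (!editor) {
        return;
      }
      const query = editor.document.getText(editor.selection).trim();
      if (!query) {
        vscode.window.showInformationMessage("Select some text to search Stack Overflow.");
        return;
      }
      const url = `https://stackoverflow.com/search?q=${encodeURIComponent(query)}`;
      vscode.env.openExternal(vscode.Uri.parse(url));
    },
  );
  context.subscriptions.push(disposable);
}
```

Even something this small (select an error message, run one command, land on relevant answers) would have kept Stack Overflow inside the developer's daily workflow.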

The Unseen Cost: Data and the Perfect Exit

The author also voices a sense of injustice regarding the data: LLMs like OpenAI's and Anthropic's models "likely stole everything" from Stack Overflow, which possessed "the richest training data ever existing for coding." This raises questions about compensation and fair use in the age of AI.

Amidst this unfolding drama, a nod must be given to Stack Overflow's founders, Jeff Atwood and Joel Spolsky. The company was sold for a whopping $1.8 billion in 2021. In retrospect, the timing was "nearly perfect," coming just before the terminal decline became acutely apparent.

Where Do Developers Go Now? The Future of Community

So, if not Stack Overflow, then where? The analysis suggests that developers are already migrating to other platforms for help and community. "Discord servers are probably one of the biggest things right now," notes the author. Other spaces like WhatsApp and Telegram groups are also filling the void, indicating a shift towards more immediate, less formal, and often more welcoming interactions.

The Verdict: Self-Inflicted Irrelevance

Ultimately, the analysis points to a sobering truth: Stack Overflow largely authored its own decline. Its internal culture, rigid moderation policies, and critical lack of strategic innovation made it ripe for disruption. The advent of LLMs simply accelerated an inevitable process. As the author concludes, "I wouldn't say 'unfortunately,' because ultimately Stack Overflow was making itself irrelevant."

The quiet demise of Stack Overflow serves as a cautionary tale: even established giants in the tech world are not immune to decline if they fail to adapt, innovate, and cultivate a truly welcoming community. In the rapidly evolving landscape of software development, relevance is earned, not given, and it can be lost as quickly as it was gained.

Builder AI - The "Biggest AI Scam"? Behind the Algorithm, 700 Human Engineers

5 min read
Joseph HE
Software Engineer

The world of tech startups is often filled with grandiose promises, but sometimes the reality is far more mundane, even shocking. The Builder AI case is a striking example. This "no-code" development startup, which had raised hundreds of millions of dollars and attracted the backing of giants like Microsoft, recently made headlines for all the wrong reasons. The revelation? Its flagship platform, supposedly revolutionary and powered by an AI named Natasha, was in fact... manual work carried out by 700 human engineers based in India.

This is a story that raises serious questions about the overstatement of AI capabilities in the startup ecosystem, dubious financial practices, and the increasingly blurred line between human-assisted automation and true artificial intelligence.

The Scam at the Heart of Builder AI: Natasha, the AI that wasn't

The central idea of the case is simple: Builder AI marketed a product by presenting it as an artificial intelligence marvel, when behind the scenes, client requests were handled by an army of humans. The source even goes so far as to call it the "biggest scam in the history of AI."

The promise? A platform capable of assembling software applications "like Lego bricks" thanks to an AI assistant called Natasha. The reality? "Natasha neural network turned out to be 700 Indian programmers." Each client request was sent to an office in India, where these 700 engineers wrote the code by hand. This is "absolutely incredible," as the author points out.

When Human Work Masquerades as AI: A Recurring Pattern?

Unfortunately, this is not an isolated case. The source emphasizes that the practice of masking cheap human labor behind an AI veneer is nothing new. Companies have been caught claiming AI capabilities when they actually relied on "a group of Indians that they hire on the back end and they call it AI."

This even raises a further complication: did these Indian engineers themselves lean on AI tools and prompting to keep up the pace? The line between "AI-powered" and "human-assisted by AI" becomes dangerously porous.

Quality Sacrificed on the Altar of Deception

Despite the 700 engineers, the results were far from satisfactory. The delivered products were "buggy, dysfunctional and difficult to maintain." The code was described as "unreadable" and the functions "did not work." A biting irony for a company claiming to deliver innovation through AI. "Nice, okay, everything was real artificial intelligence... except that none of it was," the source comments sarcastically.

The Financial Fall: $445 Million Gone

Thanks to this deception, Builder AI managed to attract $445 million in investment over eight years, with prestigious names like Microsoft among its backers. But the house of cards eventually collapsed. The fall was brutal: a payment default to the creditor Viola Credit, which seized $37 million from the company's accounts, paralyzed its operations. Additional funds in India remained frozen by regulatory restrictions.

After the deception was exposed, the startup officially went bankrupt. An "absolutely ridiculous" end for a company that claimed to be at the forefront of technology.

The "Endgame" of AI Scams: "Fake It Till You Make It" Taken to the Extreme?

Why such an undertaking? What drives founders down such a path? Was it simply to "ride the hype" of AI and "embezzle money"? The source questions the intent.

One hypothesis is that the product started as something different and mutated. The founders may have believed they could use human developers as a stopgap while they built a true AI, but never got there. It is "fake it till you make it" pushed to its extreme, with disastrous consequences.

AI Must "Multiply Roles," Not "Replace" Them

The author expresses deep skepticism toward AI companies that boast of being able to "replace all engineers." He suggests that a healthier and more realistic approach is to build AI tools that "multiply the roles" of engineers, making them more efficient or simplifying their work, rather than seeking to eliminate them.

"Fully working independent AI sucks," he concludes, arguing that we should have understood after "3 years" that total autonomous AI is less effective than AI that assists humans.

A Connection with VerSe Innovation

Amidst this debacle, the name VerSe Innovation surfaced because of its commercial association with Builder AI dating from 2021. VerSe's co-founder, also a former managing director of Facebook in India, denied any financial wrongdoing or irregularities in transactions with Builder AI, calling the allegations "absolutely baseless and false."

The Builder AI case is a brutal reminder of the dangers of "vaporware" and excessive "hype" around AI, especially when colossal sums are at stake. It underscores that the complete replacement of human labor by AI is still a fantasy, and that the most promising AI tools are those that augment human capabilities rather than those that claim to replace humans while secretly relying on hidden human labor. It's a costly lesson for investors and a warning for the entire tech sector.