|
|
Step back a few more steps. Maybe the web has run its course, and we need to engage with each other in other ways. Even aside from the obvious IRL options, maybe voice and real-time interaction should gain traction again. Maybe we need completely new inventions to help us share content and thoughts. Maybe the web can go the way of gopher and become the subject of future story-telling: “Man, remember back when that was how we interacted? Crazy, right?”
We should move forward, not sideways.
|
|
|
|
Given that the old web was as much of a repository of information as a way to connect, the new thing shouldn’t involve unsearchable, temporary comms that work only in real time. Forcing the new thing to be synchronous or near-synchronous would be a terrible waste of subject matter experts’ time too.
|
|
|
|
All of these channels are easily harvestable by AI companies. I think the only way forward is IRL and more in-person spaces, where technology exists only on personal devices and everything is decentralized.
|
|
|
|
|
Even before AI, the human element was being drowned out.
The neat internet thing was neat for a while because the powerful hadn’t worked out how to exploit it for their own ends. They have now, and the genie doesn’t go back in the bottle.
|
|
|
|
A lot of the web is human. You just can’t discover it. It doesn’t rank highly in search results. It doesn’t go viral on social networks. It doesn’t get wildly upvoted on aggregator sites like this one.
That’s the fundamental dilemma of not just the web but the Internet as a whole: it’s a pull medium, as opposed to a push medium like television or radio. A human cannot remember every URL. From a blank browser you can only go to URLs you know, so the only web pages you will ever see are the ones linked, directly or indirectly, from the ones you know.
Most people only know Google, Facebook, etc. Anything that isn’t linked to from those sites effectively does not exist.
But it does exist. It’s a whole forest full of trees falling and not making a sound. It’s up to you to do what you can to find it.
|
|
|
|
I wonder if registrars provide lists of purchased domain names, versus trying to map IPv4, where multiple domains can point to the same IP.
It would be interesting to do the mapping yourself, though probably pointless given how much effort and time it would take.
It does remind me of this fun video: https://www.youtube.com/watch?v=JcJSW7Rprio
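For what it’s worth, the “multiple domains pointing to the same IP” part is easy to see for yourself. A minimal sketch (the resolver is injectable purely so the grouping logic can be demonstrated offline; real use would rely on actual DNS):

```python
import socket
from collections import defaultdict

def group_by_ip(domains, resolve=socket.gethostbyname):
    """Group domain names by the IPv4 address they resolve to.

    `resolve` is injectable so the grouping logic can be exercised
    without touching the network.
    """
    groups = defaultdict(list)
    for domain in domains:
        try:
            groups[resolve(domain)].append(domain)
        except OSError:
            pass  # NXDOMAIN, timeouts, etc. — just skip
    return dict(groups)

# With real DNS, group_by_ip(["example.com", ...]) will show
# shared-hosting and CDN domains clustering onto the same IP.
```

Scaling this to all of IPv4 is the effort/time problem the comment mentions; per-registrar zone files would be the shortcut, where available.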
|
|
|
|
Perhaps some examples would help? The human-centric web I remember was centered on sharing things you found.
|
|
|
|
|
> Going back to forums locked behind accounts would be a good first step.
How do you ensure the accounts aren’t AI bots, or people who scrape everything and serve it back to the AI soup pot? Identity seems to be quite a problem online.
|
|
|
|
Invite-only, the way private torrent trackers still do it. That has its own problems, but if you limit the number of invitees a given user can bring in, along with other such restrictions, it becomes practically impossible for bots to make up a good chunk of the userbase.
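The mechanics of this are simple enough to sketch. A toy model (the class, names, and the budget of 3 are made up for illustration, not any real tracker’s rules): every member gets a fixed invite budget, and every account is traceable to whoever invited it.

```python
class InviteTree:
    """Toy invite-only registration: fixed invite budget per member,
    and every account records who vouched for it."""

    def __init__(self, root, invites_per_user=3):
        self.budget = invites_per_user
        self.remaining = {root: invites_per_user}
        self.invited_by = {root: None}

    def invite(self, inviter, newcomer):
        if self.remaining.get(inviter, 0) <= 0:
            raise PermissionError(f"{inviter} has no invites left")
        if newcomer in self.invited_by:
            raise ValueError(f"{newcomer} is already a member")
        self.remaining[inviter] -= 1
        self.remaining[newcomer] = self.budget
        self.invited_by[newcomer] = inviter
```

The point is less the cap itself than the audit trail: a bot farm can only enter through a chain of real accounts, and banning the chain prunes the whole subtree.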
|
|
|
|
|
|
hahaha god Reddit is fucking full of people who are clearly using AI to write or edit their posts, I get so many people trying to glaze me like ChatGPT does now and it’s so fucking creepy.
|
|
|
|
Sounds about right. Prove you’re a real human through some sort of identity-verification process. It would probably lead to better conversations, especially if each person could have only one account.
|
|
|
|
|
|
I am quite skeptical it would help. The majority of users on forums already aren’t AI, and from my personal experience over the last couple of decades on many different forums, there’s already an abundance of egotistical, dogmatic god complexes around to make the experience insufferable.
|
|
|
|
And there are also adversaries who are paid to post. A proper reputation system is needed to fight that.
|
|
|
|
> an abundance of egotistical, dogmatic god complexes around to make the experience insufferable enough already.
And you can find a curated list of these people on r/LinkedInLunatics, though I’m not sure the curation is necessary as it seems like pretty much the _entirety_ of LinkedIn posts are the kind that make you question whether the poster is human.
There’s marketing and building a personal brand, and then there’s whatever the heck LinkedIn in 2025 is…
|
|
|
|
I imagine people will soon have AI post here, and on other logged-in forums, on their behalf, to rack up karma or build reputation.
|
|
|
|
|
Get a job at a company pushing AI. Sabotage. You’re smart; you can do it subtly enough that it just looks like you’re kind of incompetent.
|
|
|
|
|
Yeah, the first challenge is learning how to sound like someone who’s drunk the Kool-Aid without letting it actually affect you.
|
|
|
|
We’re missing a piece of middleware technology. Imagine a network like Reddit or IMDB that:
a) offers posting under anonymity,
b) allows users to associate with exactly one physical passport,
c) has no knowledge of who an account belongs to,
d) allows for filtering on content by passport-authenticated users.
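One way to square (b) with (c) is to split the system in two: a hypothetical verifier service that sees the passport exactly once and derives a stable pseudonymous token, and the network itself, which only ever sees tokens. This is just a sketch of the one-account-per-passport idea using an HMAC; everything here (the verifier, the key handling, the class names) is an assumption, and a production design would want blind signatures or zero-knowledge credentials so that even the verifier can’t link tokens back to passports.

```python
import hashlib
import hmac
import secrets

# Hypothetical split: the verifier holds this key and sees passports;
# the forum never sees either — only the derived tokens.
VERIFIER_KEY = secrets.token_bytes(32)

def passport_token(passport_number: str) -> str:
    """Same passport -> same token, so the forum can enforce
    one-account-per-passport without learning who you are."""
    return hmac.new(VERIFIER_KEY, passport_number.encode(),
                    hashlib.sha256).hexdigest()

class Forum:
    """The network side: knows tokens and handles, nothing else."""

    def __init__(self):
        self.accounts = {}  # token -> anonymous handle

    def register(self, token: str, handle: str):
        if token in self.accounts:
            raise ValueError("this passport already has an account")
        self.accounts[token] = handle
```

The caveat is exactly property (c): with plain HMAC the verifier could re-derive any token and de-anonymize a user, which is why the real middleware piece is still missing.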
|
|
|
|
|
What if we shifted our focus entirely from the source of information to how useful and accurate it is?
I can’t see how the prevalent value system could avoid being “sapio-supremacist”. Is it “future-proof” to include intelligences that are artificial but whose “sentience” is otherwise human-equivalent or “greater”?
|
|
|
|
Set up web forums and include a ToS which states that service is denied to automated tools.
People have been charged with telecommunication-related crimes for hacking, and the ToS can establish that denial of access.
This gated access won’t stop AI, but it will make such account usage illegal.
People have been convicted for far less. We may as well use such laws to our advantage for once.
That’s the best path I can see.
|
|
|
|
Is this an argument for AI? First, AI slop sucks. Second, even if it stopped sucking, it would still need good input data, even for something as simple as a top-5 dish soap recommendation, until it can do my dishes for me. Third, I want more than just useful information.
|
|
|
|
|
“It feels like” — the first step I would take is to try to better understand what you’re seeing. How often does this happen, compared to what you assume is human? Is it increasing? At what rate? Are there papers that confirm your assessment?
|
|
|
|
|
Virtual networks (dn42, Tor, etc.) or virtual private networks (Tailscale, WireGuard) would be my knee-jerk recommendation. Adding a core abstraction at the network layer immediately fouls up all but the most diligently coded AI bots out there, as does an abstraction at the transport layer that differs from the traditional internet. In the short term, I expect more humans to retreat into these sorts of enclaves as scrapers and AI slop make the public internet untenable to use.
In the long run…I couldn’t tell you. This feels like the sort of schism Cyberpunk stories are made of, when a utopia of data sharing is perverted into a swamp of automated bots and agents, blindly following obsolete programming and untethered from the controls of their creators, harming whatever infrastructure is connected to the public internet without adequate security. I’d like to think smarter people than myself (shoutout to Xe Iaso for Anubis) will create tools to protect humans and our online presence from the bots, but I’m not super hopeful of their success in the face of present profit-motives for AI Companies to defeat them.
Perhaps the answer is to simply devalue the internet as an entity, and thereby destroy incentive to scrape or pollute it at such a scale. Maybe it’s yanking services offline and putting them back in the real world, or privacy laws and insurance companies making data hoarding untenable and unaffordable for companies to engage in. Maybe it’s identity validation at the point of connectivity, verifying smart cards or identification before you’re allowed online (incredibly dystopian and the stuff of Pal*ntir’s wet dreams).
I honestly couldn’t tell you right now what the long game looks like. Only to find your humans, build your digital fortresses, and help each other as best you can.
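For the record, “retreating into an overlay” can be as small as one WireGuard config. A minimal sketch with placeholder keys, names, and addresses (nothing here refers to a real deployment): the client joins a private 10.0.0.0/24 overlay, and a forum server bound only to that range is simply invisible to public-internet scrapers.

```ini
[Interface]
# Client's own keypair; generate with `wg genkey | tee private | wg pubkey`
PrivateKey = <client-private-key>
Address = 10.0.0.2/32

[Peer]
# The enclave's gateway; only members ever learn this endpoint
PublicKey = <server-public-key>
Endpoint = vpn.example.org:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```

Scrapers can’t crawl what they can’t route to; the trade-off, as above, is that discovery then depends entirely on humans inviting humans.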
|
|
|
|
|
Go back to Facebook’s Real Names rule. Require Know Your Customer validation. No postings under fake names. Maybe allow explicitly anonymous posting, but let readers block it.
|
|
|
|
|
|
if you can’t think of a way to reliably distinguish AIs from humans, that observation alone should raise great concerns which eclipse “spam comments on forums” or “bad results on google”
|
|
|
|
OP (and I) can definitely distinguish the two. The trouble is that I can no longer find the humans who are actually posting valuable information.
|
|
|
|
I’d be interested in building a curated and moderated web: a special browser with an address whitelist, plus some kind of democratic curation of content with a small paywall to reduce the noise.
Or alternatively, improve PageRank to exclude low-quality content and pages that contain ads.
|
|
|