An intellectual exercise for the sharpest minds (with I.T. training):

If I were to invent a practical social media sharing platform, with zero adverts and no gathering of information from its users, what mechanisms could I put in place to ensure that only individual human beings join and participate, without allowing misinformation, propaganda, bigotry, or scams to overwhelm the bandwidth?

I am talking about a replacement for Fbook/yootoob/tweetr/etc. that is not and can never become evil. It must be open source except for the mechanisms used to keep out bad actors (those must rely on statistical tests and some kind of biometric signature).

The First Set of Ideas:

A visual/input test that bots can't do and that takes enough concentration and time to slow down click farms.
Solve a maze with your fingertip? Read a random passage aloud?
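
A rough sketch of the "takes concentration and time" part, assuming the client sends back the fingertip trace and timestamps. Every name and threshold here is an invented placeholder, not a finished design:

MIN_SOLVE_SECONDS = 8      # assumed floor; a real value would be tuned from human trials
MIN_TRACE_POINTS = 40      # a fingertip drag should produce many touch samples

def maze_challenge_passed(started_at, finished_at, trace_points, path_cells):
    """Accept a maze solution only if it looks like a slow, human fingertip.

    trace_points : list of (x, y) touch samples from the screen
    path_cells   : set of (x, y) cells that lie on the correct path
    """
    elapsed = finished_at - started_at
    if elapsed < MIN_SOLVE_SECONDS:
        return False                          # solved too fast for a human hand
    if len(trace_points) < MIN_TRACE_POINTS:
        return False                          # trace too sparse, likely synthetic
    on_path = sum(1 for p in trace_points if p in path_cells)
    return on_path / len(trace_points) > 0.9  # most samples must follow the maze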

A waiting period before a second test. Perhaps an hour, to make sign-up blasting impractical?
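
One way to make sign-up blasting expensive is simply to record when test one was passed and refuse test two until the cooldown has elapsed. A minimal sketch, assuming an hour and an in-memory store (a real system would use a database):

import time

COOLDOWN_SECONDS = 3600   # one hour between test one and test two (assumption)

pending_signups = {}      # application id -> time the first test was passed

def record_first_test(application_id):
    pending_signups[application_id] = time.time()

def may_take_second_test(application_id):
    started = pending_signups.get(application_id)
    if started is None:
        return False      # never passed test one
    return time.time() - started >= COOLDOWN_SECONDS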

Occasional random retests to confirm an account is still run by a human being.

A shake test for random biometric frequencies. (Shake your device up and down, now rotate it three times.) (Take a pic of something square or triangular.)
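
A sketch of how the shake data might be checked, assuming the device reports raw accelerometer samples and that a deliberate human shake lands somewhere around 2-6 Hz (that band is a guess to be replaced by measurement):

import numpy as np

SAMPLE_RATE_HZ = 50            # assumed accelerometer sampling rate
HUMAN_SHAKE_BAND = (2.0, 6.0)  # assumed band for a deliberate hand shake, in Hz

def looks_like_human_shake(accel_z):
    """True if the dominant frequency of the vertical acceleration trace
    sits inside the band a human arm can plausibly produce."""
    samples = np.asarray(accel_z, dtype=float)
    samples -= samples.mean()                       # strip gravity / constant offset
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]   # ignore the DC bin
    low, high = HUMAN_SHAKE_BAND
    return low <= dominant <= high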

No suggestion algorithm for groups, friends, or ideas.

A scoring system for bad behavior and for reporting it.
This will need a statistical analysis feature that can test reports for veracity.
Is a large cross-section of the community reporting a post/poster?
Is a small group targeting a multitude of wide-ranging sites?
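
Those two questions translate into simple statistics. A sketch, with the thresholds and field names as placeholders only:

from collections import defaultdict

def report_looks_credible(reports, community_size, breadth_threshold=0.01):
    """Question one: is a large cross-section of the community reporting this
    post/poster? Use the fraction of distinct reporters as the signal."""
    distinct_reporters = {r["reporter_id"] for r in reports}
    return len(distinct_reporters) / community_size >= breadth_threshold

def likely_coordinated_reporters(all_reports, targets_threshold=20):
    """Question two: is a small group targeting a multitude of wide-ranging
    sites? Flag reporters who file against an unusually broad set of targets."""
    targets_per_reporter = defaultdict(set)
    for r in all_reports:
        targets_per_reporter[r["reporter_id"]].add(r["target_id"])
    return {rid for rid, targets in targets_per_reporter.items()
            if len(targets) >= targets_threshold}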

What service could a large population of users provide that a benefactor might be willing to subsidize?
Distributed computing? Voluntary opt-in surveys? Opt-in mapping photos?

Is a subscription-based service the only way to slow down spammers/bots?

Can a subscription fee cover the project's costs and growth? What is reasonable, and how would it be collected?
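
The break-even arithmetic is at least easy to state: monthly running cost, times a growth margin, divided by paying users. The figures below are invented purely to show the shape of the estimate:

def break_even_fee(monthly_cost_usd, paying_users, growth_margin=1.25):
    """Monthly fee per user that covers running costs plus a growth margin.
    All inputs are assumptions to be replaced with real figures."""
    return monthly_cost_usd * growth_margin / paying_users

# hypothetical: $50,000/month in hosting and staff, 100,000 paying users
print(break_even_fee(50_000, 100_000))   # -> 0.625 dollars per user per month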

What else do we have to worry about if we want to finally get rid of these toxic billionaire poison platforms?

Molly J.