John Qase Hacker edited this page Sep 14, 2015 · 5 revisions

A standard academic library has hundreds of thousands of books, and together they are the authority on nearly every subject in human thought and history. Each book, however, is almost completely separated from all the others, even though the knowledge they contain is deeply interrelated.

The Internet, of course, links concepts to each other via hypertext -- the main power of the world-wide web. The current Internet, however, is a menagerie of highly personalized sites and disconnected silos of structured, monolithic data, each generally requiring registration and learning that particular website's interface to its data.

This isn't so bad on its own, but people get fatigued by a hundred different websites with a hundred different interfaces, so participation is perhaps a tenth of what it could be, and the value of the data shrinks accordingly. This project creates a Unified Data Model capable of linking any kind of data to any other kind at any level of scale, without breaking under the scaling problems that inevitably arise when handling exabytes of data.

Beyond easing the difficulty of navigating all this data, this project adds a user-ranking system to democratize adding and organizing it. That is, it scales the participation problem so that millions can take part, much as Wikipedia did.

What this project adds compared to Wikipedia is per-revision voting, to encourage and retain reputable contributors, such as PhDs.
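One way to picture per-revision voting is as a ledger where each edit can be voted on individually, and a contributor's reputation accumulates from the votes their revisions receive. The sketch below is a minimal illustration of that idea; the class, method names, and scoring rule (a simple running sum of +1/-1 votes) are assumptions for demonstration, not this project's actual design.

```python
from collections import defaultdict

class RevisionVoting:
    """Minimal sketch of per-revision voting (hypothetical design):
    every revision can be voted on, and a user's reputation is the
    running sum of the votes their revisions have received."""

    def __init__(self):
        self.revision_votes = defaultdict(int)   # revision id -> net votes
        self.revision_author = {}                # revision id -> author
        self.reputation = defaultdict(int)       # author -> accumulated score

    def submit_revision(self, rev_id, author):
        # Record who made this revision so later votes credit them.
        self.revision_author[rev_id] = author

    def vote(self, rev_id, value):
        # value is +1 (upvote) or -1 (downvote); it affects both the
        # revision's score and its author's reputation.
        self.revision_votes[rev_id] += value
        self.reputation[self.revision_author[rev_id]] += value

wiki = RevisionVoting()
wiki.submit_revision("r1", "dr_smith")
wiki.vote("r1", +1)
wiki.vote("r1", +1)
print(wiki.reputation["dr_smith"])  # → 2
```

Because reputation is tied to individual revisions rather than to whole articles, reputable users can be rewarded for each contribution they make, which is the retention incentive described above.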
