
19 June 2004

Google-Like Reputation-Based Research Information Distribution Thingamajig

Reading these articles has got me thinking about how research outcomes get distributed to different people. Currently we don't do it very well: getting an idea of what sort of stuff is out there requires days, weeks or months of research, and usually another paper summarising it all.

So maybe when SOAP gets a bit more stable, someone could develop a system for categorising research, and attach some sort of reputation system, so that all the different organisations can attach ratings and reviews to papers. Then you could search for all the papers that the top 20% most reputable left-wing organisations gave 80% or better, and filter by "racism" or something.

And you could have a network of relationships where each stakeholder records its view of a whole lot of other organisations' credibility and all that jazz. So you could work out who the most secular, respected organisations are. Or a web of all the organisations that The Economist thinks are credible. Or some combination. If you put in a list of organisations you consider good and left-wing, you'll get back a list of reputable "left-wing" organisations, and then the research papers they've done.

You don't even need a system for storing the papers themselves. You could just store one or more categories and an abstract, which a Google-like something could probably generate automatically. Then there's a centralised-ish (but distributed too, somehow) spot where people can record their rating of a paper, and all the stakeholders can publish and record their responses to it. Those responses will be taken more or less seriously depending on each organisation's rating.

And I think the overhead would be low. You could have a crawler with a list of SOAP-enabled namespace things, which is just anyone who wants to be involved; it doesn't matter who. You just have some anonymous form where people stick in their special URL, and the indexer can get everything it needs from there. Each organisation that was interested could run a lightweight SOAP server that served out the authoritative list of its publications, information about the organisation, and even stuff like a database of that organisation's opinion of every other related organisation (just a number, I guess).

Then your Google-like crawler could be given a kick in its SOAP bum whenever stuff changed, and it could go and reindex. You don't need much of a critical mass, because organisations can use the super-duper search engine on their own site, so it's in their own interests to do it, even if there's no one else doing it yet. And people who are looking for things will see that some publisher is using this new Google-like thing, and think "Gee, that sounds good, I'll go see what it is". And then they can go somewhere and search for lots of other things as well as what they were just searching for.

Oh dear, I'm such a geek. Now I really have to go study.
