I will admit: this is a bit over my head. But I think it’s important. I’ve been fascinated by anything related to emergence ever since I read this O’Reilly article, and while part of me thinks that this must be a brilliant solution to a problem we don’t have (mostly because we’re solving the problem in a bad way, I suspect), I feel certain software development is headed this way.
I’m glad, though, that the Cougaar website was kind enough to define what a software agent is — it seems to be one of those slippery terms like (wait for it) web services that has a different definition depending on who’s doing the talking. I’m also glad they linked to two other sources where you can learn more.
The little bit I did read suggests that software agents may be the solution to a problem I’d like to see solved: the slashdot effect.
I love the idea of P2P. BitTorrent is on my ‘cool’ list. But I don’t use it because the kinds of data I’m interested in are textual data. Web pages. E-books. Stuff to read.
So I’m wondering if agents are the way to solve sharing small amounts of data, and maybe to create a newer (commercial-free?), robust, slashdot-effect-proof web space. I envision two, maybe three kinds of agents — one is a proxy for storing stuff, and one is a seeker that queries other proxy agents for a URI and, failing that, goes to the regular web to fetch the source. A possible third agent would advertise what’s in a proxy. If a group of people had these agents installed on their machines, and any one of them requested a URI for a file not in their own proxy, that person’s seeker agent would make a request to everyone else’s proxy agent. Proxy agents either return the data (status 200) or return a 305 Use Proxy status if they’re too busy to send it.
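To make the idea concrete, here’s a rough sketch of that seeker/proxy exchange in Python. Everything in it is hypothetical — the class and function names (`ProxyAgent`, `seek`, `fetch_from_web`) are mine, not from Cougaar or any real agent framework, and real agents would talk over the network rather than via method calls:

```python
class ProxyAgent:
    """A peer's proxy: stores shared files, answers URI requests
    with an HTTP-style status code (hypothetical in-process stand-in)."""

    def __init__(self, store, busy=False):
        self.store = store  # {uri: data} — this peer's cached files
        self.busy = busy    # if True, refuse with 305 as described above

    def get(self, uri):
        if uri not in self.store:
            return (404, None)         # not cached here
        if self.busy:
            return (305, None)         # "305 Use Proxy": too busy, try another peer
        return (200, self.store[uri])  # cache hit: return the data


def seek(uri, proxies, fetch_from_web):
    """The seeker agent: ask each peer's proxy for the URI;
    failing that, fall back to fetching from the regular web."""
    for proxy in proxies:
        status, data = proxy.get(uri)
        if status == 200:
            return data
        # 305 (busy) or 404 (miss): keep trying the other peers
    return fetch_from_web(uri)  # nobody could serve it — go to the source
```

A group’s peers would each run a `ProxyAgent`, and a request for an uncached file walks the peer list before ever touching the origin server — which is exactly the slashdot-effect relief I’m after.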
Not thoroughly thought out, and I can see some of BitTorrent’s design decisions written into it (which is probably ok), but I think this is workable.
Feedback, of course, is welcome.