Monday, October 1, 2007

A brain cell is the same as the universe

First, one would need speech recognition mapping recognized words into hashtables.

Second, an association with an arbitrary file: an image, URL, or text document.
Third, a source-file database.

The system presents files for commentary. These files may have an existing cross-reference via tags on a site such as Flickr, Digg, del.icio.us, Google Earth, or Wikipedia, or a statistical correlation with other sources in the database--for example, text files sharing statistically unlikely phrases, markup structures in SGML, a color palette, or a file-compression signature.
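One rough way to score such a correlation, assuming it is measured as overlap between two files' tag sets, is Jaccard similarity. The tags and the idea of thresholding on this score are illustrative assumptions, not from the original post.

```python
def jaccard(tags_a, tags_b):
    """Similarity between two tag sets: |A intersect B| / |A union B|."""
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Hypothetical tag sets: one from a site like Flickr, one local.
flickr_tags = {"cat", "bird", "predator", "garden"}
local_tags = {"cat", "bird", "osprey"}

score = jaccard(flickr_tags, local_tags)
print(round(score, 2))  # 2 shared tags of 5 total -> 0.4
```

The same shape of measure would apply to statistically unlikely phrases or compression signatures: represent each file as a set of features and compare the sets.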

The result is not only a richer tag set across varied files, but a transverse intelligence: photographic examples displayed alongside text, maps and images alongside a history, product illustrations alongside VoIP conversations. It also enables a transverse navigation interaction, whereby mentioning a keyword brings onscreen files containing that and other keywords. Take a picture of a cat and a bird, for instance: when the person describes an image of a cat and this predator-prey image appears, the person's next utterance--say, "bird"--brings up pictures of birds, or the range of the osprey's seasonal migrations.
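The navigation interaction above can be sketched as an inverted index from tags to files, so that each spoken keyword pulls related files onscreen. The tags and filenames are invented for illustration; a real system would populate the index from the tag sources described earlier.

```python
from collections import defaultdict

# Hypothetical corpus: each file carries a set of tags.
files = {
    "cat_and_bird.jpg": {"cat", "bird", "predator"},
    "osprey_migration_map.png": {"bird", "osprey", "migration"},
    "cat_history.txt": {"cat", "history"},
}

# Build the inverted index: tag -> set of files carrying that tag.
index = defaultdict(set)
for name, tags in files.items():
    for tag in tags:
        index[tag].add(name)

def on_utterance(keyword):
    """Return the files to display when the user speaks a keyword."""
    return sorted(index.get(keyword, set()))

print(on_utterance("cat"))   # the predator-prey image and the cat history
print(on_utterance("bird"))  # the predator-prey image and the osprey map
```

Saying "cat" surfaces the cat-and-bird image; the follow-up utterance "bird" then surfaces the osprey migration map, which is the chained, associative browsing the post describes.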

Much of this data can be drawn from fairly open, cheap, or free sources, such as Wikipedia, Flickr, Google or Yahoo image search, maps, and other APIs.

The graphical visualization and the speech recognition should be sourced from established technical or academic APIs and algorithms.
