Ignoring the front-end clients, which could come in a multitude of formats (from web browser to CAD plugin), the application would be divided into three major back-end pieces. These pieces of functionality are separated in order to maximize deployment flexibility across a range of environments. The system uses two well-established methods of data transfer on the Internet: email and file transfer via a server (FTP/HTTP). By using standard, non-proprietary means of data submission, a multitude of clients can be supported or built using existing technologies. The three back-end components of the application are:
Getter Application
The Getter retrieves messages from a number of identified email accounts, web services and file servers and consolidates the information at a single location, with separate RSS feeds for the various information sources. The Getter could act for a single project or be set up to serve a practice with numerous projects 'on the go' at one time. The principle behind the Getter is that storage and bandwidth are relatively cheap compared to the time required to dynamically harvest resources from around the web. Deployment-wise, the Getter could be set up to operate on each PC (creating local versions of the data store) or run centrally so that all project data is stored in a single location.
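As an illustration, the sketch below shows one way a Getter might poll a single IMAP mailbox, save each new message into a local store and rebuild a consolidated RSS feed pointing at the stored copies. It is a minimal sketch only: the account details (imap.example.com), the project_store directory and the incoming.xml feed name are hypothetical, and a full Getter would also cover web services and FTP/HTTP file servers.

```python
# Minimal Getter sketch: poll one IMAP account, store messages locally and
# rebuild a consolidated RSS feed. All names and paths here are hypothetical.
import email
import imaplib
import pathlib
from email.utils import formatdate
from xml.etree import ElementTree as ET

STORE = pathlib.Path("project_store")   # hypothetical local data store
FEED = STORE / "incoming.xml"           # RSS feed consolidating new items


def fetch_new_messages(host, user, password):
    """Pull unseen messages from one mailbox and save each to the store."""
    saved = []
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            raw = msg_data[0][1]
            msg = email.message_from_bytes(raw)
            path = STORE / f"mail_{num.decode()}.eml"
            path.write_bytes(raw)
            saved.append((msg.get("Subject", "(no subject)"), path))
    return saved


def write_feed(items):
    """Rebuild a simple RSS 2.0 feed listing the stored items."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Project inbox"
    for title, path in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = path.resolve().as_uri()
        ET.SubElement(item, "pubDate").text = formatdate()
    ET.ElementTree(rss).write(FEED, encoding="utf-8", xml_declaration=True)


if __name__ == "__main__":
    STORE.mkdir(exist_ok=True)
    write_feed(fetch_new_messages("imap.example.com", "user", "password"))
```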
Receiver/Indexer Application
The Receiver monitors the file repositories created by the various Getters for each organisation (or person/group within it). The Receiver reads the RSS feeds generated by the Getter and, when new information is submitted, pulls down the corresponding files and metadata for parsing and submission into the search index by the Indexer. A concise search index is required in order to quickly perform rich, sophisticated searches of the project data. The Indexer would build its index from the metadata supplied by the RSS feeds coupled with a full-text index of the submitted file attachments, work-logs and emails.
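Continuing the sketch, the fragment below shows one possible Indexer pass, assuming the simple feed format produced by the hypothetical Getter above: it parses the consolidated RSS feed and builds a small inverted index mapping tokens to item links. A real Indexer would also fetch and full-text index the attachments, work-logs and emails themselves; the index.json file and the tokenizer are illustrative assumptions only.

```python
# Minimal Receiver/Indexer sketch: parse the Getter's RSS feed and build a
# small inverted index over the item metadata. Paths are hypothetical.
import json
import pathlib
import re
from xml.etree import ElementTree as ET

FEED = pathlib.Path("project_store/incoming.xml")   # feed written by the Getter
INDEX = pathlib.Path("project_store/index.json")    # simple inverted index


def tokenize(text):
    """Lower-case word tokens used for the metadata part of the index."""
    return re.findall(r"[a-z0-9]+", text.lower())


def build_index(feed_path):
    """Map each token to the links of the feed items that mention it."""
    index = {}
    channel = ET.parse(feed_path).getroot().find("channel")
    for item in channel.findall("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        for token in tokenize(title):
            index.setdefault(token, []).append(link)
    return index


if __name__ == "__main__":
    INDEX.write_text(json.dumps(build_index(FEED), indent=2))
```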
RSS Search Engine
The RSS-enabled Search Engine would interface with the search index created by the Receiver/Indexer and provide RSS feeds for any search queries submitted. The use of RSS-style search feeds would enable sophisticated, time-sensitive searches from a multitude of clients (from web-based tools to plugins and desktop helper applications). The search results would provide hyperlinks to the information stored in the central repository created by the Getter application; in this sense the original content can be transient yet still be referenced over long periods of time. In most deployment situations the three applications would run on a single office server, but they could also be deployed on a laptop or spread across three hosted servers located anywhere on the Internet.
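As a rough illustration of the search side, the sketch below serves the hypothetical JSON index from the previous fragment over HTTP, returning matches for a query parameter as a minimal RSS 2.0 document that any feed reader or plugin could subscribe to. The port, the "q" parameter and the single-term matching are assumptions for brevity, not a prescribed interface.

```python
# Minimal RSS search-endpoint sketch: look a query term up in the JSON index
# and return the matching links as an RSS 2.0 feed. Details are hypothetical.
import json
import pathlib
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse
from xml.sax.saxutils import escape

INDEX = json.loads(pathlib.Path("project_store/index.json").read_text())


def results_as_rss(query):
    """Wrap the links matching a query term in a minimal RSS 2.0 document."""
    links = INDEX.get(query.lower(), [])
    items = "".join(
        f"<item><title>{escape(query)}</title><link>{escape(link)}</link></item>"
        for link in links
    )
    return (f"<?xml version='1.0'?><rss version='2.0'><channel>"
            f"<title>Search: {escape(query)}</title>{items}</channel></rss>")


class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        body = results_as_rss(query).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/rss+xml")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Clients subscribe to e.g. http://localhost:8000/?q=stairwell as a feed.
    HTTPServer(("", 8000), SearchHandler).serve_forever()
```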
Example User Interface
The focus of this example user interface is to retain much of the familiarity of contemporary search engines whilst emphasizing the chronological and conversational nature of architectural design evolution. In the given example, as the designer submits daily work-log entries (along with file attachments of their working drawings) and discusses design changes via email, a time-sensitive model is created that allows the user to browse through the evolution of the project. The user is free to expand different conversational threads or switch the search view into a Gantt-chart-style view where email and work-logs are tracked against the project schedule. As working files are submitted to the system alongside work-log entries, the user can browse and download a versioned history of files in order to examine the state of the design during a particular phase of the project.
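To make the versioned-history idea concrete, the small sketch below assumes a purely hypothetical naming convention in which each stored working file is prefixed with the date of the work-log that submitted it (for example 2007-03-14_plan.dwg); grouping the store by that prefix yields the chronological view of the design described above.

```python
# Minimal versioned-history sketch: group stored working files by the date of
# the work-log that submitted them. The naming convention is hypothetical.
import pathlib
from collections import defaultdict

STORE = pathlib.Path("project_store")


def history_by_date(store):
    """Return {work-log date: [file names]} for date-prefixed working files."""
    versions = defaultdict(list)
    for path in sorted(store.glob("????-??-??_*")):
        date, _, name = path.name.partition("_")
        versions[date].append(name)
    return dict(versions)


if __name__ == "__main__":
    # Walking the dates in order gives the chronological state of the design.
    for date, files in history_by_date(STORE).items():
        print(date, files)
```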