The last couple of months have been difficult as I have tried to focus on the problem of satisfactorily testing the Project Information Cloud concept. This process has required me to nail down my aim and hypothesis so that whatever methodology was decided on would ensure the overall ambitions and specific requirements of the thesis were met. Initially my aim focused on improving the access, timeliness and relevance of the information available to project participants; however, this posed problems around the semantics and testability of ‘relevance’. Relevance can mean many things to many people and is also dependent on time: something relevant to me now may not be relevant later, while that same thing could be very relevant to you later on yet of no value at all at present. Consequently, determining a reasonable and sound methodology for testing relevance was proving difficult.
The issue with testing relevance is establishing a viable control sample and locking down the constraints between the control and test samples to the point where the test becomes repeatable. Relevance is like hindsight: it is something you only know after the fact, and as a consequence the relevance of data is usually dependent on a number of variables such as the person, the situation, time constraints and the interface to the data. As a result I began to feel that any test for relevance, whether through case studies, model-based testing, prototyping or thought-based experiments, would end up testing these other factors more than the concept of the Project Information Cloud itself.
Mike and I talked about this a couple of Fridays ago during one of our weekly meetings. He raised a good point: one of the seemingly forgotten origins of the research was that whilst the ‘Building Information Model’ approach was completely viable, its timeframe for industry-wide adoption was very long. This long-term adoption process does not solve the industry’s short- to medium-term information management issues, and if for some reason BIM failed to take hold, as many felt it would, AEC professionals would be left using the same basic collaboration and information management tools that they are struggling with at present. Hence the primary underlying objective of the research was to propose and prove a method of improving the access and timeliness of project information that can be applied within today’s industry conditions.
Establishing ‘applicability’ as an aim instead of ‘relevance’ achieves two things. Firstly, it grounds the research within the current AEC industry instead of setting it apart as some hypothetical and generalised project/information management thought experiment. This is important because, after all, this is an architecture thesis and should first and foremost be about an architecture problem, not a series of general knowledge management concerns. Secondly, focusing on applicability rather than relevance opens up a whole field of thought- and prototype-style experiments related to the Web and to getting the concept to work in ‘practice’ (in both the real-world and architectural senses). Following the applicability route allows the generation of far more specific and to-the-point conclusions, for example: “this concept would work in contemporary practice given the following criteria are met and these processes followed”.
Following this line of investigation also brings into focus the importance of not creating another Building Information Model-style concept: the proposal should be immediately applicable, and not just a Web-enabled Building Information Model. Along this line of thought there are a number of Web-orientated conceptual and technical tests that can satisfactorily test the immediate applicability of a Web-based, distributed Project Information Cloud. If it can be shown that the concept meets the requirements laid out by notable researchers, such as Tim Berners-Lee’s Principles of Design, then it can be argued that the concept has a practical and immediate future within the industry. This testing method also gets around the ever-present user-interface issues that arise when performing prototype or discussion-group style experiments, where what often ends up being tested is the quality of the human interface rather than the benefits or drawbacks of different practical solutions.