The project comprised a series of controlled experiments on the performance of various algorithms, run against a standardised set of datasets from the UCI ML dataset repository [Blake et al, 1998]. The primary performance measure was accuracy: the proportion of correct classifications an algorithm makes on unseen data after being given a set of training data with known classifications. Training time was also considered, as a somewhat informal secondary performance measure.
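As an illustrative sketch (not code from the project itself), the accuracy measure described above amounts to the fraction of predicted class labels that match the true labels:

```python
def accuracy(predicted, actual):
    """Proportion of predictions that match the true class labels."""
    if len(predicted) != len(actual):
        raise ValueError("prediction and label sequences must be the same length")
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

# Example: three of four predictions correct gives an accuracy of 0.75.
print(accuracy(["a", "b", "b", "a"], ["a", "b", "a", "a"]))  # 0.75
```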
Two different categories of algorithms were examined. The first was standard non-fuzzy ML algorithms, such as C4.5; the second was hand-crafted fuzzy rules, derived from a human's examination of the training data and applied to the validation dataset with the aid of a fuzzy logic software toolkit. It was hoped that a third category, neuro-fuzzy ML algorithms, could also be analysed, but time restrictions prevented this.
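To give a flavour of the hand-crafted fuzzy approach, the following is a hypothetical sketch of a single fuzzy rule over one attribute; the membership functions, breakpoints, and class names are illustrative assumptions, not rules or toolkit calls taken from the project:

```python
def triangular(x, a, b, c):
    """Triangular membership function: zero outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(petal_length):
    """Hand-crafted rule: IF petal length is 'short' THEN 'setosa' ELSE 'other'.

    The breakpoints (0.0, 1.5, 3.0) and (2.5, 5.0, 7.5) are invented for
    illustration; a human rule-writer would choose them by inspecting the
    training data.
    """
    short = triangular(petal_length, 0.0, 1.5, 3.0)
    long_ = triangular(petal_length, 2.5, 5.0, 7.5)
    return "setosa" if short > long_ else "other"

print(classify(1.2))  # setosa
print(classify(6.0))  # other
```

A real fuzzy toolkit would additionally handle rule aggregation and defuzzification across many such rules; this sketch shows only the membership-and-compare core of a single rule.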
The Delve environment [Rasmussen et al, 1996] was considered as the experimental environment, but was rejected: it is excessively large and over-featured for the small scale of this project, and the additional complexity would merely have hindered it. In addition, although Delve aims to be general in its goals and application areas, it is not well suited to testing a ``human'' algorithm.