Standard ML algorithms, like many computer algorithms, are well suited to performing mechanical manipulations of very large datasets quickly. This makes them excellent for problems such as mining the www.microsoft.com web server logs, which have many attributes. Only for datasets with so many instances that the running times of standard ML algorithms become intractable would the cost of hand-crafting fuzzy rules be acceptable.
In addition, the training time of an ML algorithm is often much shorter than the time a human would need: ML algorithms can frequently achieve high accuracy with only a small expenditure of training time. Again, for all but the very largest datasets, this advantage outweighs the effort required for a human to construct fuzzy rules.
Humans perform best when they can interpret the training data and extract meaning, understanding, and information from it; that is when they generalize best and draw the soundest conclusions. ML algorithms have no such requirement: they can apply techniques such as decision trees and Bayesian inference to obtain results approaching the probabilistic optimum without any need to comprehend the data.
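To illustrate the point that such techniques are purely mechanical, the following is a minimal sketch of a naive Bayes classifier (one simple form of Bayesian inference). The data is hypothetical, loosely modelled on web-log-style attributes, and is not taken from the dataset discussed above: the algorithm just counts class priors and per-class feature frequencies, then multiplies probabilities, with no understanding of what the attributes mean.

```python
from collections import Counter, defaultdict

# Hypothetical training data: each instance is (features, label).
# The attributes (user agent, time-of-day bucket) are made up for illustration.
train = [
    (("chrome",  "night"), "bot"),
    (("chrome",  "day"),   "human"),
    (("curl",    "night"), "bot"),
    (("firefox", "day"),   "human"),
    (("curl",    "day"),   "bot"),
    (("firefox", "night"), "human"),
]

def fit(data):
    """Count class priors and per-class feature-value frequencies."""
    priors = Counter(label for _, label in data)
    likelihoods = defaultdict(Counter)
    for features, label in data:
        for i, value in enumerate(features):
            likelihoods[label][(i, value)] += 1
    return priors, likelihoods

def predict(priors, likelihoods, features):
    """Pick the class maximising P(class) * prod_i P(feature_i | class),
    using add-one (Laplace) smoothing for unseen feature values."""
    total = sum(priors.values())
    best, best_score = None, -1.0
    for label, count in priors.items():
        score = count / total
        for i, value in enumerate(features):
            # Each feature here takes 2 possible values, hence count + 2.
            score *= (likelihoods[label][(i, value)] + 1) / (count + 2)
        if score > best_score:
            best, best_score = label, score
    return best

priors, likelihoods = fit(train)
print(predict(priors, likelihoods, ("curl", "night")))  # prints "bot"
```

Nothing in the procedure interprets the data: it generalizes purely by counting, which is exactly why it scales to datasets far beyond what a human rule-writer could read.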