Dragan Gasevic's presentation on the evidence-based semantic web shows that the software engineering field is beginning to adopt evidence-based practice (EBP), a paradigm that has already seen increasing adoption in medicine, nursing, education, social work, and other human-services fields.
In the evidence-based paradigm (and it is indeed a paradigm shift, especially for the medical field that birthed it), the randomized controlled trial (RCT) is considered the study type offering the strongest possible evidence for a hypothesis. (Note, however, that not every research question lends itself to an RCT; in software engineering, other study designs may be more appropriate.) Systematic reviews and meta-analyses of many RCTs represent even stronger evidence. As its name suggests, a systematic review requires a systematic, methodical search of the literature in order to present an overall synthesis of results from the highest-quality studies that can be located. A meta-analysis goes several steps further: the results of several related studies (all of them RCTs, or all cohort studies, or all case-control studies, and so on) are pooled, in effect creating one large study that can then be analyzed. Both systematic reviews and meta-analyses are important contributions to a field. Although we might not think of them as empirical scientific studies in themselves, they synthesize the entire body of empirical work done on a topic to date, and that synthesis is more than the sum of the component studies' results.
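To make the pooling idea concrete, here is a minimal sketch of the simplest meta-analytic technique, fixed-effect inverse-variance pooling. The numbers are hypothetical, and real meta-analyses usually also test for heterogeneity and may use random-effects models instead; this only illustrates why pooling acts like "one large study."

```python
import math

def pooled_fixed_effect(effects, std_errs):
    """Fixed-effect (inverse-variance) pooling of per-study results.

    effects: per-study effect estimates (e.g. mean differences)
    std_errs: the corresponding standard errors
    Returns (pooled_effect, pooled_std_err).
    """
    # More precise studies (smaller standard error) get more weight.
    weights = [1.0 / se ** 2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Variances add in inverse: the pooled estimate is more precise
    # than any single contributing study.
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies of the same intervention:
effects = [0.30, 0.45, 0.25]
std_errs = [0.15, 0.20, 0.10]
est, se = pooled_fixed_effect(effects, std_errs)
```

The pooled standard error comes out smaller than that of any individual study, which is exactly the gain the text describes: combining the data behaves like running one larger, more powerful study.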
The state of software engineering, as Gasevic and others point out, is that most so-called evidence in the field consists of case studies or even simple expert opinion, both at the very bottom of the EBP evidence hierarchy. The higher-level empirical studies that have been performed often use small samples, which reduces their statistical power to detect a significant intervention effect. This resembles the situation in other fields that have begun to adopt EBP. Gasevic shows that the study designs, sampling methods, and data-collection methods of published software engineering papers are lacking in quality. If science is the application of rigorous methods to hypothesis testing, then is this a situation wherein computer scientists and engineers aren't practicing much science at all?
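The small-n problem can be made concrete with a standard power approximation for a two-sample comparison. This is a textbook normal-approximation sketch, not anything from Gasevic's presentation; the effect size and sample sizes below are hypothetical.

```python
import math
from statistics import NormalDist

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    effect_size: standardized mean difference (Cohen's d)
    n_per_group: participants per study arm
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1.0 - alpha / 2.0)          # critical value
    noncentrality = effect_size * math.sqrt(n_per_group / 2.0)
    return nd.cdf(noncentrality - z_alpha)

# A medium effect (d = 0.5) at sample sizes typical of small studies:
for n in (10, 30, 80):
    print(n, round(approx_power(0.5, n), 2))
```

With ten participants per group, a genuinely medium-sized effect is detected only about a fifth of the time; reaching the conventional 80% power target takes roughly 60-plus participants per arm. Small-n studies are thus likely to miss real effects entirely, which is the loss of power the paragraph above refers to.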