SS2: Science Happens –
Lessons Learned from Not-Quite-Successful Research Endeavours


It is most often, if not always, the case that scientific research only gets published when it produces, well, good results. This usually stems from an overabundance of novel and interesting scientific material, and from an inbuilt reluctance to publish “negative results”, both on the part of the venues (conferences, journals, workshops, etc.) and of the authors themselves.

However, there are often good lessons to be learned from research activities that, for a variety of reasons, did not deliver the expected results, be it in terms of statistical significance, viable models, performance, or whatever relevant metric applies. In many cases, experiments that were well designed and thoroughly carried out deliver underwhelming results, or assumptions drawn from the existing literature impose spurious limits or expectations, and valuable research activities go unnoticed as a result. Yet those very “failures” can be rich from a “lessons learned” perspective, or they might inspire other scientists with fresh eyes to attack the problem from a different angle, perhaps more successfully.

For this special session, we aim to present results that were not all that their authors were expecting, and that might not be accepted in regular conference tracks due to lack of impact or any of the other reasons mentioned above. We look forward to submissions that not only present interesting experiments and results, but also offer a thoughtful critique of what might have gone wrong.


Authors are invited to submit papers that fall into or are related to the following topic areas:

  • QoE evaluation methodologies that did not work
  • Opportunities arising from unexpected results
  • Lessons learned from “failures” in test design
  • Transparency in reporting research activities
  • Research activities that did not deliver the expected results