A bit of modelling history from the PASSYS project
Modelling and simulation formed an important part of the PASSYS project from its commencement in 1986, and continued to do so in the subsequent related projects listed in the previous Dynastee newsletter, through to the current IEA EBC Annex 71, “Building energy performance assessment based on in-situ measurements”.
There were two strands to the original research. The first was undertaken by the Model Validation and Development (MVD) subgroup, which had the remit of reviewing algorithms in detailed simulation programs and devising validation tests – at first analytical tests and inter-program comparisons in the PASSYS I project (1986-1989), followed by empirical validation using the operational PASSYS test cells in PASSYS II (1989-1993). This was a substantial research effort: 20 researchers from 10 countries were involved in the MVD subgroup. The validation methodology they developed comprised theory review, code checking, analytical validation, inter-program comparisons and empirical validation. Empirical validation itself was broken down into the validation of particular heat transfer processes and whole-model validation, which tested the complete program structure.

The second research strand was undertaken by the Simplified Design Tools (SDT) subgroup, which developed, as the name suggests, simplified models with correction factors based on correlation analysis of multiple runs of a detailed simulation program; an example is the utilisation factors for solar and internal gains that became embedded in the EN ISO 13790 standard for building space heating and cooling energy calculations. The program ESP, developed at the University of Strathclyde in Glasgow, was chosen as the European reference program for the work of both the MVD and SDT subgroups, providing a focal point for comparing existing algorithms, and for suggesting new ones that could be tested within the whole-building modelling framework.
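To make the correction-factor idea concrete, the following LaTeX fragment sketches the form that the gain utilisation factor for heating eventually took in the EN ISO 13790 monthly method. This is an illustrative sketch, not a reproduction of the standard's full clause; symbol names follow the standard's conventions.

```latex
% Sketch: gain utilisation factor for heating, EN ISO 13790 monthly method.
% \gamma_H is the monthly heat-balance ratio of gains to heat transfer:
%   \gamma_H = Q_{H,gn} / Q_{H,ht}
\[
  \eta_{H,gn} =
  \begin{cases}
    \dfrac{1-\gamma_H^{\,a_H}}{\,1-\gamma_H^{\,a_H+1}} & \gamma_H > 0,\ \gamma_H \neq 1,\\[2ex]
    \dfrac{a_H}{a_H+1} & \gamma_H = 1,
  \end{cases}
\]
% where a_H is a dimensionless parameter that grows with the building
% time constant \tau:
%   a_H = a_{H,0} + \tau / \tau_{H,0}
% The monthly heating need then follows as
%   Q_{H,nd} = Q_{H,ht} - \eta_{H,gn}\, Q_{H,gn}.
```

The numerical parameters such as \(a_{H,0}\) and \(\tau_{H,0}\) are exactly the kind of correction factors that were fitted by correlation analysis against detailed simulation results in the SDT work described above.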
In subsequent projects (e.g. PASLINK, DAME-BC and PV-Hybrid-PAS), a procedure for calibration, scaling and replication was developed. Modelling predictions were compared with detailed experimental data obtained with test components mounted on the test cell, and the models were calibrated where there were modelling uncertainties, for example in the choice of convection coefficients. Following this, one or more full-scale buildings were modelled with and without the novel test components (advanced glazings, PV-hybrid modules etc.) in order to determine their energy and environmental performance (energy consumption, IAQ, thermal comfort, lighting etc.) in realistic operational scenarios. Different climatic boundary conditions could also be applied to assess performance in different climatic regions. A special issue of the journal Building and Environment (2008, Vol 43(2)) reported on some of the work undertaken, including a summary of case studies.
Recent research undertaken within IEA EBC Annexes has led to a return to empirical validation of detailed building energy simulation programs, notably in IEA EBC Annex 43 (2003-07) and more recently in IEA EBC Annex 58 (2011-16), to which many Dynastee participants contributed. As part of Annex 58, detailed experimental specifications and high-quality datasets were obtained for validation experiments carried out at the Fraunhofer IBP Twin Houses in Holzkirchen, Germany. Over 20 sets of model predictions from 15 organisations using 12 different programs were submitted and compared with the measured data in an iterative process. A paper in the Journal of Building Performance Simulation (2016, Vol 9(4)), “Whole model empirical validation on a full-scale building”, summarised the outcomes of the first of these experiments. A further validation experiment is planned for the recently commenced IEA EBC Annex 71, this time including not only the building envelope but also systems and synthetic occupancy. As detailed simulation programs become more routinely used for the design of low-energy buildings and for regulations compliance, there is a continuing need to ensure that the resulting model predictions are reliable.
Paul Strachan, ESRU, University of Strathclyde