Inside The Rating Room

Welcome to The Rating Room — where we talk NatHERS, energy efficiency, and everything in between.

In this episode, Brian Haines is joined by co-hosts Matthew Graeme and Andrew Hooper for a wide-ranging conversation that hits a nerve most assessors know well:

Why can the same house end up with different ratings across different NatHERS tools — and what does that mean for industry trust, homeowners, and compliance?


A quick rewind: how we all ended up in the same room

Like many long-running industry relationships, this one started with workshops, training rooms, and the kind of “I’ve heard of you” reputation that energy assessors collect over decades.

Between the three of us, we’re talking 18–24+ years in the game — from the early planning-stage ratings, to the FirstRate 3/4 era, to the “wild west” days of early FirstRate 5 testing (yes, the button that changed the star rating is discussed…).

But the conversation quickly moves from nostalgia to the real issue: consistency.


The big question: should different NatHERS tools produce the same result?

There’s a common expectation in the industry (and from builders) that the answer should be “yes”.

In reality, it’s more like: they can align — but only if you know exactly what you’re doing, and you’re willing to model to the lowest common denominator.

Matthew explains a key truth:

  • Different tools have different features and limits

  • Some inputs exist in one tool but don’t exist in another

  • If you “simplify” the model so all tools can represent it the same way, results can converge — but it can be painful and technical


Examples of where tools diverge

A few practical examples came up that most assessors will recognise:

  • Shading limits (e.g., some tools allow many shading elements per orientation; others cap it)

  • Ceiling/attic floor areas (especially where roof forms extend over outdoor areas)

  • Subfloor and wall height controls

  • Construction detail controls that exist in AccuRate/HERO but not always elsewhere

The punchline:
If one tool can’t represent something, you can’t truly compare apples with apples.


So… does the average user know how to “make them match”?

This is where things get uncomfortable.

Even if the tools can be brought into alignment, the process often requires:

  • knowing which features to ignore,

  • how to rebuild details in a different way,

  • and how to do it without violating TechNote requirements.

In other words, “parity” is possible — but it’s not guaranteed, and it’s not simple.


The uncomfortable reality: assessor input matters more than the tool

A major theme of the episode is that assessor competency and consistency are often the biggest variables.

Andrew shared an old workshop experience where 200+ accredited assessors rated the same project… and the results were staggering:

  • lowest: 0.7 stars

  • highest: 7.2 stars

That’s not a rounding error — that’s a breakdown in consistency.

Even among competent assessors, small interpretive differences (like drawing interpretation, screens/fences averaging methods, or minor geometry decisions) can cause fractional changes. But the bigger differences? Those often come down to how the model is built, not the calculator.


Have the goalposts moved? (Spoiler: yes — and sometimes for good reasons)

Another major segment of the conversation: why an old rating might not mean what people think it means today.

Brian described a real test:

  • a house that was 6 stars at the time

  • opened in the latest software version years later

  • recalculated without changing inputs

  • result: 5 stars

A full star lost purely through:

  • software changes,

  • TechNote evolution,

  • and broader improvements (or shifts) in assumptions and calculation methods.

That leads to a blunt takeaway:

Many older ratings aren’t directly comparable to modern ratings.
And anything very old may be more “historical document” than “current performance indicator.”


Future climate files: why colder zones sometimes look “better” (at first)

We also dug into future climate modelling (2050/2070 and beyond).

A surprising finding: in colder climate zones, some homes can rate better as climate warms — because heating demand drops faster than cooling demand rises (until later in the century, when cooling starts to dominate).

Meanwhile, warm climates (think Queensland) raise a harder question:

How do we keep meeting higher NatHERS targets in a future where cooling loads keep climbing?

This isn’t just a modelling curiosity — it points straight at future-proofing design and whether today’s building settings are ready for tomorrow’s climate.


CSIRO’s “Energy Rating Finder”: transparency win or privacy headache?

The episode then turns to recent NatHERS news: CSIRO piloting an online tool to display star ratings for certain assessed homes — searchable by address.

The group explored the pros and cons:

Potential upside

  • helps homeowners understand their dwelling’s design rating

  • opens the door for energy ratings to become more mainstream in property decisions

  • could eventually influence real estate markets (“compare your house to the neighbour’s”)

Big concerns

  • privacy (address-based lookup)

  • confusion between:

    • new home plan-based ratings vs real-world performance

    • renovated projects with multiple certificates and compliance pathways

  • data gaps (e.g., some tools/time periods not included yet)

Homeowners can opt out — but it raises a practical question:
How will people know to opt out if they don’t even know their data is listed?

The conversation lands on something practical: assessors should consider adding (or strengthening) a privacy/data-use clause in proposals and agreements, and potentially highlighting opt-out options more clearly.


Scorecard ending, and what it means for the workforce

The group also reflected on the closure of Scorecard and what happens next — especially for assessors whose business model relied on it.

One view: NatHERS for existing homes could become far bigger than Scorecard ever was, if mandatory disclosure expands. That would mean:

  • huge demand

  • major workforce needs

  • a strong case for existing assessors to upskill quickly


Integrity: when “just make it pass” isn’t an option

Finally, the episode closes on NatHERS guidance around assessor integrity — including fraud, falsified credentials, conflicts of interest, and what to do when errors are found.

There’s a real-world test that assessors face:

  • A client pushes late Friday

  • The project is just under target

  • The pressure is on to “find a way”

The consensus is simple:

Integrity isn’t theoretical. It’s what you do when the pressure hits.


Key takeaways

  • NatHERS tools can differ — sometimes because the feature sets differ, not because the maths is “wrong”.

  • Competent modelling reduces variation, but assessor interpretation still matters a lot.

  • Software and TechNote changes mean older ratings can drop when recalculated today.

  • Future climate files can produce counterintuitive outcomes (especially in colder zones).

  • Public rating lookup tools may boost transparency — but also raise privacy and perception risks.

  • Integrity guidance is tightening — and that’s good for the scheme and homeowners.


Next

The changes keep coming.