Australian scientists say the internet’s model of ‘trustability’ is severely broken.

New research from CSIRO’s Data61 questions the ‘trustability’ of websites and, in a world first, quantifies the extent to which the trust model of today’s internet is failing.

Researchers found that around half of the internet’s most popular websites are at risk of malicious activity because they depend on a chain of third parties — such as ad providers, tracking and analytics services, and content distribution networks — to import the external resources often required to load content properly.

These third parties can in turn load resources from further domains, creating dependency chains that can exceed 30 domains, all implicitly trusted by the original website. The research found that the longer the dependency chain, the greater the exposure to malicious activity.
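To make the mechanism concrete, here is a minimal sketch (in Python, and not the researchers’ methodology; the start URL, depth and breadth limits are illustrative assumptions) of how such a chain can be traced: fetch a page, collect the external domains its resources come from, then follow those resources to see which further domains they pull in.

```python
# Minimal sketch of tracing a third-party dependency chain.
# Assumptions: the 'requests' package is installed; the start URL and
# the depth/breadth caps are illustrative, not the study's parameters.
import re
from urllib.parse import urlparse

import requests

URL_RE = re.compile(r'https?://[^\s"\'<>)]+')  # crude resource-URL matcher


def trace_chain(url, depth=3, seen=None):
    """Follow resource references up to `depth` hops; return all
    third-party domains encountered along the way."""
    seen = set() if seen is None else seen
    if depth == 0:
        return seen
    try:
        body = requests.get(url, timeout=5).text
    except requests.RequestException:
        return seen  # an unreachable resource simply ends the chain
    origin = urlparse(url).netloc
    for resource in URL_RE.findall(body)[:50]:  # cap breadth per page
        domain = urlparse(resource).netloc
        if domain and domain != origin and domain not in seen:
            seen.add(domain)
            # Each imported resource may itself import more resources,
            # extending the chain of implicitly trusted domains.
            trace_chain(resource, depth - 1, seen)
    return seen


if __name__ == "__main__":
    chain = trace_chain("https://example.com/")
    print(f"{len(chain)} third-party domains implicitly trusted")
```

A static crawl like this actually understates real chains, because scripts can inject further resources at run time, which is precisely why first parties struggle to see where the content on their own pages ultimately originates.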

While this is a well-known web design practice, its implications for security and privacy tend to be overlooked, according to Professor Dali Kaafar, Information Security and Privacy research leader at Data61.

“Almost all websites today are heavily embedded with tracking components. For every website you visit, you could be unknowingly loading content from potentially malicious parties and leaving a trail of your internet activity,” Professor Kaafar said.

The research also found that 1.2 per cent of third parties linked to the top 200,000 websites were suspicious.

JavaScript, a popular web resource generally used to improve the user experience, represents the greatest risk of malicious activity, as scripts can execute in the browser undetected.
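The release does not describe how third parties were classified as suspicious, but a common approach is to check each discovered domain against a threat-intelligence blocklist. The sketch below assumes a plain-text blocklist file with one known-bad domain per line; the file name and format are hypothetical.

```python
# Hypothetical check of third-party domains against a blocklist.
# Assumes 'blocklist.txt' holds one known-bad domain per line; such
# lists are published by several public threat-intelligence feeds.
def load_blocklist(path):
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}


def flag_suspicious(domains, blocklist):
    """Flag a domain if it, or any parent domain, is blocklisted
    (so 'cdn.evil.example' matches an 'evil.example' entry)."""
    flagged = set()
    for domain in domains:
        parts = domain.lower().split(".")
        if any(".".join(parts[i:]) in blocklist
               for i in range(len(parts) - 1)):
            flagged.add(domain)
    return flagged


# Example usage, given a set of domains from a crawl:
#   suspicious = flag_suspicious(chain, load_blocklist("blocklist.txt"))
```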

“The potential threat should not be underestimated, as suspicious content loaded on browsers can open the way to further exploits, including Distributed Denial of Service attacks, which disrupt traffic to websites, and ransomware campaigns, which cost the world more than US$8 billion in 2018,” Professor Kaafar said.

“Worryingly, the original or ‘first party’ websites have little to no visibility of where these resources originate. This points to a lack of ‘trustability’ of content on the web, and the need to better regulate the web by introducing standardised security measures and the notion of explicit trust.”

He said resolving the security issues created by dependency chains would require additional research, along with the support of both the World Wide Web Consortium, the predominant organisation developing web standards, and the web’s ‘hypergiants’.
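No specific mechanisms are named in the release, but one existing W3C standard that already encodes a form of explicit trust is Subresource Integrity (SRI): the first-party page pins a cryptographic hash of a third-party resource, and the browser refuses to execute the resource if its content no longer matches. A minimal sketch of computing an SRI value follows (the file name is illustrative).

```python
# Computing a Subresource Integrity (SRI) value: the base64-encoded
# SHA-384 digest of the resource, prefixed with the algorithm name.
# The page then declares, for example:
#   <script src="https://cdn.example/lib.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
# and the browser will not execute lib.js if its bytes no longer
# match the pinned digest.
import base64
import hashlib


def sri_hash(resource: bytes) -> str:
    digest = hashlib.sha384(resource).digest()
    return "sha384-" + base64.b64encode(digest).decode()


with open("lib.js", "rb") as f:  # illustrative file name
    print(sri_hash(f.read()))
```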

The full report is available online.