If you don't have enough of a reason to hate Google, here's a good one!

On April 25th, 2023, Google employee Ben Wiser published an API proposal: “Web Environment Integrity”. To quote their “intentions”, as of July 22, 2023:

Users often depend on websites trusting the client environment they run in. This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure, and is transparent about whether or not a human is using it. This trust is the backbone of the open internet, critical for the safety of user data and for the sustainability of the website’s business.

You might notice a few similarities here with streaming-service DRM and anti-cheats, and that’s because it’s exactly like those!

However, the signals that are considered essential for these safety use cases can also serve as a near-unique fingerprint that can be used to track users across sites without their knowledge or control.

This is a surprise tool that will help us later!
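To make that concrete: combine even a handful of “low entropy” signals and you get scarily close to a unique identifier. A rough Python sketch (all field names and bit estimates here are made up for illustration, not taken from the proposal):

```python
import hashlib

# Hedged sketch: a few coarse signals, each revealing only a handful of
# bits, multiply together into a near-unique bucket per user.
signals = {
    "attester": "ExampleAttester",    # ~2 bits: only a few attesters exist
    "app_id": "org.mozilla.firefox",  # ~4 bits
    "os_version": "14.1",             # ~5 bits
    "device_class": "tablet-x86",     # ~6 bits
}

# A stable hash over the combined signals behaves like a cross-site ID.
fingerprint = hashlib.sha256(
    repr(sorted(signals.items())).encode()
).hexdigest()[:12]

total_bits = 2 + 4 + 5 + 6
print(f"~{total_bits} bits -> one of ~{2 ** total_bits:,} buckets; id: {fingerprint}")
```

Each signal looks harmless on its own; it’s the combination, stable across sites, that does the tracking.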

Introduction points

Let’s go over each of the main points in the introduction section of this proposal.

Never trust the client.

Users playing a game on a website want to know whether other players are using software that enforces the game’s rules.

Rule number -2,147,483,648 of cybersecurity: never trust the client.

  • The client can, very easily, modify the software you give it. Be it runtime memory modifications, LD_PRELOAD injections, or binary replacements, there’s no guarantee that the software running is what you sent.
  • Oh, you’ll verify the checksums? Nah, there’s still no guarantee the user can’t modify the software to return the checksums they want.
  • Check process memory integrity? Well, how about if the user does it from kernel space? Then you can also be susceptible to false triggers from the OS.
  • You’ll just run your software with the most privileged permissions? Well, the user can too. Not to mention, you’re still a client of someone; be that (modifiable!) hardware or a VM. Oh, and it’s a major security vulnerability.
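To illustrate the checksum point above, here’s a toy Python sketch (entirely hypothetical, no real client software involved) of why a client-reported checksum tells the server nothing:

```python
import hashlib

def honest_client_checksum(binary: bytes) -> str:
    # An unmodified client really hashes the code it is running.
    return hashlib.sha256(binary).hexdigest()

ORIGINAL = b"original client code"
TAMPERED = b"original client code, plus cheat hooks"

# The checksum the server expects to see.
EXPECTED = hashlib.sha256(ORIGINAL).hexdigest()

def tampered_client_checksum(_binary: bytes) -> str:
    # A modified client just replays the checksum the server wants,
    # regardless of what it is actually running.
    return EXPECTED

# From the server's side, the two answers are indistinguishable.
assert honest_client_checksum(ORIGINAL) == tampered_client_checksum(TAMPERED)
```

The server only ever sees a string the client chose to send; the client controls every byte of the answer.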

None of this changes when running in a web browser, even with this proposal. Extensions can still be used to modify behavior, assuming the proposal’s self-contradictory promise of verifying the browser’s contents while still allowing extensions holds up.

There’s no “trust”. Assume everything the client gives you is malicious and tainted. Otherwise you end up with the hell that is C (trusting that the developer does everything right), or the moc3ingbird vulnerability. You simply cannot trust the client, be that network-based, the user, or the developer. It will inevitably end up cracked and bypassed.

To go a bit off-topic for this section: you want “trust”? Stop violating our trust. Stop trying to get rid of user freedom. Stop trying to lock down the user experience so that you control everything.

Verify everything server-side. That’s how this works. Stop using client-side anti-cheats as an excuse not to improve your software.
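What server-side verification looks like, in a toy example: the server re-checks the rules itself using only state it controls, instead of trusting a client’s “this move is legal” flag. A hypothetical Python sketch of a turn-based game server (names and rules invented for the example):

```python
# Hypothetical game server: every claimed move is re-validated against
# authoritative server state, so a tampered client gains nothing.
BOARD_SIZE = 8

def is_legal_move(x: int, y: int, occupied: set) -> bool:
    # The server applies the rules itself; the client's opinion is never asked.
    in_bounds = 0 <= x < BOARD_SIZE and 0 <= y < BOARD_SIZE
    return in_bounds and (x, y) not in occupied

occupied = {(3, 3)}
assert is_legal_move(0, 0, occupied)          # a normal move passes
assert not is_legal_move(3, 3, occupied)      # square already taken
assert not is_legal_move(9, 9, occupied)      # out of bounds: client lied
```

A cheating client can claim whatever it wants; the server’s own rule check is the only one that counts.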


Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they’re human, sometimes through tasks like challenges or logins.

I think this shows a fundamental issue with how advertisements are implemented, rather than users using adblockers. If your website is so horrible that users won’t willingly allow an advertisement here and there, then you need to rethink your decisions.

Users don’t use adblockers because they don’t want to see ads at all; they use adblockers because getting a usable web experience requires it.

Users don’t block advertisements; they block annoying advertisements. They block trackers. They block malware. They block privacy invasion.

If you can’t take the time to make it so your advertisements are non-intrusive, efficient, and not annoying, you deserve to get adblocked.

Maybe a relevant read: GoDaddy - Christopher Carfi: How Adblock trends affect web design

Malicious software

Users sometimes get tricked into installing malicious software that imitates software like their banking apps, to steal from those users. The bank’s internet interface could protect those users if it could establish that the requests it’s getting actually come from the bank’s or other trustworthy software.

We have something for that already: it’s called HTTPS. You want to ensure the binaries and data sent are secure? Use GPG and SSH. Stop trying to use malicious software as an excuse to add DRM and to downgrade the user experience.

I’ve been informed that the previous section on GPG and SSH is a bit misinformed; here’s some context:

“You can most definitely establish a much higher degree of trust with hardware attestation, the problem is just that the UX is bad

HTTPS is kinda unrelated, SSH is entirely unrelated, and GPG doesn’t actually solve this at all

Like the issue with the proposal isn’t that the goals are impossible, the issue is that the consequences of achieving those goals are bad

You basically cannot verify client side binaries without hardware attestation, but you’re better off not verifying them than having random sites have access to hardware attestation

…for the GPG reference it’s worth noting that hardware attestation would actually look exactly like GPG does: the client’s TEE signs something and the server checks the signature, akin to GPG signing something and then the signature being checked later on

The problem is that these are all kind of the same core tech as TPM signing, just used in very different ways

All of them are just asymmetric encryption (SSH via SSH keys, HTTPS via certs) or signing (GPG via GPG keys)”
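The “TEE signs, server verifies” flow from the quote above can be sketched in a few lines. This uses an HMAC from the Python standard library as a stand-in; real attestation uses asymmetric keys locked inside hardware, but the sign-then-verify shape is the same:

```python
import hashlib
import hmac

# Stand-in for a key held inside the attester's TEE. Everything here is
# illustrative; no real attestation API is being modeled.
ATTESTER_KEY = b"key-known-only-to-the-attester"

def attester_sign(token: bytes) -> str:
    # The attester signs the token it vouches for.
    return hmac.new(ATTESTER_KEY, token, hashlib.sha256).hexdigest()

def server_verify(token: bytes, signature: str) -> bool:
    # The server recomputes and compares in constant time.
    expected = hmac.new(ATTESTER_KEY, token, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

token = b'{"verdict": "trusted"}'
signature = attester_sign(token)

assert server_verify(token, signature)                        # genuine
assert not server_verify(b'{"verdict": "nope"}', signature)   # tampered
```

The crypto is the easy, solved part; the controversy is who holds the key and who decides which environments get a signature at all.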

Users want to know they are interacting with real people on social websites but bad actors often want to promote posts with fake engagement (for example, to promote products, or make a news story seem more important). Websites can only show users what content is popular with real people if websites are able to know the difference between a trusted and untrusted environment.

False. If the user needs to worry about this, then your website is likely untrustworthy. You don’t need to “trust” that someone’s “real” to show them content. You need to put effort into providing quality content, and making the UX good for everyone. Not only will this help users, but it can also increase user engagement and user retention.

The fun bits

Now let’s get into the actual proposal, and all of its information.

With the web environment integrity API, websites will be able to request a token that attests key facts about the environment their client code is running in. For example, this API will show that a user is operating a web client on a secure Android device. Tampering with the attestation will be prevented by signing the tokens cryptographically.

…and who provides the keys to sign these tokens? Google? You point out later that there can be several third parties, all user-controlled, but it’s far more likely that Google will only allow themselves into Chromium and abuse their monopoly to become the only relevant “attester”. The idea falls flat from the start.

Websites will ultimately decide if they trust the verdict returned from the attester. It is expected that the attesters will typically come from the operating system (platform) as a matter of practicality, however this explainer does not prescribe that. For example, multiple operating systems may choose to use the same attester. This explainer takes inspiration from existing native attestation signals such as App Attest and the Play Integrity API.

You can believe that, should this be accepted, I (and many others) will not be accepting attestations from Google or anybody under their wing/paycheck. Assuming there are other relevant attesters in the first place.

There is a tension between utility for anti-fraud use cases requiring deterministic verdicts and high coverage, and the risk of websites using this functionality to exclude specific attesters or non-attestable browsers. We look forward to discussion on this topic, and acknowledge the significant value-add even in the case where verdicts are not deterministically available (e.g. holdouts).

This has already been discussed by the community. That’s a no.


Allow web servers to evaluate the authenticity of the device and honest representation of the software stack and the traffic from the device.

As stated before, fundamentally impossible.

Offer an adversarially robust and long-term sustainable anti-abuse solution.

It will be abused ASAP by Google.

Don’t enable new cross-site user tracking capabilities through attestation.


Continue to allow web browsers to browse the Web without attestation.

This goes against the idea behind the entire proposal. Fucking stop it.


Let’s go over some “non-goals” you’ve stated here.

Enable reliable client-side validation of verdicts: Signatures must be validated server-side, as client javascript may be modified to alter the validation result.

This will surely end well. Definitely. It most likely won’t be paywalled, put behind a walled garden, and abused as much as Google can until they’re done with their toys.

Enforce or interfere with browser functionality, including plugins and extensions.

It’s clear, from the points you’ve already shown, that this is an anti-feature in your eyes. You want to block advertisements, yet allow adblockers to function. You want to block piracy, yet allow users to view their content easily. You want to ensure the website and client are trustworthy, when the latter is fundamentally impossible.

Access to this functionality from non-Secure Contexts.

Yet another contradiction, and one that Google’s monopoly will inevitably exploit to make Chrome, and its puppets, the only allowed attesters. This definitely hasn’t been abused before.

Example use cases

  • Detect social media manipulation and fake engagement.
  • Detect non-human traffic in advertising to improve user experience and access to web content
  • Detect phishing campaigns (e.g. webviews in malicious apps)
  • Detect bulk hijacking attempts and bulk account creation.
  • Detect large scale cheating in web based games with fake clients
  • Detect compromised devices where user data would be at risk
  • Detect account takeover attempts by identifying password guessing

Again, as stated before, this is fundamentally impossible, and will fail immediately. You can already mitigate these on the server-side, don’t use client-side DRM as an excuse to not improve your website.

There are a minimum of three participants involved in web environment integrity attestation:

  • The web page executing in a user’s web browser
  • A third party that can “attest” to the device a web browser is executing on, referred to as the attester
  • The web developer’s server, which can remotely verify attestation responses and act on this information.

In other words:

  • A shitty website that deserves the experience users give it.
  • Google.
  • A subpar server-side verification system.

I don’t need to go over the technical details. This is enough. If you’re interested, you can read it yourself.

What information is in the signed attestation?

The proposal calls for at least the following information in the signed attestation:

  • The attester’s identity, for example, “Google Play”.
  • A verdict saying whether the attester considers the device trustworthy.

We’re still discussing whether each of the following pieces of information should be included and welcome your feedback:

  • The device integrity verdict must be low entropy, but what granularity of verdicts should we allow? Including more information in the verdict will cover a wider range of use cases without locking out older devices. A granular approach proved useful previously in the Play Integrity API.
  • The platform identity of the application that requested the attestation, like com.chrome.beta, org.mozilla.firefox, or com.apple.mobilesafari.
  • Some indicator enabling rate limiting against a physical device

We strongly feel the following data should never be included:

  • A device ID that is a unique identifier accessible to API consumers

Bullshit. All of it. This will end up being the worst combination of Denuvo and platform lock-in that has ever existed, made a thousand times worse by the fact that the web is supposed to be an open platform. And the included information will, without a doubt, be used (with the malware and trackers made possible by this proposal) to track users.

How does this affect browser modifications and extensions?

Web Environment Integrity attests the legitimacy of the underlying hardware and software stack, it does not restrict the indicated application’s functionality: E.g. if the browser allows extensions, the user may use extensions; if a browser is modified, the modified browser can still request Web Environment Integrity attestation.

It has been stated previously in this proposal that this is an anti-goal. Fuck off.

“It’s only a proposal”

This being a “proposal” gives you no excuse to continue to work on and push for this. It just shows that you’re slimy bastards that want to get rid of user autonomy, and will do anything you possibly can to monopolize the open web. You already control multimedia DRM (even blocking out community projects for basic Widevine!), and that’s bad enough. Why nobody has (tried to) sue you and put laws in place to make these practices illegal, I will never know.

Don’t point to the wording of this proposal and say “but it states that walled gardens won’t be allowed!” Yes, they will be. Google will abuse their monopoly to create a walled garden no matter what the proposal specifies. That’s the power of a monopoly.


Fuck off, Google. You deserve the backlash the community has given you so far, and so much more. I sincerely hope you suffer for this proposal, and that the community uses it as a chance to get rid of everything your touch has tainted.

For more reading on Google and DRM, these blog post(s) may be of interest:

Contributions to this post (information updates, misinformation removal, resource additions) will be accepted at the appropriate open source repository where the website this is originally hosted. As of the time of writing, this is https://gitlab.com/OroWith2Os/orowith2os.gitlab.io.

This post is licensed under CC BY 4.0 by the author.