Artificial Intelligence and Your Data
The state's AI contract went through a company the FBI raided for bid rigging. The scoring team had zero AI expertise. Three task force members had financial ties to the vendor.
*Presenting on democratic AI oversight at a national policy forum*
The governor just signed a contract putting AI on 40,000 state employees' computers — touching your health records, your benefits, your children's schools. The company that brokered the deal? The FBI raided their headquarters twelve months ago for bid rigging.
The union representing 15,000 of those employees demanded to bargain. The administration pushed ahead anyway. Nobody asked you what should happen with your data. So I pulled the contract, the vendor's bid, the scoring rubric, the procurement emails — every document I could get my hands on.
How they scored it
Look at what the state weighted most — and what is missing entirely:
| Category | Weight |
|---|---|
| Collaboration and Partnership | 25% |
| Training | 15% |
| Change Management | 15% |
| Impact and Alignment | 15% |
| Value Determination | 10% |
| Costs | 10% |
| Technical Requirements | 10% |
| AI Safety, Bias Testing, Error Rates | 0% |
There is no category for AI safety
Not at 5%. Not at 1%. It does not exist. "Collaboration and Partnership" — the most subjective category — gets 25%. Whether the AI actually works safely? That falls under "Technical Requirements" at 10%, which really just checks security compliance. Nobody scored whether the AI produces accurate results. Nobody checked for bias. Nobody compared error rates.
Who evaluated it
The scoring team: a Director of Procurement, a Secretariat CIO for Health and Human Services, IT operations staff. Good at buying servers. Not qualified to judge whether an AI system will mishandle your unemployment claim or flag the wrong person.
Thirteen vendors submitted bids. Nine were thrown out before anyone scored them. The only two that made the scoresheet? OpenAI and Anthropic. Both submitted by the same reseller: Carahsoft Technology Corporation.
That is not a competitive procurement. That is a two-product contest from a single middleman.
Who sold it
Carahsoft calls itself "The Trusted Public Sector IT Solutions Provider." Its track record:
- In 2015, Carahsoft and VMware paid $75.5 million to settle a GSA Inspector General case over federal pricing.
- In September 2024, the FBI and DCIS raided Carahsoft's headquarters over alleged bid rigging.
One year after that raid, Massachusetts picked them. Both finalists came through this same company.
Carahsoft also sells surveillance tools to ICE
In September 2025 — while Massachusetts was finalizing this contract — ICE signed a $5.7 million deal with Carahsoft for Zignal Labs, an AI platform that scans over 8 billion social media posts per day and builds automated watchlists for investigators. The ACLU called it "black box technology" deployed "without any accountability."
Zignal Labs also works with the Israeli military. A company pamphlet from 2025 advertised its "tactical intelligence" for "operators on the ground" in Gaza.
The company selling Massachusetts your AI assistant is the same company selling ICE the tools to surveil immigrants and activists. You deserve to know who profits from this.
Financial ties on the task force
The Governor's AI Task Force recommended this contract. Three of its members had employer-level financial ties to OpenAI.
No independent AI researchers. No procurement specialists. No labor reps. No members of the public. And one task force member was later hired into the state technology office to oversee the very contract her task force recommended.
Your data and the "four walls"
The administration promised employee and resident data would stay "within the four walls." But the contract allows vendor support staff located outside the United States to access it. The data sits on infrastructure owned by OpenAI's largest investor. When I asked whether the state can independently verify that your data is not training their products, they could not point to a single enforcement mechanism.
A contractual promise is not a safeguard.
What I am doing about it
I pulled the procurement documents that the administration did not want to release — the scoring rubric, the bid review notes, the evaluation emails, the scoresheet. When they sent general assurances instead of records, I pushed back until they produced them.
Now I am writing AI governance legislation for Massachusetts. Not a ban. Not a moratorium. Rules that say: before AI touches your data, an independent safety evaluation happens. Before the state signs a contract, someone checks whether the vendor is under FBI investigation. Before your health records go into an AI system, your elected representatives — not a hand-picked task force with financial ties to the vendor — get a say.
This is the work. Not a press conference. Not a tweet. Legislation.
AI will change how government serves you. That change should be transparent, accountable, and on your terms.
Sources
- Massachusetts AI procurement documents (scoring rubric, bid review notes, evaluation scoresheet) — obtained via public records request by Rep. Uyterhoeven
- WBUR: Massachusetts launching ChatGPT assistant across executive branch (Feb. 2026)
- Federal News Network: FBI, DCIS raid Carahsoft headquarters (Sept. 2024)
- GSA Inspector General: VMware and Carahsoft $75.5 million settlement (2015)
- The Lever: ICE social media surveillance contract with Carahsoft/Zignal Labs (Oct. 2025) — Contract ID: 70CMSD25FR0000089
- Mass.gov: Governor Healey AI announcement (Feb. 2026)
The plan
- Require open, competitive public bidding for all government AI contracts
- Create Community AI Councils so the people affected by AI have real power over how it is used
- Ensure workers are consulted before AI tools are deployed that change how they do their jobs
- Protect residents' data from AI systems that lack transparency or accountability
- Make Massachusetts the state that proves AI governance can be democratic