Author here. I co-founded GrackerAI, which monitors AI citations for B2B companies, so I have a bias worth disclosing upfront.
The core finding that surprised me: we looked at cross-platform citation data and the overlap between engines is tiny.
- Only 11% of domains get cited by both ChatGPT and Perplexity.
- Less than 1% for specific queries.
- Perplexity leans heavily on Reddit (47% of top citations).
- ChatGPT favors direct authoritative sources with strong recency signals.
- Google AI Overviews has 76% overlap with Google's top 10.
The practical implication for anyone building a B2B product: the AI engine your team uses for research is probably not the same one your enterprise buyer uses during procurement.
Wharton-GBK 2025 data shows ChatGPT at 67% enterprise adoption and Copilot at 58%, while Perplexity sits at roughly 18%.
The conversion data is also worth noting. Across 42 B2B sites studied, ChatGPT referral traffic converted at 15.9% vs 2.8% for traditional organic Google traffic. AI compresses the research phase, so visitors arrive further down the funnel.
Are you seeing similar patterns? And for those building developer tools or B2B products, which AI engines are you actually seeing referral traffic from?
I ran into this when building a kids' education app a few years ago. We explored a bunch of options, from asking for the last four digits of their parents' SSN (which felt icky, even though it's just a partial number) to knowledge-based authentication (like security questions, but for parents).
Ultimately, we went with a COPPA-compliant verification service, but it added friction to the signup process.
It's a trade-off between security and user experience, and there's no perfect solution, unfortunately.
Interesting project. IIRC, one of the biggest challenges with the TI-99/4A was its TMS9900 processor. It was a 16-bit CPU, but had a really awkward memory architecture that made it difficult to write efficient code.
The lack of dedicated registers meant a lot of memory access, which slowed things down considerably. This is probably why it never gained the same traction as the 6502-based systems like the Apple II or Atari.
I'm curious to see how this UNIX-like OS addresses those limitations. It's a pretty neat accomplishment if it can provide a usable environment on that hardware.
> The lack of dedicated registers meant a lot of memory access, which slowed things down considerably.
It gets worse because the TI99 only has 256 bytes of RAM directly addressable on its 16-bit bus. All the other memory in the system is video RAM and is accessed 8 bits at a time through the video display processor. Oh, and you can only do this when the VDP is not accessing the memory. This is incredibly slow and severely hobbles the potential performance of the CPU.
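To make the indirection concrete, here's a toy Python model of how a TMS9918-style VDP gates access to video RAM. The port protocol (two control-port writes to set a 14-bit auto-incrementing address, then one data-port transaction per byte) follows the VDP datasheet convention, but the class itself is a simplified illustration, not an emulator:

```python
# Toy model of reading VDP RAM on a TMS9918-style video chip.
# The CPU cannot address this RAM directly: it first writes a 14-bit
# address, one byte at a time, to the control port, then pulls data
# out of the data port one byte per transaction.

class VDP:
    def __init__(self, size=16 * 1024):
        self.ram = bytearray(size)
        self.addr = 0          # auto-incrementing read/write pointer
        self._latch = None     # first byte of the two-byte address setup

    def write_control(self, byte):
        """Address setup: low byte first, then high byte (top bits are mode flags)."""
        if self._latch is None:
            self._latch = byte
        else:
            self.addr = ((byte & 0x3F) << 8) | self._latch
            self._latch = None

    def read_data(self):
        """Each read returns one byte and bumps the pointer."""
        value = self.ram[self.addr]
        self.addr = (self.addr + 1) % len(self.ram)
        return value

def cpu_read_word(vdp, address):
    """A single 16-bit fetch from video RAM costs four port
    transactions: two control writes, then two data reads."""
    vdp.write_control(address & 0xFF)
    vdp.write_control((address >> 8) & 0x3F)
    hi = vdp.read_data()       # TMS9900 is big-endian
    lo = vdp.read_data()
    return (hi << 8) | lo

vdp = VDP()
vdp.ram[0x100:0x102] = b"\x12\x34"
assert cpu_read_word(vdp, 0x100) == 0x1234
```

Every word of "main memory" costs four bus transactions plus whatever stalls the VDP imposes while it is busy refreshing the display, versus one fetch from the 256 bytes of true on-bus RAM.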
The whole thing seems like it was designed in a parallel universe, or at least it reeks of some kind of a sunk-cost-fallacy design-by-committee thing.
Supposedly what happened is that the system was originally designed to have either an 8-bit CPU, or a 16-bit CPU with an 8-bit bus (cf. 8086/8088) like TI's own TMS9985, but at some point it was decided that they should instead cram their full 16-bit TMS9900 minicomputer CPU (!) into the thing. This decision basically tanked the whole architecture.
It was too late/too expensive to redesign the 8-bit support chips into 16-bit counterparts, so they had to make some really out-there decisions like "talk to the graphics chip and give it an address to read/write every time you want to use memory" and "software is written not in machine code, but in GPL (Graphic Programming Language), which is then interpreted by the CPU and turned into actual TMS9900 machine code".
Software on ROM cartridge for the system is stored in GPL and is fetched from ROM by the CPU (but wait! The ROMs are not in memory space like they would be on a sane computer; they are SERIAL ROMs read 16 bits at a time with memory mapped I/O) and interpreted to machine code. This is slow. When you write your own software in BASIC, however, this gets worse: now you're writing BASIC, which is being interpreted and turned into GPL, stored in video RAM, and then fetched back from video RAM and turned into machine code by the CPU. THIS IS EVEN SLOWER.
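The stacking can be sketched with a toy cost model. All the constants here are illustrative placeholders, not measured figures; the point is only that each interpretation layer multiplies the number of slow byte fetches for the same logical work:

```python
# Back-of-the-envelope model of the TI-99/4A's interpretation layers.
# All numbers are illustrative, not measured.

SLOW_FETCH = 1   # one byte pulled through a GROM or VDP port

def run_gpl(n_ops, fetches_per_op=2):
    """GPL: cartridge bytecode fetched byte-by-byte from serial GROM,
    then dispatched by the TMS9900 interpreter in console ROM."""
    return n_ops * fetches_per_op * SLOW_FETCH

def run_basic(n_stmts, gpl_ops_per_stmt=10, token_fetches=4):
    """TI BASIC: the tokenized program lives in VDP RAM, so every
    token fetch goes through the VDP data port, and each statement
    expands into several GPL operations, themselves interpreted."""
    return n_stmts * (token_fetches * SLOW_FETCH
                      + run_gpl(gpl_ops_per_stmt))

# The layers compound: the same logical work costs an order of
# magnitude more slow fetches in BASIC than in plain GPL.
assert run_basic(100) > 10 * run_gpl(100)
```

Plug in any plausible constants and the conclusion is the same: an interpreter running on top of an interpreter, with both program text and intermediate state living behind slow ports, can't be fast.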
Needless to say, the BASIC on the TI99 is dramatically slower than the already slow implementations on other contemporary micros.
It DOES have a full 16-bit CPU which is theoretically much more powerful than a 6502 or Z80 but this wild-ass implementation of... well, everything, makes the system probably the least capable machine of the era.
RAM was very expensive then and 16 bit CPUs weren't that much faster to justify the cost if you were aiming for the home market.
Both true, which makes this an even more baffling choice -- why pick the more expensive, state-of-the-art 16-bit CPU* that you're getting little or no benefit from, plus 16K of extremely slow-to-access combined video and system RAM? You could have used a cheaper 8-bit CPU and, for the same budget, maybe fit 4K or 8K of system RAM on the bus plus some amount of dedicated video RAM for the VDP. That would have been faster and more useful in nearly all real-world applications, made for a much cleaner board design and easier development, and probably cost less. That's what everyone else did.
Then again, what was this machine's target market?
* The reason is probably that TI wanted to show off their state-of-the-art CPU tech and be able to point to the spec sheet and say "look, it's 16 bit! All our competitors are only 8 bits -- that's half as many bits!"
The design "decisions" are easy to explain. The 9985 failed. They had a development prototype with a 9900 emulating the expected CPU. The 9918 VDP was the cheapest way to add 4K (later 16K) of DRAM. And that was what they shipped after the 9985 was killed.
------------------------------
From 1977 they expected a 9985 to succeed the cheap 40-pin 9981, both having an 8-bit external bus (1). It would have had 256 bytes of RAM onboard. I speculate it would have had the 9900 microcode optimizations seen in the military SBP9989.
Anecdotally, the 9985 failed seven tape-outs. It was killed. The Bedford UK team was tasked with starting over: eventually this produced the 9995.
But the Home Computer had been prototyped using a 9900 board. So that was forced into the 99/4 (not A) with some external 256 byte SRAM.
Memory was expensive. The 9918 VDP, designed by a team in 1975 that included junior engineer Karl Guttag, was the cheapest way to interface the 4K DRAMs which TI made and sold to itself. By the time it reached market, 16K in 8x 4116s was optimal.
Various efforts to cost-reduce and upgrade the 99/4A ran into the '82 price-war with Commodore.
Every design iteration that added more RAM (2, 8, or 16K directly accessible from the CPU) was "paid for" by reducing cost elsewhere (PALs, for instance). BOM was around $105. (3)
But in the price war, engineers were told to deploy the cost savings without any new features: this became the 99/4A v2.2, or QI for Quality Improved. (3) The 99/4A was already a loss leader by Q4 1982. (5)
In 1981, Karl Guttag's new 9995 passed first silicon (2). It used the new optimized 99000 CPU core, which also famously passed first tape-out. The 9995 was available in quantity in 1982 (3), when new consoles were started around it: the 99/2 and 99/8.
The 99/2 was supposed to be cheap enough to compete with Sinclair. [6]
The 99/8 was a technical beast for the high end, having 64K of directly accessible RAM. Its fancy memory mapper drove 24-bit external addresses. It supported 512K off-board, which the P-Box had been designed for. It had Pascal built in. Yet there was no Advanced VDP for it: it was stuck with the same 9918A.
In early 1983, TI assembled a team of two dozen engineers to write software for it: Pascal applications, a new LOGO, a database, a new word processor, TI FORTH, a complete accounting package, and a rumored superior easy-to-use interface. Pascal was supposed to deliver many benefits. It would be a small-business machine. (4)
Of course, in November 1983, all efforts ceased as Home Computer was cancelled--just as the consoles were to be unveiled at Winter CES.
-----------
(1) An 8-bit bus was always going to be optimal--even the IBM PC 8088 saw that. 16-bit peripheral chips were never going to be made: the package size would prohibit that.
(2) Electronics Magazine and EE Times articles
(3) Internal memos of Don Bynum, program manager
(4) TI Records, DeGolyer Library, SMU : Armadillo and Pegasus
(5) "Death of a Computer", Texas Monthly, end of 1983?
(6) BYTE Magazine June 1982-ish
Based on research for my book: _Legacy: the TI Home Computer_.
Thank you for this, this is very interesting detailed context.
Do you think there is a possible world where TI would have swallowed their pride and considered not-invented-here options like a regular 8080/Z80/6502 as the CPU?
I have a few ideas but I think they were set on using their own chip.
There was a memo asking if TI should support those other CPUs in their AMPL prototyping system (990 based tools and in-circuit emulator). That investment was rejected.
Anecdotally, Don Bynum was unhappy with slow progress on defining the Home Computer, and hacked together a Z80 based machine. The engineers redoubled their efforts... supposedly...
There's politics between the Calculator division (all consumer products), Semiconductor, and Data Systems Group.
Still, TI had a TMS8080 (and later their own 486).
I'll work on this idea, thanks...
----
As a child, I knocked some books off a garage shelf once and was plonked on the head with copies of The 8080 Bugbook. What the heck was a Bugbook? Or an 8080?
Some years later, a 9995 data sheet fell on my head and I thought how hard can it be to wire up a computer?
"Section 1.23 of the IBM Agreement states that IBM Licensed Products "shall mean IHS Products ..." The only logical conclusion is that the parties meant those IHS Products specifically identified in Section 1.2 of the Agreement. Section 1.2 does not limit the products to IBM designed products."
"Therefore, IBM has the right to act as a foundry and to make, use, lease, sell and otherwise transfer the microprocessors in question to Cyrix free of any claims of patent infringement."
We are building the infrastructure for the post-SEO era. As B2B buyers shift from traditional search to AI assistants like Perplexity, ChatGPT, and Gemini, the old playbooks for visibility are breaking. GrackerAI provides the analytics and automation layer for Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO).
We help brands track their real-time AI visibility scores and automate the creation of high-intent content that these engines actually cite. We’re looking for a Senior Backend Engineer to help scale our data ingestion pipelines and LLM-based evaluation systems. You’ll be working on the core engine that benchmarks brand mentions against competitors across various model architectures.
We are a small, results-driven team focused on high-intent lead generation for SaaS. If you're interested in the intersection of search behavior and automated content strategy, we’d love to hear from you. I am the founder and will be personally reviewing and responding to all applications.
GrackerAI - A Generative Engine Optimization (GEO) platform, addressing a critical gap as businesses discover they’re invisible in AI-powered search engines despite heavy investments in traditional SEO
I’ve been looking into how AI agents and “vibe coding” are changing the way we think about digital identity. The problem is shifting from verifying who someone is to understanding who actually performs an action, especially when both humans and AI share access.
Two trends stand out:
1. Identity systems are starting to use behavioral and contextual signals instead of just passwords or keys.
2. Our current trust models break when adaptive AI systems act independently.
As AI agents become more autonomous, is it time to design identity systems that verify intent, not just identity?