Rob dropped me off Monday morning at the warehouse Applazon was renting. It was immediately apparent that I was way overdressed in my suit and tie. Most everyone on the team was casually dressed, and by that I mean shorts, with halter tops for the women and tank tops for the guys. The reason was obvious; it was uncomfortably hot inside the warehouse. The temperature had to be close to ninety degrees. My suit coat and tie were quickly set aside, my shirtsleeves rolled up and my shirt untucked and completely unbuttoned.
I also quickly learned what being in charge really meant on a major project like this. Rob had been right on all counts. Dr. Moorthy was there to lay out the specifics before he headed back to Seattle. Mr. Cooper was staying for another week to work one on one with members of the team to better understand the logistics of the prototypes his engineers would be building back in Cupertino. I should have wondered why they’d trust a sixteen-year-old boy (who they knew was really only fourteen) with complete control of something like this, and of course they didn’t. None of the team members actually reported directly to me, and the chief engineer on the project had the ultimate say when it came to the important decisions. As the technical expert who best understood the science behind the server design, I was nominally in charge, but the career professionals could override my every decision. That much should have been obvious to me from the beginning, given that the teams had already been assembled before I was even offered the job.
There was another issue that quickly became apparent. The chief engineer’s son was working as an intern for the summer, and he was stunning. He was not quite sixteen, making him almost a year-and-a-half older than my real age. However, he was another late bloomer, with fine peach fuzz on his upper lip, sparse hair under his arms and dusting his arms and legs, and a voice that still cracked occasionally. He was perhaps five-foot ten and had wavy, light-brown hair that could almost be considered blond.
His name was Shaun and, gay or straight, he was gonna be trouble. He was at least a nine out of ten for sure. I wasn’t his boss, but he was my boss’s son, which meant that any sort of relationship was absolutely out of the question. Maybe. If he was straight. I just hoped he didn’t catch me staring at him. I hoped no one else did, either.
We were on a tight schedule, as construction of the data center had already started, with pouring of the concrete foundation underway. Corporate hoped that in spite of the change in server design, we’d be ready to begin server installation as soon as the building was ready in December 2019. With any luck and barring any unforeseen circumstances, the new data center would be ready to come online by the end of May 2020, with server installation continuing into early-to-mid 2021 depending on demand. With the added capacity, speed and reliability of the new server design, I expected ACR would see dramatic growth in what was already a market we dominated.
After Dr. Moorthy reviewed the history of the project and the makeup of the teams, he introduced the chief engineer, Frank Cole, who provided a brief overview of ACR in a PowerPoint presentation and showed how our data center would be incorporated into the existing structure. The architect, Simone Walker, went over the physical layout of the facility, and she projected architectural drawings for all to see. The building had been designed before the change in server design, and it was too late to alter its size or shape; the concrete was being poured as we spoke, so the footprint of the data center couldn’t be changed. Likewise, the load-bearing supports were specified by the original design and were fixed in place by the foundation and couldn’t be moved. Modifications to the building’s shell were still possible but would require new architectural drawings that would have to be submitted to the county for review. That would almost certainly result in delays, which were to be avoided at all cost.
Next, Frank introduced Dr. Priscilla Blackford, an electrical engineer who would be leading the effort to redesign the server racks. Apparently, many in the room were seeing my concepts for the first time, as there were collective gasps when she began her series of PowerPoint illustrations. It was obvious from the degree of sophistication that she’d been working on my design since well before my lunch meeting with the men from Applazon not even three days earlier. Some decisions regarding server arrangement had already been made that would have a major impact on everything else. Although not set in stone, any change would necessitate that every other aspect of the project be modified. The servers were to be fabricated on a single motherboard with all components soldered directly in place. That arrangement was necessary to achieve the server density we desired and to provide for effective heat management. Mr. Cooper pointed out that this was how we designed our laptops, letting us bring impossibly slim designs to market.
The motherboards would be fabricated on circuit boards that were fifteen centimeters square. They planned to put two servers on each motherboard, which meant the circuit density would be insanely tight. With a planned toroidal server rack that was one-half meter tall, allowing for thermal insulation and infrastructure, two motherboards containing four servers could be stacked vertically. What hadn’t yet been decided was how many servers to fit around the toroidal ring, which would determine the diameter of the server racks.
Two designs were being considered. The first had 128 motherboard positions arrayed around the circle, two boards stacked at each position, for a total of 512 servers in each cabinet. With four toroidal cabinets stacked on top of each other, giving a total height of two meters, a bit taller than I was, each server stack would accommodate 2048 servers. An installation of 52 stacks would provide a capacity of a bit over 100k servers. Allowing for redundancy, we could serve up over 50 petabytes of data. Allowing two centimeters between each motherboard along the inner edge, the inner diameter of the toroid would be 81.5 centimeters, with the outside diameter of each server stack being about one-and-a-third meters. That was about a third-again as large as I’d envisioned in my original design.
The second design quadrupled the number of motherboards arrayed around the circle, turning each stack into a data center in and of itself. We’d only need twelve or thirteen stacks, significantly reducing the infrastructure needed to support them, but increasing the number of servers that would have to be taken offline whenever the racks had to be serviced. The inner diameter of the cabinets would be three-and-a-quarter meters and the outer diameter about four meters, making each stack enormous. The primary difference was in how the two designs were serviced. The first design would make use of an overhead crane that could lift any number of the toroidal cabinets, allowing the cabinet in question to be serviced in place or removed entirely. It was simple and elegant. Unfortunately, there wasn’t room inside for more than a simple robotic arm whose sole purpose would be to remove and insert server boards. Caching of spare servers would be limited, requiring daily servicing.
The second was more like something NASA would design for a probe to Mars, with a pair of robotic arms that could be manipulated by an external operator with greater precision than human hands. The arms would perform double duty, executing simple repairs on the backplane components, in addition to handling server replacements from within the structure. A large number of replacement servers would be cached inside the structure, so the need to break the seal to access the stack could be reduced to perhaps once a week or even once a month. Although the up-front costs of the two designs were surprisingly similar, the second design was dramatically less expensive to maintain. However, if there ever were a problem that couldn’t be fixed by the robotic arms, the only way for a human to fix it would be to power down the entire stack, bring it to room temperature and climb inside — or climb inside wearing a cryogenic suit. That sounded like fun!
I noticed the stunned expressions around the room as everyone began to come to grips with just how radical my design concept really was. This wasn’t the entire group involved, either, I surmised, as there were hardly enough people present to handle even a small portion of the design work. Undoubtedly, this presentation was only for those who needed to know the full scope of the project. Dr. Blackford’s presentation had been the most intense by far, but it was far from the last of its kind as we got into the nuts and bolts of the design.
Next up was Wang Chung, who was tasked with chip and motherboard design. He got into a lot of detail regarding bus architecture, interconnects and even different types of solder and how they needed to be modified to avoid thermal stress in a low-temperature environment. Even the coefficient of thermal expansion of the polymer used in making the circuit board had to be carefully matched to that of the copper and aluminum interconnects so as to avoid fracture of the very thin layers of metal.
The SSD and RAM chips to be used were originally designed for space-based applications in communications and military satellites. Soldered directly to the motherboard, they were suitable for use with low switching voltages and had specifications that more than met our needs. The CPU chips would be custom-designed ARM processors, subcontracted for manufacture, as would be the circuit boards themselves. The interconnects between the motherboards and the server racks would involve a pressure fit rather than a conventional socket. By releasing a clamp, the board could simply be slid out into the center of the rack for replacement. I was surprised that they’d opted for such an arrangement but quickly realized how much simpler that would make the process of server replacement. These guys had much more experience than I did with such concepts, particularly when it came to manufacture.
Next up was Dasam Singh, a young Sikh man with a long beard and wearing a turban. This was my first exposure to someone of the Sikh faith, which I knew to be monotheistic and noted for its tradition of service. Dr. Singh was brought in specifically as an expert in heat transfer, electrical cooling and refrigeration.
One of the changes that had apparently already been made to my design was a switch from liquid nitrogen to supercooled clean dry air (CDA) as the coolant. Although not as cold, a supercooled gas would be easier to deliver to the server components than liquid nitrogen. The tradeoff was that the higher temperature wouldn’t lower the band gap as much; hence, the energy savings would be slightly less than what I’d hoped. Oxygen boils at –183°C and nitrogen at –196°C. By cooling the incoming air to –180°C, both oxygen and nitrogen would remain gaseous in the cooling stream, and there would be no risk of asphyxiating our workers. We’d be trading away a bit of energy efficiency, and accepting somewhat faster component degradation, in exchange for the safety of the data-center technicians.
“The most efficient means of supercooling uses a reverse-Brayton cycle in which filtered air is compressed by a piston to very high pressures and temperatures,” he explained. “It’s a reverse heat engine and the same basic process used in all refrigeration systems. The pressurized gas passes through a heat exchanger, which is like a radiator in a car, allowing the heat of compression to dissipate and the gas to return to room temperature. The pressurized gas is then allowed to expand into a reservoir, and as the pressure drops, the temperature drops, yielding supercooled air. Water vapor, hydrocarbon vapors and carbon dioxide, all of which have boiling points much higher than those of oxygen and nitrogen, condense in the first pass and must be removed, but that leaves only clean dry air. The process is repeated, either in sequential stages or iteratively, until the desired temperature is reached.”
Of course, I understood the science behind refrigeration — it was just a matter of gaseous physics — but I had a nagging suspicion there was something he wasn’t telling me. Then suddenly, I knew just what it was. He was already closing his PowerPoint presentation when I interrupted. “Excuse me, but what about the risk of oxygen condensation?” I asked.
“What about it?” he replied. “Why would you think there’s a risk of that happening, much less of it being of concern?”
“You’re gonna pump compressed CDA into the server cabinet, where it will expand rapidly and cool, keeping the electronics cool in the process. It will reach its coldest mere millimeters from the electronics, at a temperature just above the boiling point of oxygen. But what if the gas drops below the boiling point of oxygen before it reaches the electronics? Is there anything to stop it from cooling further? Is there anything to keep the oxygen in the line from condensing? The nitrogen in the line would then force the liquid oxygen out as a spray, much as with an aerosol can, and that liquid-oxygen spray would land on the electronics with all their organics, most likely reacting violently with them. Since that would be happening with all the servers in the cabinet at the same time, the result would be an explosion of epic proportions. It could take the roof off the building. I wouldn’t wanna be inside, let alone anywhere near it, should that happen.”
My question was followed only by silence. I was sure everyone was wondering who this kid was who’d interrupted a scientific presentation to ask such an irrelevant question. I’d yet to be introduced, so it was likely everyone had assumed I was an intern. Dr. Moorthy jumped in and explained, “Everybody, this is J.J. Jeffries, our concept engineer for the project. He originated the toroidal server design as well as the low-temperature cooling method. I know he’s very young, but his youth belies his intelligence. He has full certification in data-center management and in web design. I also got word today that he passed his computer science Ph.D. qualifying exam at the University of Nebraska. I’m not sure even he was aware of that yet. He’ll be working with all of you for the simple reason that at sixteen, he hasn’t become so jaded that he rejects new ideas just because they’re ‘impossible’, and he has the knowledge and intelligence to back those ideas up.
“So, tell me, Dom, is this issue of oxygen condensation something we should be worried about?”
“It’s highly unlikely,” Dr. Singh replied. “The electrical components will be generating enormous amounts of heat, so any liquid oxygen would be expected to vaporize immediately.”
“One drop of liquid oxygen would be enough to fry a chip,” I interjected, “but if oxygen condensed in the line, virtually all of the circuits would be sprayed simultaneously. They’d all burst into flame, but inside a closed, sealed vessel, the whole thing would explode. It would be a bomb. With over eight thousand circuit boards, all vaporized in an instant, the explosion would be epic — perhaps an order of magnitude greater than a conventional explosive of comparable mass. If the condensation affected the coolant line serving all of the server racks and if they all exploded at once, think Oklahoma City. Cars would be blown off I-80. The delivery station would be blown off its foundation. The Walmart Supercenter across the highway might even catch fire.”
“The risk of something like that happening is minuscule,” Dr. Singh countered. “The risk of oxygen displacement from a nitrogen leak in the server room would be much worse.”
“But an explosion would happen so quickly that there’d be no chance of stopping it,” I replied. “A drop in the oxygen level from a nitrogen leak would be very easy to detect. Almost trivial, in fact. There would be hours or even days of advance warning of a nitrogen leak, more than enough time to track it down and fix it. Besides, you could use the oxygen extracted in the generation of liquid nitrogen to fill oxygen tanks that the technicians could use in the event of an emergency.”
“Dom?” Dr. Moorthy asked.
“The cost of building out a liquid-nitrogen system would be more than double,” he responded.
“And the savings in energy usage would pay for that added up-front cost many times over,” I interrupted.
“What do you estimate the risk of oxygen condensation to be over the lifetime of the data center?” Dr. Moorthy asked.
“Well below a percent,” Dr. Singh answered, but everyone in the room, myself included, winced on hearing that.
“A risk of one in a million, with the addition of appropriate safeguards, might be an acceptable risk,” Dr. Moorthy countered, “but a risk of even one in a thousand could be catastrophic. If anyone ever came across records that we’d considered such a risk and ignored it, we could be held criminally negligent as well as financially responsible. Unless you can make that risk negligible, it’s not one I’m willing to take. How about the risk of a nitrogen leak?”
“That’s virtually a hundred percent,” Dr. Singh related. “It’s guaranteed to happen, just about every year, putting our workers at risk.”
“I stand by my comments,” I countered.
“Dom?” Dr. Moorthy asked.
“I think we could manage the risk,” Dr. Singh agreed. “Multiple oxygen sensors all around the room would detect even a slight displacement of oxygen by nitrogen, with more than enough time to find and fix the leak.”
“Then I suggest you explore a cooling system based on liquid nitrogen,” Dr. Moorthy responded. “It sounds like it’s the better option and by far the safer one.”
I’d probably made an enemy of Dr. Singh, but I shuddered to think what might have happened had I not been there to recognize the danger posed by his reckless disregard for non-negligible risk. I sensed that the mood in the whole room shifted as people recognized that this young kid knew at least as much as any of them. I could only hope they’d welcome my input rather than see me as a threat.
The next person to speak was Daniel Weinstein, an HVAC design specialist who was responsible for maintaining air quality and keeping us all comfortable inside. The original facility design made use of conventional air-cooled servers, with dozens of fans in hundreds of racks, circulating high volumes of air and dumping the heat from the servers into the surrounding space. That necessitated having a powerful air-conditioning system to cool the data center and dump the excess heat outside the building. There was no plan to heat the building, as the servers generated more than enough heat. Our server stacks, however, were gonna be cooled to nearly –200°C, and even with an insulated, closed-loop cooling system, our problem would be more one of excess cold than excess heat. Yet this guy was ignoring all of that, assuming he would still need to pump hundreds of kilowatts of heat out of the building.
When I couldn’t stand it anymore, I interrupted, “Excuse me, but why do you need any air conditioning at all?”
“Are you serious?” the guy responded, obviously thinking I was showing my utter ignorance of thermodynamics. Quite the contrary. He then went on a rant about all the heat the servers would be generating. I let him go on a bit before interrupting again.
“Sorry to keep interrupting you, but the servers are gonna be cooled with liquid nitrogen. You can house the liquid-nitrogen compressors and heat exchangers anywhere you want, so why not outside, where you would have had your air-conditioning compressors and heat exchangers? You’re already gonna be dumping all that heat from the servers outside. Inside the building, you’ll have all these server racks filled with supercooled gaseous nitrogen at nearly two hundred below zero Celsius. You’re gonna need to warm things up or you’ll have human popsicles inside. Obviously, if you plan for air conditioning, you could just as easily reverse it to pump heat back into the building, but why not just run some of the compressed nitrogen straight out of the compressors into a heat exchanger inside the building and use that heat in your HVAC system to restore a normal temperature inside? It should be pretty easy to do, and you’d be making use of free heat you’d otherwise be discarding anyway.”
“I hate to admit it, Dan,” Dr. Singh chimed in, “but the boy has a point there. If we collaborate, we can have a single refrigeration system that serves both the needs of providing liquid nitrogen and maintaining a comfortable temperature inside. It makes perfect sense.” Wow, maybe I hadn’t made an enemy of Dr. Singh, after all.
The final person to speak was Ibrahim Saleem, the software engineer in charge of adapting the server software for use with the new server structure. He appeared to be exceptionally young, not all that much older than I was. I was intimately familiar with the existing code, which was based on the open-source Debian distribution of Linux, with a custom web server written in Java. Server management was accomplished with a web interface written in HTML, CSS and JavaScript. I’d already spent some time working on my own improvements to the web interface but felt certain there was much that could be improved in the basic server structure itself. Java implementations wasted far too many clock cycles interpreting code on the fly. True, there were other web servers that took a similar approach, such as Apache Tomcat, but they weren’t involved with hosting billions of websites. I would have liked to explore adapting something like Gentoo to Applazon’s custom ARM processor, but perhaps the best approach would be a custom-designed lightweight server, written in C++ and compiled for Applazon’s custom CPU, underlying a more extensive Java-based server.
I listened politely and then asked what I hoped were insightful questions that demonstrated my familiarity with the server software. Obviously, he was impressed ’cause he commented afterwards for all to hear, “J.J., I don’t know if you were trying to impress me, but if not, you certainly did. Most people pay lip service to the software while pretty much leaving it to us I.T. guys to make sure it all works. It’s obvious that you’ve studied the code more than just for familiarity. You’ve studied it in depth with an eye to reverse engineering it, and I’d be willing to bet you’ve already modified some of the code with an eye to eventually rewriting it entirely. It’s the sort of thing I might have done in your position.
“I was certainly a computer whiz at your age and in fact installed my first Ubuntu release when I was ten. I had my own Apache server running a year later and installed Debian on a Raspberry Pi when I was twelve. I’m sure you’re wondering how old I am, but at least the software industry is used to teenage ‘geniuses’ like us. I’m nineteen and have a master’s in computer science from Caltech. It looks like we’re both gonna be working on our Ph.D.’s in comp sci, so I know a bit about where you’re coming from.
“What really blows me away is your ability to integrate information from so many diverse fields simultaneously. Hell, you came up with the idea for the toroidal server design on your own and conceived of liquid-nitrogen cooling, which is completely different from the nitrogen cooling some hardcore gamers are using with their gaming PCs. You’ve spoken not just intelligently, but as an expert, about server design, computer cooling and HVAC systems, and I have a feeling you’re just getting started. Whatever you do, don’t be intimidated by the people around you, and even more importantly, don’t let them be intimidated by you. They’re all at the top of their fields and will recognize and appreciate what you have to bring to the table. You’re a very welcome addition to the team, but don’t let any of what I just said go to your head.” The clapping that ensued really did feel good, but I knew I must never forget that my entire life was built on lies. It was a house of cards that could fall at any time.
And the nightmares continued.
The author gratefully acknowledges the invaluable assistance of David of Hope and vwl-rec in editing my stories, as well as Awesome Dude and Gay Authors for hosting them. © Altimexis 2021