• Original post: https://www.reddit.com/r/pinball/comments/1lwx8hu/pinball_2000_development_lore_part_3/

    These are my experiences as part of the Pinball 2000 team. Feel free to ask questions. I’ll gather up multiple answers into one comment like I did with the initial post. Now, without further ado…

    Part 3 – Satisfying artists while still making smart compromises

    Pinball machines are creative works, made by a team with different specialties. Some roles are more technical and some are more artistic, but it’s good for team members to have mutual understanding and mutual respect. Even though I wasn’t on a game team I felt it was very important to get to know the artists especially since most of them were new. We had Adam and Scott, both of whom had made dot matrix art for WPC, so that was where I started.

    15 bit colour (as described in part 2) was very helpful for their work, so that was good. We needed a way for the art to mark a pixel as transparent, and there were two approaches. We could’ve used a mask (whether the unused bit, or a separate image entirely) or we could use one specific colour as a ‘key colour’. Pixels of that one colour are treated as transparent when drawing the image. I don’t remember who drove this conversation, but it was easy for us to choose that second option and to use pure magenta (so 31 red, 0 green, 31 blue) as the key colour, because that’s such a harsh colour and the artists didn’t think they’d use it much anyway. They also knew that even if they did want to use it, they could use an almost-identical colour (e.g. 31 red, 1 green, 31 blue) and it would look fine. The advantage for me of using a key colour was that all the existing software tools would work as-is and it would be easy for programmers or artists to understand what happened. If we had used the 16th bit you’d have an opaque and a transparent version of every colour value and it would’ve been hard to visually tell what was going on with the source art when things looked wrong on the game screen. If we’d made a separate image for the mask that would’ve made things harder for the artists because they’d have to update the art itself and its mask together. We’d also have two chunks of data in ROM that needed to be combined in RAM in order to update the display. Modern video hardware works so differently that these aren’t meaningful concerns any more, but they really mattered in 1998!
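
    As a rough sketch of the idea (in Python for illustration; the real engine worked on packed 16-bit pixels in compiled code), a key-colour blit just skips any pixel that matches the magenta key:

```python
# Sketch of key-colour transparency, as described above.
# Assumption: pixels are (r, g, b) tuples with 5-bit channels (0-31);
# images are lists of rows. The real code operated on packed 16-bit values.

KEY_COLOUR = (31, 0, 31)  # pure magenta: treated as transparent

def blit_with_key(dest, src, x0, y0):
    """Copy src onto dest at (x0, y0), skipping key-coloured pixels."""
    for y, row in enumerate(src):
        for x, pixel in enumerate(row):
            if pixel != KEY_COLOUR:
                dest[y0 + y][x0 + x] = pixel
    return dest
```

    An almost-identical colour like (31, 1, 31) fails the equality test and so draws normally, which is exactly the escape hatch the artists had.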

    I think it was Adam who really wanted alpha-blending, which the video hardware didn’t support natively. I could’ve done this purely in software, but that would’ve been very slow. I had to explain the basics of why that was, and I also offered a compromise: you could stipple transparent pixels with opaque ones by making alternate pixels magenta. Think of white squares vs black squares on a chessboard. It’s not great, but it’s better than nothing.
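
    The stipple compromise can be sketched like this (Python for illustration, using the same (r, g, b) tuple convention as before): every pixel on the chessboard’s “black squares” becomes the magenta key colour, so the blitter skips it and the background shows through.

```python
# Sketch of fake 50% transparency via stippling: turn every other pixel
# (chessboard pattern) into the magenta key colour so the blitter skips it.
# Assumes (r, g, b) tuples with 5-bit channels; images are lists of rows.

KEY_COLOUR = (31, 0, 31)

def stipple(image):
    """Return a copy where pixels on alternating squares are transparent."""
    return [
        [KEY_COLOUR if (x + y) % 2 else pixel for x, pixel in enumerate(row)]
        for y, row in enumerate(image)
    ]
```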

    Alpha blending is a sort of translucency. Rather than having a pixel be opaque or transparent, it lets you combine the pixel you’re drawing with whatever pixel is already in place underneath it, sort of how sunglasses stop some of the light coming through but not all of it. For example, blending pure blue and pure green at 50% alpha will give you a medium cyan tone like teal. To do this you need to multiply one pixel by the alpha, the other pixel by 100% minus the alpha, and add the results together. So pure blue becomes 50% blue, pure green becomes 50% green and when you add the two together you get 50% cyan. You have to do this separately for each colour channel.
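
    The per-channel formula works out like this (a sketch in Python; the hypothetical cost on the real hardware is discussed next):

```python
# Per-channel alpha blend as described above:
# result = top * alpha + bottom * (1 - alpha), done separately for r, g, b.
# Assumes (r, g, b) tuples with 5-bit channels (0-31).

def blend(top, bottom, alpha):
    """Alpha-blend two pixels; alpha runs from 0.0 (all bottom) to 1.0 (all top)."""
    return tuple(
        round(t * alpha + b * (1.0 - alpha))
        for t, b in zip(top, bottom)
    )

# Pure blue over pure green at 50% gives the medium cyan from the text:
# blend((0, 0, 31), (0, 31, 0), 0.5) → (0, 16, 16)
```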

    That works out to 6 multiplies, 6 divides and 3 additions as well as the work to separate and recombine the colours. The CPU we were using was not optimised for this sort of math, so it would’ve taken probably 20 times longer per pixel, plus the alpha values would’ve had to be stored separately. I sympathised with the artists a lot, but I wasn’t willing to do this work. Artists are passionate about making everything as beautiful as possible, so they would’ve used it in all their art and the performance would’ve been really slow. It’s unfair to expect artists to limit themselves to satisfy unintuitive, highly technical constraints when they could have a clear rule. If we’d upgraded to video hardware that could do this natively, I would’ve made it first on my list of updates.

    Since the artists had been part of the decision making process and I could show I was considering their needs, the lack of alpha blending didn’t become a contentious issue. This really helped because the next compromise would be a really, really big one. This was about image compression.

    There are lots of ways to compress images so they take less storage space. There’s no single optimal technique because there are always trade-offs. I knew we’d need something that was efficient for storage and fast to decompress. Compression could be slow because it was done offline and only needed to happen once, but decompression would happen whenever we wanted to have the image in RAM. I was worried about this and I’m sure Tom and I would’ve talked about it but I didn’t have a clear solution. We were lucky because one of the programmers working on console videogames in the San Diego office, Mark, was a big pinball fan and he showed us a really good way to solve our problem. He’d had very similar requirements for making a console port of Mortal Kombat so he could fit the huge quantities of animation into a game cartridge.

    The compression involved finding repeated pairs of pixels in an image and building a dictionary of the most common colour pairs. Then, instead of listing each pixel individually, it could say to repeat a dictionary entry (so a specific pair of colours) however many times; this is a version of run length encoding (RLE). This was easy to add to the image processing tools, and the decompression code was fast. However, in order to be effective it needed the source image to use a limited number of individual colours so there’d be plenty of instances of a smaller number of colour pairs. Mark had made two versions of this, one that allowed 64 unique colours and a 64 entry dictionary, and another that only allowed 32 colours but had a 128 entry dictionary. The latter version was great for things like icons and fonts where they’d only use a few colours anyway.
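
    To make the scheme concrete, here’s a toy sketch of the idea in Python — an illustration of the concept only, not Mark’s actual format: build a dictionary of the most common adjacent pairs, then emit (dictionary index, run count) tokens.

```python
# Toy sketch of pair-dictionary RLE. Assumptions: an even number of pixels,
# and few enough distinct adjacent pairs to fit the dictionary (which is why
# the source art had to limit its colour count).
from collections import Counter

def compress(pixels, dict_size=64):
    pairs = [tuple(pixels[i:i + 2]) for i in range(0, len(pixels) - 1, 2)]
    dictionary = [p for p, _ in Counter(pairs).most_common(dict_size)]
    index = {p: i for i, p in enumerate(dictionary)}
    tokens = []
    for p in pairs:
        i = index[p]  # KeyError here would mean too many distinct pairs
        if tokens and tokens[-1][0] == i:
            tokens[-1] = (i, tokens[-1][1] + 1)  # extend the current run
        else:
            tokens.append((i, 1))
    return dictionary, tokens

def decompress(dictionary, tokens):
    out = []
    for i, count in tokens:
        out.extend(dictionary[i] * count)  # repeat the pair 'count' times
    return out
```

    The decompressor is just a table lookup and a copy per token, which is why it could be so fast at load time.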

    The artists were very unhappy with this idea whenever I talked it over with them. I’d given them the ability to make lovely art with colour gradients and subtle shading and now I wanted them to limit that by making each image only have a fraction of all those possible colours! We talked it over repeatedly, including Tom getting involved, but there wasn’t an agreement between us all. The thing that settled the matter was that when Mark came to Chicago I asked him to talk to the artists directly without me or other pinball programmers present. I don’t know what he said to them, but he convinced them that this was a good solution and they’d still get to make beautiful things and the games could have lots of nice art stored in the 60MB of ROM. If I hadn’t worked to gain the artists’ trust in the beginning I’m not sure even he could’ve convinced them. By the way, Mark has long worked for Stern Pinball (and is the most senior programmer in the company, if I understand his job title correctly – if not I expect he’ll appear to set the record straight).

    It’s important to make smart technical decisions, but it’s also important to foster mutual respect. A tight-knit team with fewer resources will usually do a much better job than a fractious team with plenty of powerful features. This way of thinking came up over and over for Pinball 2000 whether just among programmers, or designers, or engineers or multiple types of colleagues. We all knew it was do or die for pinball at Williams and we all wanted to succeed and that helped us coalesce around a single vision.

  • Original post: https://www.reddit.com/r/pinball/comments/1ltkqhk/pinball_2000_development_lore_part_2/

    Part 2 – Early decisions on how to handle graphics and the display

    This part will be quite technical because I’ll discuss some hardware capabilities and trade-offs in detail. I’ll try to keep it from getting too esoteric.

    The code already had limited support for graphics. Part of the 8MB of RAM was set aside to hold a video framebuffer. It could output the framebuffer to a monitor, and it could copy sprite data from ROM into RAM, including printing text letter by letter. I don’t remember if there was any data compression already. There was also a port of a graphics library called Allegro, which was freely available source code. Given the timing I think we would’ve been using version 2.1 or 2.2 but I’m not sure. That library had all the features the prototype needed but it would not meet all our needs. I started with a few basic assumptions:

    1. The display would be a shared space
    2. Graphical effects shouldn’t have to know about what else was being displayed
    3. We would have to work with a “low-res” monitor
    4. ROM space would be limited and precious

    These assumptions drove every choice I made in the beginning. I’m sure I discussed them with other programmers but I don’t remember in what order and with whom. Tom must’ve had input for 3 and 4 because I didn’t know exactly what the hardware could do, how big the ROMs could be, or what the constraints for video output were. It helped that I had written “graphics demos” in ARM assembly language for the computer I’d owned in Britain so I was familiar with things like sprites, masks, palettes, resolutions and so on. Incidentally the CPU inside basically every smartphone on the planet is derived from the CPU inside that computer, albeit distantly these days. Steve Furber and Sophie Wilson designed the Acorn RISC Machine CPU. That CPU was used in the Acorn Archimedes line of computers including the A3000 I’d owned. Nowadays ARM stands for Advanced RISC Machine and the instruction set is 64 bits vs the original 32 bits but without the work of those two, the world of computing would be very different.

    The display was set up for 256 colours, so 8 bits per pixel, with a colour palette of 256 entries. That meant you could have up to 256 unique colours on the screen at any time and only needed 1 byte per pixel so it was fairly fast to draw. The downside to this was that if you changed what any one palette entry looked like all the pixels that used that entry would change colour. This means you either have to divide the palette into groups (say, 8 groups of 32 entries each) and limit effects to one per group so they can change their entries without clashing, or pre-allocate a few palettes that the effects can use together. I decided not to do these things. I wanted something easier and was prepared to have lower overall performance, so I changed the setup to use 16 bits per pixel where each pixel stored its own colour value. No palette clashing! However, as in all things, there were downsides. First, we’d be using twice as much memory for the display and taking twice as much time to draw into that memory. Since the big differentiating thing for Pinball 2000 was the video display I was concerned about that. Second, we wouldn’t have quite as much control over the colours. With the palette entries you could set 256 levels independently for red, green and blue. With 16 bit colour you could only get 32 levels each (actually 15 bit RGB555 colour; 16 bit RGB565 would let you have 32 for red and blue and 64 for green but I chose not to do that for reasons we’ll get into later). Since players wouldn’t be looking at the display directly and there’d be so much ambient light I decided that wasn’t going to matter.
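
    For reference, RGB555 packing looks like this (a sketch; the channel order here is an assumption — the hardware may have packed them differently):

```python
# Sketch of 15-bit RGB555 packing: 5 bits each for red, green and blue in a
# 16-bit word, leaving the top bit unused (the '16th bit' mentioned above).
# Channel order (r high, b low) is assumed for illustration.

def pack_rgb555(r, g, b):
    """Pack three 0-31 channel values into one 16-bit pixel."""
    return (r << 10) | (g << 5) | b

def unpack_rgb555(pixel):
    """Split a 16-bit pixel back into (r, g, b) channel values."""
    return (pixel >> 10) & 31, (pixel >> 5) & 31, pixel & 31
```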

    We also needed to choose a screen resolution. The prototype was set up for 640×240 pixels and there was a reason for that. In the arcade industry what we called a “low-resolution” monitor meant that you could only have so many rows of pixels. CRT monitors use electromagnets to steer an electron beam that paints the screen row by row. The monitor is driven at 60 frames per second, so if you have 240 rows that means the horizontal deflection has to sweep the beam across a row 14,400 times a second. Low-res monitors can only handle up to around 15kHz for that. You need a bit of extra time because when you reach the bottom of one frame you have to move the beam back to the top for the next frame. Higher-res monitors could sweep more quickly so they could display more rows. There’s a similar constraint for how many columns of pixels (basically how fast can you move the electron beam along a single row), but all I remember is that 640 pixels wide was fine. Since the monitor aspect ratio was 4:3, 640×240 meant pixels would be twice as high as wide (1/160th of the width, 1/80th of the height) so I thought about dropping to 320 so the artists could work with square pixels and we’d only have to draw half as many each frame. I probably talked about it with Tom a little but I don’t think it was ever more than a vague idea. I certainly don’t remember testing it out or showing the difference to other people.
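
    The arithmetic above can be checked in a couple of lines:

```python
# Scan-rate and pixel-aspect arithmetic from the text.

ROWS, FPS = 240, 60
line_rate_hz = ROWS * FPS          # 14,400 lines per second
assert line_rate_hz <= 15_000      # within a low-res monitor's ~15 kHz limit

# Pixel aspect at 640x240 on a 4:3 tube: each pixel is 1/160th of the
# width but 1/80th of the height, i.e. twice as tall as it is wide.
pixel_width = 4 / 640
pixel_height = 3 / 240
assert pixel_height / pixel_width == 2.0
```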

    This wasn’t the end of the decision making process though. Not only would we be using twice as much RAM for the display, but 16-bit art would take up more ROM space. We only had 64MB for ROM and 4MB of that had to hold an early version of the game code so you could boot the game directly from it. Sounds were handled differently, so they didn’t have to fit there, but 60MB for all the artwork in the game was not very much. We’d need image compression to make this feasible and that would add complexity because our image processing tools would need to do the compression, and we’d have to add decompression to the system. It did have a benefit beyond just fitting more art into the game. Reading from ROM was slower than reading from RAM. So the smaller the data in ROM, the less performance hit there’d be copying and decompressing it.

    This mostly covered my four key assumptions. Compression would come later (I’ll talk about it in part 3). We were fine using a low-res monitor. Basic performance was acceptable and we didn’t have to deal with palettes or clashing. There was still an important piece missing, however. How were different rules going to share the display easily? It occurred to me that windows in a GUI were like that. Different apps could overlap their windows, and things were drawn in a consistent order. On Unix there was a program called a “window manager” that organised all that, so I could make my own equivalent! I called it the DisplayManager and the individual things were Displayables. Game code could create a Displayable, register it with the DisplayManager and it would get drawn over and over, every frame, until it was unregistered. When a Displayable was registered it was given a Z value and the list of Displayables would be drawn in increasing Z order so things wouldn’t flicker under and over each other.

    At the beginning of each frame the display needed to be cleared. By default it would just be wiped to black but I made this overridable in case a game wanted to do something special – maybe they were always showing some static piece of art so they could copy it directly over the old pixels and save time. Similarly, Displayables needed to know how to draw themselves. I started with two kinds. One was created with a specific width and height and would allocate that much memory. The game code could update that memory and the Displayable would copy it onto the display when the DisplayManager told it to. The other was “self-draw”. Instead of allocating any memory it would just have a function that got called at the right time. This was so things that didn’t need to pre-allocate memory were easy to create. A starfield could self-draw and just update the relevant pixels. Something that wanted to process things already drawn on the display (say to tint or magnify them or something else) could read those pixels, modify them and write them back. Basic shapes like lines or rectangles or circles could be drawn directly onto the display. We didn’t use this very much, but it was easy to implement and it was helpful on several occasions.
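
    The scheme described over the last two paragraphs can be sketched roughly like this (Python, names mirroring the text; the real code was not Python, and the `Screen`/`Label` types here are hypothetical stand-ins for the framebuffer and a concrete Displayable):

```python
# Sketch of the DisplayManager / Displayable scheme: game code registers a
# Displayable with a Z value; every frame the manager clears the display and
# asks each Displayable to draw itself in increasing Z order.

class Displayable:
    def draw(self, screen):
        raise NotImplementedError

class DisplayManager:
    def __init__(self):
        self._displayables = []  # list of (z, displayable)

    def register(self, displayable, z):
        self._displayables.append((z, displayable))
        self._displayables.sort(key=lambda item: item[0])  # keep Z order

    def unregister(self, displayable):
        self._displayables = [
            (z, d) for z, d in self._displayables if d is not displayable
        ]

    def render_frame(self, screen):
        screen.clear()  # overridable in the real system, e.g. a static backdrop
        for _, displayable in self._displayables:
            displayable.draw(screen)

class Screen:
    """Hypothetical stand-in for the framebuffer: records draw calls."""
    def __init__(self):
        self.draw_calls = []
    def clear(self):
        self.draw_calls = []

class Label(Displayable):
    """A trivial 'self-draw' Displayable."""
    def __init__(self, text):
        self.text = text
    def draw(self, screen):
        screen.draw_calls.append(self.text)
```

    Registering a backdrop at Z 0 and a score at Z 10 guarantees the score is always drawn on top, every frame, with no flicker between them.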

    I got all this working pretty quickly. At that point I was coming to work really early on Monday (maybe 4am) to give me solo, quiet time for things like this. When colleagues were around I could talk to them about needs and wishes and so on, but it was hard not to get distracted. I made a demo of coloured ellipses to show how they could overlap and be moved around and shown and hidden. I was confident that my assumptions were right and that this would be flexible, easy to use and help us make games that looked great.

  • Original post: https://www.reddit.com/r/pinball/comments/1lqhtky/pinball_2000_development_lore_part_1/

    Part 1 – Background, seeing the first prototype and joining the programming team

    In early 1998 I was working on making a WPC version of Big Bang Bar. I had a Capcom game in my office as a reference, and next to it I had a WPC cabinet with a whitewood made from Capcom playfield parts. I had almost the whole ruleset working, but there was no animation or sound at all. I had jumped at the chance to work on this because I liked the game, I hadn’t had a chance to be in charge of a game’s programming, and I agreed that we could make this game quickly and have it be successful. I was worried because whenever I asked when we’d get the assets licensed from Capcom I got non-answers, but I was really enjoying doing all the things you needed to program a full WPC game.

    It was common knowledge that pinball was not doing well as a business, and Neil Nicastro the CEO was not known for his sentimentality. Everyone was head-down on their various projects although I don’t remember exactly what everyone was working on specifically. I knew JPop (John Popadiuk’s nickname) was working on a pinball game with a CRT monitor in the backbox instead of the dot matrix display and backglass. He had a couple of programmers and an artist involved. I’d seen it a little bit and they had some simple gameplay. I thought it was interesting but I wasn’t especially spurred to new ideas by it.

    One day I got invited to see something that wasn’t upstairs in pinball engineering. I think it was across the street in the Midway 2727 building but I may be misremembering. Several of us trekked over and into the room. I forget who else I was with but I think Louis and maybe Dwight. There were two cabinets. One was Formula 1 themed with an angled backbox that was partly a mirror. Apparently this was Python’s idea. I never saw it again and I didn’t understand what it was even supposed to be demonstrating. In any case, what we were really there to see was Pat and George’s demo of their holo-pin concept. It was super, super cool even with just the static images of the mech displayed by George’s Amiga. My brain lit up with ideas. I really can’t overstate how strong an impression this made on me. I instantly knew that I wanted to work on this!

    Everyone seeing this demo for the first time was similarly excited. It was just electric. This felt revolutionary, a unique moment that the world would never be the same after. When I got back to my office I immediately started thinking of ways to do Big Bang Bar as a holo-pin. I started a text file and poured my thoughts into it. I’ll talk about those thoughts in a later post, but the important point is that I instinctively knew that nothing more was going to happen with the WPC version. Now, part of the reason I’d even been working on Big Bang Bar in the first place was that the other programmers were sceptical about it. I was also a “problem child” who’d not done myself many favours and so people were just fine having me out of their way. That’s important context because later that afternoon Tom came into my office to tell me I’d be working with him on this new thing and, as he bluntly put it, there was to be “no fucking around”. I promised him I’d do good work, that I understood his concern and I wanted to earn his trust. With that, my time on what was to become Pinball 2000 had officially begun.

    I don’t remember if I had one of the JPop-style cabinets first, or just a monitor and CPU board. There would’ve been some setup but you really didn’t need that much on your Windows machine. Cygwin to run the compiler, linker etc, the RCS client (source code version control), a TFTP server you just ran in the background, and a serial cable. The CPU board had Ethernet and an ISA card to boot it. It would connect via TFTP and download whatever you’d just built. There was a button wired up to the reset pin on the CPU so you just hit the button when the thing crashed or you had new code to try. It could already scan switches and fire coils in order to support the gameplay prototypes JPop’s team were doing. We had circuit boards with a big grid of toggle switches for programmers who didn’t have or need an actual playfield, so I would’ve plugged one of those into it. I didn’t need anything else.

    This is how new projects usually start out. You get things hooked up, you get the source code, you build it and you run it. Once that’s sorted out you can start to work. Reading code, making changes, talking to teammates about what was most important. My first tasks were all about graphics, and I’ll discuss that stuff in part 2.

    (As an aside, we’d made WPC setups like this in the past. I’d had one on my desk, just a backbox with a power supply bolted to its top. You could plug it into a playfield, a test fixture of some kind, or one of those switch boards. That backbox was where one evening I’d had my first experience with the WPC-95 sound board crashing, which made it blare out a horrible sound VERY LOUDLY INDEED for several seconds. It went off right by my head and I was a bit discombobulated when Larry came to check on me. I think it happened at least once to pretty much every programmer)
