Monday, December 15, 2025

Installing a 32k DS1386 into an SGI Indy that expects an 8k DS1386

The TL;DR is - you can't just plug a 32k DS1386 into an SGI Indy and have it work; a couple of pins differ between the 8k and 32k modules.

 The longer version!

Here's a picture from the datasheet:

Now, at first glance it looks like A13 and A14 on the 32k module need to at least be grounded - if they float or are pulled high then the RAM will be selected when you don't want it to be, and you'll just fail to see the RTC.

 However, if you just do that your Indy won't boot because it turns out that pin 3 on the 8k module is an undocumented (in this datasheet) auxiliary Vcc input - the 5v auxiliary / always-on rail is actually there. Shorting it to ground will just make the Indy super sad. Don't do that.

So, in lieu of making up a PCB (which I think I'm going to do anyway just to be "clean"), I used some pin headers to raise the DS1386 above the Indy PCB.


Note that pin 3 and pin 28 don't have pins fitted (and I'm going to put some tape over those positions just to be super clear nothing shorts out).

Then, I did a quick bodge wire job on the underside of the DS1386-32K module:


Where pin 3 and pin 28 are tied to pin 16 (ground).

 Then, well, plug it in, align it right, and it should just work! It did for me!

Thursday, December 4, 2025

Blinking the SGI Indy Power Light

I figured out what broke in NetBSD to make the R5000SC I have here power off during startup.

In any case, the hack for blinking the front panel LED, inspired by the BLINKY driver, shouldn't be lost when I clean up my tree.

 

So, I give you all:

void
blink_led(void)
{
	uint32_t reg;
	uint32_t i;

	/* read the register holding the LED bit (uncached, via KSEG1) */
	reg = *(volatile uint32_t *)MIPS_PHYS_TO_KSEG1(0x1fbd9870) & 0xff;
	reg = reg ^ 0x10; /* toggle? */
	*(volatile uint32_t *)MIPS_PHYS_TO_KSEG1(0x1fbd9870) = reg;
	asm(".set mips3; sync");

	/* crude busy-wait so the blink is actually visible */
	for (i = 0; i < 20 * 1000 * 1000; i++)
		asm(".set mips3; nop ; nop");
}

It's terrible but it works!

Also, amusingly, the ARC BIOS console IO was available at this point but nothing was being printed early enough for it to be seen! Debugging was quite a bit faster once I realised I was getting far enough for the ARC BIOS entrypoint to be initialised so I could printf().

Tuesday, December 2, 2025

Reverse Engineering the SGI Indy Monitor Detection, or "thank god someone added SGI indy / indigo 2 support to MAME"

 I have a bit of a soft spot in my heart for the SGI Indy and (purple, not teal, heh) Indigo 2.

So imagine my surprise when NetBSD "almost" booted just fine on the Indy I acquired. R4600PC-100, XL8 graphics .. and wonky console colours in NetBSD, plus a wonky Xorg.

The first question was "why are the monitor colours so unpredictable?", and that got me into a fun deep dive into how the SGI Indy Newport graphics works, the whole SGI Indy Linux project circa 2000, and various hardware and software shortcuts.

Anyway.

The TL;DR is here - https://erikarn.github.io/sgi/indy/monitor_detection . I've listed the monitor resolution/refresh rate combinations, based both on what the internet says and on my reverse engineering of what MAME was programming.

So the long version.

First up - I've put all the hardware documentation I've found so far at https://erikarn.github.io/sgi/indy/notes

The Indy was booting NetBSD with either correct colours - green kernel text, white userland console text - or incorrect colours - green kernel text, but blue console text. It was random, and it varied per boot. X11 was no better - sometimes it had the correct colours, sometimes everything was wonky.

The NetBSD console code tries to set up the following things for 8 bit graphics mode (which is used for the console, even on 24 bit cards):

  • Program in a 256-entry colourmap table, matching the NetBSD RGB 332 colour scheme;
  • Add in a 1:1 RGB ramp in another colour table (RGB2);
  • A bunch of "XMAP9 mode" lines mapping 32 entries of "something" to RGB8 pixel format, RGB2 colourmap.
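As a concrete illustration, here's roughly what building that 256-entry RGB 332 colourmap involves. This is a hedged sketch of my own, not the actual NetBSD code: the RRRGGGBB bit layout and the packed 0xRRGGBB output word are assumptions for illustration.

```c
#include <stdint.h>

/*
 * Sketch: build a 256-entry colourmap for RGB 332 indexed pixels.
 * Bit layout RRRGGGBB and packed 0xRRGGBB entries are ASSUMED here;
 * the real driver writes these through the CMAP's DCB registers.
 */

/* Expand a 3-bit (0..7) or 2-bit (0..3) field to 8 bits (0..255) */
static uint8_t expand3(uint8_t v) { return (uint8_t)(v * 255 / 7); }
static uint8_t expand2(uint8_t v) { return (uint8_t)(v * 255 / 3); }

static void
build_rgb332_cmap(uint32_t cmap[256])
{
	for (int i = 0; i < 256; i++) {
		uint8_t r = expand3((uint8_t)((i >> 5) & 0x7));
		uint8_t g = expand3((uint8_t)((i >> 2) & 0x7));
		uint8_t b = expand2((uint8_t)(i & 0x3));

		cmap[i] = ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
	}
}
```

The point is that every 8 bit pixel value is an index into this table, so if the table (or the XMAP mode selecting it) is programmed wrong, every colour on screen is wrong.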

I was very confused as to what was going on versus what should have been going on, and I don't want to dig into the journey I took to get here. But the TL;DR is that everything in the NetBSD console setup path is wrong: when it "worked", it ended up with the wrong colours, and when it "didn't work", it sometimes ended up with the wrong colours.

I'll write a separate post later about how the whole newport graphics system holds together, but fixing this requires a whole lot of driver changes to correctly program the hardware, and then some funky monitor timing specific programming.

The 13W3 port on the Newport graphics boards has a 4 bit monitor ID which compatible monitors will output. There are more details available at https://old.pinouts.ru/Video/sgivideo_pinout.shtml


The "universal 13W3 interface input cable" that I bought has a bunch of DIP switches controlling this.


 

If you have the four monitor ID bits all on or all off, you still get 1024x768 @ 60Hz.

The "fun" part of this story is if I were using 1280x1024 straight off the bat then I'd likely not have seen these problems happen so often.

Anyway.

Depending upon the settings, the Indy will boot with a bunch of different possible monitor setups:

  • 1024x768, 60Hz
  • 1024x768, 70Hz
  • 1280x1024, 60Hz
  • 1280x1024, 72Hz
  • 1280x1024, 76Hz

I enumerated this list and threw them up on the monitor detection link at the beginning of the article.

So, the firmware reads these four bits at boot (via 4 IO bits on one of the CMAP chips - again, see the links at the top of the post), sets up the monitor timing and then displays stuff. But NetBSD's console programming was getting the colours wrong when I was using 1024x768 60Hz.

It turns out that the XMAP chips - which handle the final mapping of incoming framebuffer pixel data to the 24 bit RGB that's sent to the CMAP chip and then the RAMDAC - were being programmed inconsistently. (Again, they were also being programmed incorrectly in NetBSD, but I've got a big diff to fix that. With that diff, they're programmed correctly, just inconsistently.)

There's a "display control bus" (DCB) that the Newport raster chip (REX3) uses to talk to its peripheral chips. The peripheral chips - the XMAPs, the VC for timing, the RAMDAC, the CMAP for 8/24 bit colour table mapping - are all DCB peripherals. The DCB has some address lines, 8 data bits, a programmable chip select line, chip select setup, hold and release timing, optional request/ACK signaling, and register auto-increment functionality.

However!

  • The REX3 chip runs at 33MHz;
  • The XMAP chips run at 1/2 the pixel clock (they're interleaved);
  • The DCB has support for explicit ACK signaling from the peripheral, but only if the peripheral implements it;
  • The XMAP does not have an ACK line, just an incoming chip select line, and
  • When writing the XMAP mode table lines - which map the display information to pixel format / colour table selection - it's done as back to back bursts to the same register, not an auto-increment and NOT using an ACK line.

This means that if the XMAP chip is running at a speed that doesn't entirely line up with the programmed chip select timing, the mode writes will be junk. The normal 8 bit reads/writes are "mostly fine": they just show up as multiple 8 bit reads/writes to the same register, and for all the OTHER registers that's just fine. But for the mode register - where the DCB needs to write 4 bytes to the same individual address - it's absolutely not fine.
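To make that failure mode concrete, here's a toy model (entirely my own, not the real hardware): a peripheral samples its chip select once per peripheral clock tick, while the host asserts each of the 4 mode bytes back to back for a fixed number of ticks. If the chip select window is shorter than the peripheral's sampling period, bytes simply get dropped:

```c
/*
 * Toy model of the DCB / XMAP mode-register write problem.
 * The peripheral samples its chip-select line once every
 * `periph_period` ticks; the host asserts each of the 4 mode
 * bytes for `cs_ticks` ticks, back to back. A byte is captured
 * only if a sample instant falls inside its assertion window.
 */
static int
bytes_captured(int cs_ticks, int periph_period)
{
	int captured = 0;

	for (int b = 0; b < 4; b++) {
		int start = b * cs_ticks;
		int end = start + cs_ticks;
		/* first sample instant at or after `start` */
		int sample = ((start + periph_period - 1) / periph_period)
		    * periph_period;
		if (sample < end)
			captured++;
	}
	return (captured);
}
```

With a chip select window longer than the sample period, all 4 bytes land; make the window too short and the peripheral misses some of them - which is exactly the "junk mode register" symptom, since there's no ACK line to stretch the cycle.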

And that's the kicker.

After spending some quality time with the MAME emulator and some local hacks to enable the newport peripheral IO logging and setting the monitor ID, I found out that the timing used for the XMAP chips is different for 1024x768 60Hz versus 1280x1024 76Hz.

Everything worked just fine when I adjusted it.

So ok, I went back to the Linux and X11 drivers to see what's going on there, as I know the C code wasn't doing this. And I found this gem in the Linux newport.h header file:

 static __inline__ void
xmap9SetModeReg (struct newport_regs *rex, unsigned int modereg, unsigned int data24, int cfreq)
{
        if (cfreq > 119)
            rex->set.dcbmode = DCB_XMAP_ALL | XM9_CRS_MODE_REG_DATA |
                        DCB_DATAWIDTH_4 | W_DCB_XMAP9_PROTOCOL;
        else if (cfreq > 59)
            rex->set.dcbmode = DCB_XMAP_ALL | XM9_CRS_MODE_REG_DATA |
                    DCB_DATAWIDTH_4 | WSLOW_DCB_XMAP9_PROTOCOL;
        else
            rex->set.dcbmode = DCB_XMAP_ALL | XM9_CRS_MODE_REG_DATA |
                        DCB_DATAWIDTH_4 | WAYSLOW_DCB_XMAP9_PROTOCOL;
        rex->set.dcbdata0.byword = ((modereg) << 24) | (data24 & 0xffffff);
}

It's choosing different DCB timing based on the pixel clock. It lines up with what I've been seeing from MAME and it adds a third one - WAYSLOW - which I bet I'm only going to see on the PAL/NTSC timings or if something really wants to do something like 1024x768 50Hz.

The timings are in the header file, but .. nothing is actually using xmap9SetModeReg(). It was likely copied from some internal SGI code (the PROM? X server? Who knows!) as part of the code bring-up, but it was never used.

Anyway! With this in the NetBSD console code the console finally works reliably in all the modes I've tested. I'm going to try and get my big diff stack landed in NetBSD and then I'll work on the X11 newport code so it too supports 8 and 24 bit graphics at 1024x768 reliably.

So, to summarise:

  • Read the CMAP1 register (and PROM on SGI Indy) to determine the monitor type
  • The default monitor on the SGI Indy is 1024x768 60Hz, and for the Indigo2 it's 1280x1024 60Hz
  • Select an XMAP9 mode DCB timing set based on the pixel clock
  • 8 bit mode for console and X11 needs the colour index table programmed into the CMAP at CI offset 0, and appropriate XMAP config for the display mode table to use 8 bit pixels, PIXMODE_CI, offset 0, NOT 8 bit RGB
  • 24 bit mode for X11 needs the 24 bit RGB ramp programmed into the CMAP RGB2 table (which is not a colour index table!), and no CMAP
  • Importantly, the X11 server uses truecolour for 24 bit mode, and pseudocolour / colourmaps for 8 bit mode, so all of this needs repeating in the X11 server code! 

Here's how the console looks, complete with the correct XMAP9 mode table:

And here's how X11 looks:


 

 

(And the X11 support is even more fun, because I had to fix some stuff to make acceleration in the driver work, and that's going to be a whole fun DIFFERENT post.)

Addendum:

Oh, and the sync on green? It's generated by the RAMDAC. Once this all has landed in NetBSD I'm tempted to try to add a sysctl / boot parameter to disable the sync on green bit so normal monitors work on the SGI Indy. Let me know if you'd like that! 

Tuesday, November 11, 2025

A tale of an SGI Indy, a Sony power supply, and how to keep the fan spinning

Let's not dwell on why I bought an SGI Indy. Anyway.

One of the common failure modes I've seen is dead power supplies, or heat death. The irony is:

  • The Nidec power supply has dirty power, fails hilariously, but the fan is at least always spinning, and
  • The Sony power supply has clean power, less hilarious failures, but the fan only comes on when the unit is hot. Sometimes.

The fan in the Sony PSU is a 12V fan, and it turns on based on a thermal control line from the Indy. There's been plenty of research into the behaviour of that signal, and I'm not going to go into it here. What I instead want to talk about is a quick way to just get the fan constantly spinning, without having to open up and modify the power supply itself.

The TL;DR is this:

  • Make a small voltage gate using two diodes - one fed from the 3.3v power supply rail via a resistor (I used 100 ohms; 1 kohm was too high, and I may try something smaller like 56 ohms to make sure enough current is flowing) and the other fed from the Indy board;
  • That way the Sony PSU is always fed at least 3.3v into that thermal sensor input.
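For the resistor sizing, a quick Ohm's-law sanity check (assuming a ~0.7v silicon diode forward drop - the actual diode and the PSU's input impedance will shift these numbers a bit):

```c
/*
 * Current available through the pull-up resistor into the diode,
 * in mA. Assumes a ~0.7v silicon diode forward drop.
 */
static double
fan_bias_current_ma(double v_rail, double v_diode, double r_ohms)
{
	return ((v_rail - v_diode) / r_ohms * 1000.0);
}
```

With the 3.3v rail: 100 ohms gives roughly 26 mA, 1 kohm only about 2.6 mA (which matches "1 kohm was too high"), and 56 ohms roughly 46 mA.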

 


So!

  • There will always be a minimum 3.3v voltage into the Sony PSU fan control, which is enough to turn it on; 
  • The fan will always spin at at least a minimum speed;
  • If it DOES get warm enough for the Indy thermal sensor circuitry to feed above 3.3v (plus a diode drop) into the thermal control line, it will also increase the fan speed.

It's not too hard - tie two diodes together at the cathode side, cut the brown control wire, feed it into the power supply, and then tie the two anodes in as above.


 

(The grey wire in the image goes from one diode anode to the 3.3v (white) wire in the Indy main PSU connector; there's a 100 ohm resistor at the end of said grey wire, temporarily jammed in for testing.)

With this the Sony PSU fan is always spinning, and your SGI Indy should die less of a heat death. 

Friday, May 2, 2025

Installing FreeBSD-15 directly on an SSD, or "wait this 2007 era AMD K8 1RU server doesn't boot from USB flash?"

I .. ok let's not talk about why I'm putting freebsd-15 on an old K8 era box. (it's for network hijinx testing with old realtek PCI cards, don't ask.)

Anyway it's an award BIOS from 2007 that has the following USB options:

  • USB FDD
  • USB ZIP
  • USB CD-ROM

.. and doesn't like whatever FreeBSD is doing to the USB flash drives these days.

Anyway, that meant I needed to figure out how to bootstrap FreeBSD directly onto an SSD.

First up, creating partitions:

# gpart create -s mbr da0
# gpart bootcode -b /boot/boot0 da0


# gpart add -t freebsd da0
# gpart set -a active -i 1 da0

# gpart create -s bsd da0s1
# gpart bootcode -b /boot/boot da0s1
Then, filesystems:

# gpart add -t freebsd-ufs -s 230g da0s1
# gpart add -t freebsd-swap -s 4g da0s1

Then, newfs w/ FFS journaling:

# newfs -j da0s1a

Then, mounting it:

# mount /dev/da0s1a /mnt

Then, bootstrapping from my local pkgbase repo (see https://wiki.freebsd.org/PkgBase for more info):

# pkg --rootdir /mnt install --repository FreeBSD-base --glob '*'

Which did a great job of bootstrapping it.

Then, I rebooted it, and discovered I needed /etc/fstab, so I did some manual stuff to boot it into single user mode and get it working.
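Since I needed an /etc/fstab anyway, here's a minimal one matching the layout above (device names taken from the gpart steps; adjust if your disk probes differently):

```
# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/da0s1a     /               ufs     rw      1       1
/dev/da0s1b     none            swap    sw      0       0
```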

 

Anyway, that was a fun trip down 2007 era hardware lane, and I found some bugs in ethernet drivers that use miibus (see kern/286530 - https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=286530 - for more info.)