Blog

In Defense of Old Tech: Why a 10 year old Xeon could be your next computer

I had the opportunity to acquire some Enterprise hardware from a former employer. This is equipment I purchased and built when I worked there, almost 10 years ago. At the time, I was trying to balance cost with performance; some of the components were not top of the line, while others were performant for the day.

In all I acquired a couple of LGA771 dual-socket 2U systems and a 4U system with a 24-drive enclosure and an LGA1366 Xeon. All systems had Adaptec 5x05Z RAID controllers with 2TB Seagate drives. The LGA1366 Xeon is relatively modern in that it represents the first generation of the Core i series architecture. The LGA1366 5500-series Xeons have a base clock of 133MHz with 3 memory channels, and 4.8GT/s, 5.6GT/s, or 6.4GT/s transfer rates on the QPI bus. Depending on the model number, the max memory speeds are 800MHz, 1066MHz, or 1333MHz.

It’s worth discussing memory speeds a bit, because the LGA1366 Xeons have some interesting memory speed behavior. The motherboard in the 24-drive NAS machine was a Supermicro X8SAX. This is a Workstation-class board without IPMI, but it has 2 PCIe x16 slots. When the machine powers on, an NVIDIA SLI logo is displayed; it’s clear this board was targeted at dual-card SLI workstations. If you installed the proper combination of Xeon CPU and memory, XMP was also enabled. This motherboard accepts up to 24GB of unbuffered ECC or non-ECC RAM, and it runs the memory at 1333MHz with all slots populated. During bootup the BIOS will tell you the memory is running at 1066MHz, but when the OS starts it is actually bumped to the maximum; some Xeon boards support this.
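You can verify what the memory is actually doing once the OS is up; dmidecode reports both the DIMM's rated speed and the speed it is currently configured at (field names vary slightly between dmidecode versions):

```shell
# Rated speed appears as "Speed:"; the live speed appears as
# "Configured Memory Speed:" (or "Configured Clock Speed:" on older versions).
sudo dmidecode -t memory | grep -Ei 'configured (memory|clock) speed|^\s+speed'
```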

Now let us look at Enterprise motherboards such as the X8DTN+, which has an IPMI slot [or integrated IPMI if you get the -F variant] and 18 DDR3 DIMM slots. That’s right: the Xeon chips were designed to access up to 3 DIMMs per memory channel, times 3 channels and 2 sockets per board, so you could have up to 288GB of registered ECC memory in these machines, but at a cost.

I found a really excellent whitepaper by Fujitsu which sets out to quantify the limitations I’m about to lay out for you; please have a look at it here: Memory performance of Xeon 5500 (Nehalem EP) based PRIMERGY servers.

The Xeon 5500 series (LGA1366) processors access memory at 1333MHz with 1 DIMM per channel (1DPC). With 2 DIMMs per channel (2DPC), you can still access the memory at 1333MHz on select motherboards (like the aforementioned X8SAX), but on Enterprise boards the speed drops to 1066MHz. If you populate 3 DIMMs per channel (3DPC), the speed drops further to 800MHz. By the numbers, 800MHz is much slower than 1333MHz, and for some applications it matters. If you have a memory-intensive application that is effectively memory bound instead of CPU or I/O bound, the slower memory speeds will affect you more.

Raw memory throughput (with multiple cores accessing memory) is actually better than on many more modern sockets like LGA1150, LGA1151, etc., because those have only 2 memory channels and their maximum speed is limited by the memory technology. Intel long locked DDR3 performance to 1333MHz, or in some cases 1600MHz.

The maximum performance of the LGA1366 socket, using 1DPC at 1333MHz, is a 35.5GB/s access rate. With 2DPC the throughput drops to 32.1GB/s, and when you go all out with 3DPC, performance drops to 25.5GB/s; that last figure reflects an effective 800MHz DDR3 access speed. Now let’s look at a more modern enthusiast processor, the i7-4790K. This is a 4th-gen Haswell processor with a base frequency of 4GHz, a single-core turbo of 4.4GHz, and an all-core turbo of 4.2GHz. The i7-4790K is still one of the fastest per-thread processors Intel made, great for workloads that need single-threaded performance. But this processor has an Achilles heel: the dual-channel DDR3 memory bus. The LGA1150 socket doesn’t have enough pins for 3- or 4-channel memory, so these chips are constrained in memory performance. LGA2011 is the higher-TDP, quad-channel variant typically seen extending this architecture; chips like the E5-2690 v3 are representative of the 4th-gen LGA2011 genre. The LGA1150 socket is limited to about 25.6GB/s memory throughput by its dual-channel DDR3 bus; you can use DDR3-1600 memory with this socket, which gives you a slight bump, but not as much as 3 memory channels would.
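As a sanity check on those figures, theoretical peak bandwidth is just the transfer rate times 8 bytes per 64-bit transfer times the channel count; note that the whitepaper's measured numbers come from complete dual-socket machines, so they can exceed a single socket's theoretical peak. A quick sketch (my own helper, integer math):

```shell
# Peak theoretical DDR3 bandwidth: MT/s * 8 bytes/transfer * channels,
# truncated to whole GB/s.
bw() { echo $(( $1 * 8 * $2 / 1000 )); }

bw 1333 3   # one LGA1366 socket at 1DPC: prints 31 (GB/s)
bw 1333 2   # dual-channel DDR3-1333 (LGA1150): prints 21
bw 800 3    # 3DPC forces 800 MT/s: prints 19
```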

So far all this talk has been theoretical, based on published specs and whitepapers, so why don’t I reveal some real-world data that might have you deciding between an older system and a newish one?

My test environment is a Supermicro X8DTN+ with 2 Xeon X5570 processors and 24GB of dual-rank UDIMMs (DDR3-1333 2Rx4) in a 1DPC configuration. The CPUs have a 95W TDP, a 133MHz base clock, a 3.3GHz single-core turbo, and a 3.2GHz all-core turbo.

The competitor is a Supermicro X10SAE with an i7-4790K and 32GB of dual-rank Corsair DDR3-1333 memory in a 2DPC configuration. The CPU has a 100MHz base clock, a 4.4GHz single-core turbo, and a 4.2GHz all-core turbo, with an 88W TDP.

By the numbers, the X5570 scores 5,393 on PassMark, while the 4790K scores 11,140. The raw clocks are about 25% faster on the 4790K, and it is 3 generations newer. The dual-Xeon board will consume about twice the power of the single i7, so power consumption will always be a loss there.

My test was simple: to compare the relative performance of the CPUs, I used HandBrake to transcode an H.264 1080p video file into an equivalent H.264 1080p video file. This test accomplishes a few things at once: it loads all of the cores on the processor, it causes the processor to go to all-core turbo, it saturates the TDP, and it exercises the entire motherboard.
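For reference, this sort of load can be generated from the command line with the HandBrake CLI; the file names and quality settings below are placeholders rather than the exact ones I used:

```shell
# Transcode H.264 1080p to H.264 1080p with x264, loading every core;
# HandBrake reports the average fps when the encode finishes.
HandBrakeCLI -i input-1080p.mkv -o output-1080p.mp4 \
    -e x264 -q 20 --encoder-preset medium
```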

When doing CPU-intensive tests, it’s important to ensure the thermal management solution (heatsink) is capable of removing enough heat from the CPU so that it doesn’t go into thermal throttling. The passive 2U Supermicro heatsinks are rated for a 95W TDP and work exceptionally well. Intel (Foxconn) produced a similar-looking active cooler with 4 heat pipes but half the fins, and that solution is incapable of properly cooling a 95W TDP. Under load the X5570 processors reached equilibrium at 75-78 degrees C. The i7-4790K has a Noctua active cooler with dual fans and never gets hotter than about 68 degrees C.

The Xeon X5570 processors were able to sustain an all-core turbo of 3.2GHz throughout the transcoding test, while the i7-4790K sustained 4.2GHz. The interesting thing to note is that the 4790K shows a CPU max MHz of 4400 in lscpu, whereas the X5570 shows 2934, its base clock. It seems Enterprise and enthusiast processors are rated differently for max frequency.

The conclusion of the test is that the dual Xeon X5570 system was able to sustain an average of 50fps during transcoding, while the i7-4790K could only muster 40fps. The same behavior was repeated when actually transcoding a raw Blu-ray movie to H.264 1080p. The X5570 system has twice the threads, 3/4 the clock speed, a 3-generation microarchitecture handicap, and 33% more memory bandwidth, and it came out 25% faster than the 4th-gen Haswell hotrod.

Let’s break down the costs for these systems. The Xeon X5570 CPUs are about $10-12 for a pair, the heatsinks are $24 for a pair, the motherboard was $59, and the memory would cost ~$30. The i7-4790K was $250, the cooling solution was $70 (or you could use the Intel boxed cooler), the motherboard was $125, and the memory cost $200 (32GB of Corsair Vengeance 1600MHz, though that speed is XMP-only; since it doesn’t have a 1600MHz JEDEC profile it runs at 1333MHz here). You can buy registered ECC memory for a fraction of the cost of normal PC memory: 128GB costs $120-$150.

In the end, if you have an EATX case, the dual Xeon X5570 system could prove to be quite the budget beater. I had never added up the costs of my desktop machine before, but at $645 vs $123, I might be looking at a dual LGA2011 system for my next upgrade. The TDP of the new AMD Ryzen systems is pretty close to that of 2 Xeon E5-2690 v3 processors, the Xeons benchmark about the same, and you could put together a system for less than the cost of the Ryzen CPU alone.

How To Get Your Hacked YouTube Channel Back

A YouTube channel I subscribe to was recently hacked. The owner just eclipsed the 100k subscriber mark and received an authentic looking email about the 100k subscriber plaque. He followed the directions in the email without realizing it was a phishing scheme and he subsequently lost control of his channel.

The owner of the channel was hitting many roadblocks while trying to contact YouTube to get someone to advocate for him. I too searched for advice on his behalf, but I kept coming across the same community pages with no real guidance or solution. After about a week went by I used my YouTube channel to contact Creator Support via their email feedback form. Within about a day I received an email from someone who understood the issue and was able to provide useful help.

The trick to getting your YouTube channel back is a secret contact form called “Send an email to our support team to report potential account hijacking” that is only available to YouTube Creators who are part of the YouTube Partner Program. This is the long way of saying that only monetized YouTube channels can access this special form and get the fast track to YouTube Creator Support for hacked accounts.

The form you need to fill out to recover your stolen/hacked YouTube account is here: https://support.google.com/youtube/contact/report_youtube_hijacking

There are 3 pieces of critical information you need to collect before filling out that form; YouTube asks for a lot of information up front so they can quickly investigate the matter and return your channel to you. Here are the pieces of information:

  • YouTube Channel ID
  • ID of Adsense account you associated with your YT channel
  • New YouTube Brand Account ID

I will walk you through how to obtain these 3 pieces of information.

YouTube Channel ID

This is not your YouTube username or Channel name, this is a unique ID that does not change and is used internally to track your Channel. The easiest way for you to find this is to go to Socialblade.com and search for your YouTube channel. When you find your channel in the search results, your channel ID will appear in light colored text to the right of your channel name:

In the screenshot above you can see my channel ID is UCOLSDV-BG5Sg1SZRxntQK7Q

Google Adsense ID

To obtain your Google AdSense ID, you need to log in to your Google AdSense account. Go to https://google.com/adsense and log in; make sure you have ad blocking turned off or you may encounter an error logging in. Once you’ve logged into AdSense, click on Account, then Account Information on the left side:

On the page shown to the right, you will see two IDs: Publisher ID and Customer ID. The Publisher ID is used for embedded ad code in your web pages; the Customer ID is a confidential ID, sort of like your Social Security Number. The information bubble for Publisher ID specifically says this number may be used when communicating with Google. Use the Publisher ID for “ID of Adsense account you associated with your YT channel” on the contact form.

New YouTube Brand Account ID

The last critical piece of information you need is a new YouTube Brand Account ID. When YouTube was created, every account was a channel, but as the business of creating YouTube content grew, people needed to create multiple channels and to manage existing channels with teams of people. A YouTube Brand account is a channel, but it is an entity which can be managed in a collaborative way. To create a new YouTube Brand Account ID, you simply need to login to YouTube and go to this URL: https://www.youtube.com/channel_switcher

Below is a screenshot of the channel switcher page:

On the right side is a Brand Account I created called DoubleSigma; this is a secondary channel/account connected to my primary YouTube account. To create a new Brand Account, click on the Create a new channel box on the left, then follow the prompts. You do not need to customize this channel yet, just create it. Once you’ve created a new Brand Account, you need to switch to it. This is done by clicking on Switch Account in the user menu on the upper right:

When you click that, you will see another menu pop-up like this:

Select your new Brand Account.

Next you need to go to your Channel settings, click on the round user icon in the upper right, then click on Settings:

Next click on Advanced Settings on the left menu:

After clicking Advanced settings, you will see a few fields on the right, the one you are interested in is Channel ID:

Click the COPY link to copy your New Brand Account ID to the clipboard and paste it in to the New Brand Account ID field of the contact form.

The rest of the contact form contains less critical information that is best effort rather than required. Hopefully this article helps you recover your channel quickly!

Using Docker to Create Pop-Up MySQL Instances

Pop-up shops are those short-lived stores at malls and other places; often they are kiosks. They exist to satisfy transient demands like nano quadcopters or engraved keychains. In the same spirit, you can create MySQL instances that are short-lived, easily provisioned, and easily disposed of.

Imagine you are a developer, or the DBA who has to tell a developer when their code breaks. Wouldn’t you like an easy way to validate code against the production schema without impacting your production systems?

This recipe makes some assumptions:

  • You have a MySQL slave or a secondary InnoDB Cluster instance to CLONE from
  • You are using MySQL 8.0.17 or later
  • You don’t have hundreds of gigabytes to terabytes of data

If you have a lot of data in your production environment, this won’t be a viable solution, but if your data is in the 10s of gigabytes, this could work for you.

I’m going to present 2 options: 1) A completely standalone transient instance of MySQL 2) A semi-persistent instance of MySQL that can live on an external encrypted SSD or other secured storage.
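To give a flavor of option 1, a standalone transient instance can be as simple as the following sketch; the container name, password, port, and version tag are all placeholders of my choosing:

```shell
# Start a throwaway MySQL 8 instance; --rm discards all container state
# on stop, so nothing lingers on the host.
docker run --rm -d --name popup-mysql \
    -e MYSQL_ROOT_PASSWORD=changeme \
    -p 3307:3306 \
    mysql:8.0.17

# Point your code at 127.0.0.1:3307, validate it, then dispose of it:
docker stop popup-mysql
```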

Continue reading “Using Docker to Create Pop-Up MySQL Instances”

WebCom secrets: How we hosted 70,000 domains on one Apache instance

A chief virtue of time is that it provides distance. Time is the 4th dimension we live in, and it gives us the opportunity to share what once was. It has been 12 years since I was let go from Verio, almost as much time as I worked for WebCom/Verio/NTT. I feel there is enough distance between then and now to share some secrets without fear of reprisal.

WebCom did things differently, we pioneered name-based virtual hosting and we learned how to do more with less. Back when WebCom was starting to do name-based hosting it was common for many providers to put 2,000 IP addresses on an SGI machine running IRIX. I assume that the allure of SGI had to do with decent horsepower and a BSD derived OS that could host a lot of IP addresses per NIC. Back then the BSD network stack was considered to be one of the best.

When I started we had HP PA-RISC machines, a Sun 4/330, and a Windows NT 3.51 486 running MS SQL Server (Sybase). By the end of the year we’d signed a lease on a Sun Enterprise 1000 server, a piece of “big iron” at the time. I think we had 4 SuperSPARC processors and 512MB of RAM. We looked at offering IP based hosting on Sun, but their OS only allowed up to 255 IPs per NIC. We briefly considered an inexpensive array of SCO Unix boxes, but Linux was never in the running because Chris considered it an immature OS. I spent my entire career there championing Linux, and winning.

We decided to go the Big Ole Server route with Sun, first with the S1000E, then an Enterprise 4000 in 1997. Early on we ran Netscape Enterprise Server, a commercial web server product from Netscape written by the same people who wrote NCSA httpd. This was a modular web server with a plugin architecture; it could be expanded by writing NSAPI modules to perform actions in the chain of operations. Apache wasn’t really on the radar at this point. Chris wrote the first name-based hosting plugin for Netscape, and this solution lasted us until around 20,000 domains, when the underlying architecture of Netscape became a bottleneck.

Continue reading “WebCom secrets: How we hosted 70,000 domains on one Apache instance”

MySQL 8 Network Backup Using Docker and CLONE

One of the shortcomings of MySQL GPL is that it does not come with a first-party online backup solution. With the release of MySQL 8.0.17, the CLONE plugin was introduced; this essentially integrated online backup as a plugin to the MySQL Server.

The MySQL 8.0 Reference Manual describes how to use the CLONE plugin to perform local clones (backups) here: https://dev.mysql.com/doc/refman/8.0/en/clone-plugin-local.html

Doing local clones is incredibly useful and a really fast way of making an image backup. I would argue that the CLONE plugin is better for local image backups than competing solutions simply because the syntax is briefer and efforts were made to integrate CLONE into the server, thereby reducing the impact of performing CLONE operations.

The CLONE plugin can either clone to the server’s default data directory or to another directory specified in the CLONE command. I will demonstrate the latter usage for making online remote backups without modifying the data directory of the container.
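As a sketch of that remote usage, run against the recipient instance (the host, credentials, and backup path are placeholders; this assumes the clone plugin is installed on both instances and the appropriate BACKUP_ADMIN/CLONE_ADMIN privileges are granted):

```shell
# The recipient pulls a full copy from the donor into a fresh directory,
# leaving its own data directory untouched.
mysql -u root -p <<'SQL'
SET GLOBAL clone_valid_donor_list = 'donor.example.com:3306';
CLONE INSTANCE FROM 'backup_user'@'donor.example.com':3306
    IDENTIFIED BY 'donor_password'
    DATA DIRECTORY = '/backups/clone-full';
SQL
```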

Continue reading “MySQL 8 Network Backup Using Docker and CLONE”

Bona Fides: Linux Kernel

This page shouldn’t be considered a brag page; it’s just a place for me to easily categorize a Linux Kernel contribution I made eons ago. This is my original contribution of the vfork(2) system call. The current Linux kernel does not implement it in this way, however syscall 190 is still sys_vfork 😄

Subject: [PATCH] new syscall: sys_vfork
To: linux-kernel@vger.rutgers.edu (Linux Kernel Mailing List)
Date: Fri, 8 Jan 1999 10:49:54 -0800 (PST)
X-Mailer: ELM [version 2.4 PL24]
Content-Type: text
Status: RO
Content-Length: 5783
Lines: 156

Hello,

Well, I hacked in support for a traditional style vfork.  I haven't
tried actually running an application using the new vfork; I wanted
to release what I have to get feedback, as this is the first patch
I've really done.

Anyhow, some background first:

This implementation of vfork supports these features:

 - the VM is cloned off the parent
 - the parent sleeps while the vfork()ed child is running
 - the parent awakes on an exec() and exit()
 - the implementation theoretically allows for recursive vforks
 - it's executable from within a cloned thread
 - If I'm right about the flags, the sigmask is not cloned

A little bit about the 'controversial' parts:  The implementation
uses a wait queue in the task structure.  When the parent vforks,
after successful spawning, it sleeps on the vfork wait queue.  When
the child exits or execs, it does a wake_up(&current->p_pptr->vfork_sleep);
Which causes the parent to awake.  The wakeup in the exec is right
at the top of do_execve().  The wakeup in exit is right before
the time the parent gets notified of the child exit (before notify_parent);

It allows recursion because if a vforked child vforks, it just sleeps,
and as each vforked child performs an exec or exit, it percolates up
through the vfork execution stack.

Please let me know if I've done anything grossly wrong, or just wrong.
Additionally, could someone tell me how to do direct syscalls, I'm fuzzy
on that ;)

--Perry

------------------------------8<-----------------------------------------------

diff -u --recursive linux.vanilla/arch/i386/kernel/entry.S linux/arch/i386/kernel/entry.S
--- linux.vanilla/arch/i386/kernel/entry.S      Thu Jan  7 19:21:54 1999
+++ linux/arch/i386/kernel/entry.S      Thu Jan  7 20:38:18 1999
@@ -559,13 +559,14 @@
        .long SYMBOL_NAME(sys_sendfile)
        .long SYMBOL_NAME(sys_ni_syscall)               /* streams1 */
        .long SYMBOL_NAME(sys_ni_syscall)               /* streams2 */
+       .long SYMBOL_NAME(sys_vfork)            /* 190 */

        /*
-        * NOTE!! This doesn' thave to be exact - we just have
+        * NOTE!! This doesn't have to be exact - we just have
         * to make sure we have _enough_ of the "sys_ni_syscall"
         * entries. Don't panic if you notice that this hasn't
         * been shrunk every time we add a new system call.
         */ 
-       .rept NR_syscalls-189
+       .rept NR_syscalls-190
                .long SYMBOL_NAME(sys_ni_syscall)
        .endr
diff -u --recursive linux.vanilla/arch/i386/kernel/process.c linux/arch/i386/kernel/process.c
--- linux.vanilla/arch/i386/kernel/process.c    Thu Jan  7 19:21:54 1999
+++ linux/arch/i386/kernel/process.c    Thu Jan  7 20:33:23 1999
@@ -781,6 +781,19 @@
        return do_fork(clone_flags, newsp, &regs);
 }

+asmlinkage int sys_vfork(struct pt_regs regs)
+{
+       int     child;
+
+       child = do_fork(CLONE_VM | SIGCHLD, regs.esp, &regs);
+
+       if (child > 0) {
+               sleep_on(&current->vfork_sleep);
+       }
+
+       return child;
+}
+
 /*
  * sys_execve() executes a new program.
  */
diff -u --recursive linux.vanilla/fs/exec.c linux/fs/exec.c
--- linux.vanilla/fs/exec.c     Sun Nov 15 09:52:27 1998
+++ linux/fs/exec.c     Fri Jan  8 10:32:59 1999
@@ -808,6 +808,9 @@
        int retval;
        int i;

+       /* vfork semantics say wakeup on exec or exit */
+       wake_up(&current->p_pptr->vfork_sleep);
+
        bprm.p = PAGE_SIZE*MAX_ARG_PAGES-sizeof(void *);
        for (i=0 ; i<MAX_ARG_PAGES ; i++)       /* clear page-table */
                bprm.page[i] = 0;
diff -u --recursive linux.vanilla/include/linux/sched.h linux/include/linux/sched.h
--- linux.vanilla/include/linux/sched.h Thu Jan  7 19:27:44 1999
+++ linux/include/linux/sched.h Thu Jan  7 21:57:20 1999
@@ -258,6 +258,10 @@
        struct task_struct **tarray_ptr;

        struct wait_queue *wait_chldexit;       /* for wait4() */
+
+/* sleep in vfork parent */
+       struct wait_queue *vfork_sleep;
+
        unsigned long policy, rt_priority;
        unsigned long it_real_value, it_prof_value, it_virt_value;
        unsigned long it_real_incr, it_prof_incr, it_virt_incr;
@@ -298,6 +302,7 @@
        struct files_struct *files;
 /* memory management info */
        struct mm_struct *mm;
+
 /* signal handlers */
        spinlock_t sigmask_lock;        /* Protects signal and blocked */
        struct signal_struct *sig;
@@ -349,6 +354,7 @@
 /* pidhash */  NULL, NULL, \
 /* tarray */   &task[0], \
 /* chld wait */        NULL, \
+/* vfork sleep */      NULL, \
 /* timeout */  SCHED_OTHER,0,0,0,0,0,0,0, \
 /* timer */    { NULL, NULL, 0, 0, it_real_fn }, \
 /* utime */    {0,0,0,0},0, \
diff -u --recursive linux.vanilla/kernel/exit.c linux/kernel/exit.c
--- linux.vanilla/kernel/exit.c Tue Nov 24 09:57:10 1998
+++ linux/kernel/exit.c Fri Jan  8 10:34:10 1999
@@ -292,6 +292,10 @@
                kill_pg(current->pgrp,SIGHUP,1);
                kill_pg(current->pgrp,SIGCONT,1);
        }
+
+       /* notify parent sleeping on vfork() */
+       wake_up(&current->p_pptr->vfork_sleep);
+
        /* Let father know we died */
        notify_parent(current, current->exit_signal);

diff -u --recursive linux.vanilla/kernel/fork.c linux/kernel/fork.c
--- linux.vanilla/kernel/fork.c Thu Jan  7 19:27:29 1999
+++ linux/kernel/fork.c Thu Jan  7 20:24:53 1999
@@ -521,6 +521,7 @@
        p->p_pptr = p->p_opptr = current;
        p->p_cptr = NULL;
        init_waitqueue(&p->wait_chldexit);
+       init_waitqueue(&p->vfork_sleep);

        p->sigpending = 0;
        sigemptyset(&p->signal);


------------------------------8<----------------------------------------------

Reducing the Impact of YouTube’s API Quota

I started redesigning my website several weeks ago; my objective was to create a centralized hub for sharing written information, code, video, and photography. It was rather easy to solve most of those problems, and sharing my latest YouTube video was simple at first.

I had this niggling feeling that my new website was on the heavyweight side; after all, it’s WordPress based and I had a few plugins. The annoying reCAPTCHA logo was popping up everywhere, even when it wasn’t used. After using the Coverage tab in Chrome and installing yet more WordPress plugins to trim the fat, I tried to get it down to as small a footprint as I could. Then came Google PageSpeed Insights. Sometimes we are blissfully unaware of our problems and go through life with blinders on; PageSpeed Insights simultaneously woke me up and gave me yet another obsession to chase.

Continue reading “Reducing the Impact of YouTube’s API Quota”

Adding VGA hardware palette support

VGALIB has led a long and meandering path; development has been an exercise in leveling up each of 3 different environments: PC hardware running DOS, SDL under Linux, and SDL under Emscripten. Much of the early development was done in DOSBox with the Borland C++ 3.1 IDE, but once I grew past the point of basic C++ and started using std::string, I had to abandon the BC3.1 IDE and go strictly to makefiles. It was during this time that using the BC3.1 IDE for editing (and its weird Brief key sequences) started to become an exercise in patience. I really enjoyed developing on Linux, since that’s what I’ve done for the last 25 years.

Moving to makefiles under DOS was no small feat. The issue is that DOSBox is a best-effort emulator for running games, and compatibility with Borland C++ 4 and later is sketchy and causes crashes. I ended up creating a Windows 2000 VM with VirtualBox to compile VGALIB, but even that acts peculiar and cmd.exe requires End Task. VirtualBox doesn’t have guest additions for any 16-bit legacy OSes, so Win2K is the oldest usable environment. My current development environment is Eclipse for the editing (with a vim plugin), Win2K to compile the DOS programs, and DOSBox to run them. For Linux and Emscripten I use Eclipse with command-line make.

The reason my build environment is important to this article has to do with the development target that was most feature complete: SDL running on Linux. Palettized 8-bit mode on SDL is really a pain to program to, much more so than straight RGB or RGBA, but it mimics the original IBM VGA 13h mode most closely. I implemented palette support as a matter of requirement when I added SDL support, since there is no default palette. Until this time I hadn’t added hardware palette support to the VGA driver; I simply relied on the default VGA palette (which is fine for most things).

Continue reading “Adding VGA hardware palette support”

The Sale of WebCom

The sale of WebCom was both bitter and sweet. The sale represented independence and success for many involved, but it also was the beginning of the end. WebCom was bootstrapped from what money Chris had and some surplus equipment that we got from a customer in exchange for free hosting. That equipment lasted us until late 1995 when we needed to transition from a 486 running Windows NT 3.51 and Microsoft SQL Server, to a Sun Enterprise 1000e running Sybase SQL Server.

I mentioned before that Chris and Thomas organized the company with a 67%/33% split, eventually I would have 1%, taken from Chris’ portion, and Neal [the CFO] got 10% IIRC, of which I think Chris and Thomas gave up 5% each. After we moved to 2880 Soquel Ave, Thomas started working on his exit from the company. That exit would precipitate one of the biggest threats we ever had as a company.

Continue reading “The Sale of WebCom”

Hacking CGA

This is meant to be a short post to talk about some CGA idiosyncrasies and how you can bypass them.

My video library VGALIB now supports CGA in addition to VGA; EGA support is planned too. Adding VGA was simple, and that’s why I did it first: VGA mode 13h implements a 320×200 linear framebuffer. A linear framebuffer is one where each pixel is resolved by a simple lookup and the pixels are contiguous in the memory region. The formula width*y + x is commonly used to perform linear buffer address resolution. It is because of this simplicity that I made the internal representation of images 8-bit linear buffers. Each pixel is represented by 1 byte that can hold 1 of 256 colors.
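That address resolution is trivial to sketch; 320 here is the mode 13h width, and the helper is mine for illustration, not VGALIB's API:

```shell
# Byte offset of pixel (x, y) in a 320-pixel-wide linear 8-bit framebuffer.
offset() { echo $(( 320 * $2 + $1 )); }

offset 10 5     # pixel (10, 5) lives at byte 1610
offset 319 199  # last pixel of the 64,000-byte buffer: 63999
```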

Continue reading “Hacking CGA”