Blog

The Creative and Analytical Mind

I have worked in several disciplines throughout my life. A good while ago I made a conscious decision to pursue technology roles because that was my most marketable skill set. I have worked as a metal fabricator, machinist, software developer, system administrator, manager, and in a hybrid of various roles.

My current employment is highly analytical: it involves solving problems, doing research, communicating, and helping people with everything from the mundane to crises. I’ve always performed roles like this, but I’ve also pursued more creative and artistic endeavors.

Continue reading “The Creative and Analytical Mind”

Rabbit Holes and Time Sinks

This article is as much a piece of documentation as it is commentary. I recently decided to rejigger my home network after being quite comfortable with the current configuration for almost 7 years. The impetus was actually quite simple: one day I suddenly got paranoid when I realized what damage could be done if someone compromised my personal account. I am reasonably careful and competent about how I run things, but the services I’ve added in the last year increase the attack surface of my home network considerably. I would be foolish to ignore the increased risk these services pose.

Rabbit holes can be interesting or frustrating distractions from a relatively direct plan or process. Sometimes those rabbit holes turn from distractions into time sinks. Getting my home network upgrade completed was filled with both. This isn’t the first major upgrade I’ve been involved in; I’ve moved datacenters multiple times, deployed new services, and migrated services. But I’ve never had to completely duplicate all running services while also juggling new firewalls and network renumbering.

Continue reading “Rabbit Holes and Time Sinks”

Haunting Images

I was 21 when I purchased my first home. My good fortune was a byproduct of the dot-com era, and it afforded me the ability to put a down payment on a home in Boulder Creek, California. This past month the San Lorenzo Valley experienced a hundred-year event: a wildfire that tore through neighborhoods and erased much of the landscape.

I tried to sell my home twice: right before the great real estate crash, and again in 2014 when the market was fairer. In 2007 I used a Nikon D40 DSLR camera to photograph the house for the real estate listing, so I happened to have some old, high-quality RAW photos of the house.

Continue reading “Haunting Images”

In Defense of Old Tech: Why a 10 year old Xeon could be your next computer

I had the opportunity to acquire some enterprise hardware from a former employer. This is equipment I purchased and built when I worked there, almost 10 years ago. At the time I was trying to balance cost with performance: some of the components were not top of the line, while others were performant for the day.

In all I acquired a couple of LGA771 dual-socket 2U systems and a 4U system with a 24-drive enclosure and an LGA1366 Xeon. All systems had Adaptec 5x05Z RAID controllers with 2TB Seagate drives. The LGA1366 Xeon is/was relatively modern because it represents the first generation of the Core i series architecture. The LGA1366 E5500 Xeons have a base clock of 133MHz, three memory channels, and 4.8GT/s, 5.6GT/s, or 6.4GT/s transfer rates on the QPI bus. Depending on the model number, the maximum memory speeds are 800MHz, 1066MHz, or 1333MHz.

Continue reading “In Defense of Old Tech: Why a 10 year old Xeon could be your next computer”

How To Get Your Hacked YouTube Channel Back

A YouTube channel I subscribe to was recently hacked. The owner had just eclipsed the 100k subscriber mark and received an authentic-looking email about the 100k subscriber plaque. He followed the directions in the email without realizing it was a phishing scheme, and he subsequently lost control of his channel.

The owner of the channel was hitting many roadblocks while trying to contact YouTube to get someone to advocate for him. I too searched for advice on his behalf, but I kept coming across the same community pages with no real guidance or solution. After about a week went by, I used my own YouTube channel to contact Creator Support via their email feedback form. Within about a day I received an email from someone who understood the issue and was able to provide useful help.

The trick to getting your YouTube channel back is a secret contact form called “Send an email to our support team to report potential account hijacking” that is only available to YouTube creators who are part of the YouTube Partner Program. This is the long way of saying that only monetized YouTube channels can access this special form and get the fast track to YouTube Creator Support for hacked accounts.

Continue reading “How To Get Your Hacked YouTube Channel Back”

Using Docker to Create Pop-Up MySQL Instances

Pop-up shops are those short-lived stores at malls and other places; oftentimes they are kiosks. They serve to satisfy fleeting demands for things like nano quadcopters or engraved keychains. In this context, you can create MySQL instances that are similarly short-lived, easily provisioned, and easily disposed of.

Imagine you are a developer, or the DBA who has to tell a developer when their code breaks, and you would like an easy way to validate code against the production schema without impacting your production systems.

This recipe makes some assumptions:

  • You have a MySQL slave or a secondary InnoDB Cluster instance to CLONE from
  • You are using MySQL 8.0.17 or later
  • You don’t have hundreds of gigabytes to terabytes of data

If you have a lot of data in your production environment, this won’t be a viable solution, but if your data is in the tens of gigabytes, this could work for you.

I’m going to present two options: 1) a completely standalone, transient instance of MySQL, and 2) a semi-persistent instance of MySQL that can live on an external encrypted SSD or other secured storage.
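
To make option 1 concrete, here is a minimal sketch in Python, assuming a donor replica reachable at db-replica.example.com:3306, the official mysql Docker image, and the mysql-connector-python driver. The container name, port mapping, hostnames, and passwords are placeholders I invented for illustration, not values from my setup.

import subprocess
import time

import mysql.connector

# 1) Start a disposable MySQL 8 container; its data lives only inside the
#    container, so removing the container disposes of the pop-up instance.
subprocess.run([
    "docker", "run", "-d", "--name", "popup-mysql",
    "-e", "MYSQL_ROOT_PASSWORD=popup-secret",
    "-p", "3307:3306",
    "mysql:8.0.17",
], check=True)
time.sleep(60)  # crude wait; a real script would poll until MySQL accepts connections

# 2) Install the clone plugin on the pop-up instance and pull the donor's data.
#    The donor also needs the clone plugin and a user with BACKUP_ADMIN.
conn = mysql.connector.connect(host="127.0.0.1", port=3307,
                               user="root", password="popup-secret")
cur = conn.cursor()
cur.execute("INSTALL PLUGIN clone SONAME 'mysql_clone.so'")
cur.execute("SET GLOBAL clone_valid_donor_list = 'db-replica.example.com:3306'")
# Cloning into the default data directory replaces this instance's data, and the
# recipient restarts itself afterward, which is fine for a throwaway container.
cur.execute("CLONE INSTANCE FROM 'clone_user'@'db-replica.example.com':3306 "
            "IDENTIFIED BY 'clone-password'")

When you are done validating code against the copy, docker rm -f popup-mysql throws the whole thing away.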

Continue reading “Using Docker to Create Pop-Up MySQL Instances”

WebCom secrets: How we hosted 70,000 domains on one Apache instance

A chief virtue of time is that it provides distance. Time is the fourth dimension we live in, and it gives us the opportunity to share what once was. It has been 12 years since I was let go from Verio, almost as much time as I worked for WebCom/Verio/NTT. I feel there is enough distance between then and now to share some secrets without fear of reprisal.

WebCom did things differently: we pioneered name-based virtual hosting and we learned how to do more with less. Back when WebCom was starting to do name-based hosting, it was common for providers to put 2,000 IP addresses on an SGI machine running IRIX. I assume the allure of SGI had to do with decent horsepower and a BSD-derived OS that could host a lot of IP addresses per NIC; back then the BSD network stack was considered one of the best.

When I started we had HP PA-RISC machines, a Sun 4/330, and a Windows NT 3.51 486 running MS SQL Server (Sybase). By the end of the year we’d signed a lease on a Sun Enterprise 1000 server, a piece of “big iron” at the time. I think we had 4 SuperSPARC processors and 512MB of RAM. We looked at offering IP based hosting on Sun, but their OS only allowed up to 255 IPs per NIC. We briefly considered an inexpensive array of SCO Unix boxes, but Linux was never in the running because Chris considered it an immature OS. I spent my entire career there championing Linux, and winning.

We decided to go the Big Ole Server route with Sun, first with the S1000E, then an Enterprise 4000 in 1997. Early on we ran Netscape Enterprise Server, a commercial web server from Netscape written by the same people who wrote NCSA httpd. It was a modular web server with a plugin architecture, and it could be extended by writing NSAPI modules that performed actions in the chain of operations. Apache wasn’t really on the radar at this point. Chris wrote the first name-based hosting plugin for Netscape; that solution lasted us until around 20,000 domains, when the underlying architecture of Netscape became a bottleneck.
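
For readers who haven’t run into it, the core idea of name-based virtual hosting is that many domains share one IP address and one server process, and the HTTP Host header selects which site to serve. The toy Python sketch below illustrates only that dispatch; the hostnames and document roots are hypothetical, and this is not WebCom’s NSAPI module or Apache configuration.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of hostnames to per-customer document roots.
DOCROOTS = {
    "example.com": "/var/www/example.com",
    "example.org": "/var/www/example.org",
}

class VhostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One listening socket, one process: the Host header picks the site.
        host = (self.headers.get("Host") or "").split(":")[0].lower()
        docroot = DOCROOTS.get(host)
        if docroot is None:
            self.send_error(404, "Unknown virtual host")
            return
        # A real server would resolve self.path under docroot and serve the
        # file; here we just report which site would have been served.
        body = f"{host} -> {docroot}{self.path}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), VhostHandler).serve_forever()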

Continue reading “WebCom secrets: How we hosted 70,000 domains on one Apache instance”

MySQL 8 Network Backup Using Docker and CLONE

One of the shortcomings of MySQL GPL is that it does not come with a first-party online backup solution. With the release of MySQL 8.0.17 the CLONE plugin was introduced, which essentially integrates online backup into the MySQL server as a plugin.

The MySQL 8.0 Reference Manual describes how to use the CLONE plugin to perform local clones (backups) here: https://dev.mysql.com/doc/refman/8.0/en/clone-plugin-local.html

Doing local clones is incredibly useful and a really fast way of making an image backup. I would argue that the CLONE plugin is better for local image backups than competing solutions simply because the syntax is more concise and because effort was made to integrate CLONE into the server, thereby reducing the impact of performing CLONE operations.

The CLONE plugin can either clone to the server’s default data directory or to another directory specified in the CLONE command. I will demonstrate the latter usage for making online remote backups without modifying the data directory of the container.
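
As a rough sketch of that idea, assuming the clone plugin is already installed on both donor and recipient and using placeholder hostnames, credentials, and paths rather than anything from my environment, a remote clone into an explicit backup directory looks something like this with the mysql-connector-python driver:

import datetime

import mysql.connector

# Destination for the image backup; CLONE requires an absolute path that does
# not exist yet, and it writes the cloned files there without restarting the
# recipient server or touching its own data directory.
backup_dir = "/backups/clone-" + datetime.date.today().isoformat()

# Connect to the recipient (the MySQL instance that will receive the clone).
conn = mysql.connector.connect(host="127.0.0.1", port=3306,
                               user="root", password="recipient-password")
cur = conn.cursor()

# The donor must be listed as a valid clone source.
cur.execute("SET GLOBAL clone_valid_donor_list = 'db-primary.example.com:3306'")

# Pull the donor's data over the network into the named directory.
cur.execute(
    "CLONE INSTANCE FROM 'clone_user'@'db-primary.example.com':3306 "
    "IDENTIFIED BY 'clone-password' "
    "DATA DIRECTORY = '" + backup_dir + "'"
)
conn.close()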

Continue reading “MySQL 8 Network Backup Using Docker and CLONE”

Bona Fides: Linux Kernel

This page shouldn’t be considered a brag page; it’s just a place for me to easily categorize a Linux kernel contribution I made eons ago. This is my original contribution of the vfork(2) system call. The current Linux kernel does not implement it this way; however, syscall 190 is still sys_vfork.

Subject: [PATCH] new syscall: sys_vfork
To: linux-kernel@vger.rutgers.edu (Linux Kernel Mailing List)
Date: Fri, 8 Jan 1999 10:49:54 -0800 (PST)
X-Mailer: ELM [version 2.4 PL24]
Content-Type: text
Status: RO
Content-Length: 5783
Lines: 156

Hello,

Well, I hacked in support for a traditional style vfork.  I haven't
tried actually running an application using the new vfork; I wanted
to release what I have to get feedback, as this is the first patch
I've really done.

Anyhow, some background first:

This implementation of vfork supports these features:

 - the VM is cloned off the parent
 - the parent sleeps while the vfork()ed child is running
 - the parent awakes on an exec() and exit()
 - the implementation theoretically allows for recursive vforks
 - it's executable from within a cloned thread
 - If I'm right about the flags, the sigmask is not cloned

A little bit about the 'controversial' parts:  The implementation
uses a wait queue in the task structure.  When the parent vforks,
after successful spawning, it sleeps on the vfork wait queue.  When
the child exits or execs, it does a wake_up(&current->p_pptr->vfork_sleep);
Which causes the parent to awake.  The wakeup in the exec is right
at the top of do_execve().  The wakeup in exit is right before
the time the parent gets notified of the child exit (before notify_parent);

It allows recursion because if a vforked child vforks, it just sleeps,
and as each vforked child performs an exec or exit, it percolates up
through the vfork execution stack.

Please let me know if I've done anything grossly wrong, or just wrong.
Additionally, could someone tell me how to do direct syscalls, I'm fuzzy
on that ;)

--Perry

------------------------------8<-----------------------------------------------

diff -u --recursive linux.vanilla/arch/i386/kernel/entry.S linux/arch/i386/kernel/entry.S
--- linux.vanilla/arch/i386/kernel/entry.S      Thu Jan  7 19:21:54 1999
+++ linux/arch/i386/kernel/entry.S      Thu Jan  7 20:38:18 1999
@@ -559,13 +559,14 @@
        .long SYMBOL_NAME(sys_sendfile)
        .long SYMBOL_NAME(sys_ni_syscall)               /* streams1 */
        .long SYMBOL_NAME(sys_ni_syscall)               /* streams2 */
+       .long SYMBOL_NAME(sys_vfork)            /* 190 */

        /*
-        * NOTE!! This doesn' thave to be exact - we just have
+        * NOTE!! This doesn't have to be exact - we just have
         * to make sure we have _enough_ of the "sys_ni_syscall"
         * entries. Don't panic if you notice that this hasn't
         * been shrunk every time we add a new system call.
         */ 
-       .rept NR_syscalls-189
+       .rept NR_syscalls-190
                .long SYMBOL_NAME(sys_ni_syscall)
        .endr
diff -u --recursive linux.vanilla/arch/i386/kernel/process.c linux/arch/i386/kernel/process.c
--- linux.vanilla/arch/i386/kernel/process.c    Thu Jan  7 19:21:54 1999
+++ linux/arch/i386/kernel/process.c    Thu Jan  7 20:33:23 1999
@@ -781,6 +781,19 @@
        return do_fork(clone_flags, newsp, &regs);
 }

+asmlinkage int sys_vfork(struct pt_regs regs)
+{
+       int     child;
+
+       child = do_fork(CLONE_VM | SIGCHLD, regs.esp, &regs);
+
+       if (child > 0) {
+               sleep_on(&current->vfork_sleep);
+       }
+
+       return child;
+}
+
 /*
  * sys_execve() executes a new program.
  */
diff -u --recursive linux.vanilla/fs/exec.c linux/fs/exec.c
--- linux.vanilla/fs/exec.c     Sun Nov 15 09:52:27 1998
+++ linux/fs/exec.c     Fri Jan  8 10:32:59 1999
@@ -808,6 +808,9 @@
        int retval;
        int i;

+       /* vfork semantics say wakeup on exec or exit */
+       wake_up(&current->p_pptr->vfork_sleep);
+
        bprm.p = PAGE_SIZE*MAX_ARG_PAGES-sizeof(void *);
        for (i=0 ; i<MAX_ARG_PAGES ; i++)       /* clear page-table */
                bprm.page[i] = 0;
diff -u --recursive linux.vanilla/include/linux/sched.h linux/include/linux/sched.h
--- linux.vanilla/include/linux/sched.h Thu Jan  7 19:27:44 1999
+++ linux/include/linux/sched.h Thu Jan  7 21:57:20 1999
@@ -258,6 +258,10 @@
        struct task_struct **tarray_ptr;

        struct wait_queue *wait_chldexit;       /* for wait4() */
+
+/* sleep in vfork parent */
+       struct wait_queue *vfork_sleep;
+
        unsigned long policy, rt_priority;
        unsigned long it_real_value, it_prof_value, it_virt_value;
        unsigned long it_real_incr, it_prof_incr, it_virt_incr;
@@ -298,6 +302,7 @@
        struct files_struct *files;
 /* memory management info */
        struct mm_struct *mm;
+
 /* signal handlers */
        spinlock_t sigmask_lock;        /* Protects signal and blocked */
        struct signal_struct *sig;
@@ -349,6 +354,7 @@
 /* pidhash */  NULL, NULL, \
 /* tarray */   &task[0], \
 /* chld wait */        NULL, \
+/* vfork sleep */      NULL, \
 /* timeout */  SCHED_OTHER,0,0,0,0,0,0,0, \
 /* timer */    { NULL, NULL, 0, 0, it_real_fn }, \
 /* utime */    {0,0,0,0},0, \
diff -u --recursive linux.vanilla/kernel/exit.c linux/kernel/exit.c
--- linux.vanilla/kernel/exit.c Tue Nov 24 09:57:10 1998
+++ linux/kernel/exit.c Fri Jan  8 10:34:10 1999
@@ -292,6 +292,10 @@
                kill_pg(current->pgrp,SIGHUP,1);
                kill_pg(current->pgrp,SIGCONT,1);
        }
+
+       /* notify parent sleeping on vfork() */
+       wake_up(&current->p_pptr->vfork_sleep);
+
        /* Let father know we died */
        notify_parent(current, current->exit_signal);

diff -u --recursive linux.vanilla/kernel/fork.c linux/kernel/fork.c
--- linux.vanilla/kernel/fork.c Thu Jan  7 19:27:29 1999
+++ linux/kernel/fork.c Thu Jan  7 20:24:53 1999
@@ -521,6 +521,7 @@
        p->p_pptr = p->p_opptr = current;
        p->p_cptr = NULL;
        init_waitqueue(&p->wait_chldexit);
+       init_waitqueue(&p->vfork_sleep);

        p->sigpending = 0;
        sigemptyset(&p->signal);


------------------------------8<----------------------------------------------

Reducing the Impact of YouTube’s API Quota

I started redesigning my website several weeks ago; my objective was to create a centralized hub for sharing written information, code, video, and photography. It was rather easy to solve most of those problems, and sharing my latest YouTube video was simple at first.

I had a niggling feeling that my new website was on the heavyweight side; after all, it’s WordPress-based and I had a few plugins. The annoying reCAPTCHA logo was popping up everywhere, even where it wasn’t used. After using the Coverage tab in Chrome and installing yet more WordPress plugins to trim the fat, I tried to get it down to as small a footprint as I could. Then came Google PageSpeed Insights. Sometimes we are blissfully unaware of our problems and go through life with blinders on; PageSpeed Insights simultaneously woke me up and gave me yet another obsession to chase.

Continue reading “Reducing the Impact of YouTube’s API Quota”