Microsoft is a new company compared to the Evil Empire we all knew in the late 90's that fought to eliminate competition. But have they changed, or is their approach merely different, run from the same playbook?
All of these technologies seek to embrace and extend Linux, once Microsoft's most hated competing technology. Microsoft's Azure can't exist without Linux; without it, they would turn away a great deal of market share. It is often said that the Internet runs on Linux, and I agree that is largely true.
I started a new job a little while ago and I’m learning technologies that I was not exposed to at my previous employer. In fact, I took this job specifically knowing that I would be working with unfamiliar technologies.
At my previous job I supported MySQL and saw the myriad issues that customers encountered while running MySQL in their environments. One relatively recent issue came with MySQL 8.0.27 and InnoDB ClusterSets, a new feature that allows you to create interconnected InnoDB Clusters. One of the steps in doing that is to CLONE a new primary host in the child cluster. The mysqlsh tool is used for this purpose, but when executing the CLONE from the primary side on GCP, the operation would fail.
The problem was that the connection being used by mysqlsh was severed, but mysqlsh was never “informed” of this. The underlying cause seems to be the software-defined nature of cloud computing wearing through the thin veneer. It was found that the connection would not be terminated if mysqlsh was run on the replica server (where data was being CLONEd to). I wasn't directly handling this particular issue; I was simply advising another engineer and reviewing data.
Today I encountered this problem at my new job: when a benchmark created indexes on a large table, it would fail with a Lost connection...2013 error. After scrutinizing the MySQL source code for a while, and performing some strace operations of my own, I concluded that MySQL was not at fault and was not doing anything to cause a client connection to time out.
I decided to adjust the Linux kernel sysctl variable net.ipv4.tcp_keepalive_time to 300 seconds and see if it had an effect; it did, and the outcome was exactly what I'd hoped for. I did some further testing with select sleep(900) and found that GCP silently evicts an idle TCP connection right around 900 seconds after the command was issued.
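For reference, a minimal sketch of that adjustment (the 300-second value is from the text above; the file name under /etc/sysctl.d/ is my own choice, not anything prescribed):

```shell
# Inspect the current idle threshold (the kernel default is 7200 seconds)
sysctl net.ipv4.tcp_keepalive_time

# Drop it well below GCP's ~900-second eviction window
sudo sysctl -w net.ipv4.tcp_keepalive_time=300

# Persist the change across reboots
echo 'net.ipv4.tcp_keepalive_time = 300' | sudo tee /etc/sysctl.d/90-tcp-keepalive.conf
```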
Why does adjusting tcp_keepalive_time make a difference? The MySQL client marks the socket connection as “keepalive” by setting the SO_KEEPALIVE option, which causes the Linux kernel to start sending “keepalive” packets after tcp_keepalive_time expires. The default value for tcp_keepalive_time is 7200 seconds, and once that expires the kernel sends keepalives every 75 seconds to keep the channel open. Setting tcp_keepalive_time to a value lower than the GCP eviction timeout prevents connections that are waiting on a long-running task, such as index creation on a large table, from being closed.
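As an aside, a client can also opt into more aggressive keepalives per socket, without touching the system-wide sysctl. This is just an illustrative sketch using Linux-specific socket options (TCP_KEEPIDLE and friends), not something the MySQL client does itself:

```python
import socket

def make_keepalive_socket(idle=300, interval=75, count=9):
    """Create a TCP socket that probes idle connections, mirroring what
    the MySQL client does when it sets SO_KEEPALIVE on its connection."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask the kernel to send keepalive probes on this connection.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific per-socket overrides of the sysctl defaults:
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)      # seconds idle before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval) # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)      # failed probes before giving up
    return s
```

With an idle threshold of 300 seconds, the kernel starts probing long before GCP's ~900-second eviction point, so the connection never looks idle to the cloud network.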
It was Monday, September 27th, 1993, and I was not even a month into my junior year of high school. The day started out like most others, and as my mother was dropping me off at school, one of the secretaries came running out of the administration building to eagerly greet me.
While in my sophomore year I took a Computers elective class that was normally only available to junior and senior students, but because it was the last year they were offering it, I was able to enroll. This class came at the tail end of the 8-bit Apple computer era in schools; we had a bunch of Apple IIe and some Apple IIgs computers. The teacher had successfully lobbied to purchase an IBM clone, a 386DX-25 with a VGA monitor, a pretty nice machine for the day.
Back when Hotmail was the biggest thing in email, WebCom deployed a secret weapon that turned the tide in the email wars. WebCom SMTP (WSMTP) was a multi-phase project to create an entirely new email server from the ground up, something that could handle thousands of emails per second and HUGE attachments. But most importantly, it was designed to allow sysadmins to sleep at night!
WebCom started out humbly, using off-the-shelf tools of the time: Sendmail for email, NCSA httpd for web serving, Perl for our web control panel, and Sybase for our customer database. NCSA httpd was the first component that needed upgrading; it was replaced with Netscape Enterprise Server.
Memory speed isn’t often a consideration when building a system, except for those seeking ultimate overclocking performance. While OC memory exceeds the JEDEC standards, there are other considerations which may rob you of maximum performance.
I will discuss memory technologies ranging from DDR2 FB-DIMM to modern DDR4 ECC memory and how CPU memory controller limitations affect the actual performance you can expect. The TL;DR is that when you add more DIMMs per channel or more ranks, the memory frequency goes down.
Working on IT problems often requires intense focus and research to find the solution. I’ve previously written about rabbit holes and time sinks; this axiom is an extension of those. Sometimes you just have to know when to quit and regroup rather than continuing to bang your head against the wall.
I’ve become familiar with Docker over the last year, using it for testing and educating myself on current technologies. My day job is working as a Principal Technical Support Engineer for MySQL, so I encounter every type of deployment you can imagine. We also have new product releases from time to time and I decided to dive into Kubernetes so that I can be knowledgeable in that domain.
I use the XFCE desktop environment and have three 4K screens. These screens are 162.56dpi, which is a little hard to read at native 1:1 rendering. The benchmark for displays is 96dpi; I prefer somewhere around 112dpi natively. Applying a 144dpi custom multiplier results in an effective 112.88dpi. You may ask: “Why 112dpi, where did that come from?” I have an IBM A30p laptop from 2001 with a 15.1-inch 1600×1200 screen; I used that laptop for many years and prefer its native 112dpi. It’s not too tiny and not too big; it’s the Goldilocks of native resolutions.
These are the changes I make to have a comfortable environment with very legible text. Yes, you are “throwing away” resolution, but the tradeoff is that everything is sharper.
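The arithmetic above is easy to check. A tiny sketch, assuming roughly 27-inch panels (my assumption, not stated above) and reading the 144dpi setting as a 1.44× multiplier, which is what reproduces the 112.88 figure:

```python
import math

def native_dpi(width_px, height_px, diagonal_in):
    """Pixels along the diagonal divided by the diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# A 4K panel of about 27.1 inches lands on the 162.56 dpi quoted above.
panel = native_dpi(3840, 2160, 27.1)

# Treating the 144dpi setting as a 1.44x multiplier gives the effective dpi.
effective = 162.56 / 1.44  # ~112.89
```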
The phrase “End of an Era” sounds cliché, but in this case it really is the end of an era. Fry’s Electronics was the last bastion of geekdom; it was the Walmart of electronics/computing/snacks. The closing of Fry’s Electronics bookends the era I grew up in.
Fry’s Electronics wasn’t especially good at any one thing; what they lacked in specialization was made up for in grandeur and selection. If you wanted to buy an external DVD drive, they had 40 different models in different speeds: dual layer, not dual layer, R/W, ROM, etc. If you needed some RAM for your computer, you could choose from 10 different manufacturers in different speeds and densities. What Fry’s brought to the table was an overwhelming volume of stock on hand. If you NEEDED a new hard disk right now, you could hop over to Fry’s and get one.
Fry’s was the last of the original electronics retailers in Silicon Valley; they were the biggest and outlasted the rest. Everyone from storied institutions such as WeirdStuff and Halted’s (HSC Electronic Supply) to more obscure shops such as A-Z Surplus and Action Computer, and companies such as NCA Computer, tried to compete with Fry’s, but loss leaders every week are hard to compete with.
I have worked in several disciplines throughout my life; a good while ago I made a conscious decision to pursue technology roles because that was my most marketable skill set. I have worked as a metal fabricator, machinist, software developer, system administrator, manager, and in a hybrid of various roles.
My current employment is highly analytical: it involves solving problems, doing research, communicating, and helping people with everything from the mundane to crises. I’ve always performed roles like this, but I’ve also pursued more creative and artistic endeavors.
This article is as much a piece of documentation as it is commentary. I recently decided to rejigger my home network after being quite comfortable in the current configuration for almost 7 years. The impetus was actually quite simple: one day I suddenly got paranoid when I realized what damage could be done if someone compromised my personal account. I am reasonably careful and competent about how I run things, but in spite of how careful I am, the services I’ve added in the last year increase the attack surface of my home network considerably. I would be foolish to ignore the increased risk these services pose.
Rabbit holes can be interesting or frustrating distractions from a relatively direct plan or process. Sometimes those rabbit holes turn from distractions into time sinks. Getting my home network upgrade completed was filled with both. This isn’t the first major upgrade I’ve been involved in; I’ve moved datacenters multiple times, deployed new services, and migrated services, but I’ve never had to completely duplicate all running services while also juggling new firewalls and network renumbering.