Blog

Protecting Identity and Preventing Gender Bias

VendorSwag is a platform for connecting people with vendors, potentially anonymously. That makes us an intermediary that must protect our clients’ identities while still allowing vendors to communicate freely with them. While evaluating that requirement, a related problem surfaced: gender and gender bias in communication. We wanted to build a platform that was sensitive to gender and made a deliberate effort to eliminate gender bias.

Gender

The VendorSwag platform had three challenges to solve on the subject of gender:

  • Ensuring we reflected a wide gamut of gender identities
  • Incorporating appropriate pronouns and honorifics
  • Providing a list of appropriate pseudonyms that respect gender identity

Gender identities are deeply personal, and it’s important to let people express their identity in the way they prefer. We chose a React component that gives us a very flexible way to present gender choices.

Honorifics (Mr., Dr., etc.) and pronouns also need to offer a range of choices that reflects how a person identifies. We researched pronouns to find the most inclusive list we could and represented each pronoun set in the following JSON format:

      "set": {
        "object": "them",
        "subject": "they",
        "reflexive": "themself",
        "possessive": "their",
        "possessive_pronoun": "theirs"
      }
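
To make the idea concrete, here is a minimal TypeScript sketch (not the actual VendorSwag code) of how a pronoun set in this format might be typed and dropped into a message template; the PronounSet type and quoteReadyMessage helper are illustrative assumptions:

      // Illustrative only: a type matching the pronoun "set" object above.
      interface PronounSet {
        object: string;              // "them"
        subject: string;             // "they"
        reflexive: string;           // "themself"
        possessive: string;          // "their"
        possessive_pronoun: string;  // "theirs"
      }

      const theyThem: PronounSet = {
        object: "them",
        subject: "they",
        reflexive: "themself",
        possessive: "their",
        possessive_pronoun: "theirs",
      };

      // Hypothetical helper: build a vendor-facing message using the client's
      // chosen pronouns instead of a hard-coded "he" or "she".
      function quoteReadyMessage(pseudonym: string, p: PronounSet): string {
        return `${pseudonym} requested a quote; please reply to ${p.object} ` +
               `so ${p.subject} can review ${p.possessive} options.`;
      }

      console.log(quoteReadyMessage("Ariel", theyThem));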

Pseudonyms

We had two problems to solve with pseudonyms at once: how do we let clients identify themselves in a personalized way, and how do we do that without revealing their identity?

Open Source Intelligence (OSINT) is the practice of gathering intelligence from public sources to find details about a person. That could be anything from a house number in a photograph on Facebook to a short URL identifier, or even the mailbox portion of an email address. People reliably put identifying information in their pseudonyms, such as a birth year, and they often reuse the same pseudonym across multiple platforms, making it easy to correlate seemingly unrelated sources of data.

There is another issue to consider with pseudonyms: appropriateness and moderation. If you have to moderate every pseudonym against what is deemed business-appropriate nomenclature, you will spend a lot of time moderating. There is a cross-section of people who will take things like usernames and pseudonyms to an extreme because they find it amusing, and we needed to prevent that.

Alongside business appropriateness, we wanted to ensure that when a client communicated with a vendor, their pseudonym reflected their chosen identity without telegraphing their gender identity to the vendor.

The solution is remarkably simple and clever at the same time: we offer clients a pick list drawn from an extensive set of gender-neutral pseudonyms.

We scoured the Internet for gender-neutral names and compiled a list of 920 names across 25 different languages. We then eliminated names with gendered contextual spellings, such as “Aaron” and “Erin”, to avoid telegraphing gender.
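
As a rough sketch of that approach (the names and helper functions below are assumptions, not the production list), the pick list can live in code or a table and be filtered before it is ever shown to a client:

      // Illustrative excerpt of a curated, gender-neutral pseudonym list.
      const curatedPseudonyms: string[] = ["Ariel", "Kai", "Noor", "Sage", "Yuki"];

      // Names excluded because a particular spelling tends to telegraph gender,
      // e.g. "Aaron" vs. "Erin".
      const contextualSpellings = new Set(["Aaron", "Erin"]);

      function availablePseudonyms(names: string[]): string[] {
        return names.filter((name) => !contextualSpellings.has(name));
      }

      // Offer a small shuffled selection so clients still get a sense of choice.
      function suggestPseudonyms(names: string[], count: number): string[] {
        const pool = availablePseudonyms(names);
        for (let i = pool.length - 1; i > 0; i--) {
          const j = Math.floor(Math.random() * (i + 1));
          [pool[i], pool[j]] = [pool[j], pool[i]];
        }
        return pool.slice(0, count);
      }

      console.log(suggestPseudonyms(curatedPseudonyms, 3));

Because the client only ever picks from a curated list, there is nothing to moderate and nothing personally identifying to leak.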

Conclusion

Offering a large selection of gender-neutral pseudonyms eliminates gender bias and gives us pseudonyms that are business-appropriate, which facilitates communication between clients and vendors. I am not aware of any other platform, marketing-centric or otherwise, that takes all of these considerations into account. I feel the choice to incorporate these solutions into our platform from the beginning was important in the current social era. These choices were made in 2020, well before the present political debates and battles over gender-affirming care and other gender-related discussions.

Fadal Archaeology: 1400-2 CPU Card

I’ve been spending more time lately diving into reverse engineering of the Fadal CNC control. I upgraded my CPU card to a 1400-2 so that my control would have look-ahead and continuous contouring. This feature was lacking in the 1400-1 CPU and caused the machine to pause briefly between linking moves.

After acquiring the 1400-2 CPU I tried using my 1460-0 memory expansion with it, and while I got it working, according to the Fadal board compatibility matrix this is not a supported configuration. I went on eBay to look for the 1460-1 memory expansion and was underwhelmed by how simplified and cost-reduced it was compared to the 1460-0. That reignited an idea I had during my last reverse engineering sprint: why not design and build my own memory expansion? After a bit of reverse engineering and a couple of board and component revisions, I got a working result. I now have a single-board 384KB solution, just like the 1460-2 board for the later controls.

Continue reading “Fadal Archaeology: 1400-2 CPU Card”

When is Linux not Linux?

Microsoft is a new company compared to the Evil Empire we all knew in the late ’90s, the one fighting to eliminate competition. But have they really changed, or is the approach merely different while the playbook stays the same?

We have VS Code, an excellent code editor with some really handy integrations; I switched to it from Eclipse when I started writing JavaScript. We have WSL, a Linux-like layer on Windows 10 and later. We have WSL 2, a legitimate Linux kernel running under a hypervisor, perhaps next to Windows on the stack. Now we have WSLg, the ability to run X11 applications inside WSL 2, and in preview we have WSA, the Windows Subsystem for Android. WSA builds on the work done for WSL 2, and presumably WSLg.

All of these technologies seek to embrace and extend Linux, once Microsoft’s most hated competing technology. Azure can’t exist without Linux; without it, Microsoft would turn away a great deal of market share. It is often said that the Internet runs on Linux, and I agree that is largely true.

Continue reading “When is Linux not Linux?”

Learning the nuances of GCP and TCP keepalive

I started a new job a little while ago and I’m learning technologies that I was not exposed to at my previous employer. In fact, I took this job specifically knowing that I would be working with unfamiliar technologies.

At my previous job I supported MySQL and saw the myriad issues that customers encountered while running MySQL in their environments. One relatively recent issue came with MySQL 8.0.27 and InnoDB ClusterSets. This is a new feature that lets you create interconnected InnoDB Clusters, and one of the steps in doing that is to CLONE a new primary host in the child cluster. The mysqlsh tool is used for this purpose, but when executing the CLONE from the primary side on GCP the operation would fail.

The problem was that the connection being used by mysqlsh was severed, but mysqlsh was never “informed” of this. The underlying cause seems to be the software-defined nature of cloud computing wearing through the thin veneer. We found that the connection would not be terminated if mysqlsh was run on the replica server (where the data was being CLONEd to). I wasn’t directly handling this particular issue, only advising another engineer and reviewing data.

Today I encountered this problem in my new job: when a benchmark created indexes on a large table, it would fail with a Lost connection...2013 error. After scrutinizing the MySQL source code for a while, and performing some strace runs of my own, I concluded that MySQL was not at fault and was not doing anything to cause a client connection to time out.

I decided to set the Linux kernel sysctl variable net.ipv4.tcp_keepalive_time to 300 seconds to see if it had an effect; it did, and the outcome was exactly what I’d hoped for. Some further testing with select sleep(900) showed that GCP silently evicts an idle TCP connection right around 900 seconds after the command was issued.

Why does adjusting tcp_keepalive_time make a difference? The MySQL client marks the socket connection as “keepalive” by setting the SO_KEEPALIVE option, which causes the Linux kernel to start sending keepalive packets after tcp_keepalive_time expires. The default value of tcp_keepalive_time is 7200 seconds, and once that expires the kernel sends a keepalive every 75 seconds to keep the channel open. Setting tcp_keepalive_time to a value lower than the GCP eviction timeout keeps connections that are waiting on a long-running task, such as index creation on a large table, from being closed.
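
Lowering the sysctl affects every socket on the host; an application can also opt in per connection. Here is a minimal Node.js/TypeScript sketch (an illustration, not what mysqlsh does internally, and the host address is made up) using socket.setKeepAlive, whose second argument plays the same role as tcp_keepalive_time for that one socket:

      import * as net from "net";

      // Open a TCP connection to a (hypothetical) MySQL host and enable
      // keepalive on this socket only, instead of changing
      // net.ipv4.tcp_keepalive_time system-wide.
      const socket = net.connect({ host: "10.0.0.5", port: 3306 }, () => {
        // Send the first keepalive probe after 300 seconds of idle time,
        // well under the ~900 second idle eviction observed on GCP.
        socket.setKeepAlive(true, 300_000);
        console.log("connected with keepalive enabled");
      });

      socket.on("error", (err) => console.error("connection error:", err));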

Darkness into Light

It was Monday, September 27th, 1993, and I was not even a month into my Junior year of high school. This day started out like most others, and as my Mother was dropping me off at school, one of the secretaries came running out of the Administration building to eagerly greet me.

During my Sophomore year I took a Computers elective class that was normally only available to Junior and Senior students, but because it was the last year they were offering it, I was able to enroll. The class came at the tail end of the 8-bit Apple computer era in schools; we had a bunch of Apple IIe and some Apple IIgs computers. The teacher had successfully lobbied to purchase an IBM clone computer, a 386DX-25 with a VGA monitor, which was a pretty nice machine for the day.

Continue reading “Darkness into Light”

The Best Email Server You Never Heard of

Back when Hotmail was the biggest thing in email, WebCom deployed a secret weapon that turned the tide in the email wars. WebCom SMTP (WSMTP) was a multi-phase project to create an entirely new email server from the ground up, something that could handle thousands of emails per second and HUGE attachments. But most importantly, it was designed to allow sysadmins to sleep at night!

WebCom started out humbly using the off-the-shelf tools of the time: Sendmail for email, NCSA httpd for web serving, Perl for our web control panel, and Sybase for our customer database. NCSA httpd was the first component that needed upgrading; it was replaced with Netscape Enterprise Server.

Continue reading “The Best Email Server You Never Heard of”

Memory speed vs capacity

Memory speed isn’t often a consideration when building a system, except for those seeking ultimate overclocking performance. While OC memory exceeds the JEDEC standards, there are other considerations that may rob you of maximum performance.

I will discuss memory technologies ranging from DDR2 FB-DIMMs to modern DDR4 ECC memory, and how CPU memory controller limitations affect the actual performance you can expect. The TL;DR is that as you add more DIMMs per channel or more ranks, the supported memory frequency goes down.

Continue reading “Memory speed vs capacity”

Know when things are solvable and when they are just entropy

Working on IT problems often requires intense focus and research to find a solution. I’ve previously written about Rabbit holes and Time sinks; this axiom is an extension of those. Sometimes you just have to know when to quit and regroup, rather than continuing to bang your head against the wall.

I’ve become familiar with Docker over the last year, using it for testing and for educating myself on current technologies. My day job is working as a Principal Technical Support Engineer for MySQL, so I encounter every type of deployment you can imagine. We also have new product releases from time to time, and I decided to dive into Kubernetes so that I could be knowledgeable in that domain.

Continue reading “Know when things are solvable and when they are just entropy”

Setting up HiDpi on XFCE

I use the XFCE desktop environment and have three 4K screens. These screens are 162.56dpi, which is a little hard to read at native 1:1 rendering. The benchmark for displays is 96dpi, but I prefer somewhere around 112dpi natively. Applying a 144dpi custom multiplier results in an effective 112.88dpi. You may ask: “Why 112dpi, where did that come from?” I have an IBM A30p laptop from 2001 with a 1600×1200, 15.1-inch screen; I used that laptop for many years and prefer its native 112dpi. It’s not too tiny and not too big; it’s the Goldilocks of native resolutions.

These are the changes I make to get a comfortable environment with very legible text. Yes, you are “throwing away” resolution, but the tradeoff is that everything is sharper.

Continue reading “Setting up HiDpi on XFCE”

End of an Era: Fry’s Closes Doors

The phrase “End of an Era” sounds cliché, but in this case it really is the end of an era. Fry’s Electronics was the last bastion of geekdom; it was the Walmart of electronics, computing, and snacks. The closing of Fry’s Electronics bookends the era I grew up in.

Fry’s Electronics wasn’t especially good at any one thing; what they lacked in specialization they made up for in grandeur and selection. If you wanted to buy an external DVD drive, they had 40 different models in various speeds: dual layer or not, R/W or ROM, and so on. If you needed some RAM for your computer, you could choose from 10 different manufacturers in various speeds and densities. What Fry’s brought to the table was an overwhelming volume of stock on hand. If you NEEDED a new hard disk right now, you could hop over to Fry’s and get one.

Fry’s was the last of the original electronics retailers in Silicon Valley; they were the biggest and outlasted the rest. Everyone from storied institutions such as WeirdStuff and Halted’s (HSC Electronic Supply) to more obscure shops such as A-Z Surplus and Action Computer, and companies such as NCA Computer, tried to compete with Fry’s, but weekly loss leaders are hard to compete with.

Continue reading “End of an Era: Fry’s Closes Doors”