Decomposing Monoliths by Cleaning Rooms

Your engineers are telling you that the monolith needs to be split into several microservices. Work has slowed to a snail’s pace. It takes forever to get new features out. Changes often introduce bugs in areas of the code that weren’t even touched. It seems that any step forward is a step backward.

Have you ever cleaned a messy kid’s room? For me and my children it often starts like this, “Daddy I need help cleaning my room, it’s too hard!” The mess that they have made while playing has become too much to handle on their own. Monoliths are sometimes like that, built feature by feature until one day it’s too hard to move forward. Thankfully, there is a formula to clean up the mess.

Before I get started with my child I have to determine if they are really ready to clean up. If they are too tired they will not be able to focus. Splitting a monolith involves the same inspection. Are you sure the engineers that built the monolith want to change? Microservices require an increased level of production readiness from the owning engineers. That muscle is going to take time to develop. This may require you to sell the change and potentially move or remove those opposed.

Also, it takes a certain type of engineer to do this work. I love this work. Give me a problem like this and I’ll go into my hobbit-hole for six months and return with the end product. Engineers that prefer greenfield work will be bored to tears (or quitting) by refactoring the same method call over 8,000 lines. It may be better to reserve them for maintenance and enhancement of the extracted services.

If the engineers are on-board there’s another hidden obstacle. When my children clean their room sometimes they try to push everything to the corners or pile it in the closet. The room looks somewhat clean but really the mess is just hidden better. Under the hood nothing has really changed. Are the engineers working on decomposition going to lead the system to the same state?

This hidden obstacle is very hard to avoid due to Conway’s law, which states that organizations are constrained to produce systems that match the communication structure of the organization. I have witnessed this at multiple companies. Do you have a top-down organization? You will get a few god services that run the show. Do you have an office environment where one team can turn around and talk to another? You will get a tangled mess where services reach into other services to get work done. To fix this you may need to restructure teams and the building. Segregate teams and ensure they communicate through the proper channels.

Now it’s time to begin cleaning the room. When a mess is especially bad I help my kids by pushing every toy to the middle of the room. My son likes this because it creates a huge pile of toys. Sometimes I go as far as emptying every toy box onto the pile. This large pile is your monolith. Each function is a toy, individually distinguishable but together a large mess.

When the large pile is created the next step is to start defining sub-piles. I might know that there are Lincoln Logs, Legos, and toy cars in the pile. My son and I will create three piles and begin picking through the large pile for these items. Filtering the large pile is a lot of work and not all of the piles are known up front. We may discover that there should be a pile for toy trains as they are uncovered.

Decomposing a monolith is similar. You may have a general idea of the bounded contexts (sub-piles) within the code. As you start refactoring, though, you will discover more that need to be created. Some of these may even preempt work currently in progress. This can be very frustrating for those going through the process for the first time. They might expect that an architect, or someone in charge, has it all mapped out. With a large monolith that’s practically impossible, and even where it is possible it is ill-advised.

Creating these bounded contexts cannot be done in a vacuum. It can be very tempting to put the engineer who knows the most about an area in charge. This can end in disaster. The engineer may say something like, “This process is great for an unknown system but is unnecessary in this case because I know it so well.” You may end up with microservices but they will be structured like the existing services. That would be like dividing the massive pile into smaller piles and placing each in a separate room. The term for that mess is a distributed monolith, and it is actually worse than a normal monolith. Toy trains are in each room now and it’s hard to play with them all at once. Distributed monoliths cause network traffic and costs to shoot through the roof.

To form a bounded context, put a team together to start the discovery process. The team will interview domain experts, who will help them determine whether they have a valid sub-pile or not. Those domain experts will be used throughout the decomposition process to validate the bounded context along the way. Have the team read Domain-Driven Design before they start. This stage is difficult, but it is imperative that it is done right.

Once the bounded context is created your engineers can begin the refactor work. This is like me and my son digging through the large pile to categorize toys into their appropriate sub-piles. This is where resolve comes in on your part. Project owners and managers will be frustrated or confused. Their feature work is even slower now. They may ask, “Why are we investing in the monolith when we’re just going to throw the work out?”

In a way they have a point. It would be better if the room could be picked up and things placed in order without creating the large pile. In some systems this is possible. Maybe you have some well defined contexts and just a few things are out of place. In other systems the monolith is too far gone. Those require the discovery and subdivision before splitting. In extreme cases it may feel like it would be faster just to rewrite everything. Burn the room down and start fresh. This is known as a big-bang rewrite and almost never works because you lose out on the learnings of the past. Besides, the end goal is not to throw out work but to extract it.

On the other end of resolve you may have to slow down the engineers that want to get to the end state fast. I have seen this masked as “Getting to value.” They may want to skip some steps because it’s too much work right now. You may have to encourage some good engineering hygiene. The patterns and practices are extra work but in the long run create a more robust system.

Getting to value is a great mindset though. During the decomposition process the way to get to value fast is to work within the monolith. Build a well defined API using the bounded context discovery work. Then wire that well factored API up using the existing messy code underneath. This will make the engineers cringe but will prove out that the API is valuable and correct. The code underneath can then be straightened up to match the well defined API.

Once my son and I have created some tidy sub-piles we begin moving them to boxes. When a box is well worn, toys may spill from the holes, so it is important to inspect them before use. Similarly, ensure the new APIs have well defined walls or seams. One API should not reach into another to change its state. This may look like the orders-api storing its data in an orders-data-api. Or an orchestration-api reaching into multiple APIs to “set things up.” This is more art than science, and a good design looks to minimize the network traffic on each request to the system. Systems should act as sources of truth and work even if one component is down. Eventual consistency is key here.

Do you need to pause all feature work while the decomposition process is going on? No, but you do need communication. New feature work needs to go through the same discovery process as the existing work. Then, when you are sure which sub-pile the new work belongs in, you can either create that pile or add the work to an existing one.

The largest killer of this whole process is lack of resolve. When my son begins playing with toys instead of sorting them I have to gently correct him so that we stay focused. To you this might look like a priority shift. Perhaps one of the products is on fire and you need to shift resources to fight it. Don’t! The truth is this refactor work is likely a multi-year process for the first system. For every week that you disrupt a team you likely set them back two.

Another killer is rushing. When you hear an engineer say it’ll take six months, they might be overconfident and it’ll really take a year. It will be tempting to put the engineer with the lowest estimate in charge. The decomposition process takes a long time; there is no way to speed it up.

A cousin to rushing is attempting to throw more people at the problem. If I only had more engineers this project would move faster! The truth is once the bounded contexts have been defined you may only need a single engineer to execute on the refactor. Adding more people just increases the lines of communication and slows the work down.

The truth is the time to pull systems out will decrease dramatically after each system is removed. The first may be a multi-year project but the second and third will be faster. This is why my son likes the large pile. The first few sub-piles take time but once the cruft is out of the way it gets faster to sort toys. The same thing happens with code. The first refactor actually touches on the second and third. Each pass is a bit faster than the last.

So is it worth it in the end? A monolith in and of itself is not bad. I have seen monoliths scale businesses to 100M+ in revenue. I have also seen new products formed from the extracted APIs and new life breathed into old companies.

It can be worth it. I love seeing my son’s happy dance at the end of the process, “There is so much room to play daddy!” The questions you need to answer are: “How stuck are my engineers?” and “How much resolve do I have?” Once you know the answers to both you can begin the process.

Enable virtualization on Gigabyte AM4 boards

Tearing your hair out because virtualization won’t work on your new Ryzen & Gigabyte K7 PC? Make sure SVM mode is enabled in your Gigabyte motherboard’s BIOS; it’s buried in an unexpected spot. You can find it under: “M.I.T” > “Advanced Frequency Settings” > “Advanced CPU Core Settings” > “SVM Mode”.

Even when SVM mode is disabled, the following will still return the expected results, so it is no help in diagnosing the problem.
egrep '^flags.*(vmx|svm)' /proc/cpuinfo

However, when you run Virtual Machine Manager you’ll get “KVM is not available.” If you attempt to add the kvm_amd module with sudo modprobe kvm_amd you’ll get "ERROR: could not insert 'kvm_amd': Operation not supported". Running lsmod | grep kvm will list kvm but not kvm_amd, and VirtualBox will complain that “AMD-V is disabled in the BIOS (or by the host OS).” That last one finally tipped me off to the BIOS setting.
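To pull those symptoms together, here is a small sketch of the check I wish I had run first. The function name is mine, and the cpuinfo path is a parameter so the same check can be pointed at a saved copy of /proc/cpuinfo:

```shell
# Sketch: distinguish "CPU lacks the flag" from "flag present but SVM
# disabled in the BIOS". check_svm_flag is a hypothetical helper name.
check_svm_flag() {
    cpuinfo="${1:-/proc/cpuinfo}"
    if grep -Eq '^flags.*(vmx|svm)' "$cpuinfo"; then
        echo "flag present"
    else
        echo "flag missing"
    fi
}

# The flag alone proves nothing about the BIOS setting: if this prints
# "flag present" but `modprobe kvm_amd` still fails with "Operation not
# supported", SVM is disabled in the BIOS.
if [ -r /proc/cpuinfo ]; then
    check_svm_flag /proc/cpuinfo
fi
```

In other words: “flag present” plus a failing modprobe is exactly the BIOS-setting case described above.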

Stay away from Gigabyte motherboards if you are building a Linux-based machine. Currently Ubuntu 17.04 fails at install with the error “unexpected irq trap at vector 07.” In the Canonical bug report there is a quote from Gigabyte which reads “Gigabyte do not guarantee Linux Platform on the desktop motherboard.” On the bright side the bug got me to try out Fedora 25, which I am loving so far.

Wedding bulletin cover art

My wife and I had a small budget for our wedding so we ended up doing a lot ourselves. For example, I created this clip art file for the cover of our wedding bulletin. I printed it on some simple stock from our local paper supplier. The results can be seen below.

Wedding Bulletin

The SVG source file can be found on GitHub. I uploaded it back in 2010 but figured it deserved a permanent spot here.

Please feel free to use or share it as it’s licensed under Creative Commons 4.0. Bonus points if you leave a comment below after you’ve used it!

Cannot read property ‘replace’ of undefined

If you get the following error while setting up a new React/Babel/Webpack project, you forgot to install all of the presets.

> webpack-dev-server --content-base client --inline --hot

var elements = request.replace(/^-?!+/, "").replace(/!!+/g, "!").split("!");

TypeError: Cannot read property 'replace' of undefined
at /Users/.../node_modules/webpack/lib/NormalModuleFactory.js:72:26
at /Users/.../node_modules/webpack/lib/NormalModuleFactory.js:28:4
at /Users/.../node_modules/webpack/lib/NormalModuleFactory.js:159:3
at NormalModuleFactory.applyPluginsAsyncWaterfall (/Users/.../node_modules/tapable/lib/Tapable.js:75:69)
at NormalModuleFactory.create (/Users/.../node_modules/webpack/lib/NormalModuleFactory.js:144:8)
at /Users/.../node_modules/webpack/lib/Compilation.js:214:11
at /Users/.../node_modules/async/lib/async.js:181:20
at Object.async.forEachOf.async.eachOf (/Users/.../node_modules/async/lib/async.js:233:13)
at Object.async.forEach.async.each (/Users/.../node_modules/async/lib/async.js:209:22)
at Compilation.addModuleDependencies (/Users/.../node_modules/webpack/lib/Compilation.js:185:8)

Fix by installing the presets with yarn/npm:

yarn add babel-preset-es2015 --dev
yarn add babel-preset-react --dev
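Installing the packages is only half of it: Babel also has to be told to use them. A minimal sketch of the matching .babelrc (assuming the two presets above are all you need):

```shell
# Hypothetical minimal .babelrc matching the presets installed above.
cat > .babelrc <<'EOF'
{
  "presets": ["es2015", "react"]
}
EOF
cat .babelrc
```

With that file in the project root, webpack-dev-server should get past the NormalModuleFactory error.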

Setup OpenVPN on CentOS

Occasionally when I’m out I’d like to be able to remote into my machine back at home. In the past I have opened up a random port, moved RDP to it, and called it good. I don’t really trust that level of security and I feel dirty when I get home. I’d like another layer there and I know OpenVPN is a proven technology. So I’m going to spend some time this weekend setting up OpenVPN on a CentOS machine that I have lying around. I’m hopeful that will allow me to VPN into my home network and then access my Windows machine over RDP. Let’s go!

While my CentOS box is updating I’m surfing the web for some instructions. I use this machine as a build, test, and general development server. The following command tells me I’m running CentOS release 6.6 (Final).

$cat /etc/issue

A quick yum search tells me that there is no ‘openvpn’ in the base repos so I’m going to enable EPEL. I’m pretty sure I had it set up once already, but I recently had to rebuild the server after a hard drive crash. Oh, that reminds me, I’m logged in as root because I haven’t set up extra accounts yet. I’ll need to fix that before vacation.

$yum search openvpn
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base:
* centosplus:
* extras:
* updates:
Warning: No matches found for: openvpn

I’m skimming sites for OpenVPN config info. This one mentions the EPEL and setting up keys with ‘easy-rsa’. What the heck, can’t I set up keys with some funky openssl command? I’ll keep digging… here’s ‘easy-rsa’ again and it looks like they’re building RPMs by hand. No thanks.

Ok, yum update finished so it’s time to install EPEL. Here’s the command I used to install this extra repo. I also follow it up by disabling it by default. Whenever I need it or want to run an update I tack on --enablerepo=epel. That keeps me from installing or updating something from the EPEL by accident. Don’t ask me where I got the command from, hopefully it still works.

$rpm -Uvh

Oh, it’s already installed and ‘/etc/yum.repos.d/epel.repo’ says it’s disabled by default.

cat /etc/yum.repos.d/epel.repo
name=Extra Packages for Enterprise Linux 6 - $basearch

Now yum reports that it found openvpn. Here goes nothing.

$yum --enablerepo=epel search openvpn
$yum --enablerepo=epel install openvpn

The easy part is over and the fun begins. Now what? chkconfig --list tells me it is installed and not running yet. The post mentioned above tells me that there are sample config files and I see them. There’s nothing in ls /etc/openvpn so I guess that’s what I need to do.

$ls /usr/share/doc/openvpn-2.3.2/sample/sample-config-files/
client.conf office.up roadwarrior-server.conf tls-office.conf server.conf xinetd-client-config
home.up static-home.conf xinetd-server-config
loopback-client README static-office.conf
loopback-server roadwarrior-client.conf tls-home.conf

I’m going to copy server.conf over to /etc/openvpn, but wait what’s the init.d file say for config?

$cat /etc/init.d/openvpn

I see nothing about a config file. Oh wait, never mind. It loops through the /etc/openvpn directory to find all .conf files. That seems a little sketchy but what do I know about init.d files? Guess I’ll copy it over and edit. I don’t know why I still type vim instead of vi.

$cp /usr/share/doc/openvpn-2.3.2/sample/sample-config-files/server.conf /etc/openvpn/
$vim /etc/openvpn/server.conf

Steve at GRC tells me that some ISPs block 1194 so I’m going to dump it on a random port. Not that I needed the advice, I was going to do it anyway. By the way Steve, why haven’t you finished your OpenVPN tutorial yet? Yes yes, I know ProXPN sponsors Security Now.

port **********

Research time. What’s dev tap/tun and dev-node? Why do all of these guides have you set up a private key first? That’s gotta be the easiest part of the whole thing. Ok, dev tun it is. Server Fault tells me that mobile doesn’t support dev tap. Thanks Siegfried Löffler, I believe you even though you only have one internet point (that’s more points than me anyway).

Oh fine, time to create a key. Looks like ‘easy-rsa’ is supplied by the openvpn people. Do I have to use it? Whatever, guess I’ll go for it. Oh, it’s not installed by default.

$yum --enablerepo=epel install easy-rsa

Crud, where did it install? This says to make a directory and copy the files over. They’re in that spot so I’m going to believe what the magic internet tells me.

$mkdir -p /etc/openvpn/easy-rsa/keys
$cp -rf /usr/share/easy-rsa/2.0/* /etc/openvpn/easy-rsa/
$vim /etc/openvpn/easy-rsa/vars

Well look at that, that’s exactly what ‘vars’ tells me to do anyway. Change this, it’s obviously wrong.

# These are the default values for fields
# which will be placed in the certificate.
# Don't leave any of these fields blank.
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_OU="MyOrganizationalUnit"

Still following for the key creation. Looks like they got it right to me. Why so many different extensions (config, conf, cnf)? Wow, this config file is full of a bunch of stuff I don’t care about.

$cp /etc/openvpn/easy-rsa/openssl-1.0.0.cnf /etc/openvpn/easy-rsa/openssl.cnf

Magic internet site tells me to run the following. Super user tells me that source “executes the content of the file passed as argument, in the current shell.” Thanks nagul, maybe I should have researched that before blindly running the command. Weird, how the heck does ./clean-all do anything? I don’t see it in the directory… oh, duh it’s a directory.

$cd /etc/openvpn/easy-rsa/
$source ./vars
$./build-key-server server
$./build-key client

Answer all of the nice script’s questions now please. You know what’s funny? I have done this exact process while creating a self-signed cert for my server. Maybe I could do without easy-rsa? Oh well, this certainly is a lot less work. Ugh, this is taking forever.
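Since I wondered about it above: yes, the same artifacts can be produced with plain openssl and no easy-rsa. A rough sketch follows; every filename and CN in it is my invention, and easy-rsa also builds dh2048.pem (via `openssl dhparam`), which I skip here because it is slow:

```shell
# Hypothetical easy-rsa-free key setup; names and paths are made up.
mkdir -p /tmp/manual-ca
cd /tmp/manual-ca

# Self-signed CA, good for 3650 days like easy-rsa's default
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -days 3650 -subj "/CN=Home-VPN-CA"

# Server key and signing request, then sign the request with the CA
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=vpn-server"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out server.crt -days 3650

# Confirm the chain checks out
openssl verify -CAfile ca.crt server.crt
```

The client cert is the same dance with a different CN, and the verify step should report the cert as OK.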

Finally, now I have a self-signed cert that I don’t have to touch for 3650 days. I’m sure I’ll forget how I generated it in that time. Ignoring that, copy the files like the guide says. After this is all over I’ll need to remember to copy the client keys to my Mac.

$cd /etc/openvpn/easy-rsa/keys/
$cp dh2048.pem ca.crt server.crt server.key /etc/openvpn/

Back to the server config file, edit and change the lines. Everyone’s using Google’s DNS servers so I may as well also. I do like OpenDNS but I’m not going to argue with three other sites that I’m following.

$vim /etc/openvpn/server.conf
dh dh2048.pem
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS"
push "dhcp-option DNS"
user nobody
group nobody
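For context, here is a fuller sketch of the relevant server.conf lines. The DNS addresses were scrubbed above; 8.8.8.8 and 8.8.4.4 are my assumption (Google’s public resolvers, since that is what I said I’d use), and the port shown is the default rather than my random one:

```shell
# Hypothetical sketch of the server.conf lines touched in this post.
# The DNS addresses, port, and subnet are assumptions; substitute your own.
cat > /tmp/server.conf.sketch <<'EOF'
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
user nobody
group nobody
EOF
cat /tmp/server.conf.sketch
```

The `server 10.8.0.0 255.255.255.0` line is also why a successful connection hands out a 10.8.x.x address.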

Am I done now? It seems like there should be more to the whole process. Guess I’ll punch a hole in my router and start the service. Maybe I can connect via my Mac internally. I’m assuming I know it works if I connect and get a 10.8.x.x address since I think I saw that in a config somewhere.

$vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
$sysctl -p

Time to fire it up!

$service openvpn start
$chkconfig openvpn on

Looks good, ‘ifconfig’ gives the same output as shown here. Now don’t I need to get some sort of key to my client? I would hope this is all secured with key files. Yep, I’ll need to copy the following to my Mac.


To do that I’m going to archive them and then copy the archive over to my Mac. However you get the files to your machine is up to you. I always forget how scp works so the command is below but you’ll need to tune it for your setup.

$cd /etc/openvpn/easy-rsa/keys/
$tar -czvf client.tar.gz ca.crt client.crt client.key
$cp client.tar.gz /home/
$scp root@ ~/Documents/client.tar.gz

I’m using Viscosity on my Mac to do the client side. It’s $9, can handle multiple profiles, and I already have it installed.

Nuts, looks like the Mac isn’t connecting. I know I can reach the server … oh poop, I chose the same port that SSH is running on. It couldn’t have even started. Interesting, checking the logs I see the following. I think I may need to correct that but I don’t really look forward to changing out my network configuration.

Nov 1 18:58:09 dev openvpn[12583]: NOTE: your local LAN uses the extremely common subnet address 192.168.0.x or 192.168.1.x. Be aware that this might create routing conflicts if you connect to the VPN server from public locations such as internet cafes that use the same subnet.

I see little else in the logs and ‘service openvpn restart’ didn’t error while shutting down the service. Just in case I’m going to move the port and open a hole in the firewall. Oh, hah, SSH is using tcp on my port and OpenVPN is set up to use udp. I think I just need to poke the udp hole in my firewall for that port. I’ll try that instead.

MONEY! Yes! Now to poke a hole in my external firewall. I’ll port forward the same udp port to my server at

Now I’m not sure how to test this other than from some remote location. It all appears to be working correctly. I know I can reach my server and connect to it but I have no idea if my traffic can hop from my server to my workstation. I enabled Remote Desktop and poked a hole through Windows firewall for port 3389. I also added my user as a Remote Desktop User because I don’t run as admin.

Network Topology - Produced with Dia Diagram Editor

Oh, the internet tells me I can setup a subnet to test. That makes sense but I’m not sure that I can do that with my router. Maybe I can put my wireless router on a different subnet? Nope, not supported. I have a spare router around here somewhere. Oh, wait my cable modem has a router attached. I can jack into that and attempt to VPN into my second internal network. I’m running two because I don’t really trust this COX router.

The subnet that my OpenVPN server lives on is 192.168.1.X and the subnet that my Mac and internal router are now on is 192.168.0.X. Traffic cannot flow from 192.168.0 to 192.168.1 without the VPN working. I’m excited, fire it up! HAHAHA, IT CONNECTED!

Bad news, I cannot connect to my internal Windows box while connected to OpenVPN. I am connected though, as I see tun0 with an IP assigned. Traffic is absolutely flowing but I think it’s stuck at the server. I also see ‘Nov 1 20:11:46 dev openvpn[12663]:’ in the logs, which is the IP I’m connecting from. Time to research.

Oh, neat. I ran iptables -F and now my pings are coming through. Perhaps it was working and my firewall rules were just dropping ping packets. Nope, something is still blocking. I can now ping my Windows machine but RDP doesn’t seem to be working. I’m going to install Wireshark on my Windows machine to see if packets are coming in.

What??? The packets are hitting the Windows machine. I can see them in Wireshark “Transmission Control Protocol, Src Port: 59778 (59778), Dst Port: 3389 (3389), Seq: 0, Len: 0”. Do I not have Windows Firewall set up correctly? Windows Firewall shows TCP/UDP 3389 is allowed in. I did change the port that RDP is listening on a little while ago. Maybe that service needs a reboot. That was it, after restarting all “Remote Desktop *” services RDP is letting me in now.

I would rather not run without firewall rules so I’ll do the opposite of ‘iptables -F’ now. Part of the trouble is my firewall rules are really aggressive. Only certain ports are allowed in and out. I’ll drop all of the outbound rules because OpenVPN will be opening random connections when connecting to my LAN.

Got it! Thanks to Bebop here I was able to get the correct settings for iptables. The major changes I made are below.

iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s -j ACCEPT
iptables -t nat -A POSTROUTING -s -o eth0 -j MASQUERADE
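For the record, here is the full shape of those rules with a concrete subnet filled in. The subnet was scrubbed above; 10.8.0.0/24 is OpenVPN’s default and an assumption on my part, as is eth0. Writing them in iptables-save format lets you review the file before applying it with iptables-restore instead of typing rules live:

```shell
# Assumed values: 10.8.0.0/24 (OpenVPN's default VPN subnet) and eth0.
cat > /tmp/openvpn.rules <<'EOF'
*filter
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.8.0.0/24 -j ACCEPT
COMMIT
*nat
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
COMMIT
EOF

# Review the file, then (as root) apply it with:
#   iptables-restore < /tmp/openvpn.rules
cat /tmp/openvpn.rules
```

The MASQUERADE rule is what lets VPN clients reach the rest of the LAN: replies come back to the server’s LAN address and get rewritten back to the 10.8.x.x client.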

There are a few cleanup tasks that I want to do next. First, I want some sort of password added to my key. Second, I need some sort of dynamic DNS setup because my public IP will change eventually. Finally, I would love to have Google Authenticator set up to handle two-factor auth. It’s getting late so I’m going to save those tasks for another day.

More than a “Hello World” in VBA

When I started at Bates Group, LLC one of my first assignments was to debug an Excel VBA macro.  Knowing nothing about the language I fought my way through the bug and fixed the macro.  After that I quickly decided to learn more about the language.   Since “Hello World” only gets me so far, I decided to do something a little tougher.  What better way to do that than to think back to my college assignments?

Back then one of the assignments I had was to write a random walk function.  Imagine standing next to a lamppost on the street.  From the lamppost you can take a step in one of four directions; North, South, East, or West.  You take a step in a random direction and then look at where you are.  From your new location you take another step in a random direction and you keep taking these random steps for a while.  Finally you stop and look up, how far away from the lamppost are you?

The following function does that only much faster than you or I could.  It takes 20,000 steps total and colors them along the way.  Every 2,000 steps it will change colors leaving a cool trail as it goes along.

Public Sub TakeAWalk()

    ' Where on the sheet should we start?

    ' How many steps per turn should we take?
    STEPS_PER_TURN = 2000

    ' How many turns should we take?
    TURNS = 10

    For j = 3 To (TURNS + 3)
    For i = 0 To STEPS_PER_TURN

        ' Should we step east or west?
        randomX = Int(4 * Rnd)

        ' Should we step north or south?
        randomY = Int(4 * Rnd)

        ' Move west-east
        Select Case randomX
            Case 2 ' Move one step west
                If ActiveCell.Column > 1 Then ' Do not overstep the west border
                    ActiveCell.Offset(0, -1).Select
                End If

            ' Case 1 - Stay in the same spot

            Case 0 ' Move one step east
                If ActiveCell.Column <= 255 Then ' Do not overstep the east border                     ActiveCell.Offset(0, 1).Select                 End If         End Select         ' Move north-south         Select Case randomY             Case 2 ' Move one step north                 If ActiveCell.Row > 1 Then ' Do not overstep the north border
                    ActiveCell.Offset(-1, 0).Select
                End If

            ' Case 1 - Stay in the same spot

            Case 0 ' Move one step south
                If ActiveCell.Row <= 65535 Then ' Do not overstep the south border
                    ActiveCell.Offset(1, 0).Select
                End If
        End Select

        ' Leave a trail
        ActiveCell.Interior.ColorIndex = j

    Next i
    Next j
End Sub

Macro Flower Shot

With that done I wanted to add another function to learn how to create a menu.  I came up with the square flower.  This function will generate a square of random size with each section of the square filled with a different color.  This function taught me some tricks about looping in VBA, some ways are a lot faster than others.

Public Sub Flower()

    Dim start As Range
    Dim Length As Integer
    Dim Width As Integer
    Dim Color As Integer

    ' The starting point of the flower
    Set start = ActiveCell

    ' The maximum size of the flower
    size = Int(57 * Rnd)

    ' Ignore boundary errors for now
    On Error Resume Next

    For z = 0 To size
        ' Generate a random color for this row
        Color = Int((56 - 1 + 1) * Rnd + 1)

        ' Left side
        Range(start.Offset(0, 0), start.Offset(Length, 0)).Interior.ColorIndex = Color

        ' Bottom side
        Range(start.Offset(Length, 0), start.Offset(Length, Length)).Interior.ColorIndex = Color

        ' Upper side
        Range(start.Offset(0, 0), start.Offset(0, Width)).Interior.ColorIndex = Color

        ' Right side
        Range(start.Offset(0, Width), start.Offset(Width, Width)).Interior.ColorIndex = Color

        Set start = start.Offset(-1, -1)
        Length = Length + 2
        Width = Width + 2
    Next z

    On Error GoTo 0
End Sub

So what did I learn after all of this?  Mostly that I have a strong dislike for VBA.  It works well for small projects with small data sets.  However those small projects quickly expand into real programs which need to be maintained.  You are better off doing it right the first time instead of maintaining a large clunky macro.

Excel Random Walk

Download the complete macro here. You will need to enable macros in your security settings to get them to work.  Once enabled, select “Random Walk” from the “ – Hello World VBA” menu.  This will start a random walk which will finish after a couple of seconds.  The “Square Flower” menu item will create a square flower under your cursor.