So, how did you improve your company? By that I mean: processes, tech stack, optimization, etc.
The people were decent and the product fine, but my personal interest in the company went from "eh, sounds kind of interesting" to "why am I here...?" very quickly, and every day felt like an uphill battle to keep my disinterest from rubbing off on my coworkers.
Looking for inspiration is a mind trap that is best avoided. Inspiration comes when you are silent and you listen to yourself (and those around you). Keep an open mind. What you feel inspired to do may take you in a very different direction than you expect. Maybe the company doesn't need a new tech stack optimization, but would benefit greatly from a BBQ in the park - also no better time to get to know the people you work with better than when they are not sitting at a desk.
If you want to improve your company, listen. That's all there is to it. Listen to the needs of the people you work alongside and then answer those needs. You are part of a small tribe, and that tribe values and benefits from active listeners and naturally inspired contributors.
Forceful inspiration typically has an opposite effect, so be mindful of your underlying drive.
Take out the garbage.
By which I mean, everybody has a part of their job that they just don't like to do. It's a necessary chore, but not fun or interesting or exciting in any way. Look around you for that kind of work that's already being done by your management or your peers. Take that task off of their plate (most folks will gladly give it to you) and do it for a while, and then see if there's a way to eliminate it, automate it, or otherwise improve the experience of doing the job.
Especially as a software person, the amount of power you have to take little parts of the business that are rough and make them smooth is tremendous. Taking out the garbage is just an easy way to get started helping out.
I'll say this also applies to your first few weeks/months in a new codebase. Find the little problems that everyone else is ignoring because they have bigger things to worry about and tackle those for them. Beyond being helpful to the team, it helps you learn the territory of the codebase more quickly than you otherwise would.
Every time I start a new role, I write up detailed standard work for the processes and make notes of what can be improved. As time allows, I make those improvements. I also keep rough track of how long each update takes, so if I have 20 minutes before a meeting I can start and finish a task. This helps me on performance reviews because I can objectively say "I removed 1 FTE worth of report updating, creating ongoing cost savings".
The productivity gains are amazing. Over time, reports become fully automated. Logging improves, helping trace down errors. What used to be 8 hours per week of report updating becomes 20 minutes of validating data.
It's 10x easier to take a week off if your tasks are documented. I routinely write up a process and then make another team member do it, so I can find the weaknesses and fix them. This improves the bus factor and makes it easy for someone to cover for me if needed.
As our team grew, it was very common to publish a PR and then, right before merging, realize you had a conflict with someone else who'd merged to master ahead of you using the same migration sequence number.
I made a small change to the system, replacing the sequence numbers with Unix timestamps, and added some previously non-existent tests to cover the migration utility.
Unfortunately the subsequent PR took weeks to be approved by the team/eng leads because there was a lot of hand-wringing about this change. Once it was merged, though, we never thought about it again: it worked exactly as I'd hoped, and nobody ever had to make a final "fix migration name" commit again.
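For anyone curious, a minimal sketch of the idea (the function and file layout here are placeholders, not the actual migration utility):

    # Hypothetical sketch: name migrations by Unix timestamp instead of a
    # sequence number, so two branches can add migrations without colliding.
    import time
    import pathlib

    def new_migration(name: str, directory: str = "migrations") -> pathlib.Path:
        """Create an empty, timestamp-prefixed migration file (sketch only)."""
        stamp = int(time.time())  # e.g. 1700000000
        path = pathlib.Path(directory) / f"{stamp}_{name}.sql"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch()
        return path

    # Ordering is then just a numeric sort on the prefix, and two developers
    # only collide if they generate a migration in the same second.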
The organization I worked at had no direction. Low morale. Lots of complaining. No leadership from people in management positions.
A colleague and I tried some of the things mentioned by others below to build camaraderie, etc. Minimally effective.
Then, we started talking to colleagues.
"What's going well?" "What challenges are you facing?"
Listening alone, letting these people know their voices were heard by anyone, went a long way in building relationships, alignment, and getting things done.
Then we took what we were hearing, developed an initiative, pitched it to leadership. They rubber stamped it without really paying attention or asking questions.
We executed.
People were shocked. Their voices had been heard, and something had been done to address common concerns they had. I don't know how to measure or describe this impact, but it's the most significant thing I have ever accomplished.
Another example is really basic automation. You probably have people/teams around you that are not developers. They probably are spending time doing really basic repetitive tasks. One team I work with regularly would take a spreadsheet, then for each row create a folder and docs in Google Drive. I wrote them a Google Apps script that could do the task with a click. Not quite as impactful as the checklists, but each script like that saves someone a couple days a year.
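The original was a Google Apps Script; as a rough illustration of the same idea driven from Python with the Drive API instead (the credential file, the CSV export of the sheet, and the column name are all placeholders):

    # Rough Python sketch of the idea (the original was Google Apps Script).
    import csv
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",  # hypothetical credential file
        scopes=["https://www.googleapis.com/auth/drive"],
    )
    drive = build("drive", "v3", credentials=creds)

    with open("projects.csv", newline="") as f:  # hypothetical export of the sheet
        for row in csv.DictReader(f):
            # One folder per row...
            folder = drive.files().create(
                body={"name": row["Project"],
                      "mimeType": "application/vnd.google-apps.folder"},
                fields="id",
            ).execute()
            # ...and a blank Google Doc inside it.
            drive.files().create(
                body={"name": f"{row['Project']} notes",
                      "mimeType": "application/vnd.google-apps.document",
                      "parents": [folder["id"]]},
                fields="id",
            ).execute()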
So my advice would be to look for a broader area where you see a gap or failing, and then look for ways to address this that don't need a huge investment of people or resources; guerrilla activities FTW! If you can show some success, you can then pursue as appropriate/desired...
- Commit messages must include a ticket number (or a keyword like "hotfix"). This is enforced by a commit hook that's bundled with the project (a sketch of such a hook is below).
- Ticket templates that encourage clear tickets. A ticket should always explain why it must be done. It should also contain enough information for any team member to judge its priority, or start work on it.
- Any relevant discussion about a ticket should be in the ticket.
Just enforcing this makes a world of difference. It ties every bit of code to a justification. This way, if you decide to rewrite the project, you won't end up losing years of discussions and important lessons. You'll also have a much easier time prioritising and assigning tasks, since everyone knows what they mean.
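For illustration, a minimal commit-msg hook along those lines might look like the following (the ticket regex is an assumption, and this is a sketch rather than the actual bundled hook):

    #!/usr/bin/env python3
    # Sketch of a .git/hooks/commit-msg hook: reject commits whose message has
    # neither a ticket reference (e.g. ABC-123) nor the keyword "hotfix".
    import re
    import sys

    def main() -> int:
        message = open(sys.argv[1], encoding="utf-8").read()  # git passes the message file path
        if re.search(r"\b[A-Z]{2,}-\d+\b", message) or "hotfix" in message.lower():
            return 0
        print("Commit rejected: include a ticket number (e.g. ABC-123) or 'hotfix'.")
        return 1

    if __name__ == "__main__":
        sys.exit(main())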
Aside from that:
- My first pull request at a company is usually an update to the README file. It rarely matches how you actually use the project.
- I write "recipes" for common tasks (lint, deploy, test...). This way, you know that the CI system and every developer in the team performs those tasks in the same way. You can change the recipes, but they are always called with the same command. "scripts/clear-cache" is also easier to memorise than "docker-compose exec backend rm -rf /var/cache...".
- Add :party_parrot: to Slack
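As an example of a recipe, sketched here in Python although these are just as often plain shell scripts (the cache path below is a placeholder, since the real command is abbreviated above):

    #!/usr/bin/env python3
    # scripts/clear-cache -- one memorable entry point wrapping a longer command.
    import subprocess
    import sys

    def main() -> int:
        # Placeholder path; the real command in the project is longer.
        result = subprocess.run(
            ["docker-compose", "exec", "backend", "rm", "-rf", "/var/cache"],
            check=False,
        )
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())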
2. Remove any "shiny new toy". A perfect example would be my bank, which operates in ~15-20 countries: the interface all employees use isn't a shiny web UI with pretty animations to cover up a slow response time; it's built with ncurses. That speaks volumes.
3. Profile the code and, in doing so, find unnecessary bottlenecks (a quick sketch of this is below the list).
4. Never blindly reinvent the wheel. If a solution looks absurd but is highly used, chances are the solution is there to cover an edge case. Ask before you decide to "fix" it.
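For instance, a quick profiling pass with Python's built-in cProfile might look like this (the function under test is just a stand-in):

    # Run the suspect code path under cProfile and sort by cumulative time to
    # spot bottlenecks before deciding anything needs "fixing".
    import cProfile
    import pstats

    def suspect_code_path():  # stand-in for the real hot path
        return sum(i * i for i in range(100_000))

    profiler = cProfile.Profile()
    profiler.enable()
    suspect_code_path()
    profiler.disable()

    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)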
The book is: High Output Management by Andrew Grove.
It is a dated book but many of the procedures described are still valid today (obviously using real digital tools).
I highly recommend you to read it. Improving a company often means improving the way people in the company work and interact while also increasing the quality of life of these people.
I think many of the practices described in this book are about these very things.
Good luck improving your business and sorry for my English (I'm Italian).
Editing to include URLs: - Original site: www.devhub.com - Spin-off: www.rallymind.com
Cutting deployment time makes a huge impact because it makes everything go faster. You can iterate faster, which means getting bugs fixed faster, getting data to product managers faster, etc.
It also has an effect on reliability. The faster you can make changes, the faster you can fix errors (as long as you aren't introducing them faster too!).
It's always the first thing I focus on.
I dragged them, kicking and screaming, away from CVS into git for source control.
I did a LOT of documentation of their ancient and honorable :-) SQL schema.
I converted a lot of old legacy code to more modern languages.
I converted a lot of old legacy SQL embedded in their code to use prepared statements and stored procedures, and developed a tiny little framework to make it easy for other developers to do the same.
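Roughly the kind of conversion involved (sqlite3 is used here only to keep the sketch self-contained; it wasn't the actual stack):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    user_input = "alice"

    # Before: SQL built by string interpolation, embedded in the code (injection-prone).
    legacy = conn.execute(f"SELECT id FROM users WHERE name = '{user_input}'")

    # After: placeholder plus bound parameter -- the pattern the tiny framework wrapped.
    modern = conn.execute("SELECT id FROM users WHERE name = ?", (user_input,))
    print(legacy.fetchall(), modern.fetchall())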
I came in early one summer morning and washed the windows in the office. Seriously. They hadn't been washed for 15 years, and were disgusting.
I helped talk them into replacing their low end home brew bug tracking system.
I pushed, hard, to get them to gradually discontinue using FreeBSD in favor of standardized Linux cloud distros.
I developed a way for sales, dev, and ops to communicate to do capacity planning BEFORE bringing on new gigantic enterprise customers, not after.
I failed to get them to adopt CI/CD.
I retired.
My working principle: always work myself out of any job. Do all my work so somebody else can take it over.
Documenting pain points is a big one. Everyone mentions them, but they aren't always fixed or prioritized. Something that fixes those will work - like creating documentation, sigh.
People skills and politics, not even technical. :(
It's been very helpful for our client-facing folks who have to copy a lot of user data.
- Organized a professional development group to promote book clubs, meetups, etc within the company
- Helped marketing/sales folks update web pages to fix errors or "generic business language"
- Encouraged and led efforts to use off-the-shelf tools instead of homegrown internal tools
- Took meeting notes for all-hands meetings and published them internally
- Built relationships with non-technical staff so that I could later give them feedback about processes they control
These are some things I've done over 10 years at my company, so it's a long process!
For example, there are dashboards to track the progress of certain tech debt we are resolving, like replacing icons (oh god, that looks horrible on mobile [0]). On the process side of things, we have a lot of people reviewing code, and I built a dashboard where you can see the availability of reviewers and how many reviews they have done in the past few days. This might help distribute load and also allow reviewers to see when they are doing too much [1].
When we were hiring more last year, we actively looked into improving our hiring experience. That was a group effort, but training other folks to do technical interviews was very insightful and contributed back to our process.
Tech stack wise, there are a few skeletons in the closet. Generally, once you start digging, you find a lot of weird things. We work together to define metrics in order to assess which improvements have the highest impact. It's always important to take little steps and verify that you are going in the right direction. For example, we focused on decreasing the JavaScript that loads on every page, until we realized CSS cruft was blocking rendering more than the JavaScript. Then we switched focus to that. Now, having identified what to fix, we focus on the JS again.
[0]: https://leipert-projects.gitlab.io/is-gitlab-pretty-yet/icon...
[1]: https://leipert-projects.gitlab.io/maintainer-workload/
I love tech communities, and started internal and external meetup groups, found ways to stream online, and managed to get the tech blog running. All things that I love doing: helping tech people more brilliant than I am become successful.
Keep doing it, be consistent.
After a while, it will be seen, recognized and you will be followed :). The cool thing is that because it's something you like, it doesn't sound like work!
Good luck and let us know what you've found in a while!
I would describe the tech stack I've inherited charitably as "nearly pure opportunity."
It’s so much healthier to say “yes, I trust that you need what you say you need but I need more information to be able to help you.”
For me it usually goes with:
1. Earn people's trust, usually by executing a big project. For me it went by executing small projects very well and getting bigger ones, until I was implementing a very big and critical project with a team.
2. Once you have executed (1), people usually trust or know you. Then you can start talking with people across the entire organization, asking what hurts the most, and make a list of things that are interesting to tackle and who the stakeholders would be.
3. Choose one from the list in (2) and do it (or try to sell the need for it)! More often than not it involves a lot of conversation, empathy, teaching... for example, if you want to implement CI/CD then be ready to:
a. Sell the value of tests
b. Teach best practices for testing
c. Have a skeleton project that people can easily copy
d. Have some tools to easily set up CI/CD for a project following the structure of (c) - a rough sketch of such a tool follows below
e. Adjust notifications and workflows
And... you have CI/CD for a group. Now do the same for all other groups. Now your company has CI/CD, yay! Pick another item from (2).
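Purely as an illustration of (d), a hypothetical scaffolding script could be as small as this (the filename and template body are made up, not any particular CI system's real config):

    # Hypothetical sketch: drop a templated CI config into a project that
    # follows the skeleton from (c).
    import pathlib
    import sys

    TEMPLATE = "stages:\n  - test\ntest:\n  script:\n    - ./scripts/test\n"

    def scaffold(project_dir: str) -> None:
        target = pathlib.Path(project_dir) / ".ci.yml"  # placeholder filename
        if target.exists():
            print(f"{target} already exists, leaving it alone")
            return
        target.write_text(TEMPLATE)
        print(f"wrote {target}")

    if __name__ == "__main__":
        scaffold(sys.argv[1] if len(sys.argv) > 1 else ".")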
I also came up with an architectural improvement not used anywhere else in the company that allowed us to close a security vulnerability related to a file copy process. I think the tech lead and the manager who owned the vulnerability were happy. Nobody else cared.
I think these types of improvements are fairly common and add up to great things, especially when multiple people have this mindset. Unfortunately in my experience most people don't have that mindset because it's not part of the culture where I work. Don't hold your breath for recognition or appreciation. The only way to achieve that is by improving something for the 'business people'.
While not all hackathons have produced production-level ideas, many have. One hackathon resulted in the initial iteration of the biggest product shift in our company. One that I'm incredibly proud of.
This might not be what you were asking for, but I know for a fact that doing this literally had the most impact on the company.
Many of those things have since been superseded in the intervening 15 years, but it still pleases me to walk by the NOC and see tools of mine that I wrote 10-15 years ago still running (now maintained by others, but still running).
One of the most useful and longest-lived tools is one of the simplest (I literally built the essence of it in 4 hours, 6-10 PM one evening). It graphs a timeline, 1 second per pixel in X, logarithmic dollar value in Y, plot every order. That was the first version.
It's since evolved to have a bunch of per-minute summary data on the screen (AOV, CR%, errors/info/warning/404s, total bookings, paid vs unpaid orders, database connections in use, idle connections available, long-running transactions, long-running pages, etc per minute), records to a database, so you can "playback" outages or go exploring, etc. It's not the best tool for deep digging, but when you want a fast-reacting, "quick check" that the entire site is working post-release or post-outage, it's unambiguous that people are getting all the way through checkout (or not). You might be surprised what you can learn from such a simplistic tool.
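A rough stand-in for that first version, with matplotlib and fake data (the real tool's data source and rendering details aren't shown):

    # One point per order: time on X, dollar value on a log scale in Y.
    import random
    import time
    import matplotlib.pyplot as plt

    now = time.time()
    orders = [(now - random.uniform(0, 3600), random.uniform(5, 5000))
              for _ in range(500)]  # fake (timestamp, dollar value) pairs

    plt.scatter([t for t, _ in orders], [v for _, v in orders], s=4)
    plt.yscale("log")  # logarithmic dollar value
    plt.xlabel("time (unix seconds)")
    plt.ylabel("order value ($, log scale)")
    plt.title("every order, as it happens")
    plt.show()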
From the beginning, rsync.net offered standard ftp service along with ssh tools. It pleased me to offer an old fashioned standard that "just worked" and allowed some weird corner cases to function for people.
Simultaneously, I wanted a "clean" nmap. I wanted to see port 22 and nothing else. So, we disabled ftp (and with it, inetd on all of our FreeBSD storage arrays) and reduced our attack surface as well as the number of processes we need to run and audit.
We made this change about 18 months ago...
For example, let's say your company is a SaaS business that is number 4 or 5 in its market and is looking to move up to number 1 or 2 position, then you may find that the key strategy to get there (just spitballing here) is to get to feature parity with the market leaders. If so, then you would need to launch more features faster.
How do you launch features faster? Well, it could be improving the code quality so there is less rework, building better tools, hiring more devs, etc. So let's take one of those, e.g. improving code quality. How do you improve code quality? Well, maybe hire better devs, train the devs you have, or improve your QA processes? If your devs are all rockstars, then it must be QA that sucks. So let's look into how to improve QA... (etc)
Keep working down this logical path and you will arrive at the most impactful changes you can make in your organization.
Source control discipline is probably the single most valuable thing I have worked to improve. We still have a long way to go, but we went from a place where a team of roughly twenty developers with many active projects would constantly have to ask "Who has the latest code?" to a place that practices disciplined code management.
When I first started, my jaw almost hit the floor. I couldn't believe that no one actually knew how the source control worked (at the time TFVC). It is hard for me to explain just how bad things were. There were instances of features completely disappearing from production because someone never checked in the code.
Check-ins were at best months apart, and production versions of code were scattered across different people's PCs.
After a long, long time we have most developers on board with good practices. There are still some who refuse to follow industry best practices, but our team, and the products we create have a much higher quality.
C2: I was the "living documentation" for the entire system. There were plenty of docs, gigabytes of them in fact. Too much for most developers to know by heart. My job was to actually read it all and answer questions. I'd bring up things like how this portion is not optimized according to this specification here, or that portion is missing error handling N, which happens frequently, or this other server fallback is not being used. The system has millions of daily active users, so every time an API is called twice, that costs serious money. I'd also spot things like unstable hacks that were hidden in the code by some old programmers.
C3: Whole code was a mess. I spent half my time refactoring it, which also meant a few late nights sometimes, and chopping out thousands of lines of code a week. It cut down maintenance and bugs drastically. Where one thing would require 4 hours to do, it then required 10 minutes.
This can be code, build systems (or lack thereof), missing automation, or even lack of clarity between various stakeholders.
The data was mostly a mess, as the bot had interfaced with Flowdock at one point and later Slack, but even then Slack had undergone a few changes to the message schema. All up, there were about 4 distinct schemas.
To be clear, what was stored was the representation of an incident that also included data about the person who commanded the incident. Generally the commander data was just a copy of their profile data, and that was the thing that mostly changed over time.
Anyway, one of my first tasks when I joined was to throw together a small web API to expose that data so we could generate reports and what not.
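For a sense of the shape of such a thing, a minimal sketch with Flask and made-up field names (the actual stack and schemas aren't described here):

    # Normalise the handful of historical message schemas into one shape and
    # expose it over HTTP.
    from flask import Flask, jsonify

    app = Flask(__name__)

    RAW_INCIDENTS = [  # stand-in for the bot's store
        {"commander": {"name": "Sam"}, "started": 1600000000, "sev": 2},
        {"commander_name": "Alex", "start_ts": 1600100000, "severity": 3},
    ]

    def normalise(record: dict) -> dict:
        """Collapse the different historical schemas into one shape."""
        commander = (record.get("commander") or {}).get("name") or record.get("commander_name")
        return {
            "commander": commander,
            "started_at": record.get("started") or record.get("start_ts"),
            "severity": record.get("sev") or record.get("severity"),
        }

    @app.route("/incidents")
    def incidents():
        return jsonify([normalise(r) for r in RAW_INCIDENTS])

    if __name__ == "__main__":
        app.run()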
Perhaps the most important thing to note is that nothing I did actually drove any particular change. I just got lucky because when our new CTO joined and started asking product teams how many incidents they often had, product teams started asking our group for reports.
Seeing how much time my manager was spending generating these things, I took the next logical step of throwing together a basic UI (filtering, sorting, exporting to CSV) so that it inverted the overhead onto the teams themselves rather than it being bottlenecked by one or two people doing reports.
Anyway, there's more to it than that, and funnily enough no one in our wider team was really aware of what I put together. Having said that, it was a fun experience and kinda nice getting a little bit of attention for a while. It was more or less a side project, so I got to field customer requests (and treat users like customers), balance priorities, and so on.
I guess the theme here would be a mix of visibility and automating toil?
---
I've often thought about doing a New Old Thing style postmortem on that bot. It was rewritten a little while ago now, but the first version, using the default hubot adaptor, made me really appreciate Redis. It was doing some highly questionable stuff under the hood but, as a testament to Redis, it never missed a beat.
My company is large, and coordinating all this data was typically done over email or Jira.
I designed and built a test data API service with a web UI, now in heavy use across many departments and three business units.
The best signal is the thing most people are frustrated with but never seem to find time to get to.
As you move up in your career you will be faced with more difficult problems and challenges, especially those with no clear answers or path to follow.
A not so long time ago, I convinced developers to use the automated QA system as part of their development process. The QA folks had automated almost everything, so it was much faster to test by submitting a QA job. Moreover, QA owned the metrics around performance goals, so developing to the test meant fewer surprises.