Future workforce: Xero finds a gap in tech hiring

You may have heard us banging on about the skills shortage in tech. And you are probably aware that for developers it’s a job hunter’s market. But when it comes to filling senior roles in the industry it’s even more competitive: plenty of companies are chasing the same pool of experienced developers.

James O’Reilly, Talent Acquisition Lead at Xero, found a simple and under-utilised way to get ahead of his competition and support women in tech at the same time. A search for full-time developer jobs in Melbourne returns approximately 3,000 results on SEEK, but switch that search to part-time and you can count the results on one hand. Based on this, Xero decided to hire three part-time developers for their Melbourne product teams.

“Our employer brand resonates well with candidates. A compelling employee value proposition is really helpful to attract candidates but ultimately it’s still a really competitive market for top Developer talent, particularly full-time positions. We recognised part-time developers were an untapped market, and couldn’t be happier with the results of our recruitment campaign in this area.”

One of the staff members hired from this initiative is Sarah Loh, a Developer with two children under five.

“I had been looking for a part-time role since having my second child, but technical part-time jobs are very rare. I was tempted to apply for part-time non-technical jobs, but I ended up getting a full time IT consultant role, developing software for clients. I found it really difficult to juggle work and family at that time with the non-flexible hours. So that led me to continue my search for a part-time job. That’s when I found and applied for Xero.”

James (a dad of three young girls himself) says Sarah’s situation is very common:

“Of the part-time candidates we met, many had to commence with their employer in a full time capacity and then negotiate for flexibility down the track. There is a great pool of top quality developers that want to work part-time for reasons such as childcare, side projects, additional studies etc. but very few businesses are offering positions that cater for this talent up front.”

Sarah says that in terms of the job and lifestyle, her role at Xero is beyond her highest expectations: 

“Working part time allows me to spend more time with my young kids without having to choose between a career and family. I’m so impressed at the effort the people here, especially the People Experience team, put in just to make us feel happy and like we belong.”

So many factors are reshaping our workforce. Technology makes working from anywhere far more doable, while also creating new jobs and making others obsolete. It’s almost crazy that more companies are not looking to hire part-time developers.

Come along to our next Sydney event to discuss this and all the latest tech industry trends.

At Cracking the Tech Career we provide you with everything you need to land your dream tech role (yes, that will probably have to be full-time for now).

See you there!!

Written by Penny Ivison
Social Media & Content Lead @ Code Like a Girl



pfctdayelise · 6 days ago · Melbourne, Australia:
Fully agree, offering part-time roles is such an easy hack to get great candidates.

Four short links: 20 July 2017

SQL Equivalence, Streaming Royalties, Open Source Publishing, and Serial Entitlement

  1. Introducing Cosette -- a SQL solver for automatically checking semantic equivalences of SQL queries. With Cosette, one can easily verify the correctness of SQL rewrite rules, find errors in buggy SQL rewrites, build auto-graders for SQL assignments, develop SQL optimizers, bust “fake SQLs,” etc. Open source, from the University of Washington.
  2. Streaming Services Royalty Rates Compared (Information is Beautiful) -- the lesson is that it's more profitable to work for a streaming service than to be an artist hosted on it.
  3. Editoria -- open source web-based, end-to-end, authoring, editing, and workflow tool that presses and library publishers can leverage to create modern, format-flexible, standards-compliant, book-length works. Funded by the Mellon Foundation, Editoria is a project of the University of California Press and the California Digital Library.
  4. The Al Capone Theory of Sexual Harassment (Val Aurora) -- The U.S. government recognized a pattern in the Al Capone case: smuggling goods was a crime often paired with failing to pay taxes on the proceeds of the smuggling. We noticed a similar pattern in reports of sexual harassment and assault: often people who engage in sexually predatory behavior also faked expense reports, plagiarized writing, or stole credit for other people’s work.

Continue reading Four short links: 20 July 2017.

LinkArchiver, a new bot to back up tweeted links

Twitter users who want to ensure that the Wayback Machine has stored a copy of the pages they link to can now sign up with @LinkArchiver to make it happen automatically. @LinkArchiver is the first project I’ve worked on in my 12-week stay at Recurse Center, where I’m learning to be a better programmer.

The idea for @LinkArchiver was suggested by my friend Jacob. I liked it because it was useful, relatively simple, and combined things I knew (Python wrappers for the Twitter API) with things I didn’t (event-based programming, making a process run constantly in the background, and more). I did not expect it to get as enthusiastic a reaction as it has, but that’s also nice.

The entire bot is one short Python script that uses the Twython library to listen to the Twitter user stream API. This is the first of my Twitter bots that is at all “interactive”—every previous bot used the REST APIs to post, but could not engage with anything in its timeline or with tweets directed at it.

That change meant I had to use a slightly different architecture than I’ve used before. Each of my previous bots was a small, self-contained script that produced a tweet or two each time it ran. That design means I can trigger them with a cron job that runs at regular intervals. By contrast, @LinkArchiver runs all the time, listening to its timeline and acting when it needs to. It doesn’t have much interactive behavior—when you tweet at it directly, it can reply with a Wayback link, but that’s it—but learning this kind of structure will enable me to build much more interactive bots in the future.
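For the curious, the always-on shape looks roughly like this. This is a minimal sketch, not the bot’s actual source: it assumes Twython’s TwythonStreamer class and the Wayback Machine’s /save endpoint, and the keys and user-agent string are placeholders.

    import requests
    from twython import TwythonStreamer

    WAYBACK_SAVE = "https://web.archive.org/save/"
    # Placeholder user-agent; the real bot sends its own custom one.
    HEADERS = {"User-Agent": "example-linkarchiver (you@example.com)"}

    class LinkListener(TwythonStreamer):
        def on_success(self, data):
            # Tweets carry their expanded URLs in the entities field.
            for url in data.get("entities", {}).get("urls", []):
                target = url.get("expanded_url")
                if target:
                    # Asking the Wayback Machine to store a page is one GET.
                    requests.get(WAYBACK_SAVE + target, headers=HEADERS,
                                 timeout=30)

        def on_error(self, status_code, data):
            self.disconnect()  # exit; the service manager restarts us

    stream = LinkListener("APP_KEY", "APP_SECRET",
                          "OAUTH_TOKEN", "OAUTH_TOKEN_SECRET")
    stream.user()  # listen to the authenticated user's stream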

It also required that I figure out how to “daemonize” the script, so that it could run in the background when I wasn’t connected and restart in case it crashed (or when I restart the computer). I found this aspect surprisingly difficult; it seems like a really basic need, but the documentation for how to do this was not especially easy to find. I host my bots on a Digital Ocean box running Ubuntu, so this script is running as a systemd service. The Digital Ocean documentation and this Reddit tutorial were both very helpful for my figuring it out.

Since launching the bot, I’ve gotten in touch with the folks at the Wayback Machine, and at their request added a custom user-agent. I was worried that the bot would get on their nerves, but they seem to really appreciate it—what a relief. After its first four days online, it’s tracking some 3,400 users and has sent about 25,000 links to the Internet Archive.

I bit my coworker

A reader writes:

So I bit a coworker yesterday. Obviously, I’m mortified.

I work in an incredibly dysfunctional office. The tone is set by our office manager. He’s in his fifties, has always worked in an office setting, and is difficult. Things are right if it’s in his favor and wrong if anyone else does it. He once cursed at me and called me a child for asking him not to say I’m prettier if I smile. He then didn’t speak to me for a year — which was a relief.

Well, yesterday, I had a meeting with a coworker. (If it makes a difference, the office manager and I are on the same level, as is the person I was meeting with.) My hands were full of paperwork and a full mug. When I got to the coworker’s office, the office manager was in the doorway, braced with one arm stretched across the opening. I stopped, said, “Excuse me, I have a meeting.” Aaaaaand he refused to move. He replied that he didn’t give a s*** and it wasn’t his problem. The coworker grimaced but said nothing, as is usual for our office.

Normally, I’d sit and argue. Rarely, I’m able to convince him to move. In those cases, I’d put down my things in the office and wait for the colleague and him to finish speaking. They don’t work together or like each other, but they angry-gossip frequently.

This time — this time I bit him. I don’t know! His arm was in front of my face, my hands were full, I know from experience he almost never moves, and I’m reaaaaally busy right now.

In any case, I bit him, over his sleeve, pulled back, and we just sort of stared at each other for a second, because … wow. He finally got his feet under him, figuratively, and retaliated by stomping on my feet (I was in ballet flats and he had heeled dress shoes) and shoving me. As I’m regaining my balance and trying to save my feet, I dropped my mug, which shattered. At that point, he stopped and bent to pick up the shards. I ducked into the office and shut and locked the door. Not helping him pick up the shards angered him more.

I’ve since apologized. He accepted gracefully, while admitting no fault on his part.

This office is bad. It’s warping my perceptions of normal behavior. I know there is no one above us who would address this issue with him and short of quitting, I have to deal with him every day. What is the right way to deal with difficult coworkers in these situations? Just keep arguing? Walk away and reschedule the meeting? There are no magic words to deal with impossible people, but how do I reason with myself mentally to stop myself from going down this road again?

Thank you for considering my question. I suppose most everything is solved by “walking away,” but I feel helpless and clearly spiral a bit into wild behavior when at a loss…

Ooof.

I think the thing to do here is to use this incident as a way of seeing really clearly that this office is messing you up. It’s destroying your sense of norms, it’s making you act in ways that (I assume) you would never normally act, and it’s turning you into someone who you don’t want to be. (Again, I’m assuming, but it feels like a safe bet that you don’t want to be someone who bites coworkers as a means of conflict resolution.)

It’s also going to start messing with your professional reputation, if it hasn’t already. It’s going to be hard for people to recommend you for other jobs if they know you bit a coworker.

So, three things:

1. You need to start actively job searching right away. Not like sending out a resume every few weeks when the mood strikes, but seriously working to get yourself out of this situation as soon as you can.

2. You should apologize to the coworker who saw the incident. It’s all kinds of messed up that she didn’t say anything at the time or afterwards, but that’s probably a further illustration of how out of whack the norms in your office are. Regardless, though, she did see it, and you don’t want her to think that you think it was okay. So talk to her and explain that you’re mortified and that you know it wasn’t okay.

3. For whatever amount of time you have to continue working there, it’s crucial to keep in the forefront of your mind that you are not somewhere that supports normal behavior. You should expect that when you deal with the office manager, he will be rude, unreasonable, and hostile. You should go into your interactions with him expecting that, so that when it happens, you’re not surprised by it. You want your reaction to be an internal eye roll, not outrage. You should also be prepared to have to alter your plans when he obstructs you. So for example, when he blocked your path to your coworker’s office, ideally you would have said, “Jane, I can’t get past Fergus, but let me know when you’re ready to meet” and then left.

It might help to think of yourself as being in a foreign country with completely different norms than the ones that feel obvious to you. Hell, pretend you’re on another planet where the inhabitants have their own, seemingly bizarre rules for interacting. If this were happening during your interplanetary trip to Neptune, you probably wouldn’t go into a rage and bite an alien — you’d more easily see it as their own particular culture. You might also try very hard to get off Neptune very quickly, and that would be reasonable. But while you were there, you’d understand that they were playing by different rules.

But really, this is as clear a sign as anyone will ever get that you’ve been there too long and it’s time to go.

I bit my coworker was originally published by Alison Green on Ask a Manager.

pfctdayelise · 14 days ago · Melbourne, Australia:
o.0

Well this is going to be in the Worst Workplaces 2017 list

iaravps · 14 days ago · Rio de Janeiro, Brasil:
Talk about a toxic workplace o.O

How do you cut a monolith in half?

It depends.

The problem with distributed systems is that no matter what the question is, the answer is inevitably ‘It Depends’.

When you cut a larger service apart, where you cut depends on latency, resources, and access to state, but it also depends on error handling, availability, and recovery processes. It depends, but you probably don’t want to depend on a message broker.

Using a message broker to distribute work is like a cross between a load balancer and a database, with the disadvantages of both and the advantages of neither.

Message brokers, or persistent queues accessed by publish-subscribe, are a popular way to pull components apart over a network. They’re popular because they often have a low setup cost, and provide easy service discovery, but they can come at a high operational cost, depending where you put them in your systems.

In practice, a message broker is a service that transforms network errors and machine failures into filled disks. Then you add more disks. The advantage of publish-subscribe is that it isolates components from each other, but the problem is usually gluing them together.


For short-lived tasks, you want a load balancer

For short-lived tasks, publish-subscribe is a convenient way to build a system quickly, but you inevitably end up implementing a new protocol atop. You have publish-subscribe, but you really want request-response. If you want something computed, you’ll probably want to know the result.

Starting with publish-subscribe makes work assignment easy: jobs get added to the queue, workers take turns to remove them. Unfortunately, it makes finding out what happened quite hard, and you’ll need to add another queue to send a result back.

Once you can handle success, it is time to handle the errors. The first step is often adding code to retry the request a few times. After you DDoS your system, you put a call to sleep(). After you slowly DDoS your system, each retry waits twice as long as the previous.

(Aside: Accidental synchronisation is still a problem, as waiting to retry doesn’t prevent a lot of things happening at once.)
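A sketch of that retry progression, with jitter added so that clients which fail together don’t all retry together. The function being retried is a stand-in:

    import random
    import time

    def call_with_backoff(request, attempts=5, base=0.5, cap=30.0):
        """Retry `request`, waiting twice as long after each failure,
        jittered so simultaneous clients don't retry in lockstep."""
        for attempt in range(attempts):
            try:
                return request()
            except ConnectionError:
                if attempt == attempts - 1:
                    raise  # out of retries: let the caller take over
                delay = min(cap, base * (2 ** attempt))
                time.sleep(random.uniform(0, delay))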

As workers fail to keep up, clients give up and retry work, but the earlier request is still waiting to be processed. The solution is to move some of the queue back to clients, asking them to hold onto work until work has been accepted: back-pressure, or acknowledgements.
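Within a single process, Python’s bounded queue shows the idea: `put` blocks when the queue is full (back-pressure), and `task_done` fires only once the work is finished (acknowledgement). A toy sketch:

    import queue
    import threading
    import time

    jobs = queue.Queue(maxsize=8)  # bounded: the queue cannot grow forever

    def worker():
        while True:
            item = jobs.get()
            time.sleep(0.1)    # stand-in for real work
            jobs.task_done()   # acknowledge only after the work is complete

    threading.Thread(target=worker, daemon=True).start()

    # If the worker falls behind, the producer blocks here (back-pressure)
    # rather than piling up requests it will later give up on and retry.
    for n in range(100):
        jobs.put(n, block=True)
    jobs.join()  # wait until every item has been acknowledged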

Although the components interact via publish-subscribe, we’ve created a request-response protocol atop. Now the message broker is really only doing two useful things: service discovery, and load balancing. It is also doing two not-so-useful things: enqueuing requests, and persisting them.

For short-lived tasks, the persistence is unnecessary: the client sticks around for as long as the work needs to be done, and handles recovery. The queuing isn’t that necessary either.

Queues inevitably run in two states: full, or empty. If your queue is running full, you haven’t pushed enough work to the edges, and if it is running empty, it’s working as a slow load balancer.

A mostly empty queue is still first-come-first-served, serving as a point of contention for requests. A broker often does nothing but wait for workers to poll for new messages. If your queue is meant to run empty, why wait to forward on a request?

(Aside: Something like random load balancing will work, but join-idle-queue is well worth your time investigating)
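The trick in join-idle-queue is that the only central queue holds idle workers, not requests, so a request is forwarded the moment someone is free. A toy, single-process rendition (the real algorithm is distributed):

    import queue
    import threading
    import time

    idle = queue.Queue()  # the one shared queue holds workers, not requests

    class Worker(threading.Thread):
        def __init__(self):
            super().__init__(daemon=True)
            self.inbox = queue.Queue()

        def run(self):
            while True:
                idle.put(self)            # announce "I'm free" before waiting
                request = self.inbox.get()
                print(self.name, "handled", request)

    for _ in range(4):
        Worker().start()

    def dispatch(request):
        # No polling, no contention on a shared request queue: pop an idle
        # worker (blocking only if everyone is busy) and hand the work over.
        idle.get().inbox.put(request)

    for r in range(10):
        dispatch(r)
    time.sleep(1)  # give the daemon threads a moment to finish printing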

For distributing short-lived tasks, you can use a message broker, but you’ll be building a load balancer, along with an ad-hoc RPC system, with extra latency.


For long-lived tasks, you’ll need a database

A load balancer with service discovery won’t help you with long-running tasks, work that outlives the client, or managing throughput. You’ll want persistence, but not in your message broker. For long-lived tasks, you’ll want a database instead.

Although the persistence and queueing were obstacles for short-lived tasks, the disadvantages are less obvious for long-lived tasks, but similar things can go wrong.

If you care about the result of a task, you’ll want to store that it is needed somewhere other than in the persistent queue. If the task is run but fails midway, something will have to take responsibility for it, and the broker will have forgotten. This is why you want a database.

Duplicates in a queue often cause more headaches, as long-lived tasks have more opportunities to overlap. Although we’re using the broker to distribute work, we’re also using it implicitly as a mutex. To stop work from overlapping, you implement a lock atop. After it breaks a couple of times, you replace it with leases, adding timeouts.

(Note: This is not why you want a database; using transactions for long-running tasks is suffering. Long-running processes are best modelled as state machines.)
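A lease is just a lock with an expiry, so a crashed worker can’t hold a task forever. A minimal in-memory sketch; in practice the table lives in the database and each check-and-set happens in one transaction:

    import time
    import uuid

    leases = {}  # task_id -> {"token": ..., "expires": ...}

    def acquire(task_id, ttl=60):
        """Claim task_id for ttl seconds; returns a token, or None."""
        now = time.time()
        held = leases.get(task_id)
        if held and held["expires"] > now:
            return None  # someone else holds a live lease
        token = str(uuid.uuid4())
        leases[task_id] = {"token": token, "expires": now + ttl}
        return token

    def renew(task_id, token, ttl=60):
        """Heartbeat: only the current holder may extend the lease."""
        held = leases.get(task_id)
        if held and held["token"] == token:
            held["expires"] = time.time() + ttl
            return True
        return False  # lease expired or taken over: stop working on the task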

When the database becomes the primary source of truth, you can handle a broker going offline, or a broker losing the contents of a queue, by backfilling from the database. As a result, you don’t need to directly enqueue work with the broker, but mark it as required in the database, and wait for something else to handle it.

Assuming that something else isn’t a human who has been paged.

A message pump can scan the database periodically and send work requests to the broker. Enqueuing work in batches can be an effective way of making an expensive database call survivable. The pump responsible for enqueuing the work can also track if it has completed, and so handle recovery or retries too.
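The pump itself can be a very small loop. A sketch, where `db` and `broker` stand in for your own storage and transport (none of these method names are real APIs):

    import time

    def pump(db, broker, batch_size=100, interval=5.0):
        while True:
            # The database stays the source of truth for what is required.
            rows = db.fetch_pending(limit=batch_size)  # hypothetical call
            if rows:
                broker.publish_batch(rows)   # one survivable bulk enqueue
                db.mark_enqueued(rows)       # so nothing is enqueued twice
            else:
                time.sleep(interval)  # only fill the queue when it runs dry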

Backlog is still a problem, so you’ll want to use back-pressure to keep the queue fairly empty, and only fill from the database when needed. Although a broker can handle temporary overload, back-pressure should mean it never has to.

At this point the message broker is providing two things: service discovery, and work assignment, but what you really need is a scheduler. A scheduler is what scans a database, works out which jobs need to run, and often where to run them too. A scheduler is what takes responsibility for handling errors.

(Aside: Writing a scheduler is hard. It is much easier to have 1000 while loops waiting for the right time, than one while loop waiting for which of the 1000 is first. A scheduler can track when it last ran something, but the work can’t rely on that being the last time it ran. Idempotency isn’t just your friend, it is your saviour.)
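For flavour, the “one while loop” version as a toy: keep (next_run, job) entries in a heap and sleep until the earliest is due. Everything hard about a real scheduler (persistence, distribution, recovery) is deliberately missing:

    import heapq
    import itertools
    import time

    def run(jobs):  # jobs: list of (period_in_seconds, callable) pairs
        tie = itertools.count()  # tiebreaker: never compare the callables
        heap = [(time.time() + p, next(tie), p, fn) for p, fn in jobs]
        heapq.heapify(heap)
        while heap:  # runs forever; Ctrl-C to stop
            due, _, period, fn = heapq.heappop(heap)
            time.sleep(max(0.0, due - time.time()))
            fn()  # must be idempotent: a crash and restart means reruns
            heapq.heappush(heap, (time.time() + period, next(tie), period, fn))

    run([(2.0, lambda: print("every two seconds")),
         (5.0, lambda: print("every five seconds"))])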

You can use a message broker for long-lived tasks, but you’ll be building a lock manager, a database, and a scheduler, along with yet another home-brew request-response system.


Publish-Subscribe is about isolating components

The problem with running tasks with publish-subscribe is that you really want request-response. The problem with using queues to assign work is that you don’t want to wait for a worker to ask.

The problem with relying on a persistent queue for recovery, is that recovery must get handled elsewhere, and the problem with brokers is nothing else makes service discovery so trivial.

Message brokers can be misused, but that isn’t to say they have no use. Brokers work well when you need to cross system boundaries.

Although you want to keep queues empty between components, it is convenient to have a buffer at the edges of your system, to hide some failures from external clients. When you handle external faults at the edges, you free the insides from handling them. The inside of your system can focus on handling internal problems, of which there are many.

A broker can be used to buffer work at the edges, but it can also be used as an optimisation, to kick off work a little earlier than planned. A broker can pass on a notification that data has been changed, and the system can fetch data through another API.

(Aside: If you use a broker to speed up a process, the system will grow to rely on it for performance. People use caches to speed up database calls, but there are many systems that simply do not work fast enough until the cache is warmed up, filled with data. Although you are not relying on the message broker for reliability, relying on it for performance is just as treacherous.)

Sometimes you want a load balancer, sometimes you’ll need a database, but sometimes a message broker will be a good fit.

Although persistence can’t handle many errors, it is convenient if you need to restart with new code or settings, without data loss. Sometimes the error handling offered is just right.

Although a persistent queue offers some protection against failure, it can’t take responsibility for when things go wrong halfway through a task. To be able to recover from failure you have to stop hiding it, you must add acknowledgements, back-pressure, error handling, to get back to a working system.

A persistent message queue is not bad in itself, but relying on it for recovery, and by extension, correct behaviour, is fraught with peril.


Systems grow by pushing responsibilities to the edges

Performance isn’t easy either. You don’t want queues, or persistence in the central or underlying layers of your system. You want them at the edges.

“It’s slow” is the hardest problem to debug, and often the reason is that something is stuck in a queue. For long and short-lived tasks alike, we used back-pressure to keep the queue empty, to reduce latency.

When you have several queues between you and the worker, it becomes even more important to keep the queue out of the centre of the network. We’ve spent decades on TCP congestion control to avoid it.

If you’re curious, the history of TCP congestion control makes for interesting reading. Although the ends of a TCP connection were responsible for failure and retries, the routers were responsible for congestion: drop things when there is too much.

The problem is that it worked until the network was saturated, and similar to backlog in queues, when it broke, errors cascaded. The solution was similar: back-pressure. Similar to sleeping twice as long on errors, TCP sends half as many packets, before gradually increasing the amount as things improve.

Back-pressure is about pushing work to the edges, letting the ends of the conversation find stability, rather than trying to optimise all of the links in-between in isolation. Congestion control is about using back-pressure to keep the queues in-between as empty as possible, to keep latency down, and to increase throughput by avoiding the need to drop packets.

Pushing work to the edges is how your system scales. We have spent a lot of time and a considerable amount of money on IP-Multicast, but nothing has been as effective as BitTorrent. Instead of relying on smart routers to work out how to broadcast, we rely on smart clients to talk to each other.

Pushing recovery to the outer layers is how your system handles failure. In the earlier examples, we needed to get the client, or the scheduler to handle the lifecycle of a task, as it outlived the time on the queue.

Error recovery in the lower layers of a system is an optimisation, and you can’t push work to the centre of a network and scale. This is the end-to-end principle, and it is one of the most important ideas in system design.

The end-to-end principle is why you can restart your home router, when it crashes, without it having to replay all of the websites you wanted to visit before letting you ask for a new page. The browser (and your computer) is responsible for recovery, not the computers in between.

This isn’t a new idea, and Erlang/OTP owes a lot to it. OTP organises a running program into a supervision tree. Each process will often have one process above it, restarting it on failure, and above that, another supervisor to do the same.

(Aside: Pipelines aren’t incompatible with process supervision; one way is for each part to spawn the program that reads its output. A failure down the chain can propagate back up to be handled correctly.)

Although each program will handle some errors, the top levels of the supervision tree handle larger faults with restarts. Similarly, it’s nice if your webpage can recover from a fault, but inevitably someone will have to hit refresh.

The end-to-end principle is realising that no matter how many exceptions you handle deep down inside your program, some will leak out, and something at the outer layer has to take responsibility.

Although sometimes taking responsibility is writing things to an audit log, and message brokers are pretty good at that.


Aside: But what about replicated logs?

“How do I subscribe to the topic on the message broker?”

“It’s not a message broker, it’s a replicated log”

“Ok, How do I subscribe to the replicated log”

From ‘I believe I did, Bob’, jrecursive

Although a replicated log is often confused with a message broker, they aren’t immune from handling failure. Although it’s good the components are isolated from each other, they still have to be integrated into the system at large. Both offer a one way stream for sharing, both offer publish-subscribe like interfaces, but the intent is wildly different.

A replicated log is often about auditing, or recovery: having a central point of truth for decisions. Sometimes a replicated log is about building a pipeline with fan-in (aggregating data), or fan-out (broadcasting data), but always building a system where data flows in one direction.

The easiest way to see the difference between a replicated log and a message broker is to ask an engineer to draw a diagram of how the pieces connect.

If the diagram looks like a one-way system, it’s a replicated log. If almost every component talks to it, it’s a message broker. If you can draw a flow-chart, it’s a replicated log. If you take all the arrows away and you’re left with a venn diagram of ‘things that talk to each other’, it’s a message broker.

Be warned: A distributed system is something you can draw on a whiteboard pretty quickly, but it’ll take hours to explain how all the pieces interact.


You cut a monolith with a protocol

How you cut a monolith is often more about how you are cutting up responsibility within a team, than cutting it into components. It really does depend, and often more on the social aspects than the technical ones, but you are still responsible for the protocol you create.

Distributed systems are messy because of how the pieces interact over time, rather than which pieces are interacting. The complexity of a distributed system does not come from having hundreds of machines, but hundreds of ways for them to interact. A protocol must take into account performance, safety, stability, availability, and most importantly, error handling.

When we talk about distributed systems, we are talking about power structures: how resources are allocated, how work is divided, how control is shared, or how order is kept across systems ostensibly built out of well meaning but faulty components.

A protocol is the rules and expectations of participants in a system, and how they are beholden to each other. A protocol defines who takes responsibility for failure.

The problem with message brokers, and queues, is that no-one does.

Using a message broker is not the end of the world, nor a sign of poor engineering. Using a message broker is a tradeoff. Use them freely knowing they work well on the edges of your system as buffers. Use them wisely knowing that the buck has to stop somewhere else. Use them cheekily to get something working.

I say don’t rely on a message broker, but I can’t point to easy off-the-shelf answers. HTTP and DNS are remarkable protocols, but I still have no good answers for service discovery.

Lots of software regularly gets pushed into service way outside of its designed capabilities, and brokers are no exception. Although the bad habits around brokers and the relative ease of getting a prototype up and running lead to nasty effects at scale, you don’t need to build everything at once.

The complexity of a system lies in its protocol not its topology, and a protocol is what you create when you cut your monolith into pieces. If modularity is about building software, protocol is about how we break it apart.

The main task of the engineering analyst is not merely to obtain “solutions” but is rather to understand the dynamic behaviour of the system in such a way that the secrets of the mechanism are revealed, and that if it is built it will have no surprises left for [them]. Other than exhaustive physical experimentations, this is the only sound basis for engineering design, and disregard of this cardinal principle has not infrequently led to disaster.

From “Analysis of Nonlinear Control Systems” by Dustan Graham and Duane McRuer, p 436

Protocol is the reason why ‘it depends’, and the reason why you shouldn’t depend on a message broker: You can use a message broker to glue systems together, but never use one to cut systems apart.

Don't crank out code at 2AM, especially if you're the CTO

Dear HubSpot CTO,

Yesterday over on the social medias you wrote that there’s “nothing quite as satisfying as cranking out code at 2am for a feature a customer requested earlier today.” I’m guessing that as CTO you don’t get to code as much these days, and I don’t wish to diminish your personal satisfaction. But nonetheless cranking out code at 2AM is a bad idea: it’s bad for your customers, it sets a bad example for your employees, and as a result it’s bad for your company.

An invitation to disaster

Tired people make mistakes. This is not controversial: lack of sleep has been tied to everything from medical errors to the Exxon Valdez and Challenger disasters (see Evan Robinson on the evils of crunch mode for references).

If you’re coding and deploying at 2AM:

  • You’re more likely to write buggy code.
  • You’re more likely to make a mistake while deploying, breaking a production system.
  • If you do deploy successfully, but you’ve deployed buggy code, you’ll take longer to fix the problem… and the more time it takes the more likely you are to make an operational mistake.

And that’s just the short term cost. When you do start work the next day you’ll also be tired, and correspondingly less productive and more likely to make mistakes.

None of this is good for your customers.

Encouraging a culture of low productivity

If you’re a random developer cranking out code at 2AM, the worst you can do is harm your product or production environment. If you’re the CTO, however, you’re also harming your organization.

By touting a 2AM deploy you’re encouraging your workers to work long hours, and to write and deploy code while exhausted. Which is to say, you’re encouraging your workers to engage in behavior that’s bad for the company. Tired workers are less productive. Tired workers make more mistakes. Is that really what you want from your developers?

Don’t crank out code at 2AM: it’s bad for you and your customers. And if you must, don’t brag about it publicly. At best it’s a guilty pleasure; bragging makes it a public vice.

Regards,

—Itamar

PS: While I’ve never cranked out code at 2AM, I’ve certainly made my own share of mistakes as a programmer. If you’d like to learn from my failings sign up for my newsletter where each week I cover one of my mistakes and how you can avoid it.
