
Willingness to be Paid: The Treatments – An Economist Writes Daily

This is the second of two blog posts on my article “Willingness to Be Paid: Who Trains for Tech Jobs?”. Follow this link to download the Labour Economics paper (free until November 27, 2022).

Last week, I focused on the main findings of the article:

  • Women did not reject short-term computer programming work at a higher rate than men.
  • For the incentivized portions of the experiment, women had the same reservation wage to program as men. Women also appeared just as confident in their ability, according to the belief elicitation.
  • The main results related to sex were, surprisingly, null results. I ran the experiment three times with slightly different subject pools.
  • However, based on their self-reported survey responses, I found that women may be less likely to pursue programming outside of the experiment. Women are more likely to say they are “not confident” and more likely to say they expect harassment in a tech career.
  • In all three experiments, the attribute that best predicts whether someone is willing to program is whether they say they like programming. This subjective attitude appears to matter even more than having previously taken a programming course.
  • Along with “liking programming” or “liking math,” subjects with a high opportunity cost of time were less willing to return to the experiment to do programming at a given wage level.

I wrote this article in part to understand why more people aren’t attracted to the high-paying tech industry. This recent tweet indicates that while perhaps more young people are getting tech-savvy than ever before, the market price of tech labor is still quite high.

The advantage of controlled experiments is that you can randomly assign treatment conditions to subjects. This post discusses what happened when I provided additional information or encouragement to certain subjects.

Informed by my reading of the policy literature, I hypothesized that a lack of confidence was a barrier to pursuing tech careers. A large study by Google in 2013 suggested that women who majored in computer science were influenced by encouragement.

I provided an encouraging message to two treatment groups. The long version of this encouraging message was:

If you’ve never done computer programming before, don’t worry. Other students with no experience were able to follow the training and pass the quiz.

Not only did this fail to have a significant positive effect on willingness to program, but there are indications that it made subjects less confident and less willing to program. For example, in the “High Stakes” experiment, the reservation wage for subjects who saw the encouraging message was $13 higher than for control subjects.

My experiment doesn’t prove that encouragement never matters, of course. Most people believe that some type of encouragement drives behavior. My results could serve as a cautionary tale for policy makers who would like to scale up encouragement. John List’s latest book, The Voltage Effect, deals with the difficulty of delivering effective interventions at scale.

The other randomly assigned intervention was additional information, called INFO. Subjects in the INFO treatment saw a sample programming quiz question. Instead of just knowing they would be doing “computer programming”, they saw bits of R code with an explanation (an illustrative snippet is shown below). In theory, someone unfamiliar with computer programming might find comfort in this. My results show that INFO did not affect behavior. Today, most people already know what programming is. About half of the subjects said they had taken a course that teaches programming at some point. If there is an opportunity to better inform young adults, it is probably about career paths rather than technical basics.
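To give a sense of what that kind of snippet looks like, here is a minimal sketch in R (my own illustration, not the exact quiz question from the experiment):

    # Illustrative R snippet of the sort shown in the INFO treatment
    # (hypothetical example, not the experiment's actual material).
    prices <- c(4, 7, 10)   # store three numbers in a vector
    mean(prices)            # compute the average; this prints 7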

Since the differences between the treatments turned out to be negligible, I pooled all my data (686 subjects in total) for certain types of analysis. In the graph below, I group each subject as either someone who accepted the programming follow-up job or someone who refused to return to program at any wage. Remember that the highest wage level I offered was, on an hourly basis, considerably higher than what I expected their outside earnings option to pay.

Fig. 5. Characteristics of subjects by whether they requested a follow-up invitation, pooling all treatments and samples

I will discuss the three characteristics in this graph in what appears to be their order of importance for predicting whether someone wants to program. There was a huge difference between the two groups in the percentage of people who were willing to come back for an easy and tedious task that I call counting. By inviting all of these subjects back to count at the same hourly rate as the programming work, I got a rough measure of their opportunity cost of time. Someone with a high opportunity cost of time is less likely to accept my offer to program. That might sound very predictable, but it’s a big reason why more Americans aren’t going into tech.

Considering the first batch of 310 subjects, I have a very clear comparison between the reservation wage for programming and the reservation wage for counting. People who don’t like programming require more pay to program than to come back for counting work. Self-reported enjoyment is a very important factor. The orange bar in the graph shows that the majority of people who accepted the programming job say they like programming.
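As a rough sketch of that comparison, the premium each subject demands to program rather than count can be computed and then averaged within each enjoyment group. The data frame and column names below are hypothetical placeholders, not my actual dataset:

    # Made-up values; wage_program and wage_count stand in for each subject's
    # reservation wages for the programming and counting tasks.
    subjects <- data.frame(
      wage_program      = c(20, 60, 15, 45),
      wage_count        = c(18, 22, 16, 20),
      likes_programming = c(TRUE, FALSE, TRUE, FALSE)
    )
    # The gap nets out the general value of a subject's time (captured by the
    # counting wage) and isolates the programming-specific disutility.
    subjects$program_premium <- subjects$wage_program - subjects$wage_count
    aggregate(program_premium ~ likes_programming, data = subjects, FUN = mean)

Under these made-up numbers, the subjects who dislike programming demand a much larger premium, which is the pattern described above.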

Finally, the blue bar indicates the percentage of female subjects in each group. The breakdown by gender is almost the same in both groups. As I show in several ways in the paper, there is a surprising lack of a gender gap in the incentivized decisions.

I hope my experiment will inspire more work in this area. Experiments are appealing because someone could try to replicate mine with a different group of subjects or with a change to the design. Interesting differences could emerge between subject pools under new circumstances.

The topic of skill gaps in the United States is relatively new to labor market and public policy discussions. As Cappelli (2015) puts it: “It is hard to think of a labor market issue where academic research or even research using standard academic techniques has played such a small role, where parties with a material stake in the outcome have so dominated the discussion, where the quality of evidence and discussion has been so poor, and where the stakes are potentially so high.”

Cappelli, P.H., 2015. Skill Gaps, Skill Shortages, and Skill Mismatches: Evidence and Arguments for the United States. ILR Review 68 (2), 251–290.