Will Rinehart has a great essay summarizing the literature on advertising effectiveness. The tl;dr version is that you can persuade the average consumer to try your product in some contexts, but it is very, very hard. In other words, the idea that ad men can easily mesmerize customers into buying their products (as argued in The Hidden Persuaders) is total bunk.
Why is persuading consumers so hard?
One reason it is hard to persuade consumers is most of them choose not to be persuaded. Many models of advertising treat consumers like passive receptors for ads. You show them words on a screen and somehow it "changes" their minds (kinda like if Coca-Cola created the Snow Crash virus). But that isn't really how consumers make decisions. Instead, consumers actively decide whether to engage with your ad or not. This insight is formally captured in Becker & Murphy's model of advertising. In their 1993 paper, they argue that consumers approach advertisements in much the same way they approach other goods--they consume them up until the marginal benefit equals the marginal cost. If consumers hate watching ads (i.e. the marginal benefit of watching the first ad is negative), then you have to give them something in return for watching your ad. For example, giving consumers free web content like YouTube videos with your ad stuck in the middle is one way to "pay" them to watch your ad. Of course, even if you "pay" consumers to watch your ad with free web content, that is no guarantee they will actually watch it. The costs may still exceed the benefit if the ad itself is not also mildly entertaining. Maybe that's one reason why ad clickthrough rates are so low?
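To make that watch-or-skip decision concrete, here is a minimal sketch in Python. The numbers and the watches_ad function are my own stylized illustration, not Becker and Murphy's actual model: a viewer engages with an ad only when the bundled content "payment" plus any entertainment value of the ad outweighs the annoyance of sitting through it.

```python
# Stylized watch-or-skip decision (illustrative numbers, not a calibrated model).
def watches_ad(content_value, ad_entertainment, ad_annoyance):
    # Marginal benefit of engaging: the free content "payment" plus any
    # entertainment value the ad itself provides.
    marginal_benefit = content_value + ad_entertainment
    # Marginal cost of engaging: the annoyance of sitting through the ad.
    marginal_cost = ad_annoyance
    return marginal_benefit >= marginal_cost

# A standalone ad: nothing is "paying" the viewer, so they skip it.
print(watches_ad(content_value=0.0, ad_entertainment=1.0, ad_annoyance=3.0))  # False
# The same ad stuck in the middle of a video the viewer wants: they watch.
print(watches_ad(content_value=5.0, ad_entertainment=1.0, ad_annoyance=3.0))  # True
```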
I don't know about anyone else, but this theory describes how I watch ads. I won't go out of my way to watch your mildly entertaining 30-second commercial. But, if you stick it in the middle of a video on the Philosophy of Mob Psycho 100? Maybe I'll watch... or maybe I won't. Honestly, even when I am getting free content, the benefit from engaging with most ads is too low to bother (I saw this ad 5 times on mute before I realized it was about a car and not a new antidepressant).
So, if ads are so ineffective, why do companies advertise?
Many econometric analyses look at the consequences of advertising on the average consumer's purchases. However, the actual target of ads is not the "average" consumer. It is the marginal consumer: the person who is just on the fence between buying another unit of your product or not. If you can convince those consumers, you might not only sell more units to them, but the increase in demand might put upward pressure on prices that allows you to sell your goods to other customers at a higher price (this is also discussed more formally in Becker and Murphy). So that is the real question companies have to answer: does the marginal revenue generated by another dollar of advertising exceed the marginal cost? Answering that question is very difficult, as discussed in Lewis and Rao's paper on the Unfavorable Economics of Measuring Returns to Advertising.
Monday, April 1, 2019
Coase and the Hog Cycle
[edited on 9/14/2021 for clarity]
The 1932 UK Reorganization Commission for Pigs and Pig Products Report
The Hog Cycle debates all started when the UK Reorganization Commission for Pigs and Pig Products found in 1932 that hog prices followed a 4-year cycle: two years rising and two years falling. The Commission argued that these price fluctuations were due to forecasting errors made by hog farmers and that government intervention could help stabilize hog prices.
Why would forecasting errors lead to cyclical price fluctuations? Let's walk through a price cycle using an example of the model the Commission used, which is often called the Cobweb model (illustrated below). We begin in period 1, when hog farmers bring Q1 to the market to sell. Supply is fixed at Q1 this period because farmers can't produce more hogs on the spot, so the price that prevails on the market will be P1. Since this price exceeds the marginal cost of production, the individual producer is earning more revenue on each hog they sell than it cost to produce.
After selling all their hogs in period 1, the farmers will go back home to produce more for the next sale period. Suppose, as the UK Reorganization Commission did, that the next sale period is in 2 years because it takes that long to produce hogs ready to sell. When deciding how much to produce, the farmer needs to forecast what the price will be in 2 years. Intuitively, you might think the farmers would use the information available on how demand and supply for hogs will shift over 2 years to forecast this price. However, the UK Reorganization Commission instead argued that hog producers just assume the price of hogs in the next sale period (period 2) will be the same as it was during the last sale period (period 1).
Because hog producers earned more revenue on each hog than it cost to produce in period 1, each producer will individually increase their hog production hoping to earn even more profit in period 2. However, when the producers return to the market in 2 years, they will find that everyone else increased production too and that quantity supplied is now Q2. As a result, the price will plummet to P2 and the producers will actually lose money on each hog they sell (P2 < MC). Not learning their lesson, the hog producers again go home and decide how much to produce for period 3 based on the assumption that the price next period will be P2.
Hopefully you see where this is going, even if the hog producers don't. The producers will collectively cut production to Q3 and prices will go up to P3. Thus, we have a 4-year cycle in hog prices where prices fall for 2 years and rise for 2 years. How long will this cycle continue? That depends on the elasticities of supply and demand. If demand is less elastic than supply, as was believed to be the case in the hog market, then the price swings will continue forever and only get bigger as time goes on. This is why the Commission said government intervention was needed.
Source: Wikipedia
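To see the mechanics in action, here is a minimal simulation of the cobweb dynamics. The linear demand and supply curves are made up for illustration (they are not the Commission's estimates), with supply more price-elastic than demand so that the swings grow over time:

```python
# Cobweb model sketch: producers naively assume next period's price will
# equal this period's price. Each model period is one production lag
# (2 years for hogs), so prices alternating high-low gives a 4-year cycle.
def demand_price(q):
    # Inverse demand: the price that clears the market when quantity q arrives
    return 100 - 1.5 * q

def planned_supply(p):
    # Quantity producers plan for next period, given the naive forecast p
    return 1.0 * p

q = 38.0  # start slightly below the equilibrium quantity of 40
for period in range(1, 8):
    p = demand_price(q)     # price that prevails this period
    q = planned_supply(p)   # production decision based on the naive forecast
    print(f"period {period}: price = {p:.2f}, next quantity = {q:.2f}")
# Because supply here is more elastic than demand, the oscillations in
# price and quantity get bigger every period, just as the Commission feared.
```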
Coase and Fowler (1935) Take the Cobweb Model to the Data
The Cobweb Model is really clever, but does it actually capture the reality of the hog market? Coase and his co-author Ronald Fowler tried to answer that question by evaluating the model's assumptions. The easiest assumption to test was whether it really took 2 years for hog producers to respond to higher prices. To do this, Coase and Fowler (1935) spent a lot of time investigating how hogs are actually produced. They found that the average age of a hog at slaughter is eight months and that the period of gestation is four months. So a producer could respond to unexpectedly higher hog prices in 12 months (possibly even sooner, since there were short-run changes producers could also make to increase production). If that is the case, why does it take 24 months for prices to complete their descent?
Clearly the Cobweb Model is missing something. Maybe the cycle isn't due to forecasting errors at all. However, Coase and Fowler's critics were not convinced. If it wasn't forecasting errors that were driving the Hog Cycle, then what was? "They have, in effect, tried to overthrow the existing explanation without putting anything in its place," wrote Cohen and Barker (1935). Coase and Fowler (1937) attempted to provide an explanation, but this question would continue to be debated for decades.
The Next Chapter
Ultimately, John Muth (1961) proposed a model that assumed producers didn't have systematically biased expectations about future prices (in other words that they had "rational" expectations). Muth argued this model yielded implications that were more consistent with the empirical results found by Coase and others. For example, rational expectations models generated cycles that lasted longer than models that assumed static or adaptive expectations. So a 4-year hog cycle no longer seemed as much of a mystery. I'm not sure what happened to rational expectations after that. I hear they use it in Macro a bit. Anyways, if you are interested in a more detailed summary of Coase's work on the Hog Cycle, then check out Evans and Guesnerie (2016). I found this article on Google while I was preparing this post and it looks very good.
References
Evans, George W., and Roger Guesnerie. "Revisiting Coase on anticipations and the cobweb model." The Elgar Companion to Ronald H. Coase (2016): 51.
Coase, Ronald H., and Ronald F. Fowler. "Bacon production and the pig-cycle in Great Britain." Economica 2, no. 6 (1935): 142-167.
Coase, Ronald H., and Ronald F. Fowler. "The pig-cycle in Great Britain: an explanation." Economica 4, no. 13 (1937): 55-82.
Cohen, Ruth, and J. D. Barker. "The pig cycle: a reply." Economica 2, no. 8 (1935): 408-422.
Muth, John F. "Rational expectations and the theory of price movements." Econometrica: Journal of the Econometric Society (1961): 315-335.
Monday, March 26, 2018
Thoughts on Another Labor Market Concentration Paper
Efraim Benmelech et al. (2018) released an NBER working paper last month that looked at the impact of labor market concentration on wages. Specifically, they estimate a series of models using panel data on actual wages at the establishment level from the U.S. Census Bureau and estimates of local-labor market concentration.
The paper is well worth a read, but I am still skeptical about labor market concentration being a huge policy problem. Here are three questions that came to mind as I read the paper that kept me from being fully convinced.
- Are the authors defining local-labor markets correctly? My first concern with the paper is the way the authors define local-labor markets. Specifically, they define labor markets by county and 3- or 4-digit SIC industry code. So, for example, there is a market for labor in the "Paper Mill" industry (SIC 2621) in Haywood County, North Carolina (FIPS 37087). And this market is separate from the labor market of every other industry. Just speaking from personal experience, this definition seems very narrow to me. Wouldn't a paper mill in Haywood County actually be employing people based on their skills and not which industry they worked in? For example, my mother was hired at the Champion paper mill in Haywood County after working in the Ingles grocery store bakery. They didn't hire her because she had paper mill experience; they hired her because they needed unskilled labor.
- Are they capturing the effect of higher market concentration on wages or the effect of lower labor demand? The authors measure local-labor market concentration using a Herfindahl-Hirschman Index (HHI) for each county-industry (see page 3 and page 9). The authors find that wages fall when the HHI for a county labor market increases. If HHI only increased because of firms merging, then it seems obvious that wages must be falling because the market is becoming more concentrated and firms have more bargaining power. However, this is not the only reason that the HHI increases. As the authors note on page 24, the HHI might also increase if a firm closes and leaves fewer firms in the market. That seems like a very different story to me. If a firm closes, that doesn't just mean the market is becoming more concentrated, it also means demand for labor has likely fallen. So how do we know rising HHI isn't just mostly serving as a proxy for falling labor demand? How often is HHI rising due to firms exiting a market? The authors don't seem to address this concern at all in their paper.
- Do their results really suggest labor market concentration had a large effect on wages? Ignore my previous two questions. Suppose the authors are correctly defining the labor market and are successfully capturing the effect of higher concentration on wages. What do their results actually tell us? They find that a 1 standard deviation increase in HHI lowers wages by as much as 1.7% when defining labor markets using 3-digit SIC codes or 2.1% when using 4-digit SIC codes (see pages 10-14). But, in this context, raising HHI by 1 standard deviation is huge! The average HHI when defining labor markets using 3-digit SIC codes is 0.545. An increase of 1 standard deviation (0.35) would mean increasing HHI to 0.895! To put that in context, the max value for the HHI is 1. That means taking a county from the "average" level of concentration to near pure monopsony will only lower wages by at most 2% (maybe less); see the back-of-the-envelope calculation after this list. That's not nothing, but that seems surprisingly low given such a dramatic increase in concentration.
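Here is that back-of-the-envelope check, using my reading of the paper's reported numbers for the 3-digit SIC definition:

```python
# Quick arithmetic on the reported estimates (3-digit SIC definition).
mean_hhi = 0.545   # average labor-market HHI reported in the paper
sd_hhi = 0.35      # one standard deviation of HHI
wage_drop = 0.017  # estimated wage decline from a 1-sd increase in HHI

new_hhi = mean_hhi + sd_hhi
print(f"HHI after a 1-sd increase: {new_hhi:.3f} (the maximum possible is 1.0)")
print(f"Implied wage change: -{wage_drop:.1%}")
# Moving from "average" concentration to near pure monopsony lowers
# wages by roughly 2% at most on these estimates.
```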
So, overall, I found the paper interesting but unconvincing. But I could be missing something. If anyone has answers to my questions above, I'd be happy to hear them.
Friday, March 2, 2018
Thoughts on Latest Labor Market Concentration Paper
There is an interesting new paper by Azar, Marinescu, Steinbaum, and Taska on measuring concentration in the U.S. labor market using a dataset of nearly all online US vacancies from Burning Glass Technologies. This is obviously a very important issue and I am glad they are investigating it. However, I am concerned about how the authors define labor markets in this paper.
Specifically, they define labor markets based on USDA ERS commuting zone and 6-digit SOC occupation. So, if I understand this correctly, one labor market would be for Economics Professors (SOC: 25-1063) in the area surrounding Asheville, North Carolina (Commuting Zone 91). If that is correct, I can't help but think this definition of a labor market seems very narrow. To illustrate, I have two questions.
- Aren't there occupations where using ERS commuting zones is less appropriate for defining the labor market? The market for economics professors seems like an obvious example. Asheville doesn't have a local market for economics professors. Instead, if UNC Asheville posted a job for an economist, they would get applicants from all over the country. The authors note that 81% of applications on CareerBuilder.com are within the same commuting zone. However, it isn't obvious how well that result applies to the Burning Glass Technologies dataset (the one they actually use in the paper).
- How common is it for the same person to apply to jobs in different occupation codes? Again, economists seem like a good example since they often apply to jobs in multiple occupation codes. For example, I have applied for jobs as an Economics Professor (SOC: 25-1063) and as a non-academic Economist (SOC: 19-3011). Are economists unique in this regard? Would a person who currently delivers food (SOC: 53-3031) never consider delivering office mail and packages (SOC: 43-5021)? If people are considering jobs across multiple 6-digit SOCs, maybe they are too narrow for defining labor markets?
Anyways. These are just my initial thoughts. It was a very interesting paper, so I will be eager to see what other people think as it gets passed around the web.
--Update--
Just a note that the authors were kind enough to engage with some of my concerns on Twitter. You can follow the thread below (I may upload screenshots instead). In the end, I am still not sure they addressed my concerns. Our conversation revolved mostly around whether my economics market example was appropriate (I also brought up delivery drivers as an example, but that seemed to be ignored). Sometimes it seemed like they were saying the economics job market was appropriate and well described by their market definitions. For example, at one point Azar said Yale and Harvard face different markets for economics professors because they occupy two different commuting zones. At other times, it seemed like they were saying that the economics job market was an outlier. For example, Azar and Taska both said at the end of the discussion that the economics profession isn't representative of other professions. In the end we basically had to agree to disagree.
https://twitter.com/dedubyadubya/status/969801949448146945
Saturday, December 30, 2017
Everyone Keeps Getting Becker's Crime Model Wrong
In a recent JPE article, Steven Levitt claims that Gary Becker's 1968 paper on crime makes predictions that are at odds with reality. Specifically, he claims that Becker's model predicts that the most efficient way to deter criminals is by combining a low probability of punishment (p) with an extremely severe penalty or fine (f) when a criminal is caught. Obviously, no developed country has a criminal justice system that functions this way. So, either the whole world is wrong or Becker was mistaken. Right?
I really respect Levitt, but this just isn't so.
Becker's predictions depend on the risk preferences of criminals. If criminals are risk averse (or risk neutral), then Levitt is right that the optimal policy is to have a very low p and a very high f. To see why, suppose you start off with the opposite policy--a high p and low f. What happens if you decide to cut p in half and double f? The expected value of the penalty stays the same (pf = (p/2)*(2f)), but crime becomes more risky because you have increased the variance of the outcomes. On the one hand, this is bad news for risk averse criminals because they receive less utility from riskier crimes (see Figure 1). On the other hand, this is good news for taxpayers because imposing penalties is typically cheaper than trying to catch criminals. In other words, you are deterring criminals more and at a lower cost. If you keep increasing f and decreasing p, you will find that the cheapest way to deter risk averse criminals is to have a very low p and very high f. Just like Levitt said!
Unfortunately for Levitt, Becker did not think that criminals are risk averse. Instead, he spends a good chunk of his 1968 article arguing that criminals are risk lovers. In that case, making crime riskier actually INCREASES their incentive to commit crimes (see Figure 2). So, having a very low p and very high f is no longer the optimal policy. Becker goes on to argue that actual US policy seems consistent with the implications of his optimality analysis. In other words, Becker argues the exact opposite of what Levitt says.
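A small numerical sketch of that argument, with toy numbers of my own rather than anything from Becker's paper (square-root utility for the risk averse criminal, squared utility for the risk lover):

```python
import math

# A criminal keeps income Y if not caught (probability 1 - p) and keeps
# Y - f if caught (probability p). Policy A = (p, f); policy B = (p/2, 2f).
# The expected penalty p*f is identical, but B makes crime riskier.
Y, p, f = 100.0, 0.5, 40.0

def expected_utility(u, prob, fine):
    return (1 - prob) * u(Y) + prob * u(Y - fine)

preferences = {
    "risk averse (sqrt utility)": math.sqrt,             # concave
    "risk loving (squared utility)": lambda x: x ** 2,   # convex
}

for name, u in preferences.items():
    eu_a = expected_utility(u, p, f)          # high p, low f
    eu_b = expected_utility(u, p / 2, 2 * f)  # low p, high f
    print(f"{name}: EU(A) = {eu_a:.2f}, EU(B) = {eu_b:.2f}")
# The risk averse criminal is worse off under the riskier policy B (more
# deterrence, and catching criminals less often is cheaper for taxpayers);
# the risk loving criminal is better off under B, so it deters LESS.
```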
Levitt is not the first person to mischaracterize Becker's paper in this way. In 2015, Alex Tabarrok wrote a blog post making a similar argument. Tabarrok's post was later boosted by Tim Worstall and Noah Smith. It is a shame that this keeps happening, especially in places like the JPE! It leaves the impression that Becker's paper is inherently flawed, possibly not worth reading. In reality, it is a good example of how to apply theory to real-world problems.
Figure 1. Making Crime Riskier Deters Risk Averse Criminals
Following Becker (1968), this figure assumes the following. If a criminal is not caught, they get to keep all of the income they "earned" (Y). If they are caught, they have to pay some penalty or fine (f), leaving them with (Y-f). The probability the criminal is caught is p. In this figure, I illustrate the impact of increasing f from f to 2f and decreasing p from p to p/2 on a risk averse criminal.
Figure 2. Making Crime Riskier Encourages Risk Loving Criminals
Following Becker (1968), this figure assumes the following. If a criminal is not caught, they get to keep all of the income they "earned" (Y). If they are caught, they have to pay some penalty or fine (f), leaving them with (Y-f). The probability the criminal is caught is p. In this figure, I illustrate the impact of increasing f from f to 2f and decreasing p from p to p/2 on a risk loving criminal.
Monday, October 23, 2017
Resolving a Cocaine Paradox with Derived Demand
Earlier this year, Tom Wainwright appeared on Russ Roberts's EconTalk to discuss his new book, Narconomics. This book is about the economics of the drug trade. During their conversation, Wainwright described how governments in countries like Colombia eradicate millions of acres of coca leaf crop every year as part of the "war on drugs." The idea behind this policy is that by making coca more expensive, we will also make cocaine more expensive since it is the drug's key ingredient. However, to the chagrin of policy makers, the price of cocaine has not risen much (if at all).
Wainwright's explanation for this seeming paradox is that 1) the price of coca represents a small portion of the price of cocaine (less than 1%), and 2) drug cartels have market power that allows them to negotiate lower prices with coca leaf growers. These both sound like good reasons to me, but I think Wainwright may be forgetting one other reason that output prices might not rise along with input prices. Specifically, Roberts and Wainwright carry on their conversation as if the price of all other inputs into cocaine production stayed the same in the face of coca eradication. But why should we expect that?
Coca seems to have no substitutes in the production of cocaine. So all other inputs should be complements in the production process. That means an increase in the price of coca will lead cocaine producers to use less coca and less of all other inputs. As cocaine producers purchase less of these other inputs, the price of these other inputs will fall to clear their respective markets. As a result, the price of coca goes up, the price of other inputs goes down, and the price of cocaine will increase by less than the price of coca (possibly much less if the supply of other inputs is very price inelastic).
Since the "other inputs" used in cocaine products beside coca leaf include the violent aspects of the drug trade, I wonder if this analysis implies that those services would be in less demand? If so, maybe coca eradication at least makes trafficking less violent? I kind of doubt it, but it is something to think about.
Anyways, if you want to think about this some more, you can do so more formally using the derived demand model that I explored in my last post. Here's a quick visual representation of the analysis above using the derived demand model. For simplicity, I drew this assuming you need 1 unit of coca and 1 unit of "other services" to make 1 unit of cocaine. As you can see, the price of coca goes up, the price of other inputs goes down, and the price of cocaine goes up by less than the price of coca.
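For those who want numbers instead of a picture, here is the same exercise as a quick computation. All of the curves are made up for illustration; the only structure carried over from the discussion above is the 1-to-1-to-1 fixed proportions assumption:

```python
# Fixed proportions: 1 unit of cocaine = 1 unit of coca + 1 unit of
# "other inputs", so in equilibrium the price of cocaine equals the sum
# of the two input prices at the common quantity.
def cocaine_demand_price(q):          # inverse demand for cocaine
    return 120 - 1.0 * q

def other_supply_price(q):            # inverse supply of other inputs
    return 10 + 0.2 * q

def coca_supply_price(q, shift=0.0):  # eradication shifts coca supply up
    return 5 + 0.3 * q + shift

def equilibrium(shift):
    # Solve: 120 - q = (5 + 0.3q + shift) + (10 + 0.2q)  =>  q = (105 - shift) / 1.5
    q = (105 - shift) / 1.5
    return q, coca_supply_price(q, shift), other_supply_price(q), cocaine_demand_price(q)

for label, shift in [("before eradication", 0.0), ("after eradication", 15.0)]:
    q, p_coca, p_other, p_cocaine = equilibrium(shift)
    print(f"{label}: quantity = {q:.1f}, coca = {p_coca:.2f}, "
          f"other inputs = {p_other:.2f}, cocaine = {p_cocaine:.2f}")
# Coca rises from 26 to 38 (+12), other inputs fall from 24 to 22 (-2),
# and cocaine rises only from 50 to 60 (+10): less than the rise in coca.
```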
Sunday, February 26, 2017
Primer on Deriving Demand for Inputs in a Fixed Proportion Production Process
The demand for inputs in the production of final goods is ultimately derived from the demand for the final products themselves, which is why input demands are sometimes called "derived demands." This relationship can sometimes be lost in all the math surrounding modern textbook treatments. I think this is why it is best to introduce students to the concept of derived demand using an example where an industry uses a fixed-proportion production process. Here the math is so simple that it doesn't get in the way of the economics of how output markets influence input markets (and vice versa).
However, few modern textbooks discuss this special case (exceptions include Friedman's Price Theory and Becker's Economic Theory). I think that is a shame. So, I thought I'd write a short primer on deriving an industry's demand for inputs into a fixed proportion production process. First, I provide an intuitive explanation for how to derive the inverse demand for an input using Alfred Marshall's famous knife manufacturing example. Second, I provide a formal discussion of how to derive the elasticity of input demand. Third, I show how this simple example illustrates Marshall's four laws of derived demand. Lastly, I provide some links to additional reading.
There is nothing original here. I am basically just summarizing some old notes that I wanted in one place. Hopefully someone besides me finds them useful.
1. Knives, Blades, and Handles
Suppose that knives are produced using a fixed proportions technology. Specifically, one handle and two blades are combined to create one knife. Figure 1 illustrates the demand curve for completed knives and the supply curves of each input (note that Pb represents the price of two blades).
Figure 1. Demand Curve for Final Product and Supply Curves of Inputs
Source: Friedman (1972)
So, how do we derive the demand for one input like handles? Well, let's think about what each curve is telling us. The demand curve for knives shows the most that consumers are willing to pay for a given quantity of knives. Similarly, the supply curve for each input shows the least that suppliers would have to be paid to provide a given quantity of that input. Thus, the most that knife producers would be willing to pay for a given quantity of handles is the difference between the demand price for knives and the supply price of blades (see Note XIV in Marshall's Mathematical Appendix). Put another way, the "demand price" for handles equals the "demand price for knives" minus the "supply price for two blades":

Ph(Q) = Pk(Q) - Pb(Q)
Figure 2 illustrates the derived demand for handles. Note that no handles are purchased when the price for knives equals the supply price for blades. This is because blades are so expensive at that level of output that there is no money left over for handles.
Figure 2. Derived Demand for Handles
Source: Friedman (1972)
2. Deriving the Elasticity of Demand for Handles
We can use the inverse demand function for handles above to derive the elasticity of demand. I provide the details here, but the ultimate result, written in the notation above (with ηh and ηk the elasticities of demand for handles and knives, and εb the elasticity of supply of blades), is:

1/ηh = (Pk/Ph)(1/ηk) - (Pb/Ph)(1/εb)
This formula can be useful in applied settings. For example, the EPA used this formula to calculate the elasticity of demand for small, stationary combustion engines (a key input in the production of irrigation equipment, among other things) when considering adding regulations on that industry (see page 4-2).
3. Marshall's Laws of Derived Demand
This expression for the elasticity of demand illustrates several of Marshall's four laws of derived demand (each is checked numerically in the short sketch after the side note below).
- "The demand for anything is likely to be more elastic, the more elastic is the demand for any further thing which it contributes to produce." (Note that as the elasticity of demand for knives increases, the elasticity of demand for handles increases).
- "The demand for anything is likely to be more elastic, the more readily substitutes for the thing can be obtained." (Not illustrated here because there is no substitutes for handles)
- "The demand for anything is likely to be less elastic, the less important is the part played by the cost of that thing in the total cost of some other thing, in the production of which it is employed." (Note that as Ph/Pk decreases, the elasticity of demand also decreases)
- "The demand for anything is likely to be more elastic, the more elastic is the supply of co-operant agents of production." (Note that as the elasticity of supply for blades increases, the elasticity of demand for handles also increases)
Side Note: Hicks later showed that Marshall's third law only holds if the elasticity of final demand is greater than the elasticity of substitution. An intuitive explanation by Saul Hoffman for why this is the case can be found here. However, we don't need to worry about that special case with fixed proportion technologies because the elasticity of substitution across inputs is zero. So the condition will always be satisfied.
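To tie the formula and the laws together, here is a quick numerical check using toy prices and elasticities of my own choosing:

```python
# Elasticity of derived demand for handles under fixed proportions,
# from the formula above: 1/nh = (Pk/Ph)(1/nk) - (Pb/Ph)(1/eb).
def handle_demand_elasticity(eta_k, eps_b, pk, pb):
    # eta_k: elasticity of demand for knives (negative)
    # eps_b: elasticity of supply of blades (positive)
    # pk, pb: price of knives and of two blades; ph = pk - pb
    ph = pk - pb
    return 1.0 / ((pk / ph) * (1.0 / eta_k) - (pb / ph) * (1.0 / eps_b))

print(handle_demand_elasticity(eta_k=-1.0, eps_b=1.0, pk=10.0, pb=6.0))  # baseline: -0.25
# Law 1: more elastic knife demand -> more elastic handle demand (-0.36)
print(handle_demand_elasticity(eta_k=-2.0, eps_b=1.0, pk=10.0, pb=6.0))
# Law 3: handles a smaller share of total cost -> less elastic (-0.11)
print(handle_demand_elasticity(eta_k=-1.0, eps_b=1.0, pk=10.0, pb=8.0))
# Law 4: more elastic blade supply -> more elastic handle demand (-0.31)
print(handle_demand_elasticity(eta_k=-1.0, eps_b=2.0, pk=10.0, pb=6.0))
```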
4. Additional Reading
For more info on this topic, I'd recommend checking out these resources:
- Becker, Gary. Economic theory. Transaction Publishers, 2007.
- Diewert, W.E. "A Note on the Elasticity of Derived Demand in the N-Factor Case," Economica (May 1971): 192-198.
- Friedman, Milton. Price theory. 1972.
- Muth, R., "The Derived Demand Curve for a Productive Factor and the Industry Supply Curve," Oxford Economic Papers 16 (1964): 221-234.
- Hoffman, Saul D. "Revisiting Marshall's Third Law: Why Does Labor's Share Interact with the Elasticity of Substitution to Decrease the Elasticity of Labor Demand?." Journal of Economic Education 40, no. 4 (2009): 437-445.
- Thurman, Walter N. "Applied general equilibrium welfare analysis." American Journal of Agricultural Economics 73, no. 5 (1991): 1508-1516.