Sunday, August 23, 2020

Writing Your Perfect News Story

A Detailed Guide to Writing a News Story

Journalistic writing isn't as easy as it looks. Students often have no idea where to start or how to phrase their sentences, let alone how to conduct interviews. Below you will find top tips on how to produce a quality news report.

The topic or event you choose must be recent and newsworthy
The first thing you should learn to do is define what is newsworthy and what isn't. Any event happening in a community that can catch your reader's attention and is unique and impactful counts as newsworthy. If you decide to tell readers about a business that isn't new or doesn't offer anything different, it won't be a newsworthy story; a new business opening in a particular area, however, genuinely is. Make sure to cover only the latest events, not ones that happened a week or so ago. Society has already moved on to something else, and your news must be up to date and cover what is happening right now. Also make a point of staying "local", that is, writing for the right community. Mentioning global events is acceptable, but events in your own area are your priority. Address global events only if they somehow affect the audience you write for.

Interview the witnesses
Interviews must be conducted with the right people; that is vital. If you are covering a bank robbery, you need to talk to the bank manager, a teller, or any employee involved in the event. Unless the bank's customers witnessed the robbery, you should not interview them. Make sure to ask everybody as soon as possible; remember that you need to report the most current events.

Who? What? When? Where?
Those fundamental four Ws must be established within the first paragraph of your piece, while "why" and "how" can be revealed in the following paragraphs. You are building a kind of pyramid, at the top of which sits the most important information.

Set up your piece
Once you have all the essential material, start building your piece. Remember that the most important information goes at the top, that is, at the very beginning.

Include quotations
You can include quotations as you write or at specific points in your story. Make sure to state the full name, occupation, and age of the key people in your story.

Search for additional figures or facts
Once you have finished your story, google additional facts and figures that will be interesting and help your story stand out. Your job is to compete with other news outlets; you will be using similar information and delivering it to similar readers, so you will certainly need some extra material.

Reread your article aloud
Before handing your article to your teacher (or editor), read it aloud to check the overall flow of the story. Writing a report isn't so difficult, and you will gain real skill after completing such an assignment a few times.
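The guide's structure (the four Ws up front, then "why" and "how", then attributed quotations, then supporting facts) can be sketched as a small template. This is purely an illustrative sketch; the function name, field names, and sample story are my own assumptions, not part of the guide.

```python
# Illustrative sketch of the inverted-pyramid structure described above.
# All names and sample values are assumptions chosen for the example.

def build_story(who, what, when, where, why, how, quotes, extra_facts):
    """Assemble a news story with the most important facts first."""
    lead = f"{who} {what} {when} {where}."           # the four Ws open the story
    body = [why, how]                                 # "why" and "how" come next
    body += [f'"{q["text"]}" said {q["name"]}, {q["job"]}.' for q in quotes]
    tail = extra_facts                                # supporting figures go last
    return "\n".join([lead] + body + tail)

story = build_story(
    who="A local bakery", what="opened its doors", when="on Monday",
    where="in the town centre",
    why="The owners saw demand for fresh bread in the area.",
    how="The shop was converted from a disused bank branch.",
    quotes=[{"text": "We sold out by noon", "name": "Ana Ruiz", "job": "owner"}],
    extra_facts=["Three other bakeries operate within a mile."],
)
print(story.splitlines()[0])  # the lead line carries the four Ws
```

Reading the assembled story top to bottom reproduces the pyramid: a reader who stops after the first line still gets the essential facts.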

Friday, August 21, 2020

Organizational Analysis: The Organization - Essay Example

Papworth Hospital also houses the Chest Medical Unit, which offers respiratory services (Papworth Hospital NHS Foundation Trust, 2013). It is a public NHS trust and has a strong relationship with the community owing to its long service in the area. The philosophy of Papworth Hospital promotes a constructive approach to review and diagnosis, ensuring that the patient receives clinical care in the most suitable setting from the best personnel (Papworth Hospital NHS Foundation Trust, 2013). This relates to teamwork among the staff, who work collectively to provide safe and effective clinical care. The hospital's mission statement is to maintain its status as a centre of excellence for the diagnosis and treatment of patients.

Papworth Hospital may be said to be a matrix organization, as responsibilities are shared across the organization. Board members are involved in decision-making, and patients and the public are also included in consultations (Papworth Hospital NHS Foundation Trust, 2013). This form of communication demonstrates the extent of consultation within the organization. Papworth Hospital appears to have problems with its internal control systems, which means its governance process must be changed to improve the risk management systems and to devise a framework ensuring that all stakeholders play their roles efficiently (Papworth Hospital NHS Foundation Trust, 2013). Nevertheless, the hospital is keen to review its reports continually, which allows it to control activities in the organization and achieve its set goals. The hospital has a culture of reviewing risks in relation to quality standards; hence quality services are provided efficiently (Papworth Hospital NHS Foundation Trust, 2013).
Papworth Hospital's nursing care delivery system includes ward staff who ensure the cleanliness of the patient environment and effectively assist the nurses in the wards (Papworth Hospital NHS Foundation Trust, 2014). Social workers assist in discharging…

Thursday, July 9, 2020

Writing Short Essays

Short essays are great for those who have time to do the reading but not the time to hunt for the best essay topic. These essays do not have a set length, but it is possible to compose essays that run up to two or three pages.

This type of essay has become quite popular among students who enjoy writing. The essay is written as a logical outline with detailed supporting points, and usually runs about three pages. There are many ways to write a short essay.

One approach is to write the essay around an idea and then build the argument that supports it, with the conclusion placed in the middle. The idea used to support the thesis could be a process, a fact, or a feeling. The truth of the matter is that there is no strict limit on the length of the essay.

You can also decide to add information related to the essay topic. The tips and techniques the author uses in the essay need to be linked to the ideas and facts presented earlier.

Using the tips found in the essay is very important, as is knowing how to deliver the information effectively to the reader.

Many topics can be covered in short essays, and it is wise to choose a topic that has already been covered by others. The point of a short essay does not have to be to test your skills; it can be to prove that the student can write well.

The essay is only as good as the opinion piece associated with it. That opinion should focus on the quality of the essay, not its length. A good essay can be completed in a shorter time, and the student can still write something good enough to be read.

Tuesday, May 19, 2020

Netflix Case Study

Summary

The movie rental industry is a living industry; there are constant changes with advances in technology, rights management, and the slow but steady move away from physical media. Companies such as Netflix, Hulu, Redbox, and Blockbuster are being forced to look at new business models and try to keep up with these changes.

Assignment Questions

1. How strong are the competitive forces in the movie rental marketplace? Do a five-forces analysis to support your answer.

Threat of New Competition: Netflix faces almost zero threat of new competition. Any new competitor would have to overcome large capital expenses to get started; these expenses include obtaining TV show and movie rights from the studios. Even if the starting … Streaming licenses can be revoked and/or modified at any time by the content provider.

Intensity of Competitive Rivalry: The threat of rivalry is relatively low. The movie rental industry is dominated by a few firms, namely Blockbuster and Movie Gallery (which liquidated itself in 2010). However, they are in competition with other industries such as cable and satellite companies, VOD services, and sites like Hulu and Amazon.

2. What forces are driving changes in the movie rental industry? Are the combined impacts of these driving forces likely to be favorable or unfavorable in terms of their effects on competitive intensity and future industry profitability?

The demand for digital content is driving changes in the rental industry. Technology is shifting from a physical medium to a digital distribution system. This is likely to be beneficial because Netflix is already rooted in the digital streaming industry and would only have to adapt to minor changes in technology.

3. What does your strategic group map of this industry look like? How attractively is Netflix positioned on the map? Why?

Netflix is in a fairly favorable position on the strategic group map.
Here, added value is measured in terms of instant movies and recommendations, and market coverage is measured in the number of stores, vending machines, and online presence.

4. What key factors will determine a company's success in the movie rental industry?
But its revenues haveRead MoreNetflix Case Study1279 Words   |  6 Pagesexpenditures in up-front Ramp;D and advertising costs, both of which are emphasized in order to differentiate service and build brand equity. There are also government policies to reinforce the barrier. For example, in addition to its red envelops, Netflix has patents to protect essential characteristics of its business model such as its â€Å"Max Out† and â€Å"Max Turns† approaches. This creates cost disadvantages through a greater learning curve for new entrants, espe cially when competing against algorithmicRead MoreThe Netflix s Case Study3053 Words   |  13 Pages The Netflix Approach to Compensation – Case Study By: Maximillien Alepin, Yashar Eskandari, Shuhan Chen, Jake Bretton, Melissa Reed For: Professor Chen Yu-Ping MANA 443 – Compensation and Benefits Concordia University Summary – Part 1 The case study â€Å"Equity of Demand: The NETFLIX Approach to Compensation† includes information regarding the company, named Netflix. The case study provides useful information regarding the organizational culture of Netflix. The case is usually associatedRead MoreCase Study - Netflix Rollercoaster818 Words   |  4 PagesWelshymer BA 370 9/29/15 Extra Credit # 1 Case Study: The Netflix Rollercoaster 1. Netflix’s original marketing strategy offered several flat-rate monthly subscription options; in which, members could stream movies and shows via the Internet or have disks sent to their homes in a pre-paid and pre-addressed envelope. Free from the despair of due dates and late fees, members could keep, up to, eight movies at a time. 
Upon the return of a disk, Netflix would automatically mail out the next movieRead MoreEssay on Netflix Case Study1461 Words   |  6 PagesNetflix Inc.,: Streaming Away From DVD’s Case Study: Emily Heath Part 3- Alternative Solutions To ensure the company will achieve stability by maintaining customer appreciation and satisfaction, Netflix must invest their time and finances into new alternative solutions. The solutions are based on what problems have presented themselves and are in best interest of the customers and the company. The main concerns at the moment seem to be the unreliability and instability of the companyRead MoreNetflix Case Study Essay1334 Words   |  6 PagesNetflix Case Study The video rental industry began with brick and mortar store that rented VSH tape. Enhanced internet commerce and the advent of the DVD provided a opportunity for a new avenue for securing movie rentals. In 1998 Netflix headquartered in Los Gatos California began operations as a regional online movie rental company. While the firm demonstrated that a market for online rentals existed, it was not financially successfully. Netflix lost over $11 million inRead MoreNetflix Case Study1814 Words   |  8 Pagesidentifying creativity and innovation as the key to Netflix past success as Harold has consistently shown in his decisions throughout the history of the company taking bold action to chase un-ventured routes to satisfying customer needs. The essence of the report however, is to highlight the issues surrounding the current technological advancements in the DVD rental market now that VOD has become a feasible and realistic platform that can be supported. Netflix is faced with a multitude of options and myRead MoreNetflix Case Study5103 Words   |  21 Pagesï » ¿ Netflix in 2012: Can It Recover from Its Strategy Missteps? Executive summary: Netflix employs a subscription-based business model and subscribers can chose from a variety of subscription plans. 
The business model consists of two parts; the DVD-by-Mail option, and the streaming option, which launched in January 2007. Both options were bundled together until July 2011 when Reed Hastings announced the separation of the two services. Before the announcement Netflix recorded tremendous

Wednesday, May 6, 2020

The Lost Boy

A Child Called "It"

In his two novels A Child Called "It" and The Lost Boy, the author, Dave Pelzer, writes about his childhood. During that time the author was a young boy, between the ages of 3 and 9. David's mother had started to call him "The Boy" and "It." The author mainly covers the relationships within his family, and his main focus is the bond between his mother and him. He describes his mother as a beautiful woman who loved and cherished her kids, who changed into "The Mother", who abused him because she was an alcoholic and was sick. The Mother used David to take her anger out on: an abusive mother who systematically closed down any escape he might have from her clutches and shut off any source of food for the poor starving child. … "The endless sea of faces, prodding me, teaching me to make the right choices, and helping me in my quest for success."

Dave's purpose in writing these books was to tell the world how he was treated, as many other kids are treated in their families. He was sending a message about how child abuse has changed over many years. There are many kids in the world who are mistreated as David was. Reading his book makes you see, through a child's perspective, what it is to be abused by his own mother.

Obviously, the stories of Dave's childhood are difficult to read. At times, I had to put the book down and walk away for a few days before I could continue. So why would anyone want to read this book, with its seemingly endless tales of torture and cruelty? More importantly, why should anyone read it? There are two reasons, the first being that Pelzer's tale is a testament to how much the human spirit can endure and remain whole. Pelzer tells of his resolve not to be defeated. With each incident, Dave managed to find some way to placate his mother. While he couldn't make the abuse stop, he learned how to manipulate his mother's behavior enough to keep the immediate situation from getting even more ugly.
Each time his mother walks away from him, you get the feeling that he would like to shout after her, "Ha! You didn't kill me this time, bitch, and you aren't going to kill me next time either!"

Expectation in Perceptual Decision Making - Assignmenthelp.com

Question: Discuss expectation in perceptual decision making.

Answer:

Introduction: Values are guided by the inherent knowledge and moral obligations possessed by an individual, and the personal and ethical values a person holds have a profound effect on his or her life. The current section of the chapter focuses on a number of aspects that can strongly affect an individual's decision making and judgment. The manner in which problems are deconstructed, questions are phrased, and responses are elicited can exert a huge influence on an individual's patterns of judgment. Response elicitation procedures may become a major instrument affecting the process by which an individual internalizes values. As argued by Herben and Goldberg (2014), decision making is very complex and has to pass through multiple layers of cognition. The values possessed by an individual, along with present circumstances, can have a huge impact on a decision or judgment. This often results in a decisional glitch in which an individual does not hold a coherent opinion. The lability in judgment can be attributed to a number of factors, such as habituation and an individual's eagerness to learn. Stimulus presentation is the second most important stage, where homogeneity of information along with limited knowledge about alternatives can affect an individual's decision making (Summerfield & De Lange, 2014). However, I think values are shaped by the ever-changing circumstances and challenges an individual encounters. In this respect, as the captain of my university sports team I need to weigh the present situation and circumstances before taking a particular decision that works in favour of the different members of my team.
That said, I have often found it difficult to arrive at a decision that most team members would agree upon. In such situations there is a constant conflict between existing values and the weighing of alternatives.

The current section analyses an individual's perception of risk and how the responses elicited by the evaluation of risk shape decision making. The assessment of risk can be related to Maslow's hierarchy of needs, whereby complex fears can be neutralized only after the more direct and pertinent fears have been evaluated. The deconstruction of risk and fear in an individual's mind can be explained on the basis of a number of cumulative theories, which can be divided into components such as knowledge theory, personality theory, economic theory, and political theory. Knowledge theory has the most direct and pronounced effect on an individual's decision making. As mentioned by Coghill, Seth, and Matthews (2014), much of risk perception is influenced by paradoxes or the value sets incorporated within an individual. I came across a number of different perspectives regarding the assessment and neutralization of fear during my university days. Much of my anxiety stemmed from the preconceived notion that any new choice I made could have a profound effect on my future. I had to choose from a number of courses presented in my university module, and my decision making was based on a set of ideas about the benefits different courses offered. I was most inclined toward taking up nanotechnology as my future concern; however, I was highly anxious about the high costs involved in nanotechnology courses. I suppose I personally wanted to avoid the complexities associated with the course; much of that hypothesis was put in my mind by relevant sources.
Thus, reflexes play a pivotal role in shaping the personality attributes of an individual, which further govern his or her choices. Decision making is affected by rationality and may vary from person to person. In this respect, the psychology of perception sets the living world as the standard against which decisions and rationales are formed, while the psychology of thinking forms the basis for the rationale or the decision making itself. In this context, biases in people can be seen as a hindrance that challenges the realization and affirmation of one's goals (Tsunada et al., 2016). These judgmental biases are further supported by research and statistical evidence. I have personally faced such decision-making biases owing to the overlapping facts and information made available to me. I had to produce a PowerPoint presentation on current trends in marketing intelligence as part of my university coursework; however, I had very little time available for data collection and preparation, because we had a series of exams before the seminar presentation. One of my friends copied much of his data from the internet with little or no alteration. His work turned out well, and he received a token of appreciation for it. However, I could not resort to the false representation of facts and data; I was guided by a basic psychological perception of fair practice, whereas my friend was affected by an obliviousness that focused more on reward than ethics. I believe such conflicting situations can also be found in the real world.

The last section of the chapter focuses on Neutral Omnipartial Rule Making (NORM), which stems from the various factors taken into consideration when a choice is presented on moral grounds (Newell & Shanks, 2014).
The process evaluates the underlying logic that forms the basis of our moral decision making. NORM is based on the philosophical and cultural attributes underlying a person's decision making, and it also includes publicly elicited and general responses in the decision-making process. In this respect, I faced an ethical dilemma in my practice as a trainee nurse during my nursing postgraduate program. I had to deal with patients receiving end-of-life palliative care. My moral obligation was to disclose to them every detail regarding their present health conditions. However, disclosing the fact of terminal illness to a patient often means lowering the patient's hope and positive attitude, which can also affect recovery.

References

Coghill, D. R., Seth, S., & Matthews, K. (2014). A comprehensive assessment of memory, delay aversion, timing, inhibition, decision making and variability in attention deficit hyperactivity disorder: Advancing beyond the three-pathway models. Psychological Medicine, 44(9), 1989-2001.

Herben, T., & Goldberg, D. E. (2014). Community assembly by limiting similarity vs. competitive hierarchies: Testing the consequences of dispersion of individual traits. Journal of Ecology, 102(1), 156-166.

Newell, B. R., & Shanks, D. R. (2014). Unconscious influences on decision making: A critical review. Behavioral and Brain Sciences, 37(1), 1-19.

Riskin, L. L. (2013). Inner and outer conflict.

Summerfield, C., & De Lange, F. P. (2014). Expectation in perceptual decision making: Neural and computational mechanisms. Nature Reviews Neuroscience, 15(11), 745.

Tsunada, J., Liu, A. S., Gold, J. I., & Cohen, Y. E. (2016). Causal contribution of primate auditory cortex to auditory perceptual decision-making. Nature Neuroscience, 19(1), 135.
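Postscript: the chapter's account of perceptual decision making (see Summerfield & De Lange, 2014, on neural and computational mechanisms of expectation) is often modeled as noisy evidence accumulating toward a decision bound, with prior expectation shifting the starting point. The toy simulation below is my own illustrative sketch; every parameter value is an assumption, not taken from the cited papers.

```python
# Toy evidence-accumulation sketch of expectation in perceptual decision making:
# a prior expectation shifts the starting point of a noisy random walk toward
# one of two decision bounds. All parameter values are illustrative assumptions.
import random

def decide(drift, prior=0.0, bound=5.0, noise=1.0, seed=0, max_steps=10_000):
    """Accumulate noisy evidence until one of two bounds (+/- bound) is hit."""
    rng = random.Random(seed)
    evidence = prior                       # expectation biases the start point
    for step in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0.0, noise)
        if abs(evidence) >= bound:
            return ("A" if evidence > 0 else "B"), step
    return None, max_steps                 # no decision within the step budget

unbiased = decide(drift=0.2, prior=0.0)
biased = decide(drift=0.2, prior=2.0)      # expecting "A" shifts the start up
print(unbiased, biased)
```

With the same noise sequence, a start point shifted toward the upper bound reaches an "A" decision no later than the unbiased walk, which is the sense in which expectation speeds an expected choice.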

Wednesday, April 22, 2020

Lust for Life by Lana Del Rey

You only live once, and living life is all that matters. Sometimes life can be hard and throw you curveballs, but in the end what you make of it is the final outcome. Lana Del Rey really captured this concept in her new album "Lust for Life". She has her good songs, her bad songs, and some songs that left me asking myself, what did I just listen to?

Let's start with her good songs. Lana packs her new album with a plethora of finely detailed songs. One of these is "Love". "Love" captures what it's like to be young and in love, but not on the receiving end. The lyrics tell a story of someone watching as the "popular kids" all fall in love. Soon the main character develops a crush on someone and attempts to look nice for them. At the end of the song the character stops thinking about their crush and starts focusing on themselves. Any teenager with a crush on somebody could find this song very relatable.

Keeping with her love-song streak, Del Rey goes on to sing "Tomorrow Never Came". While "Love" is sentimental and empowering, it is nothing like "Tomorrow Never Came", which packs a whole lot more emotion. The song features guest singer Sean Ono Lennon. The pair's harmony together is perfect, and they go on to create a beautiful piece of artwork. The lyrics paint a picture of a messy breakup between a couple. Lana depicts herself and her special other listening to the radio, as well as meeting in the pouring rain. Like many other breakup songs, it looks back on the relationship, but the one thing this song does that makes it stand out is that it dwells only on the good parts. Unlike some love songs, it is not written in vengeance; it is merely the story of a broken heart. Lana has definitely put a lot of emotion and thought into her album.
This showed in the first two songs, although her hard work really showed in one particular song, "Beautiful People, Beautiful Problems". The song starts off with a strong piano accompaniment and poetic lyrics. She sings about the colors blue, green, and red, while her guest singer, Stevie Nicks, soon starts singing about touch, such as warmth and hardness. They both sing with passion and emotion in this song. As the song gets closer to its end we begin to realize its meaning: we are all beautiful, and we all have our own problems, just as beautiful. All these songs are examples of Lana's hard work and true talent. But there were some songs I found were not her best. Every artist has some songs that are either unknown or aren't cared for. I found these songs confusing and hard to understand, and/or didn't like them. The first on my list is "Cherry". The title is misleading, but it is, in fact, a love song. Unlike her other love songs, "Cherry" is marked explicit and has unnecessary vulgar language. The song is more of a revenge song, and I fail to understand what its meaning is. Lana does, though, use her vocal talent more in this song; "Cherry" stood out to me because of her use of her vocals. Unlike "Cherry", the song "Summer Bummer" reminded me less of her music. Lana has a unique style of music, and I didn't expect her to incorporate rap into it. But in "Summer Bummer" Lana features rapper A$AP Rocky. The background beat didn't feel right with her type of vocals. This song really seemed to focus more on A$AP Rocky's style of music. But some who enjoy rap music may enjoy this song. The last song on my list I found controversial as well: "When the World Was at War We Kept Dancing". The song is about how, when we were at war, we kept living our lives. The song is very historic and has a good beat. Some lyrics, which repeated themselves, were hard to understand.
â€Å"When the World Was at War We Kept Dancing† is also marked explicit and again uses unnecessaryvulgar language. Although these songs weren’t chart-toppers they did certainly surprise me and may interest others but not me. One thing Lana Del Rey does is adds songs to the album that don’t necessarilyhave a clear point. Every now and then Lana leaves some fans confused by some songs. Like I said earlier Lana is a unique artist and she left me confused by these songs. The first song that confused me was â€Å"Change†. The title and song are exactly the same. Lana sings about how change is coming and how she doesn’t know when it’s coming, but she’s ready for it. The song’s vocals are so intricate and lyrics so mystifying, it had me listening to it over and over again, the point is I got confused. It was only until listening to it for about the 50th time did I understand the song. But each time I listened to it I realized, this doesn’t sound like Del Rey. This song is very different from most of her songs. â€Å"Change† has a strong piano accompaniment which sets the tone for the song, which is soft and powerful. The next song that confused me was â€Å"In my feelings†. The song starts off with Lana describing someone who distracts her from, well I don’t know. Then the song takes a mood change and the chorus comes in. â€Å"You got me in my feelings†¦..† why this immediate change? What does it mean? This was my initial reaction I had to go to the next song â€Å"13 Beaches†. The song starts off instrumental, then with someone talking in the background. The song is kind of slow and the lyrics are kind of all over the place along with the vocals. All this combined I didn’t understand the overall main idea of this song. Either way, Lana Del Rey is still an icon. So there are the good, the bad, and the confusing songs off Lana Del Rey’s latest album â€Å"Lust for Life†. An extraordinary singer who has a unique style of music which boosted her fame. 
Lana Del Rey proves to us that anything is possible after overcoming something so strong. In the end, Lana teaches us something: she overcame something so powerful, so why can't we do the same? The phrase "anything is possible" is true, and we are shown this through Lana's perseverance. Perseverance is a powerful thing.

Tuesday, March 17, 2020

Feasibility Assessment of the Pearl River Tower (The WritePass Journal)

Feasibility Assessment of the Pearl River Tower

Contents: 1. Introduction; 1.1 Background to the Problem; 1.2 Substitute Technology; 1.2.1 Description of Pearl River Tower; 1.2.2 Purpose of the Report; 2. Method; 2.1 Criteria; 2.2 Procedure; 3. Results; 3.1 Sustainability and Energy Generation Techniques; 3.1.1 Sustainability Approach: "Zero Net Energy"; 3.1.2 The Active Façade; 3.1.3 Radiant Ceiling and Below Floor Ventilation; 3.1.4 Building Integrated Photovoltaics; 3.1.5 Wind Turbines; 3.2 Safety; 4. Recommendation; References

This research report provides a feasibility assessment of the Pearl River Tower. The Pearl River Tower, upon completion, is planned to be the most energy-efficient and sustainable of all mega-structures in the world to date. Conventional building is preferred for its economic benefits and the amount of time needed to complete a project, but it is one of the biggest contributors to global warming. Green building, which is perfectly portrayed in the Pearl River Tower, serves as a good solution to the ongoing environmental problem. This report defines the Pearl River Tower and explains the way it functions as well as its numerous benefits. The Pearl River Tower is more efficient than other super-tall structures, is economically feasible, and can generate its own energy as well as an additional amount that can be supplied to exterior sources. After a full analysis of each criterion, a recommendation will be reached concerning the implementation of the Pearl River Tower and similar green structures.

1. Introduction

1.1 Background to the Problem

One of the greatest hurdles of the 21st century is the excessive emission of CO2. Environmental conditions are regressing annually and have reached a point where they have become a strain on human and animal life. Buildings and other mega-structures are primary contributors to global warming.
Classical building's main concerns were purely economy, durability, and comfort. Nowadays, however, people have become more aware of the environment and are therefore taking drastic measures to improve the situation. Some of these measures include green building, which refers to building using energy-efficient methods, and shifting from fossil fuels to renewable energy. Buildings are responsible for a minimum of 40% of energy consumption and carbon emissions in most countries (4, para. 1). The per capita carbon emission in China was almost five tons in 2008 (the world average is 4.18 tons per capita), which is about 18% of world emissions (3, para. 7). The Chinese government aims to reduce emissions by 40-45% by 2020 (6, para. 1). International companies are taking on new sustainable and renewable projects, many of which are based in China. Energy-saving technologies in buildings can drastically reduce carbon emissions. The main problem with traditional towers is the consumption of fossil fuels, which results in numerous types of waste. The methods applied within these structures to generate and consume energy are polluting and hazardous to the environment. They use artificial lighting, cooling, and heating systems that demand a great deal of electricity, which in turn is the result of burning fossil fuels. The glass these buildings are composed of is another example of the inconvenient methods applied: so-called architectural glass allows the transfer of heat and energy, which leads to the squandering of heat. All these negatives have led to the development of various renewable energy techniques. Solar energy, wind power, hydropower, biomass, biofuel, and geothermal energy are all types of renewable energy that could be used in green buildings.

1.2 Substitute Technology

The Pearl River Tower was recognized as a major substitute for traditional buildings.
It is a structure that is non-hazardous to the environment and that can generate and supply energy.

1.2.1 Description of Pearl River Tower

The Pearl River Tower is a 309.6-meter tower, consisting of 71 floors and extending over 214,100 square meters (5, project facts). Upon completion, this tower is expected to be the most efficient of all the super-tall structures in the world. It is located in Guangzhou, China. The Pearl River Tower is designed to generate its own energy using sustainable methods, which decreases its dependency on the electrical network and therefore reduces the consumption of fossil fuels required to power it. The design took many aspects into consideration, including the integrated systems' interdependence and the building site. To achieve the optimum design, many factors were studied, such as the site, wind direction and speed, materials, sun path, energy sources, and building alignment. After these numerous studies, the team of engineers and architects was able to combine a number of different systems, which include wind turbines, photovoltaics, an active façade, and double-wall systems (7, para. 7). The structure was originally designed to be "net zero energy", such that the building would be self-sustaining and any extra power would be sold and sent back to the grid; however, there were modifications to the plan. The tower was optimized to consume 60% less energy than any other conventional building of its size (1, para. 1).

1.2.2 Purpose of the Report

Constructing "carbon-neutral" structures has recently become of major interest to engineers. The Pearl River Tower would therefore serve as a stepping stone for future designs of green skyscrapers. This report will study the feasibility of the Pearl River Tower by evaluating certain criteria. The closing of the report will include a recommendation on whether to construct green buildings such as the Pearl River Tower.

2. Method
2.1 Criteria

We will study the feasibility of the Pearl River Tower against five criteria. Sustainability and energy generation techniques: we will study and discuss the methods used to produce energy within the tower, in addition to the structure's sustainability. Safety: we will study the effect the tower has on its occupants and its surroundings. Construction process: we will explore the techniques used while constructing the tower and the time needed for completion. Economy: we will consider the cost of the tower and whether the project is economically feasible. Efficiency: we will discuss whether the structure is efficient in the long run.

2.2 Procedure

In order to assess these criteria we gathered information from many sources, including a published report released by SOM, the Pearl River Tower's design firm; numerous online articles; and interviews conducted with university professors.

3. Results

3.1 Sustainability and Energy Generation Techniques

3.1.1 Sustainability Approach: "Zero Net Energy"

The initial approach for the Pearl River Tower was one that would achieve "zero net energy". This approach requires the implementation of four interdependent steps. Reduction: the first step is to identify the possibilities for energy reduction, then proceed to reduce the amount of energy consumed by the building as much as possible. The focus is on systems with high power consumption, such as HVAC (heating, ventilation, and air conditioning) and lighting. Absorption: the second step is to include absorption strategies, defined as taking advantage of natural and passive energy sources such as the sun and the wind. Reclamation: this step in a high-performance design aims to recollect the energy that is already stored within the building; once energy is added to the building, it can be reused.
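Taken together with the generation step described next, the four strategies amount to a simple annual energy balance, which can be sketched in code. Every figure below is a hypothetical placeholder, not data from the project:

```python
# Sketch of the four-step "zero net energy" balance.
# Every number here is a hypothetical placeholder, not project data.

def net_energy(baseline_kwh, reduction_frac, absorbed_kwh,
               reclaimed_kwh, generated_kwh):
    """Annual energy drawn from the grid (zero or less = self-sustaining)."""
    demand = baseline_kwh * (1 - reduction_frac)  # step 1: reduction
    demand -= absorbed_kwh                        # step 2: absorption (sun, wind)
    demand -= reclaimed_kwh                       # step 3: reclamation (reuse)
    demand -= generated_kwh                       # step 4: generation (PV, turbines)
    return demand

baseline = 50_000_000  # kWh/year for a conventional tower (invented figure)
# A 40% reduction plus modest absorption, reclamation, and generation:
print(net_energy(baseline, 0.40, 4_000_000, 4_000_000, 8_000_000))
```

Driving the balance all the way to zero was the original design target; as the report later notes, the as-built tower instead settled for roughly 60% lower consumption than a conventional building of its size.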
An example used in the Pearl River Tower is the recirculation of chilled air from the AC systems to pre-cool outside air before it enters the building, so that less energy is required to cool it down to the required levels. (Figure 1: Micro-turbine installation.) Generation: this final step aims at generating clean power in an efficient and environmentally friendly manner. The Pearl River Tower implements micro-turbines, which are able to generate energy more cleanly and efficiently than the grid is capable of. It is worth mentioning that these turbines can be operated using different types of fuel, such as kerosene, diesel, propane, and methane gas (7, p. 4). (Figure 2: A double-walled high-performance façade.)

3.1.2 The Active Façade

Nowadays, the employment of reflective, fully glazed façades is becoming increasingly common. Their popularity started in Europe and is now spreading across the United States and China. By including a second layer of glass behind the exterior one, the room for increased venting, shading, and control applications is increased. The active façade is an application of the reduction strategy mentioned earlier, since a dehumidification system can be driven by harnessing the heat collected in the double-wall façade. The design of the double-walled façade provides benefits such as increased thermal comfort, improved air quality due to recirculation, and better lighting due to the transparent nature of the walls. The walls also provide noise insulation from outside conditions, which is especially needed if the tower is high enough, since wind speeds at high altitudes create vortices that produce a lot of noise, and street-level floors have the added problem of traffic noise. Furthermore, the increased penetration of light from the exterior requires less artificial lighting and therefore saves energy.
The cavity also acts as a natural chimney: cooler air from the occupied office areas enters the cavity via a gap at floor level, allowing fresher air to enter the occupied areas. The hot air trapped in the cavity is extracted through the ceiling void and is used for either pre-heating or pre-cooling, depending upon outside air temperatures. The façade thus acts as an integral part of both the reclamation and the reduction strategies.

3.1.3 Radiant Ceiling and Below Floor Ventilation

As mentioned previously, HVAC operation is one of the more costly operations in a building when it comes to energy. The Pearl River Tower designers therefore implemented new techniques that help cut down on costs. The traditional approach is to dump cold or warm air into the occupied space to mix with the ambient air and balance out at a comfortable temperature; this approach requires constant energy input to the HVAC system. The designers instead chose a radiant ceiling and below-floor ventilation system, which provides that comfort through different methods rather than just dumping air into a room. The room temperature is conditioned from above and below simultaneously, through a radiative system in the ceiling and a floor air delivery system. This system is effective in cutting down maintenance and operating costs compared to traditional HVAC.

3.1.4 Building Integrated Photovoltaics

Building-integrated photovoltaics, as opposed to normal photovoltaics, make up the building exterior instead of being added as an extra feature. In the Pearl River Tower, the photovoltaics serve a dual purpose: they provide the building's outer envelope as well as generating electricity from solar radiation. Money and energy are thus saved by not paying for wall-mounted panels and adding the cost of the photovoltaics as an extension.
The system not only provides electricity generation, but also shades the parts of the tower that are most susceptible to sunlight. (Figure 3: Building-integrated photovoltaics.)

3.1.5 Wind Turbines

Wind energy is the fastest-growing renewable energy source in the world, so naturally the Pearl River Tower will have wind turbines installed in an effort to harness the wind's power, especially at high altitudes, where wind speeds are highest. Wind turbine performance is also significantly increased in the tower due to the turbines' integration with the tower's architecture. "The Pearl River Tower will implement vertical axis wind turbines, as they are capable of harnessing winds from both prevailing wind directions with minor efficiency loss." (7, p. 8) (Figure 4: Vertical-axis wind turbine.) The tower will have four large openings designed to decrease wind drag forces and optimize wind velocity; it is in those openings that the wind turbines will be installed. (Figure 5: Wind portal opening.) A model of the building with the openings was studied in a wind tunnel, and results showed that "if the wind strikes the building perpendicular to the opening, there is a drop in portal velocity. However, from almost all other angles, the wind velocity increase exceeds the 'ambient' wind speeds." (7, p. 9) Therefore, by placing one vertical-axis wind turbine in each of those four openings, a sustainable and renewable energy source is provided year round. It is worth noting that these turbines are low-maintenance, low-noise, and low-vibration devices that will not prove to be a nuisance to people in the building.

3.2 Safety

The tower is beneficial both for lives inside it and for those outside. Because it emits fewer greenhouse gasses, it is less hazardous to the surrounding environment. The systems used within the tower have proved to provide a healthy and safe environment for its occupants.
The double-wall system admits a large amount of natural light into the building, lessening the need for artificial light (7, para. 6), which in turn improves the comfort of the human eye. The photovoltaic panels are located at the roof level of the tower, protecting roof occupants from the direct and harmful effect of UV rays (7, para. 8). The absence of electric fans and air conditioners in the building, in addition to the ventilation system installed, has improved indoor air quality and reduced humidity. All these factors improve inhabitants' comfort and productivity and maintain a healthy environment.

4. Recommendation

The sum of all the sustainable and renewable methods employed in the Pearl River Tower led to a significant reduction in energy consumption. Although the initial design was for the building to rely solely on those methods, the project cannot be considered a failure, only an achievement and a stepping stone for future green buildings. The implementation of all those systems and ideas proves that the concept of a "zero energy" superstructure is within our reach in the near future and is not as crazy an idea as initially conceived. It is important to note that the micro-turbines were dropped from the project because the power company in Guangzhou would not allow the resale of electricity; the use of micro-turbines, although beneficial, would therefore not justify their cost of installation and operation, and in an economically wise decision they were removed from the design. Their addition would have further increased efficiency to a great extent. After the results achieved, it is only logical to expect a rise in green buildings around the world, especially with the rapid progress of new technology in sustainable energy, until ultimately a "zero energy" superstructure is constructed.
Until then, investors in such buildings will need government cooperation in order to continue their pioneering efforts in creating a more sustainable and healthier mode of living.

References

Frechette III, R. E. (2009). Seeking Zero Energy. Retrieved March 14, 2011, from: http://web.ebscohost.com/ehost/pdfviewer/pdfviewer?hid=113sid=decf7698-dfb6-44e8-bd01-69ee6db3a178%40sessionmgr115vid=2

Fortmeyer, R. (2011). SOM's Pearl River Tower. Architectural Record (archive). Retrieved March 14, 2011, from: http://archrecord.construction.com/features/archives/0612casestudy-1.asp

Go, K. (2010, February 1). World's most energy efficient building to rise in China. Shanghai News. Retrieved March 14, 2011, from: ecoseed.org/en/energy-efficiency/green-buildings/article/79-green-buildings/6053-world%E2%80%99s-most-energy-efficient-building-to-rise-in-china-

Richerzhagen, C. (2008). Energy efficiency in buildings: a contribution of China to mitigate climate change. Retrieved March 14, 2011.

som.com/content.cfm/pearl_river_tower

evonik.cn/region/greater_china/en/company/news/low-carbon-economy/Pages/default.aspx

Frechette, R., & Gilchrist, R. (2008). 'Towards Zero Energy': A Case Study of the Pearl River Tower, Guangzhou, China. CTBUH 8th World Congress 2008.

Saturday, February 29, 2020

Airline Economics

The purpose of this note is to provide background to the study of the airline industry by briefly discussing four important economic aspects of the industry: (1) the nature and measurement of airline costs; (2) economies of scope and hub-and-spoke networks; (3) the relationship between yields and market characteristics; and (4) the S-curve effect. The Appendix to this note contains a glossary of key terms used throughout the discussion. Airline costs fall into three broad categories: (1) flight-sensitive costs, which vary with the number of flights the airline offers; these include the costs associated with crews, aircraft servicing, and fuel, and once the airline sets its schedule, these costs are fixed; (2) traffic-sensitive costs, which vary with the number of passengers; these include the costs associated with items such as ticketing agents and food, and airlines plan these expenditures in anticipation of the level of traffic, but in the short run these costs are also fixed; and (3) fixed overhead costs, which include general and administrative expenses, costs associated with marketing and advertising, and interest expenses. The largest category of costs is flight-sensitive. An important point about an airline's cost structure, and a key to understanding the nature of competition in the industry, is that once an airline has set its schedule, nearly all of its costs are fixed and thus cannot be avoided. Because it is better to generate cash flow to cover some fixed costs, as opposed to none at all, an airline will be willing to fly passengers at prices far below its average total cost. This implies that the incidence of price wars during periods of low demand is likely to be greater in this industry than in most.
There are two alternative measures of an airline's average (or, equivalently, unit) costs: cost per available seat mile (ASM) and cost per revenue passenger mile (RPM). Cost per ASM is an airline's operating costs divided by the total number of seat-miles it flies. (An available seat mile is one seat flown one mile.) It is essentially the cost per unit of capacity. Cost per RPM is the airline's operating costs divided by the number of revenue passenger miles it flies. (A revenue passenger mile is one passenger flown one mile.) It is essentially the cost per unit of actual output. The two measures are related by the formula: cost per RPM = cost per ASM ÷ load factor, where load factor is the fraction of seats an airline fills on its flights. In the end, it is cost per RPM that an airline must worry about, for it must cover its cost per RPM to make a profit. Airlines differ greatly in both their costs per ASM and their costs per RPM. For example, in 1992 Southwest had a cost per ASM of 7.00 cents, while USAir had a cost per ASM of 10.90 cents. Similarly, Delta had a cost per RPM of 15.33 cents while American had a cost per RPM of 13.81 cents. Differences across airlines in cost per ASM reflect differences in: (1) average length of flights (cost per ASM declines with distance); (2) fleet composition (cost per ASM is smaller with bigger planes); (3) input prices, especially wage rates; (4) input productivity, especially labor; and (5) overall operating efficiency. Differences across airlines in cost per RPM reflect differences in cost per ASM plus differences in load factor. Two airlines might have very similar costs per ASM but quite different costs per RPM because of differences in load factor.
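The formula dividing cost per ASM by load factor can be checked directly against the 1992 figures quoted in the text; a minimal sketch (the small gaps between computed and published numbers reflect rounding in the reported load factors):

```python
# Cost per RPM = cost per ASM / load factor:
# since RPM = ASM x load factor, dividing the cost of a unit of
# capacity by the fraction of seats actually filled gives the cost
# of a unit of actual output.

def cost_per_rpm(cost_per_asm_cents, load_factor):
    return cost_per_asm_cents / load_factor

# 1992 figures from the text (cents per ASM, load factor):
print(f"USAir:  {cost_per_rpm(10.90, 0.59):.2f} cents per RPM")  # text reports 18.54
print(f"United: {cost_per_rpm(9.30, 0.67):.2f} cents per RPM")   # text reports 13.80
```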
For example, in 1992 USAir's and United's costs per ASM differed by less than 2 cents (USAir 10.90, United 9.30), but their costs per RPM differed by nearly 5 cents (USAir 18.54, United 13.80) because of USAir's lower overall load factor (USAir .59, United .67).

Economies of Scope and Hub-and-Spoke Networks

Economies of scope play an important role in shaping the structure of the U.S. airline industry. The source of economies of scope in the airline industry is the hub-and-spoke network. In a hub-and-spoke network, an airline flies passengers from a set of "spoke" cities through a central "hub," where passengers change planes and fly from the hub to their outbound destinations. Thus, a passenger traveling from, say, Omaha to Louisville on American Airlines would board an American flight from Omaha to Chicago, change planes, and then fly from Chicago to Louisville. In general, economies of scope occur when a multiproduct firm can produce given quantities of products at a lower total cost than the total cost of producing these same quantities in separate firms. If "quantity" can be aggregated into a common measure, this definition is equivalent to saying that a firm producing many products will have a lower average cost than a firm producing just a few products. In the airline industry, it makes economic sense to think about individual origin-destination pairs (e.g., St. Louis to New Orleans, St. Louis to Houston, etc.) as distinct products. Viewed in this way, economies of scope exist if an airline's cost per RPM is lower the more origin-destination pairs it serves. To understand how hub-and-spoke networks give rise to economies of scope, it is first necessary to explain economies of density. Economies of density are essentially economies of scale along a given route, i.e., reductions in average cost as traffic volume on the route increases.
Economies of density occur because of two factors: (1) the spreading of flight-sensitive fixed costs and (2) economies of aircraft size. As an airline's traffic volume increases, it can fill a larger fraction of seats on a given type of aircraft and thus increase its load factor. The airline's total costs increase only slightly as it carries more passengers, because traffic-sensitive costs are small in relation to flight-sensitive fixed costs. As a result, the airline's cost per RPM falls as flight-sensitive fixed costs are spread over a larger traffic volume. As traffic volume on the route gets even larger, it becomes worthwhile to substitute larger aircraft (e.g., 300-seat Boeing 767s) for smaller aircraft (e.g., 150-seat Boeing 737s). A key aspect of this substitution is that the 300-seat aircraft flown a given distance at a given load factor is less than twice as costly as the 150-seat aircraft flown the same distance at the same load factor. The reason is that doubling the number of seats and passengers on a plane does not require doubling the number of pilots or flight attendants or the amount of fuel. Economies of scope emerge from the interplay of economies of density and the properties of a hub-and-spoke network. To see how, consider an origin-destination pair, say Indianapolis to Chicago, with a modest amount of traffic. An airline serving only this route would use small planes, and even then would probably operate with a low load factor. But now consider an airline serving a hub-and-spoke network with its hub at Chicago. If this airline offered flights between Indianapolis and Chicago, it would draw not only passengers who want to travel from Indianapolis to Chicago but also passengers traveling from Indianapolis to all other points accessible from Chicago in the network (e.g., Los Angeles or San Francisco).
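Both drivers of economies of density can be illustrated with a toy cost model. All dollar figures below are invented for illustration; only the qualitative comparisons matter:

```python
# Toy model of economies of density. Flight-sensitive costs are fixed
# per flight; traffic-sensitive costs are a small per-passenger add-on.
# All dollar figures are invented for illustration.

def cost_per_rpm_cents(flight_fixed, per_pax, seats, load_factor, miles):
    pax = seats * load_factor
    total_cost = flight_fixed + per_pax * pax
    return 100 * total_cost / (pax * miles)  # cents per revenue passenger mile

MILES = 500

# Driver 1: a fuller plane spreads the fixed cost over more RPMs.
half_full = cost_per_rpm_cents(6_000, 10, 150, 0.50, MILES)
mostly_full = cost_per_rpm_cents(6_000, 10, 150, 0.80, MILES)
assert mostly_full < half_full

# Driver 2: a 300-seat aircraft costs less than twice as much as a
# 150-seat one to fly the same distance, so at the same load factor
# its cost per RPM is lower.
big = cost_per_rpm_cents(10_000, 10, 300, 0.70, MILES)   # fixed cost < 2 x 6,000
small = cost_per_rpm_cents(6_000, 10, 150, 0.70, MILES)
assert big < small
```

A hub-and-spoke network exploits both drivers at once: pooled traffic from many origin-destination pairs raises load factors and justifies the larger aircraft.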
An airline that includes the Indianapolis-Chicago route as part of a larger hub-and-spoke network can operate larger aircraft at higher load factors than an airline serving only Indianapolis-Chicago. As a result, it can benefit from economies of density to achieve a lower cost per RPM along the Indianapolis-Chicago route. In addition, the traffic between Indianapolis and the other spoke cities that flies through Chicago will increase load factors and lower costs per RPM on all of the spoke routes in the network. The overall effect: an airline that serves Indianapolis-Chicago as part of a hub-and-spoke network will have lower costs per RPM than an airline that serves only Indianapolis-Chicago. This is precisely what is meant by economies of scope.

Relation Between Airline Yields and Market Characteristics

An airline's yield is the amount of revenue it collects per revenue passenger mile. It is essentially a measure of average airline fares, adjusting for differences in distances between different origins and destinations. Airline yields are strongly affected by the characteristics of the particular origin-destination market being served. In particular, there are two important relationships: (1) shorter-distance markets (e.g., New York-Pittsburgh) tend to have higher yields than longer-distance markets (e.g., New York-Denver); (2) controlling for differences in the number of competitors, flights between smaller markets tend to have higher yields than flights between larger markets. The reasons for relationship (1) are summarized in Figure 1 (shorter distance means higher cost per RPM and lower load factor). Cost per ASM generally falls as distance increases. This is because, say, doubling trip mileage does not require doubling key inputs such as fuel or labor. Thus, shorter flights have a higher cost per ASM than longer flights, and airlines must achieve higher yields to cover these higher costs.
In addition, shorter-distance flights generally have lower load factors than longer-distance flights, which implies a higher cost per RPM for shorter-distance flights, again requiring higher yields. Why are load factors lower for shorter flights? The reason has to do with the greater substitution possibilities that consumers have in short-distance markets (e.g., car or train travel are more viable options). In short-distance markets, we would therefore expect that some fraction of non-time-sensitive travelers (e.g., vacationers) would travel on these alternative modes, so short-distance flights would have a higher proportion of time-sensitive travelers (e.g., business persons) than longer-distance flights. Competitive pressures thus force airlines to offer more frequent flight schedules in short-distance markets, which leads to lower load factors. The reason for relationship (2) has to do with the economies of density discussed earlier. Smaller markets have lower traffic volumes, and airlines will generally operate smaller aircraft at lower load factors, increasing costs per RPM and yields.

The S-Curve Effect

The S-curve effect refers to a phenomenon whereby a dominant carrier's market share (share of RPM) in a particular origin-destination market tends to be greater than the carrier's share of capacity (share of ASM). Thus, for example, if United offers 70% of the seats flown between Denver and San Francisco, and Continental flies the remaining 30%, then the S-curve effect says that United's share of the actual traffic in this market will be greater than 70% and Continental's will be less than 30%. This translates into an S-shaped relationship between share of capacity and market share, as shown in Figure 2. The S-curve effect stems from two sources. First, an airline with a greater share of capacity in a market is likely to have greater visibility in that market, so passengers are likely to contact it first.
Second, an airline with a greater capacity share is likely to have more frequent – and thus more convenient – departures. This, too, works to boost its share of the actual traffic. The S-curve phenomenon makes capacity an important competitive weapon in the rivalry among airlines. An airline with the financial resources to purchase aircraft and airport gates to achieve a dominant capacity share on key routes is likely to win the fight for market share. This suggests that, in general, it will be very difficult for a small carrier to challenge a dominant carrier at a hub airport, unless the small carrier can achieve significant cost advantages unrelated to scale. The history of competition in the post-deregulation airline industry seems to bear this out. Airline Economics. (2016, Oct 10).
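One common way to model the S-curve described above is a share-attraction function in which traffic share is an S-shaped function of capacity share. The exponent used here is an illustrative assumption, not an empirically estimated value:

```python
def market_share(capacity_share, alpha=1.5):
    """Share-attraction form of the S-curve. With alpha > 1 (an illustrative
    assumption), a carrier's traffic share exceeds its capacity share
    whenever the capacity share is above 50%, and falls short below 50%."""
    s = capacity_share ** alpha
    return s / (s + (1.0 - capacity_share) ** alpha)

# United with 70% of seats captures more than 70% of traffic;
# Continental with 30% of seats captures less than 30%.
united = market_share(0.70)
continental = market_share(0.30)  # the two shares sum to 1
```

Plotting `market_share` against `capacity_share` reproduces the S-shaped curve of Figure 2, pivoting around the 50/50 point.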

Thursday, February 13, 2020

Essay Example | Topics and Well Written Essays - 3500 words

The seven wastes include motion; transportation; waiting time; overproduction; inventory; processing time; and defects. Other common wastes are energy; untapped human resources; and by-products. Motion and transport are related to layout, organisation, and engineering. Waste arises because motion and transport do not always result in useful work. In the current case study, motion and transportation include the rearrangement of temporary storage areas before and after manufacture of product components, and movement associated with searching for fixtures, jigs, tools, equipment, materials, etc. Movement creates opportunities for product damage during handling; poor space utilisation (large distances between stages, or large gangways or storage areas); higher labour cost from low productivity; and large batches waiting for transport, which mean large inventories, long lead times, and low responsiveness. Waiting time, overproduction and inventory are related to scheduling; setups; communication; quality; skills; reward systems; breakdowns; and layout. Waiting time could be caused by material, machine, or labour. Lack of material could be caused by scrap, breakdown, poor scheduling, or a poor supplier. Machine unavailability could be caused by breakdown, setups, large batches, or unavailability of tools, jigs, fixtures, etc. Skills shortage, absenteeism, or operating or supervising more than one machine could cause labour unavailability. Overproduction could be a case of too much or too early. Too much is when there is more production than needed. This could be caused by long setups, improper scheduling for EOQ, or inadequate design of processes. Too early means production earlier than required. This could be caused by lack of machine capability, subcontracting of operations, long in-process delays, or long lead times. Overproduction could also be caused by unbalanced material flow, cushion storage, safety storage, and lot delays.
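The passage cites improper scheduling for EOQ as one cause of overproduction. As a reminder of what that calculation involves, here is the classic economic order quantity formula, sqrt(2DS/H), with hypothetical demand and cost figures:

```python
from math import sqrt

def eoq(annual_demand, setup_cost, holding_cost):
    """Economic order quantity: sqrt(2DS/H), where D is annual demand,
    S is the setup (ordering) cost per batch, and H is the annual
    holding cost per unit. All figures below are hypothetical."""
    return sqrt(2 * annual_demand * setup_cost / holding_cost)

batch = eoq(annual_demand=1200, setup_cost=50, holding_cost=6)  # ~141 units
```

Scheduling batches much larger than this balance point trades lower setup cost for exactly the inventory and overproduction wastes the passage describes.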

Saturday, February 1, 2020

Buddhism and Hinduism in Comparison Essay Example | Topics and Well Written Essays - 1000 words

It highly reflects the prevailing theme at the time, in which Japan took pride in its genius through the fields of religion, philosophy, art, and rich literature. While the fundamental color of brilliance is perceived through the coating, designating in equilibrium the simplicity of brightness through wood carving, the Hindu sculpture, depicting revered gods such as Vishnu, Shiva, and Krishna, has on the contrary been worked in stone or hard rock to enhance the proper locus of its aspects with light. A rare sculpture of ‘Vishnu’ seated on a Naga coil under the five hoods of the serpent deity is made of stone. Found at the Nithyakalyanaswamy temple at Thiruvidanthai, the statue is said to date from the Vijayanagara period during the 17th century. Contrary to the standing position of the Bodhisattva of Compassion, Vishnu is depicted sitting in a relaxed posture to signify an aura of meditative heights achieved. This ‘Vishnu on Naga Coil’ is well-adorned, as opposed to the plain appearance of the Buddhist sculpture. Being considered the ‘Supreme god’ in the Vaishnavite tradition of Hinduism, the symbolic statue reflects him as the all-pervading essence of all beings, and this is a strong ground for sculpting Vishnu in a manner that exhibits details rich in adornments, defined with perfect human features except for bearing four arms. Not having any earth-related object by him suggests how Vishnu’s state is severely distant from men, and this further indicates no sign of humility, unlike in the case of the bodhisattva. Though his divinity radiates opulence of things by which he could sustain and govern the universe as the ‘Preserver’, the sculpture seemingly lacks the essence of purity.
In one of the modifications made to the structure of Vishnu with the same serpent deity, the presence of Lakshmi, his consort, heavily coated with gold, altogether gives a manifestation of extreme wealth and power. Its lavish creation of curves and lines inlaid on the golden stone medium characterizes immortal possession of authority and matter, transcending the significant idea behind misery or suffering, a major part of the Buddhist principle of attaining pure divinity. Such design with Vishnu and similar Hindu gods mirrors the equivalent aspect in Indian culture of observing colorful festive traditions abounding in food, clothing, ceremonies, and other items of various kinds. On the other hand, though the ‘Eleven-Headed Bodhisattva of Compassion’ primarily consists of wood, the sophisticated carvings and the countenance, which appears to possess a blend of gold and bronze external coating aimed to bring about a wholly essential color, suggest subtle prominence while depicting the purpose of identifying a bodhisattva by nature. The smooth lines of the sculpture were fashioned such that the strokes exude a character with humble gesture, one in which no trace of rigidity can be detected. In the absence of conspicuous edges in its shape all throughout, the eleven-headed Kannon may be readily claimed to have been intentionally brought to the gentlest of forms.

Friday, January 24, 2020

Technology Paper :: essays research papers

Syndication of the Web 'Syndication involves the sale of the same good to many customers, who then integrate it with other offerings and redistribute it' (Werbach, 2000). E-Trade is one such organization. They distinguish themselves from their competition by the way they package and price the information they sell, not through the information itself. Syndication is a very different way of structuring business, and very different from the way business has been done in the past. It requires small and large businesses to rethink their tactical and strategic plans, thus reshaping organizations. It will also change the way they interact with customers and partner with other entities. In addition, businesses will be forced to develop new models for collecting revenues and earning profits. Syndication has traditionally been rare in the business world for three reasons. First, syndication works only with information goods. Second, syndication requires modularity. Third, syndication requires many independent distribution points (Werbach, 2000). Within a syndication network, a company can play one, two, or all three of the available roles simultaneously. The three roles are originators, syndicators, and distributors. The originators create the original product or content. The syndicators package the content for distribution to the distributors, oftentimes integrating it with product or content from other originators. And last but not least, the distributors deliver the content to customers (Werbach, 2000). Within the structure of syndication there are syndicators and distributors. Syndicators save the distributors from having to find all of the different originators in an effort to gather all of the content that they want to package and eventually put out for distribution.
The syndicators are able to collect standard formats and contracts from a variety of sources and make them readily available. This part of the process frees the distributors from having to find and negotiate with dozens or even hundreds of different originators. This allows syndicators to act as information collectors, collecting and packaging digital information in a way that adds value to it. In the physical world, it is very difficult to find a syndicator that works alone and is not associated with the entertainment industry.

Wednesday, January 15, 2020

Merck Case

Pharmaceuticals: Merck Sustaining Long-term Advantage Through Information Technology Hiroshi Amari Working Paper No. 161 Working Paper Series Center on Japanese Economy and Business Columbia Business School December 1998 Columbia-Yale Project: Use of Software to Achieve Competitive Advantage PHARMACEUTICALS: MERCK Sustaining Long-term Advantage Through Information Technology Prepared by Hiroshi Amari Research Associate, Yale University William V. Rapp and Hugh T. Patrick Co-principal Project Investigators Center for International and Area Studies Yale University New Haven, CT 06520 203-432-9395 (Fax: 5963) e-mail: william.[email protected] edu Revised December 1998 Table of Contents 1. Introduction: Objective of this Study 2. The Pharmaceutical Industry in a Global Context 3. Product R&D and Clinical Trials 4. Manufacturing and Process R&D 5. Technological Factors: Structure-Based Drug ("Rational Drug") Design 6. Merck 7. Managerial Decision Making 8. Decision Making on IT Projects 9. Joint Ventures 10. Information Technology and Organization 11. Appendix I - Summary Answers to Questions for Merck - Strategy & Operations 12. Appendix II - INDUSTRY AND FIRM BUSINESS DATA 13. Bibliography Introduction: Objective of this Study This case study of Merck was completed under a three-year research grant from the Sloan Foundation. The project's purpose is to examine in a series of case studies how U.S. and Japanese firms who are recognized leaders in using information technology to achieve long-term sustainable advantage have organized and managed this process. While each case is complete in itself, each is part of this larger study. This pharmaceutical industry case together with other cases supports an initial research hypothesis that leading software users in both the U.S.
and Japan are very sophisticated in the ways they have integrated software into their management strategies and use it to institutionalize organizational strengths and capture tacit knowledge on an iterative basis. In Japan this strategy has involved heavy reliance on customized and semi-customized software (Rapp 1995) but is changing towards a more selective use of packaged software managed via customized systems. In turn, U.S. counterparts, such as Merck, who have often relied more on packaged software, are doing more customization, especially for systems needed to integrate software packages into something more closely linked with their business strategies, markets, and organizational structure. Thus, coming from different directions, there appears to be some convergence in approach by these leading software users. The cases thus confirm what some other analysts have hypothesized: a coherent business strategy is a necessary condition for a successful information technology strategy (Wold and Shriver 1993). These strategic links for Merck are presented in the following case. Industries and firms examined are food retailing (Ito-Yokado and H.E. Butt), semiconductors (NEC and AMD), pharmaceuticals (Takeda and Merck), retail banking (Sanwa and Citibank), investment banking (Nomura and Credit Suisse First Boston), life insurance (Meiji and USAA), autos (Toyota), steel (mini-mills and integrated mills, Nippon Steel, Tokyo Steel and Nucor), and apparel retailing (Wal-Mart). The case writer and the research team wish to express their appreciation to the Alfred P. Sloan Foundation for making this work possible and to the Sloan industry centers for their invaluable assistance. They especially appreciate the time and guidance given by the center for research on pharmaceuticals at MIT as well as Mr. Sato at Takeda. This refers to cases for which interviews have been completed. See footnote 3.
These and other summary results are presented in another Center on Japanese Economy and Business working paper: William V. Rapp, "Gaining and Sustaining Long-term Advantage Through Information Technology: The Emergence of Controlled Production," December 1998. Yet this case along with the other cases also illustrates that the implementation and design of each company's software and software strategy is unique to its competitive situation, industry and strategic objectives. These factors influence how they choose between packaged and customized software options for achieving specific goals and how they measure their success. Indeed, as part of their strategic integration, Merck and the other leading software users interviewed have linked their software strategies with their overall management goals through clear mission statements that explicitly note the importance of information technology to firm success. They have coupled this with active CIO (Chief Information Officer) and IT (information technology) support group participation in the firm's business and decision making structure. Thus for firms like Merck the totally independent MIS (Management Information Systems) department is a thing of the past. This may be one reason why outsourcing for them has not been a real option, though their successful business performance is not based solely on software. Rather, as shall be described below, software is an integral element of their overall management strategy and plays a key role in serving corporate goals such as enhancing productivity, improving inventory management or strengthening customer relations. These systems thus must be coupled with an appropriate approach to manufacturing, R&D, and marketing reflecting Merck's clear understanding of their business, their industry and their firm's competitive strengths within this context.
This clear business vision has enabled them to select, develop and use the software they require for each business function and to integrate these into a total support system for their operations to achieve corporate objectives. Since this vision impacts other corporate decisions, they have good human resource and financial characteristics too (Appendix I & II). Yet Merck does share some common themes with other leading software users, such as the creation of large proprietary interactive databases that promote automatic feedback between various stages and/or players in the production, delivery and consumption process. Their ability to use IT to reduce inventories and improve control of the production process is also common to other leading software users. They are also able organizationally and competitively to build beneficial feedback cycles or loops that increase productivity in areas as different as R&D, design and manufacturing while reducing cycle times and defects or integrating production and delivery. Improved cycle times reduce costs but increase the reliability of forecasts since they need to cover a shorter period. Customer satisfaction and lower inventories are improved through on-time delivery. Thus, software inputs are critical factors in Merck's and other leading users' overall business strategies, with strong positive competitive implications for doing it successfully and potentially negative implications for competitors.
An important consideration in this respect is the possible emergence of a new strategic manufacturing paradigm in which Merck is probably a leading participant. In the same way that mass production dramatically improved on craft production through the economies of large scale plants that produced and used standardized parts, and lean production improved on mass production by making the production line more continuous, reducing inventories and tying production more closely to actual demand, what might be called "controlled" production seems to significantly improve productivity through monitoring, controlling and linking every aspect of producing and delivering a product or service, including after-sales service and repair. Such controlled production is only possible by actively using information technology and software systems to continuously provide the monitoring and control function to what had previously been a rather automatic system response to changes in expected or actual consumer demand. This may be why their skillful use of information technology is seen by themselves and industry analysts as important to their business success, but only when it is integrated with the business from both an operation and organization standpoint reflecting their overall business strategy and clarity of competitive vision. Therefore at Merck the software and systems development people are part of the decision making structure, while the system itself is an integral part of organizing, delivering and supporting its drug pipeline from R&D through to sales post-FDA approval. This sequence is particularly critical in pharmaceuticals where even after clinical trials there is a continuous need to monitor potential side effects.
Therefore Seagate Technology may be correct for Merck too when they state in their 1997 Annual Report: "We are experiencing a new industrial revolution, one more powerful than any before it. In this emerging digital world of the Third Millennium, the new currency will be information. How we harness it will mean the difference between success and failure, between having competitive advantage and being an also-ran." In Merck's case, as with the other leading software users examined, the key to using software successfully is to develop a mix of packaged and customized software that supports their business strategies and differentiates them from competitors. However, they have not tried to adapt their organizational structure to the software. Given this perspective, functional and market gains have justified the additional expense incurred through customization, including the related costs of integrating customized and packaged software into a single information system. They do this by assessing the possible business uses of software organizationally and operationally, and especially its role in enhancing their core competencies. While they will use systems used by competitors if there is no business advantage to developing their own, they reject the view that information systems are generic products best developed by outside vendors who can achieve low cost through economies of scale and who can more easily afford to invest in the latest technologies. In undertaking this and the other case studies, the project team sought to answer certain key questions while still recognizing firm, country and industry differences. These have been explained in the summary paper referenced in footnote 3. We have set them forth in Appendix I where Merck's profile is presented based on our interviews and other research. Readers who wish to assess for themselves the way Merck's strategies and approaches to using information technology address these issues may wish to review Appendix I prior to reading the case. For others it may be a useful summary. Merck and the other cases have been developed using a common methodology that examines cross-national pairs of firms in key industries. In principle, each pair of case studies focuses on a Japanese and an American firm in an industry where software is a significant and successful input into competitive performance. The firms examined are ones recognized by the Sloan industry centers and by the industry as ones using software successfully. To develop the studies, we combined analysis of existing research results with questionnaires and direct interviews. Further, to relate these materials to previous work as well as the expertise located in each industry center, we held working meetings with each center and coupled new questionnaires with the materials used in the previous study to either update or obtain a questionnaire similar to the one used in the 1993-95 research (Rapp 1995). This method enabled us to relate each candidate and industry to earlier results. We also worked with the industry centers to develop a set of questions that specifically relate to a firm's business strategy and software's role within that. Some questions address issues that appear relatively general across industries, such as inventory control. Others, such as managing the drug pipeline, are more specific to a particular industry.
The focus has been to establish the firm's perception of its industry and its competitive position as well as its advantage in developing and using a software strategy. The team also contacted customers, competitors, and industry analysts to determine whether competitive benefits or impacts perceived by the firm were recognized outside the organization. These sources provided additional data on measures of competitiveness as well as industry strategies and structure. The case studies are thus based on extensive interviews by the project team on software's use and integration into management strategies to improve competitiveness in specific industries, augmenting existing data on industry dynamics, firm organizational structure and management strategy collected from the Sloan industry centers. In addition, we gathered data from outside sources and firms or organizations with which we worked in the earlier project. Finally, the U.S. and Japanese companies in each industry that were selected on the basis of being perceived as successfully using software in a key role in their competitive strategies in fact saw their use of software in this exact manner, while these competitive benefits were generally confirmed after further research. The questions are broken into the following categories: General Management and Corporate Strategy, Industry Related Issues, Competition, Country Related Issues, IT Strategy, IT Operations, Human Resources and Organization, Various Metrics such as Inventory Control, Cycle Times and Cost Reduction, and finally some Conclusions and Results. They cover a range of issues from direct use of software to achieve competitive advantage, to corporate strategy, to criteria for selecting software, to industry economics, to measures of success, to organizational integration, to beneficial loops, to training and institutional dynamics, and finally to inter-industry comparisons.
The Pharmaceutical Industry in a Global Context In advanced countries that represent Merck's primary market, the pharmaceutical industry is an exceptionally research-intensive industry where many firms are large multinationals (MNCs). It is also heavily regulated for both local producers and MNCs. Regulations work as both constraints and performance boosters since drugs are used with other medical and healthcare services. Therefore, healthcare expenditures are divided among many industries and providers, of which pharmaceuticals are only one. All parties involved are interested in influencing the regulatory environment and in participating in the growth in healthcare services. This means understanding the industry requires appreciating its political economic context. In this regard, healthcare providers in rich nations are currently under pressure to control costs due to aging populations. Regulators who have the authority to change the demand structure through laws and regulations are considering various measures to reduce costs, such as generic drug substitution, which may mean lower returns for discovering and developing drugs. Still, if drugs are more effective at reducing healthcare costs compared to other treatments, pharmaceutical companies can benefit. Since R&D is at the heart of competition, each drug company must respond to these cost containment pressures cautiously and strategically in competing for healthcare expenditures. Another important aspect of this industry is technological change arising from the convergence of the life and biological sciences. Many disciplines now work together to uncover the mechanisms that lie behind our bodies and various diseases. Examples are molecular biology, cell biology, biophysics, genetics, evolutionary biology, and bioinformatics. As scientists see life from these new chemical and physical viewpoints, the ability to represent, process and organize the massive data based on these theories becomes critical.
Because computers are very flexible scientific instruments (Rosenberg 1994), progress in information technology and computer science has broadened scientific frontiers for the life and biological sciences. These advances have opened new doors to attack more complex diseases, including some chronic diseases of old age. These therapeutic areas present opportunities for pharmaceutical companies since they address demographic and technical changes in advanced countries. Still, to take advantage of these opportunities requires information technology capabilities. Historically, the drug industry has been relatively stable, and the big players have remained unchanged for years. This has been due to various entry barriers such as R&D costs, advertising expense, and strong expertise in managing clinical trials. It is difficult and expensive for a new company to acquire this combination of skills quickly. However, there are signs the industry and required mix of skills may be changing. There have been several cross-national mergers, especially between U.S. and European companies. In addition, new biotechnology companies are very good at basic research, which may force pharmaceutical R&D to transform itself. For example, no single company, even among the new mega-companies, is large enough to cover all new areas of expertise and therapeutic initiatives. Thus, many competitors have had to form strategic alliances to learn or access new technologies and to capture new markets. Conversely, a stand-alone company can have a lot to lose. The challenge facing large pharmaceutical companies is how fast and how effectively they can move to foster both technological innovation and cost containment without exposing themselves to too much risk. The pharmaceutical industry in all of Merck's major markets reflects these cost containment pressures, the need to harmonize expensive and time-consuming clinical trials, and the impact of extensive regulations.
Information technology has had its impacts too. For example, to respond to these challenges Merck is using more management techniques based on consensus decision making among top functional managers. This requires better communication support using e-mail and groupware combined with face-to-face communication. This is part of an industry trend towards greater parallel decision making in R&D and less sequential decision making, where A must first concur on a project before moving to B, etc. Now all elements of the firm evaluate the project simultaneously at each stage. In this manner, Merck has significantly reduced coordination costs while centralizing and speeding the overall decision making process. Additionally, first-tier firms have had to follow a trend in R&D strategies that increasingly uses information technologies. Exchange of data and ideas across national borders has become relatively easy, and contracts may specify access to another company's database. Because many companies share similar R&D instruments and methods, one company's instruments may be compatible with other companies'. Indeed, the trend towards greater use of Web-based technology in R&D and other operations may change our notion of a firm and its boundaries. Firms may eventually be characterized by knowledge-creating capabilities (Nonaka and Takeuchi 1995). Having more ways to communicate with other companies makes frequent communication with greater nuance possible. This supports the trend towards more strategic alliances, unless overtaken by the creation of larger firms through continued mergers. This is also partially due to the nature of the industry, which is part of the fine chemical industry, where changes in technologies are rapid and often discontinuous.
It therefore requires different management skills from other technology-based industries, especially as the knowledge required for innovation tends to be more specialized, thus demanding less coordination than assembly industries. Transferring mass production know-how to R&D is also limited. Still, the U.S. and European industries have been undergoing massive reorganization to achieve economies of scope and scale in R&D and marketing, where firms are taking advantage of the fact that the U.S. industry is much less regulated than most foreign industries (Bogner and Thomas 1996). The U.S. companies grew after World War II due to a huge home market combined with the global market for antibiotics; this was before British firms began to recapture market share. At that time, European firms did not have the resources to sell drugs directly to U.S. doctors. The European recovery period gave U.S. firms enough time to take advantage of antibiotics. Then, when the U.S. market became saturated, U.S. firms expanded into global markets in the early 1960s. This forced U.S. firms to diversify their R&D as well. At the same time, 1962 amendments to the Food, Drug and Cosmetic Act increased the rigor of drug regulation, creating an entry barrier to industry R&D that favored large established firms (Bogner and Thomas 1996). The U.S. effectively tightened its regulations after its industry had acquired sufficient R&D skills and resources. This timing seems to account for today's industry success. Another factor is that, unlike the European industry, U.S. firms had few incentives to integrate vertically. During the War the military distributed antibiotics. Therefore, the U.S. firms were generally bulk chemical producers such as Merck and Pfizer or sellers of branded drugs such as Abbott and Upjohn.
At the end of the War, only a few firms such as Squibb were fully integrated. However, as promotion and other downstream functions became more critical, controlling functions such as distribution became a strategic objective. To accomplish this they acquired other firms (Merck acquired Sharp & Dohme and Pfizer acquired Roerig), developing expansion via merger and acquisition as a business strategy and core competency. This helped lay the foundation for subsequent industry consolidation. Today, American healthcare is based on the belief that while making progress in science is the best way to solve medical problems, cost containment is also important. As a result, while American healthcare is the most expensive in the world, it is also not available to everyone and is the most subject to cost scrutiny. Indeed, since drugs are just one way to improve health, consumers should want to remain healthy and choose cost-effective means to do this. However, the reality is that insurance systems covering different services give incentives and disincentives for particular care (Schweitzer 1997). Thus, coordinated adjustment of prices for healthcare is necessary to get markets for healthcare products to work better. In the U.S., this has led to a public policy push for HMOs. These healthcare purchasers have in turn set the reward schemes available to healthcare providers such as pharmaceutical companies so as to reduce transaction costs (Ikegami and Campbell 1996) and promote innovation. These developments and trends are putting more pressure on major firms to put more resources into R&D, to focus more critically on ethical drug development for the global market, and to be more careful in gathering information on clinical trials and side effects. The most important market for Merck in this regard is the U.S., where the NIH has pursued a unified approach. This is because the NIH (the National Institutes of Health) has actively supported basic life science research in U.
S. universities, especially after World War II. The NSF (National Science Foundation) also encouraged collaboration between academia and industry with partial funding by the government. Other federal and state funding has been important to the scientific community as well, especially in biotechnology. In biotechnology, the funding of basic research has led to a complex pattern of university-industry interaction that includes gene patenting and the immediate publishing of results (Rabinow 1996). U.S. drug companies are of course profit motivated but are regulated by the FDA (Food and Drug Administration), which is rigorous about its drug approvals, demanding clear scientific evidence in clinical research, as its operation is basically science oriented.

Product R&D and Clinical Trials

Still, despite this R&D support, industry economics are driven by pharmaceutical R&D's very lengthy process, composed of discovering, developing and bringing to market new ethical drugs, with the latter heavily determined by the drug approval process in major markets such as the U.S., Europe and Japan. (Ethical drugs are biological and medicinal chemicals advertised and promoted primarily to the medical, pharmacy, and allied professions; they include products available only by prescription as well as some over-the-counter drugs (Pharmaceutical Manufacturers Association 1970-1991).) These new therapeutic ethical products fall into four broad categories (U.S. Congress, OTA 1993): one, new chemical entities (NCEs), or new therapeutic entities (NTEs) – new therapeutic molecular compounds never before used or tested in humans; two, drug delivery mechanisms – new approaches to delivering therapeutic agents at the desired dose to the desired part of the body; three,
next stage products – new combinations, formulations, dosing forms, or dosing strengths of existing compounds that must be tested in humans before market introduction; four, generic products – copies of drugs not protected by patents or other exclusive marketing rights. From the viewpoint of major pharmaceutical firms such as Merck, NCEs are the most important, since the R&D of innovative drugs drives industry success. Because this is a risky and very expensive process, understanding a company's R&D and drug approval process is critical to understanding the firm's strategy and competitiveness both domestically and globally. Statistics indicate that only about 1 in 60,000 compounds synthesized by laboratories can be regarded as "highly successful" (U.S. Congress, OTA 1993). Thus, it is very important to stop the R&D process whenever one recognizes success is not likely. Chemists and biologists used to decide which drugs to pursue, but R&D is now more systematic and is a collective company decision, since it can involve expenditures of $250 to $350 million prior to market launch; hence the need for more parallel decision making. Key factors in the decision making process are expected costs and returns, the behavior of competitors, liability concerns, and possible future government policy changes (Schweitzer 1997). Therefore, stage reviews during drug R&D are common, and past experiences in development, manufacturing, regulatory approvals, and marketing can provide ample guidance. NCEs are discovered either through screening existing compounds or designing new molecules. Once synthesized, a compound goes through a rigorous testing process. Its pharmacological activity, therapeutic promise, and toxicity are tested using isolated cell cultures and animals as well as computer models. It is then modified into related compounds to optimize its pharmacological activity while reducing undesirable biological properties (U.S. Congress, OTA 1993).
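The stage-review logic above — stop whenever the probability-weighted payoff no longer justifies the remaining cost — can be sketched as a one-line expected-value rule. This is an illustrative sketch only; the function name and all dollar figures are hypothetical, not Merck's actual decision model.

```python
def continue_development(p_success, expected_return, remaining_cost):
    """Go/no-go stage review: continue only if the probability-weighted
    payoff of finishing development exceeds the cost still to be spent.
    All inputs are hypothetical planning estimates."""
    expected_value = p_success * expected_return - remaining_cost
    return expected_value > 0

# A candidate with a 30% chance of approval, a $900M expected return,
# and $200M of trials still to fund is worth continuing (EV = +$70M);
# at a 5% chance of approval the same candidate should be terminated.
print(continue_development(0.30, 900e6, 200e6))  # True
print(continue_development(0.05, 900e6, 200e6))  # False
```

In practice the review would also weigh competitor behavior, liability, and policy risk, as the text notes; the point here is only the structure of the cutoff.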
Once preclinical studies are completed and the NCE has been proven safe in animals, the drug sponsor applies for Investigational New Drug (IND) status. If it receives approval, it starts Phase I clinical trials to establish the tolerance of healthy human subjects at different doses and to study pharmacological effects on humans at anticipated dosage levels. It also studies the drug's absorption, distribution, metabolism, and excretion patterns. This stage requires careful supervision since one does not yet know whether the drug is safe in humans. During Phase II clinical trials a relatively small number of patients participate in controlled trials of the compound's potential usefulness and short-term risks. Phase III trials gather precise information on the drug's effectiveness for specific indications and determine whether it produces a broader range of adverse effects than those exhibited in the smaller Phase I and II trials. Phase III trials can involve several hundred to several thousand subjects and are extremely expensive. Stage reviews occur before and during each phase, and drug development may be terminated at any point in the pipeline if the risk of failure and the added cost needed to prove effectiveness outweigh the weighted probability of success. In the U.S. there is a data and safety monitoring board. This group has access to "unblinded data" throughout the conduct of a trial but does not let anyone else know what the data show until it is necessary. For example, it will not divulge the efficacy data until the trial reaches a point where it seems appropriate to recommend stopping it because the null hypothesis concerning efficacy has been accepted or rejected. The FDA will usually insist on the drug proving efficacy with respect to ameliorating a disease before giving approval. If clinical trials are successful, the sponsor seeks FDA marketing approval by submitting a New Drug Application (NDA).
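Because the phases are sequential, a compound's overall chance of reaching an NDA is roughly the product of its per-phase survival rates, which is why attrition compounds so quickly. A minimal sketch, with invented probabilities rather than measured industry rates:

```python
def overall_success_probability(phase_probs):
    """Probability that a compound entering the pipeline survives every
    stage, assuming independent per-stage success rates. The rates
    passed in below are hypothetical illustrations."""
    p = 1.0
    for phase_p in phase_probs:
        p *= phase_p
    return p

# Illustrative per-stage survival rates for Phases I-III plus NDA review:
p = overall_success_probability([0.7, 0.4, 0.6, 0.9])
# Even with generous per-stage odds, fewer than 1 in 6 candidates survive.
```

The independence assumption is a simplification; stage reviews deliberately correlate decisions with accumulating evidence.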
If approved, the drug can be marketed immediately, though the FDA often requires some amendments before marketing can proceed (Schweitzer 1997). However, successful drug development and sales require not only approval of therapeutic value and validity but also that the manufacturing process meet stringent "best-practice" standards. To meet U.S. regulations, Phase IV trials are required. Manufacturers selling drugs must notify the FDA periodically about the performance of their products. This surveillance is designed to detect uncommon, yet serious, adverse reactions typically not revealed during premarket testing. This postapproval process is especially important when Phase III trials were completed under smaller fast-track reviews. These additional studies usually include use by children or by those using multiple drugs, where potential interactions can be important (Schweitzer 1997). Furthermore, because drug development costs are so high relative to production costs, patent protection is another key aspect of a company's management strategy. Under U.S. law, one must apply for a patent within one year of developing an NCE or the innovation enters the public domain. Therefore, patenting usually occurs early in the development cycle or prior to filing the NCE. But as this begins the patent life, shortening the approval period extends a drug's effective revenue life under patent. This makes managing clinical trials and the approval process an important strategic variable. Although creating a drug pipeline through various stages of development is relatively standardized, it is changing as companies use different methods to reduce the time and related costs of new drug development. Companies are constantly pressuring the authorities to reduce NDA review times. As a consequence, the FDA did introduce an accelerated approval process for new drugs in oncology, HIV (AIDS) and other life-threatening illnesses.
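The arithmetic behind this incentive is simple: the patent clock starts near the beginning of development, so every year shaved off development or regulatory review adds a year of protected revenue at the end. A hedged sketch with hypothetical durations (not actual statutory terms for any specific drug):

```python
def effective_patent_life(statutory_term_years, development_years, review_years):
    """Years of patent-protected sales remaining after development and
    regulatory review consume the front of the patent term.
    All durations below are hypothetical."""
    return max(0, statutory_term_years - development_years - review_years)

# With a 20-year term, 9 years of development, and a 3-year review,
# 8 years of protected revenue remain; cutting the review to 1 year
# adds 2 years of protected sales.
print(effective_patent_life(20, 9, 3))  # 8
print(effective_patent_life(20, 9, 1))  # 10
```

This is why the text calls managing trials and the approval process a strategic variable: review time converts one-for-one into revenue life.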
A familiar feature of this new fast-track review is the use of surrogate end points, or proxies for clinical end points, which are measured by laboratory values but lack supporting clinical outcomes data. Accelerated approval speeds new drugs to market, saving companies tens of millions of dollars in negative cash flow. However, it does not generate the clinical values that insurers and managed care organizations demand. Countering this situation is thus the trend among drug firms to increase the complexity of their analyses during clinical trials. Companies have begun to use cost-effectiveness analysis in their evaluation of new drugs, both in assessing competing product development investment alternatives and by integrating cost-effectiveness analysis into their clinical trials. They also try to capture quality of life measures, such as how patients perceive their lives while using the new drug. Companies vary their analysis by country (Rettig 1997), since measures of effectiveness shift according to clinical practice, accessibility to doctors, and what different cultures value as important. There are no universal measures of the quality of life. At present, the components measured depend largely on the objectives of each researcher, but some companies are trying to introduce more systematic measures. Nevertheless, no matter what components are chosen for these studies, capturing, storing and using the data requires sophisticated software and database management techniques, which must be correlated with various families of molecules. Also, to avoid the moral hazard of focusing on the weaknesses in a competitor's drug or molecule, some analysts argue companies should examine all domains and their components (Spilker 1996) and move towards agreed performance standards. Furthermore, quality of life measures should only be used when they are of practical use to doctors in treating patients (Levine 1996).
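A common summary statistic in the cost-effectiveness analyses described above is the incremental cost-effectiveness ratio (ICER): the extra cost of a new drug divided by the extra health benefit it delivers relative to a comparator. The figures below are invented purely for illustration:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of health effect (e.g. per quality-adjusted life year).
    Inputs here are hypothetical, not trial data."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical: a new drug costing $12,000 yielding 4.0 QALYs versus a
# comparator costing $7,000 yielding 3.5 QALYs gives $10,000 per QALY gained.
print(icer(12000, 4.0, 7000, 3.5))  # 10000.0
```

Payers then compare such ratios against a willingness-to-pay threshold, which is part of the clinical value that accelerated approval alone does not supply.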
Such judgments should be sensitive and informed and should cover criteria related and important to a broad spectrum of patients, while balancing measures that can be easily gathered against those that are more complex due to multiple treatments. These trends make clinical trials and data gathering complex and expensive and put a premium on a firm's ability to manage the process efficiently, including creating and using large patient and treatment databases.

Manufacturing and Process R&D

The research process differs from production. Yet both are important, particularly the firm's knowledge of scale-up. Scale-up is difficult because production requires uniformity at every stage. Keeping the average chemical make-up constant is not enough. Careful scale-up is essential to avoid contamination. Variations from the mean in commercial production must be very small. This requires constant control of variables such as the preparation of raw materials, solvents, reaction conditions, and yields. Often, experience will help achieve purer output in the intermediate processes. This better output alleviates problems in later processes. Thus, there is a learning curve in process R&D which starts at the laboratory. An important distinction is between continuous process and batch process. In the continuous process, raw materials and sub-raw materials go into a flow process that produces output continuously. The continuous process is more difficult because many parameters and conditions have to be kept constant. This requires a good understanding of both optimizing the chemical process and maintaining safeguards against abnormal conditions.
However, continuous processes are less dangerous and require fewer people to control on site than batch processing, where the chemicals are produced in batches, put in pill form and then stored for future distribution and sale (Takeda 1992). The following compares initial process R&D, once a compound is discovered, with commercial manufacturing for a representative chemical entity (Pisano 1996). [Table omitted: comparison of the research process and commercial production for a representative chemical entity.] Process R&D in chemical pharmaceuticals involves three stages: (1) process research, where basic process chemistry (the synthetic route) is explored and chosen; (2) pilot development, where the process is run and refined in an intermediate-scale pilot plant; and (3) technology transfer and startup, where the process is run at a commercial manufacturing site (Pisano 1997). Pisano argues that the scientific base of chemistry is more mature than that of biotechnology, and this difference accounts for the more extensive use of computer simulations for drugs made by chemical synthesis than for biotechnology-based drugs. Codifying the knowledge of chemistry and chemical engineering in software has a higher explanatory power than doing so in biotechnology. In chemistry, many scientific laws are available for process variables such as pressure, volume, and temperature. Computer models can simulate these in response to given parameters to predict cost, throughput and yield (Pisano 1997). By contrast, biotechnology has aspects that resemble an art dependent on an operator's skill more than a science in which the proper formulation suffices. This is particularly true for large-scale biotechnology processes (Pisano 1997). Simulation is thus less reliably extrapolated to commercial production.
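The Arrhenius equation is one example of the well-codified chemical relationships that make such simulation tractable in chemistry: it predicts how a reaction rate constant responds to temperature from just two parameters. The parameter values here are purely illustrative, not taken from any real process:

```python
import math

def arrhenius_rate(a_factor, activation_energy_j_mol, temp_kelvin):
    """Arrhenius law k = A * exp(-Ea / (R*T)), the kind of codified
    relationship that lets process engineers predict how a reaction
    step responds to temperature before running the pilot plant.
    Parameter values used below are hypothetical."""
    R = 8.314  # gas constant, J/(mol*K)
    return a_factor * math.exp(-activation_energy_j_mol / (R * temp_kelvin))

# Raising temperature from 300 K to 320 K speeds this hypothetical step:
k_300 = arrhenius_rate(1e7, 60_000, 300.0)
k_320 = arrhenius_rate(1e7, 60_000, 320.0)
```

Biotechnology lacks compact laws of this kind for many of its critical variables, which is Pisano's point about why simulation extrapolates less reliably there.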
An additional factor is the importance of purification after large-scale production in bioreactors for biotechnology-based drugs. It is not rare at this stage of extraction and purification that commercial application becomes impossible, even though the scale-up is successful. Since avoiding contamination is the key in biotechnology-based drugs, extracting and purifying a small amount of the desired materials from a large amount of broth is critical. This process is done using filters, chromatography, and other methods specific to the organisms involved (Koide 1994).

Technological Factors

All scientific frontiers affect pharmaceutical companies. Since no company can be an expert on everything, what technology to develop in-house and what to license or subcontract have become important issues. In general, pharmaceutical companies were skeptical of new developments in small biotechnology firms. Yet the latter now provide new techniques in basic research and fermentation to the MNCs. Other pharmaceutical companies then tend to follow when competitors adopt ideas from less well known biotech companies. This is why many such companies announce platform deals with drug companies to get more financial resources and opportunities. Biotechnology-based pharmaceuticals have entered a new development stage which requires the capital, manufacturing and marketing expertise of the large companies. New drug discovery methods and biotechnology each demand skills different from earlier times. Emerging biotech companies offer new ideas and research tools. Other new technologies such as stripping out side effects, specialized drug delivery systems, and "antisense", which cancels out the disease-causing messages of faulty RNA, also come from biotechnology (Fortune 1997). These are promising areas of drug research and potential products. Further, these biotech companies develop new drugs more quickly than large firms.
Where they often have difficulty is in managing clinical trials and the approval process, an area where large firms have considerable experience and expertise, including sophisticated software for tracking the large databases and handling the new computerized application procedure. In addition, biotechnology demands skills in large-scale commercial production which smaller startups may not possess. Thus, close association with large firms is logical and efficient, and one should expect more future alliances and joint ventures, though outsourcing to organizations that will manage clinical trials is growing. Another important factor which further encourages specialization in a network of companies is the industry's heavy use of information technology. Indeed, software strategies have become an important part of the industry through their impact on R&D, drug approval (including clinical trials), and control of manufacturing. If decisions in a science-based industry are generally driven by knowledge creation capability dependent on human resources, having information sharing and access mechanisms so that complementary capabilities can be efficiently exchanged and used becomes key to successful corporate strategy, especially when that knowledge is growing and becoming increasingly diverse. There is some evidence suggesting that when innovation is dependent on trial and error, it is best done when many players try different strategies and are held responsible for the projects they choose (Columbia Engineering Conference on Quality, September 1997). If the large drug companies can successfully form principal-agent relationships with biotechnology companies doing advanced research in a particular area, in the same way that Japanese parts manufacturers have with large assemblers, there may be opportunities for major breakthroughs without the drug companies having to put such trial and error processes inside the company, where they may be less easy to manage.
If the make-or-buy decision in a science-based industry is generally driven by knowledge creation capability dependent on human resources, the basis for new product development, i.e. drug development, becomes more dependent on the nature and facility of information exchange between groups and individuals than on asset ownership. Creating information sharing and access mechanisms so that complementary capabilities can be efficiently exchanged and used then becomes the key to successful corporate strategy in knowledge-based industries, especially when that knowledge base is growing and becoming increasingly diverse, as in the ethical drug industry. Another information sharing issue related to biotech is pharmacology. Classical pharmacology models are often irrelevant for biotech-based drugs. While some proteins express their activities across other species, others can be more species specific. Neither poor nor good animal trial results need be predictive for humans. Particularly difficult problems are those related to toxicology, since some animals develop neutralizing antibodies (Harris 1997). Technical support systems are important in biotechnology as well. One is transgenic animals. They provide information on the contribution of particular genes to a disease. This is done by inserting genes that have the function of expressing the phenotype, or by interbreeding heterozygotic animals to produce "knockout animals" that suffer from inherited metabolic diseases. Transgenic animals are relevant to early phase clinical trials since they contribute useful data on dose selection and therapeutic ratios in human studies. In addition, they offer hints as to which variables are secondary. This simplifies the clinical trial design. In general, significant input in the design and running of Phase I and II trials must come from the bench scientists who built the molecule (Harris 1997).
Since clinical trials for biotech drugs lack clear guidelines, in-house communication among drug discovery, preclinical and clinical trials is important, especially due to the increased use of transgenic animals bred to examine inherited diseases. This process in Phase I/II trials can be greatly facilitated by information sharing technologies and acts as another driver towards a more integrated approach to decision making using IT.

Structure-Based Drug ("Rational Drug") Design

This is also true of structure-based drug ("rational drug") design, or molecular modeling, which is a range of computerized techniques based on theoretical chemistry methods and experimental data, used either to analyze molecules and molecular systems or to predict molecular and biological properties (Cohen 1996). Traditional methods of drug discovery consist of taking a lead structure and developing a chemical program for finding analog molecules exhibiting the desired biological properties in a systematic way. The initial compounds were found by chance or random screening. This process involved several trial and error cycles in which medicinal chemists used their intuition to select candidate analogs for further development. This traditional method has been supplemented by structure-based drug design (Cohen 1996), which tries to exploit the molecular targets involved in a disorder. The relationship between a drug and its receptor is complex and not completely known. Structure-based ligand design attempts to create a drug that has a good fit with the receptor. This fit is optimized by minimizing the energies of interaction. But this determination of the optimum interaction energy of a ligand in a known receptor site remains difficult. Computer models permit manipulations such as superposition and energy calculation that are difficult with mechanical models.
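The energy-minimization idea can be illustrated with a toy pairwise potential: score interaction energy as a function of separation and search for the minimum. Real modeling packages sum many such terms (plus electrostatics) over full 3-D structures; this sketch uses a single Lennard-Jones pair and a crude grid search purely as a stand-in for those numerical minimizers.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Toy pairwise interaction energy (Lennard-Jones 12-6 potential).
    A stand-in for the far richer scoring functions in real packages."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def best_distance(energy, r_min=0.8, r_max=3.0, steps=2000):
    """Crude grid search for the energy-minimizing separation."""
    best_r, best_e = r_min, energy(r_min)
    for i in range(1, steps + 1):
        r = r_min + (r_max - r_min) * i / steps
        e = energy(r)
        if e < best_e:
            best_r, best_e = r, e
    return best_r

# The minimum falls near r = 2^(1/6) * sigma, the classic LJ optimum.
r_opt = best_distance(lennard_jones)
```

The point of the sketch is structural: a fit is "optimized" by finding the geometry at which the modeled interaction energy is lowest.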
They also provide an exhaustive way to analyze molecules and to save and store the data for later use, including after a research chemist has left. However, models must still be tested and used, and eventually chemical intuition is required to analyze the data (Gund 1996). Then the drug must proceed through animal and clinical trials. Still, the idea behind this modeling is the principle that a molecule's biological properties are related to its structure. This reflects a better understanding of biochemistry achieved in the 1970s, so rational drug design has also benefited from biotechnology. In the 1970s and 1980s, drug discovery was still grounded in organic chemistry. Now rational drug design provides customized drugs synthesized specifically to activate or inactivate particular physiological mechanisms. This technique is most useful in particular therapeutic areas. For example, histamine receptor knowledge was an area where firms first took advantage of rational design, since its underlying mechanism was understood early (Bogner and Thomas 1996). The starting point is the molecular target in the body, so one is working from demand rather than finding a use for a new molecule. The scientific concepts behind this approach have been available for a long time. The existence of receptors and the lock-and-key concepts currently considered in drug design were formulated by P. Ehrlich (1909) and E. Fischer (1894). Their subtleties were understood, though, only in the 1970s with the use of X-ray crystallography to reveal the molecular architecture of isolated pure samples of protein targets (Cohen 1996). The first generation of this technology, conceived in the 1970s, considered molecules as two-dimensional topological entities. In the 1980s it was used together with quantitative structure-activity relationship (QSAR) concepts.
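At its simplest, a QSAR model is a regression of measured biological activity on numerical structural descriptors. A one-descriptor ordinary-least-squares sketch, with invented data standing in for a compound series:

```python
def fit_qsar_line(descriptor, activity):
    """Ordinary least squares for a one-descriptor QSAR model:
    activity ~ slope * descriptor + intercept. Real QSAR uses many
    descriptors at once; the data below are invented for illustration."""
    n = len(descriptor)
    mean_x = sum(descriptor) / n
    mean_y = sum(activity) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(descriptor, activity))
    sxx = sum((x - mean_x) ** 2 for x in descriptor)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical series: a hydrophobicity-like descriptor vs. measured activity.
slope, intercept = fit_qsar_line([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.1, 7.9])
```

Such numerical parameters are exactly what Cohen notes "do not tell the full story" about a ligand-protein interaction, which motivated the move to full 3-D treatment.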
The first generation of this technology has proven to be useful only for the optimization of a given series (Cohen 1996). The second generation of rational drug design has considered the full detailed properties of molecules in three-dimensional (3-D) form. This difference is significant, since the numerical parameters in QSAR approaches do not tell the full story about the interaction between a ligand and a protein (Cohen 1996). This has been facilitated by software and hardware becoming less costly. Thus many scientists are paying attention to computational techniques that are easier to use than mechanical models. This underscores the role of instrumentation in scientific research stressed by Rosenberg (1994). The availability of new instruments, including computers, has opened new opportunities in technological applications and furthered research in new directions. Three-dimensional graphics particularly suits the needs of a multi-disciplinary team, since everyone has different chemical intuition but appreciates the 3-D image. Rosenberg (1994) notes that scientists who move across disciplines bring their concepts and tools to the other scientific discipline, such as from physics to biology and chemistry. This suggests the importance of sharing instruments, particularly computer images and databases that help people work and think together. The predominant systems for molecular modeling calculations are UNIX workstations, particularly three-dimensional graphics workstations such as those from Silicon Graphics. But other hardware, from desktop Macintoshes and MS-DOS personal computers on the low end to computer servers and supercomputers on the high end, has been used. Computational power is required for more complex calculations, and this guides the choice of hardware. A variety of commercial software packages are available, from $50-$5,000 for PC-based systems to $100,000 or more for supercomputers.
Universities, research institutes, and commercial laboratories develop these packages. Still, no one system meets all the molecular modeler's needs. The industry therefore desperately needs an open, high-level programming environment allowing various applications to work together (Gund 1996). This means those who for strategic reasons want to take advantage of this technology must now do their own software development. This is the competitive software compulsion facing many drug producers. In turn, the better they can select systems, develop their capabilities, and manage their use, the more successful they will be in drug development and in managing other aspects of the drug pipeline. The choice of hardware is based on software availability and the performance criteria needed to run it. Current major constraints are the power of graphics programs and the way the chemist interacts with the data and its representation (Hubbard 1996). Apple computers have frequently been used in R&D because of superior graphics, though this edge may be eroded by new PCs using Pentium MMX as well as moves to more open systems. However, Dr. Popper, Merck's CIO, feels that the real issue is the software packages for the Mac that research scientists know and rely on but that are not yet available for Windows NT. Thus, Macs continue to be used for medical R&D, which keeps the Windows market from developing. There are, in addition, the elements of inertia, emotional attachment and training, which are apparent at major medical schools too. In sum, rational design has opened a wide range of new research based on a firm's understanding of biochemical mechanisms. This means tremendous opportunities to enter new therapeutic areas. However, since rational design is very expensive, it has raised entry costs and the minimum effective size for pharmaceutical firms by putting a premium on those with a sequence of cash-generating drugs.
It has also favored firms with broader product lines able to spread the costs of equipment over many projects and to transfer knowledge across therapeutic areas, contributing to the increased cost of new drugs through higher R&D and systems support spending (Bogner and Thomas 1996). A similar analysis applies to the use by major U.S. and Japanese companies of other new technologies to discover and develop drugs systematically, such as combinatorial chemistry, robotic high-throughput screening, advances in medical genetics, and bioinformatics. These technologies affect not only R&D but also the organization and the way firms deal with other organizations, as many new technologies are complementary. For example, high-throughput screening automates the screening process to identify compounds for further testing or to optimize the lead compound. Thus, both regulatory and technological change have raised the advantage of developing innovative drugs, even though it is inherently risky and forces firms to develop better skills in using information technology to support the process.

The Pharmaceutical Industry in the United States

As explained above, healthcare and the pharmaceutical industry are closely intertwined, especially in the U.S. Ever since the election of the Clinton Administration, U.S. healthcare has been the focus of heated debate. The pricing of pharmaceuticals in particular is one of the most controversial aspects of the industry. Estimates of the cost of bringing a new drug to market run up to over $250 million (DiMasi et al. 1991). However, once drugs are on the market, the costs of manufacturing, marketing and distribution are relatively small. This loose connection between marginal cost and market price seems to require further justification of drug pricing.
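The gap between price and marginal cost follows directly from this cost structure: average cost per unit is dominated by the fixed development outlay spread over volume. A hedged sketch using the roughly $250 million figure cited above and a purely hypothetical marginal production cost:

```python
def average_cost_per_unit(fixed_rd_cost, marginal_cost, units_sold):
    """High fixed development cost plus low marginal production cost
    means average cost falls steeply with volume, which is why list
    price and marginal cost diverge. Figures used below are illustrative."""
    return fixed_rd_cost / units_sold + marginal_cost

# $250M of development cost and a hypothetical $1 marginal cost per unit:
print(average_cost_per_unit(250e6, 1.0, 10e6))   # 26.0 per unit at 10M units
print(average_cost_per_unit(250e6, 1.0, 100e6))  # 3.5 per unit at 100M units
```

A price anywhere between these averages and the $1 marginal cost can look either reasonable or exploitative depending on which cost one compares it to, which is the core of the pricing controversy.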
While the obvious answer lies in the high fixed cost of drug development and the expensive and time-consuming approval process prior to any positive cash flow, the answer is still not easy. Furthermore, the drug market is very complex for several reasons. First, there are many drug classes for which only a few products exist. Secondly, HMOs (health maintenance organizations) and other managed-care plans can negotiate substantial discounts because they are able to control the prescription decisions made by their participating physicians and because they buy in large quantities. These health organizations are highly price sensitive. This means drug prices are substantially determined by the purchaser's demand elasticity. This demand in turn determines investment decisions (Schweitzer 1997). Thirdly, the market for pharmaceuticals is highly segmented, both domestically and internationally, and price discrimination between and within national markets is common. Research studies cannot even agree on a common measure of wholesale price. Indeed, no measure captures actual transaction prices, including discounts and rebates (Schweitzer 1997). Fourth, consumers do not have enough scientific knowledge to assess different drugs. Thus, gatekeepers such as doctors are important (Hirsch 1975). Yet the current trend is towards managed care and HMOs, which closely control costs. This development clearly indicates physicians are losing some autonomy in drug selection. Thus it is not surprising the market share of generic drugs increased from 15% to over 41% between 1983 and 1996. This has forced the ethical drug manufacturers to communicate more effectively with the HMOs and managed care organizations in addition to physicians and to demonstrate the improved efficacy of their products as compared with generics. The acquisition of PBMs (pharmacy benefit managers) by pharmaceutical companies is an important development in this regard.
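The purchaser price sensitivity noted above is conventionally measured as demand elasticity. A minimal arc-elasticity sketch, with quantities and prices invented solely for illustration:

```python
def price_elasticity(q0, q1, p0, p1):
    """Arc (midpoint) elasticity of demand: percent change in quantity
    divided by percent change in price. All values below are hypothetical."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

# A formulary that cuts purchases from 130k to 70k scripts when the price
# rises from $40 to $60 is quite price sensitive (elasticity of -1.5):
e = price_elasticity(130_000, 70_000, 40.0, 60.0)
```

An elasticity below -1 means a price increase reduces total revenue from that purchaser, which is exactly the leverage managed-care buyers exert on manufacturers.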
Physicians now have to prescribe drugs available in the formularies of the managed-care organization. PBMs suggest cheaper alternatives to physicians for a given therapeutic benefit to save money. As of 1993, the five big PBMs controlled eighty percent of the 100 million patient/member PBM market (Schweitzer 1997). In turn, when PBMs and mail-order companies expand, small pharmacies lose the data necessary to examine various drug interactions. Since current U.S. law protects the proprietary data of pharmacists and pharmacy chains, prescription information for those patients who use pharmacies and mail-order companies actually becomes fragmented. It is likely this development could affect pharmacists' jobs as well. A fifth reason is that FDA approval does not mean new drugs are better than old ones. As noted above, this has pressured drug companies to prove the effectiveness in cost and quality of life their drugs bring to patients. Recently, drug companies have often tried to show how their drugs can help patients restore a normal quality of life. As already described, these concerns complicate the design of clinical trials. Consolidation among wholesalers, the greater complexity of clinical trials, and globalization favor firms with substantial resources and are part of the reason for the industry's merger trend, especially between U.S. and European companies. The leading pharmaceutical firms ranked by 1994 sales are as follows (Scrip Magazine, Jan. 1996), with five of them the result of cross-border mergers. Merck ranks second. [Ranking table not reproduced here; its notes read: *3: comparison is based on U.S. dollars; *4: calculations are based on the sales of companies before mergers; *5: includes OTC (over-the-counter) drugs; *6: excludes sales through strategic alliances.]

Merck

Merck is a multibillion-dollar pharmaceutical firm with a long history, going back to the 19th century in the U.S.
and the 17th century in Germany. While in the past Merck diversified into areas like animal health care, it now focuses almost exclusively on human health, in particular on branded ethical prescription drugs, since it has found this is its most profitable business area. Also, given the many opportunities that exist, this focus will demand all its capital and energy for the foreseeable future. It has therefore spun off its animal health care business to a joint venture and sold its specialty chemicals business. This strategy and motivation are similar to Takeda's focus on human health, whose market is more lucrative than its other businesses. The company stresses its ability to bring innovative drugs to market. Merck briefly tried to produce generic versions of its drugs but found it was not worth the investment. In addition, it now assumes someone else will produce the OTC (over-the-counter) versions too. This strategic focus is underscored by its active formation of strategic alliances. For example, in the OTC medicine market in the U.S. and Europe, but not in Japan, Merck relies on Johnson & Johnson through a joint venture with J&J to market, distribute, and sell the OTC versions of Merck's prescription drugs. This means Merck sees the OTC market as one way to lengthen the revenue stream for some of its products after their patents expire. In Japan, Merck's agreement is with Chugai Pharmaceutical Co. Ltd.; they formed a joint venture in September 1996 to develop and market Merck's OTC medicines there (Merck 1996 Annual Report). Moreover, Merck and Rhone-Poulenc have announced plans to combine their animal health and poultry genetics businesses to form Merial, a new company that will be the world's largest in animal health and poultry genetics (Merck 1996 Annual Report).
Merck's primary strategic focus on ethical drugs seems appropriate, but, as explained above, it is also critical to this strategy that the company maintain relationships with those in scientifically related fields. Its work with Rhone-Poulenc must be examined in this light, since improving its competence in the genetics business seems a sound part of its strategy given developments in biotechnology and the Human Genome Project. This is because biotechnology-related drugs are often species-specific (Harris 1997). More knowledge about the genetic make-up of human and animal bodies may provide insights into the appropriate choice of animals in pre-clinical trials from which to extrapolate observations to humans. Since this extrapolation is never perfect and animal experiments must be done anyway, Merck has added to its competence in genetics via a joint venture with Du Pont called Du Pont-Merck Pharmaceuticals Co., owned equally by E.I. Du Pont (50%) and Merck (50%). This firm has capabilities in fermentation, genetic engineering/rDNA, cell culture, hybridoma, protein engineering, and tissue culture. By forming this alliance, Merck was able to exchange its strengths with Du Pont, an early investor in biotechnology. Du Pont-Merck Pharmaceuticals has also developed its own drugs in cardiovascular disease. Like other pharmaceutical companies, Merck continues to sell its branded products for as long as it can once they have gone off patent, but at a lower price in order to meet generic competition. Cost-conscious HMOs increase this downward price pressure. Yet, according to Merck, some demand for the branded product continues once the price is adjusted downward. This is due to the better quality, consistent dosage, and brand awareness of the original. Strategically, Merck sees itself as a growth company with a growth target of about 15% per year. This signals a continuing need for cash flow, i.e.
from existing drugs, and a constant flow of new drugs, i.e. from R&D. (Merck sold its share of the Du Pont joint venture to Du Pont in 1998 for over $4 billion, apparently due to Du Pont's ability to manage more drugs itself.) Merck needs this growth to continue to offer shareholders the return they expect and to attract the personnel needed to develop drugs, which is its corporate mission. Its products now cover 15-16 therapeutic categories. In five years this will expand to between 20 and 25 categories, depending on the success of various stages of drug testing. Important new products in the pipeline include Singulair for asthma, Aggrastat for cardiovascular disorders, Maxalt for migraine headaches, and VIOXX, an anti-inflammatory drug that works as a selective inhibitor targeted at rheumatoid arthritis. All of these are in Phase III trials. Propecia, for male pattern baldness, recently received FDA approval. Merck's R&D is done internationally. To avoid duplicate investment, each research center tends to be focused. For example, the Neuroscience Research Centre in the United Kingdom focuses on compounds that affect the nervous system; Maxalt was developed there. The laboratory in Italy studies viruses, while the one in Tsukuba, Japan (Banyu Pharmaceuticals) emphasizes the circulatory system, antibiotics, and anti-cancer research (Giga, Ueda and Kuramoto 1996). This concentration pattern often reflects the comparative strengths in R&D and the therapeutic demand structure in each local market. Still, selecting the appropriate R&D projects, while critical to success, is very difficult. This is because no discipline in science has as blurred a distinction between basic and applied research as biotechnology. The distinction is usually not well defined because applied research often contributes to basic research. Indeed, in molecular biology, science often follows technology.
Still, as a general approach, Merck tries to focus on applied research and development rather than basic science, relying on universities and smaller biotech firms for the latter. However, it does some basic research. For instance, th