Monday, September 30, 2019

Effects of Television on Society

THE EFFECTS OF TELEVISION ON SOCIETY
December 12th, 2007

At the end of the 19th century, scientists discovered a way to transmit an image from one point to another. It was the beginning of television. Since then, television has made its way into nearly every household. Over the years, it has revolutionized people's lives. Now that it has become a widespread consumer good, almost everybody has a television at home, and sometimes two or more. It has changed people's lives because television is seen as a means of entertainment. The striking point is that television has become an everyday good, whereas in the past it was considered an expensive item that not all families could afford. By watching television, people can easily escape the routine of their everyday lives and relax. In other words, television is a means of discovering, exploring, learning, dreaming, and thinking. However, even if it has changed people's lives and has good effects on them, it has many bad effects too. In fact, it can also be a means of destroying people's lives, because in most cases it exerts a strong influence on them. Orson Welles, the famous American screenwriter, film and theatre director, producer, and actor, once said: "I hate television. I hate it as much as peanuts. But I can't stop eating peanuts." This quotation shows that television can be compared to a drug that people cannot give up. Like a drug, it makes people do things that, in most cases, they would not otherwise have done. It has a strong power to influence people in a bad way, especially those who are psychologically fragile and those who cannot tell fiction from real life. As a result, some people think that violence on television influences viewers because they are unable to see what is true and what is false.
However, even if it has some bad sides, television is also seen as educational and as a means of developing people's knowledge. Even if books and newspapers are the two most important ways of learning, television has conquered people's hearts and is now trusted by many of them. Still, television has many bad sides and bad effects that can be very dangerous for viewers and for the people who live around them. It destroys people's lives in a way, and sometimes pushes them to do things they would never have thought themselves capable of, such as committing crimes. Over the years, television has become a drug for a large number of people. They cannot live without it and have to watch it every day. People become addicted to their daily shows and cannot miss them. Indeed, some people cannot imagine their everyday lives without a television. Watching television has become their most popular hobby; it is, in fact, more popular than sport or going out with friends. Socialization no longer seems to be a priority in people's lives. It seems that nowadays, people prefer watching TV to doing other things. According to the website www.turnoffyourtv.com, people spend about two hours per day in front of their television sets. In this survey, men are singled out because they spend more time watching television than women: they are more addicted, watching between two and three hours per day. It is all the more striking that television has become more popular than other hobbies, such as sport, that could be more rewarding. To show that people really are addicted to television, the Media Awareness Network website says that "a Scientific American article entitled 'Television Addiction' examined why children and adults may find it hard to turn their TVs off. According to researchers, viewers feel an instant sense of relaxation when they start to watch TV—but that feeling disappears just as quickly when the box is turned off."
While people generally feel more energized after playing sports or engaging in hobbies, after watching TV they usually feel depleted of energy. According to the article, "this is the irony of TV: people watch a great deal longer than they plan to, even though prolonged viewing is less rewarding." Because of this addiction, people can withdraw into themselves and live apart from the society they belong to. According to another website, children spend twenty-five hours per week in front of their television sets instead of spending their free time playing with friends or practicing sports. This behavior should alarm their parents, because going outside and meeting people are part of their education. Children should not stay indoors all day watching television and playing video games; it is bad for their education as well as for their socialization. Moreover, television has other harmful consequences, and its first target is children. In addition to what was said previously, television influences the youngest viewers. Children are an easy target for television producers; they are much more easily influenced than adults. As a result, they can be influenced by violent movies, such as thrillers, and can display strange and violent behavior after watching such a movie. Watching a violent show can give children a taste for violence and related behaviors. Because they are innocent, they cannot tell what is fiction and what is reality. They want to do what their superheroes do, no matter what. As a result, watching a violent movie can harm the most vulnerable viewers. It can make them think about murder and sometimes drive them to commit crimes. Even a person from a well-educated and wealthy family can be affected; it does not only touch people with financial or other problems.
As an illustration of this violence on television, on April 20th, 1999, at Columbine High School in Colorado, two teenagers named Eric Harris and Dylan Klebold killed twelve students and one teacher and then killed themselves. They had guns they had ordered online and homemade bombs they had built in the days before the massacre. According to the media, they acted as they did because they were influenced by television and by a singer named Marilyn Manson and his music videos. Moreover, as they were the victims of mockery and were rejected by others, they decided to kill people to show that they were not as stupid as their classmates thought. In a way, both killers were influenced by violent movies. They were full of anger and ready to kill everybody. In fact, the cybercollege website says that the more children watch television, the more aggressive they become later in life, and the more likely they are to repeat what they have seen on television. In addition, an article on changingchannels.com says: "Basically, what these studies indicate is that violent media images tend to make us all just a bit more aggressive and impatient. The effect is more dramatic on children, who mentally process media violence the same way they would actual violence — and can be traumatized by exposure to excessively violent scenes." As a shocking fact to support this, the howtotalktoyourkids.com website reported in 2001 that 75% of boys and 60% of girls said that they had hit somebody in the past year because they were angry. Another striking example is the event that occurred on April 16th, 2007 in the United States. Seung-Hui Cho, a Korean American who was a senior at Virginia Tech University, killed thirty-two people and then killed himself. His stated motivation was simply to kill; in other words, he did it just to kill and to have blood on his hands.
In fact, doctors said this student had psychological problems. However, he was not the only one to blame. As he was part of a generation that spends its time watching violent movies and that is very easily influenced, television should also take some of the blame. Moreover, by watching such movies, kids can come to think that violence resolves any problem they face. As a result, they are willing to bully others, to use swear words, and to hit people when they do not get what they want or are disappointed by something. They think this is the only way, and they do not consider that talking could be a good solution. Moreover, those who watch violent movies can become afraid of reality. They can come to think that what happens in movies could come true, which can destabilize them and leave them with psychological problems. The university killer had psychological problems, so people can speculate that he may have watched violent shows on television when he was young and been traumatized ever since. People should also be aware that wrong information is sometimes given on television, and should remain conscious that information can be amplified to scare them. Most of the time, vulnerable viewers cannot tell what is wrong and what is true. People can listen to what is said on television, but to confirm what they have learned they should read the newspapers, for example, to expand their knowledge and gather different points of view to build their own culture. The more information people have about a subject, the better able they are to judge what could be true and what could be false. And while everybody is sitting in front of the screen, many people work "behind" the television set. Who are these people? Who controls what will be shown and what won't? Viewers do not know anything about what happens backstage, yet all these people working hard "behind" the screen have a big influence on them.
Most viewers do not know it, but these people work to shape their minds. Even if viewers think they can form their own opinion about what they are seeing, television has an important influence on people's thoughts. If people do not know part of a story, or if they only have one point of view on the news, they cannot really be objective. In fact, television is absolutely not objective! It offers so many possibilities for influencing people (for politicians or business leaders, for instance) that it is a very practical way to plant ideas in households and in people's minds. Television also influences the way people consume goods. It has increased the number of goods purchased per family. Kids see toys in television advertisements and want them because they are brand new. They are constantly influenced by new products coming out, and their parents end up buying the toys they want. Because of television and its commercials, people want to buy the same things. Television is thus used by brands to promote themselves, which is why there is so much advertising on television, during movies and between programs. Companies take advantage of the large television audience to schedule their spots when the most people will see them. As a result, most households have fallen into a society of mass consumption that grows each year. Finally, television has bad effects on people's health. Research has been conducted to show that watching television increases obesity. People like eating junk food and drinking sodas while watching television. Excessive snacking between meals is bad for the body and greatly increases people's weight. Many American families cannot spend their television time without eating; they need at least something to put in their mouths. However, this does not only affect American culture.
As the Media Awareness Network website shows, almost one in four Canadian children between seven and twelve years old is obese (according to research by the Heart and Stroke Foundation of Canada). Moreover, watching too much television at a stretch is bad for people's eyes: if people stay fixed in the same position, in the dark, while watching their favorite show, it damages their vision. In addition, smokers smoke many cigarettes while they are in front of their television sets, which greatly increases their risk of cancer. Television also affects people's health because, instead of playing a sport, many people practice another one: being a "couch potato." This sport is not good for anyone's health. As a result, people do not lose any weight, because most of them spend much of their free time in front of the television set. However, some people think television is good for everybody because it also has positive points. Even if it strongly influences the most vulnerable, it provides a lot of useful information and is a means of learning. First, television is a new way of learning. When television first arrived, it was a revolution, because people could get the news from it by watching the daily news programs. It gives people information about what is happening all around the world. It is a means of gathering more information so that people can easily make up their minds. By reading the newspaper, listening to the radio, and then watching documentaries or the news on television, people can greatly increase their knowledge. Television is also seen as a teaching tool. For example, teachers often use television in schools to show their students documentaries related to their courses. This is seen as an incentive for students: thanks to it, they can draw direct connections between what their teachers have taught them and what they have seen on television.
As today's teenagers were born with television, unlike most of their teachers, teachers think that using television can engage them more. Laurie Rozakis, a writer, focuses a lot on the good effects television has on people. In Writing Essentials for the Pre-GED Student, she stresses that television helped her in three different ways. First, it helped her learn English better; second, it helped her learn more about the country she was living in, the United States of America; and third, it helped her stay out of trouble, because she did not go out at night while she was watching the shows she liked most. Television is also a way to escape from reality. Sometimes it helps people dream about a new world, and some movies can help them travel. For example, those who cannot travel can enjoy watching a documentary about the country they are dreaming of. Television also presents the "family model" through its shows, the model everybody dreams about. In addition, people who have financial problems or who are in the hospital can escape their problems by watching television; for a few hours, they forget what they have to endure every day. Television is also family time: it gathers people around the same thing. The Media Awareness Network website stresses that television is good for family values. First, it allows people to spend time together. Nowadays, people do not have much time to spend with their families. As they spend many hours at work and in transit, they do not get the chance to be with their families except on weekends. As a result, television watched at fixed hours allows them to gather and discuss important subjects together. The more information people get from others, the better it is for them and their general culture.
As a result of these discussions, children learn more and become more open-minded. Moreover, they get the chance to discuss these subjects in class and give their own opinions. To conclude, the effects of television on people have been an open debate for many years and will remain one in the future. Even if it has bad repercussions, such as fear of others, violence, and mass consumption, it also has good effects. In a way, television increases people's knowledge. It gives them more information about events, and it gathers people around the same sofa to share ideas, opinions, and feelings. As the Media Awareness Network website says, "Parents should also pay close attention to what their children see in the news since studies have shown that kids are more afraid of violence in news coverage than in any other media content. Fear based on real news events increases as children get older and are better able to distinguish fantasy from reality." It has been shown that television is something very important in households, and that many people care a lot about their televisions and about what is shown on them. However, the people who control television channels know this, and they try to influence viewers. They know that what they show will be seen by thousands of people and will have an important influence on society, and sometimes on consumption. People should therefore take seriously the fact that television has both good and bad effects on them, and they should warn the youngest about what is good and what is bad, and about what to do with the massive amount of information television gives them.

BIBLIOGRAPHY
• Rozakis, Laurie. Writing Essentials for the Pre-GED Student. Thomson Peterson's, 2003: 182.
• Changing Channels website. http://www.changingchannels.org/effects1.htm. 16 October 2006.
• Media Awareness Network website. http://www.media-awareness.ca/.
• Kill Your Television. http://www.turnoffyourtv.com.
• How To Talk To Your Kids website. http://howtotalktoyourkids.com/index_base.html
• Media Literacy Review. 15 October 2006. http://interact.uoregon.edu/medialit/mlr/readings/articles/front.html

Sunday, September 29, 2019

Accreditation Audit Essay

With all of the possible problems that can occur during surgery, a wrong-site, wrong-patient mistake is one that should never arise. Nightingale Community Hospital (NCH) fully understands the importance of eliminating these errors and has set up protocols to work toward this goal. While the protocols are in place, they are not fully compliant with Joint Commission (JC) standards.

Standard: UP.01.01.01: Conduct a preprocedure verification process. Nightingale Community Hospital has a Site Identification and Verification policy and procedure. Within this policy, a Preoperative/Preprocedure Verification Process is addressed. There is also a Preprocedure Hand-Off form. This form is a bit misleading, as it is essentially a general hand-off form with a few extra boxes available for check-off. To prepare for inspection and audit, NCH should create and implement a form for use within the operating theater or wherever procedures are performed, including at the bedside. This form needs to address, at a minimum, the JC requirements more specifically. The form needs to confirm that all relevant documentation is present, such as the signed consent form, nursing assessment, preanesthesia assessment, and history and physical. The form also needs to verify that the necessary diagnostic and radiology test results, whether images and scans or biopsy reports, are present, properly displayed, and labeled. Finally, to fulfill the minimum JC requirements, any and all required blood products, implants, devices, and special equipment need to be labeled and matched to the patient.

Standard: UP.01.02.01: Mark the procedure site. NCH covers the procedure site marking standard fairly well within its Site Identification and Verification Policy. The policy states that site marking is required for cases involving laterality, multiple structures, or multiple levels. Several times in the policy, NCH notes that it is best to have the patient involved, if at all possible.
If the patient is unable to mark the site, the policy states that the physician will be called to do so. The policy states that the mark shall be made in permanent black marker so it will remain visible after skin preparation, and in a location that will remain visible after the sterile draping is in place. The policy also covers circumstances in which marking cannot be performed because the surgical site is in an area that cannot be marked.

Standard: UP.01.03.01: A time-out is performed before the procedure. Nightingale Community Hospital has an adequate procedure in place for the time-out. Within the Site Identification and Verification Policy, the Time-Out Procedure complies with JC standards: a time-out is conducted immediately prior to performance of the procedure; it is initiated by the nurse or technologist; it involves all personnel participating in the procedure; the team members agree on, at a minimum, the patient's identity, the correct site, and the correct procedure to be performed; and all of this information is documented in the record, including who was involved and the duration of the time-out. The only issue not fully addressed is the possibility of multiple procedures being performed on the same patient by different practitioners; in that case, an additional time-out needs to be done for every new procedure.

The Communication priority focus area is an extremely important one for any hospital. It is a common-sense area that should be able to reach complete compliance. A wrong-patient, wrong-site issue should never arise and is completely avoidable. In 2010, the Joint Commission reported that wrong-patient/wrong-site surgeries continued to be the most frequently reported sentinel event (Spath, 2011). Jay Arthur states that JC reports between four and six wrong-site surgeries per day (2011).
The World Health Organization believes that at least 500,000 deaths per year could be prevented if the WHO Surgical Safety Checklist were correctly implemented. These numbers, when compared with the possibility of 100% compliance, are astounding and completely avoidable. Nightingale Community Hospital is well on its way to avoiding these types of sentinel events through the use of proper protocols, procedures, and policies, as seen in the upward trend from its last year of self-checks. With continued diligence and appropriate modifications, this can be an area in which NCH, or any other hospital, can be fully compliant.

References
Arthur, J. (2011). Lean six sigma for hospitals: Simple steps to fast, affordable, flawless healthcare. New York, NY: McGraw-Hill.
Spath, P. L. (2011). Error reduction in health care: A systems approach to improving patient safety (2nd ed.). Hoboken, NJ: Jossey-Bass.
WHO (2013). WHO | Safe surgery saves lives. Retrieved from http://www.who.int/patientsafety/safesurgery/en/ [Last accessed November 5, 2013].

Accreditation Audit Essay

A1. Evaluation
Nightingale Community Hospital (NCH) is committed to upholding the core values of safety, accountability, teamwork, and community. In preparation for the upcoming readiness audit, NCH will be launching a corrective action plan in direct response to the recent findings for the tracer patient. Background information on the tracer patient is as follows: a 67-year-old female postoperative patient recovering from a planned laparoscopic hysterectomy that was converted to an open procedure due to complications. The patient developed an infection that formed an abscess and was readmitted to the hospital for surgical abscess removal and central line placement for long-term IV antibiotics. The tracer methodology was employed when auditors reviewed this patient's course.
Many things were done well and correctly with this patient, and NCH is pleased to know that the majority of items analyzed proved that NCH was in compliance with regulatory standards; however, there were some troublesome areas that we need to focus on. The primary focus area will be the fact that a history and physical was not completed on the patient within 24 hours of admission; in fact, more than 72 hours passed before one was completed. The Joint Commission mandates standards that must be met in order to maintain compliance. Standard PC.01.02.03 states that histories and physicals must be documented and placed in the patient's medical record within 24 hours of admission and prior to procedures involving conscious sedation or anesthesia. A history and physical is also considered in compliance if documented within 30 days prior to a procedure, as long as there are no changes, or any changes in status are specifically noted (Joint Commission Update, n.d.).

A2. Plan
Often, rules and regulations are met with disdain, usually because no explanation is provided as to why the rule exists. The rules for history and physical documentation are in place for a reason and are not just there to make things more complicated. A history and physical gives all health care providers who participate in a patient's care a glimpse into that patient's health status and immediate concerns (Shuer, 2002). The information provided in a history and physical paints a portrait for all other health care team members to follow and treat accordingly. Often, emergent situations arise in which specialty providers do not have time to glean medical background information from patients and/or their representatives, and the history and physical then serves as the go-to source of information.
The reasoning behind compliance regulations can sometimes be hard to understand, but if we all work together to meet them, NCH will continue to embrace the core values we have worked so hard to instill. The following outline is a corrective action plan that will ensure compliance with the Joint Commission and bring us up to par for the readiness audit.

Action: History and physical
Accountable parties: Physicians and physician assistants
Timeframe: 1. Within 24 hours of admission. 2. Within 30 days prior to a procedure involving conscious sedation or anesthesia.
Measurement: Chart reviews; if requirements are not met, patients will be held in the surgical admitting unit and procedures will be delayed. There must be 100% compliance.

B. Sources
Joint Commission Update Study Guide. (n.d.). Retrieved August 31, 2014, from med2.uc.edu/libraries/GME_Forms/Joint_Commision_Upd_1.sflb.ashx
Shuer, L. M. (2002). Improvement needed on H&P documentation. Medical Staff Update, 26(5). Retrieved from med.stanford.edu/shs/update/archives/May2002/chief.html

Saturday, September 28, 2019

Impressionism and Naturalism by Robert Herbert Essay

Manet, according to Herbert, was a different sort of flaneur. He was oblique in reproducing the flaneur's experience of destruction, transformation, and desolation because he was an active observer. His works The Street Singer (1862) and The Balloon (1862) were examples of the artist's silent commentary on the upheaval of the city and its people (Herbert 36). He exemplified realism through artifice and caricature. Flaneurs were keen on capturing the moment of life in its pure form, which is why later works offered glimpses of contemporary urban life not through detailed oil paintings but through caricatures speedily drawn and executed, as in Manet's Rue Mosnier Decorated with Flags and Degas's Martelli. Flaneurs, therefore, were inventors and responsible for innovation in art during the 19th century. Flaneurs were also investigators of history, for they were keen observers of urban life, noting spectators, daily occupations, behaviors, professions, and the intimate and domestic life of the time. Degas's Women on a Cafe Terrace, Evening and Manet's Railroad, which the artists investigated to the point of scientific naturalism, could be said to denote this aspect of flaneurs. Flaneurs were also observers of domestic manners. Their detachment from the public and private arenas gave them the advantage of narrating emotions and feelings without romantic interference. This can be observed in Cassatt's Cup of Tea (1880) and Morisot's Interior, both of which depict the artist's detachment from, yet engagement with, domestic life.

Friday, September 27, 2019

Forensics Research Project 2

It can be stated that the aforesaid tactics would generally provide a broader explanation of various legal aspects relating to the preservation of a computer and its data.

Key Steps to Ensuring Legal Success in a Courtroom in a Forensics Case
In order to determine the key steps to ensuring legal success in a courtroom in a forensics case, it can be affirmed from a broader outlook that forensic readiness is often described as the capability of an organization to increase its potential to use modern digital evidence while reducing the costs of an investigation. In this concern, the key steps toward ensuring legal success in a courtroom in a forensics case are outlined below:
• Describing the business circumstances that need digital evidence
• Identifying current sources and different kinds of potential evidence
• Determining the evidence collection obligation
• Establishing a strategy for a secure storage plan and the handling of possible evidence
• Identifying the circumstances in which a full formal investigation needs to be initiated
• Documenting an evidence-based case explaining the incident and its manifold impact
• Ensuring legal appraisal to take necessary actions in response to any incident (Rowlingson, 2004)

Conditions That Require Inclusion of Law Enforcement
With regard to the conditions that require the inclusion of law enforcement, it can be affirmed that computer crime may be committed through the violation of information technology policies relating to the preservation of a computer and its data. Generally, numerous conditions of crime related to information technology require the inclusion of law enforcement agencies. In this regard, according to Section 13 of the Cybercrime Prevention Act of 2012, the law can be enforced against any individual linked with the preservation of computer data.
The integrity of traffic data and subscriber information relating to communication services shall be preserved for a minimum of six months. Moreover, content data shall likewise be preserved for six months from the date of receipt of an order from law enforcement authorities requiring its preservation. It can be stated that law enforcement authorities may order extensions of the aforesaid periods. It has been observed that one of the conditions requiring the inclusion of law enforcement is that once computer data is preserved, transmitted, or stored by a service provider, the data shall be accessible only to authorized users. It is the service provider who is required to preserve digital data, keep it confidential, and, most significantly, maintain compliance. If any computer operator violates or fails to comply with the order given to the service provider, the act will be regarded as a crime, and the operator will be punished under Section 13 of the Cybercrime Prevention Act of 2012 (The Office of the President of the Philippines, 2012).

Possible Actions to Protect the Employer
In accordance with the mentioned case, one of the possible actions could be the introduction of a data preservation plan, which might protect the employer to a certain degree. In this concern, it can be affirmed that the employer can seek the help of a law enforcement group for collecting as well as

Thursday, September 26, 2019

Higher education

Higher education - Essay Example

While university attendance rates increased generally for women, there was significant variance among different segments of the population because of factors such as age, race, and ethnicity. This is because, especially in this age of globalization, we all have multiple identities: age, race, ethnicity, religion, full- or part-time student status, employment status, recent immigrant or native, and so on. Intersectional theory shows that one's various identities intersect with each other, influencing how we think and act, sometimes in opposite directions. For example, a Muslim woman from a conservative family may want to attend college but be discouraged from doing so by a strict upbringing stipulating that education is unnecessary for women. Thus, while the general trend is for increased female university attendance, the actual increase varies among different population segments because of the intersection of influences prevailing in specific populations, which can either encourage or dampen the trend.

Table 5-8

This table in the provided reading analyzes the proportion of undergraduate degrees earned in 2004 in the US by gender, race, ethnicity, and age. For example, it tells us that 61% of all graduates were "traditional," that is, full-time students under 24, while the remaining 39% were over 25, presumably either part-time students, possibly employed, or full-time students who may have interrupted their work careers in order to upgrade their skills. In the under-24 age group, 34% (within the 61%) were women, and in the older group, 24% (within the 39%) were women, or more than half in each case. However, while the proportion of women exceeded that of men for all racial/ethnic categories, the margin of difference varied considerably. For example, African Americans of both sexes were 50% of their graduating population for both the under-24 and over-25 age groups.
However, for the under-24s, 30% (within the 50%) were women, and for the over-25s, 34% (within the 50%) were also women; that is, in both cases, and especially in the older group, over half of the African American graduates were women. In contrast, for Asian Americans under 24, 33% (within the 65%) were women, although they constituted 21% within the remaining over-25 proportion. It is also noticeable that for most racial/ethnic groups, women are a higher proportion of graduates in the over-25 age group than among the younger graduates, although in each case they exceed the male rate. Although the table tells us that there are differences among racial/ethnic groups in the degree to which the proportion of female graduates exceeds that of males, and that the higher proportion is generally even more pronounced for the over-25 age group, it does not tell us why. Unfortunately, in the social sciences, unlike in the physical sciences, one cannot easily manipulate the quantity and quality of variables in a lab experiment to determine the exact effect of each variation. Instead, one must examine different population segments and different hypotheses about what economic and/or social factors are likely driving their behavior, for example, the higher proportion of African American women pursuing higher education compared to females of other races/identities. Is this because they are generally of a lower socio-economic
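The "more than half in each case" claim above can be checked with a quick calculation. The sketch below takes the percentages exactly as quoted (each figure is a share of all graduates, so dividing a group's female share by that group's total share gives the proportion of women within the group):

```python
# Women's share within each age group, using the percentages quoted above.
# Each pair is (women's share of all graduates, group's share of all graduates).
groups = {
    "under 24": (34, 61),
    "over 25": (24, 39),
}
for name, (women, total) in groups.items():
    share = women / total
    print(f"{name}: {share:.1%} of graduates are women")
    assert share > 0.5  # "more than half in each case"
```

The shares come out to roughly 56% and 62%, consistent with the text's observation that the female majority is even more pronounced in the older group.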

Wednesday, September 25, 2019

Summarizing the information Assignment Example | Topics and Well Written Essays - 250 words

Summarizing the information - Assignment Example

The study described in the article analyzed 178 food samples in China. "Some processed foods contained a concentration of up to 1226 mg/kg, which is about 12 times the Chinese food standard" (Deng et al 248). Food additives and raw materials were investigated, and it was revealed that raw materials contained low concentrations of the element under consideration. High concentrations of aluminium, amounting to 0.005–57.4 g/kg, were found in food additives. It was revealed that the amount of food additive greatly affected the concentration of the element in food. The type of additive also plays a very important role, as some additives contain more aluminium than others. Based on the results, the researchers came to the conclusion that additives which contain much aluminium should not be used, and that it is necessary to replace them with additives which contain less aluminium. The article is very useful for the research as it provides valuable data on the concentration of aluminium in
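As a quick sanity check on the quoted figure: the "about 12 times" claim implies a Chinese standard of roughly 100 mg/kg. That standard value is an inference from the excerpt's own arithmetic, not a number stated in the article:

```python
# The excerpt says 1226 mg/kg is "about 12 times the Chinese food standard",
# which implies a standard of roughly 100 mg/kg (an inference, not stated above).
measured = 1226                     # mg/kg, highest concentration reported
implied_standard = measured / 12    # back out the standard from the ratio
print(f"implied standard: about {implied_standard:.0f} mg/kg")
assert 95 <= implied_standard <= 105
```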

Tuesday, September 24, 2019

How does spaces change perceptions on things and people Essay

How does spaces change perceptions on things and people - Essay Example

There are also external factors, like human experience, which influence these physical characteristics. In effect, these external elements provide biases in how individuals perceive the things and people around them. This text sought to provide a thorough analysis of how human perception is created through the human brain and its physical senses. Moreover, this writing discussed the factors which affect how humans perceive objects, one of which is the philosophy of space and time. Finally, this text also pursued an answer to the question, "How does space change perceptions on things and people?"

1. INTRODUCTION

Perception dictates the behavior of people and human interaction as a whole. It is a person's perception that affects that person's response in the form of his or her actions. This is the reason why understanding human behavior has always been associated with the concept of perception. Thus, the study of perception has always been evident in different fields of the social sciences, like sociology, psychology, and philosophy. Generally, perceptions depend on an individual's sensory qualities, such as sight, hearing, touch, smell, and taste ("Problem of Perception"). However, it is the mind that has crucial control over these sensory qualities. Philosophers further suggest that there exists a "problem of perception" that is "created by the phenomena of perceptual illusion and hallucination" ("Problem of Perception"). This means that the way people perceive things and other people is not solely based on the sensory qualities but rather dictated by psychological discernment. There are various factors that influence one's perception of things and other people through this cognitive discernment. Space is one particular factor that affects perception of things and people.
Space plays an important role in the process of perception, which has consequently created the concept of spatial perception, or space perception. Space perception is the process of evaluating the physical orientation of objects in space, which is necessary for movement in the environment and for discernment of the relationships between things ("Space Perception"). Moreover, the concept of space perception also offers insight into how people become oriented in the environment in order to survive, in the form of seeking food or avoiding injury ("Space Perception"). In other words, space perception provides people with the "physical reality" that they use to respond in their everyday lives ("Space Perception"). Thus, this has led to much deeper research on the role of space in people's perception of things and other people. These research studies tend to provide in-depth analyses of the complexity of the effect of space on perceptions. Scholars aim to answer the question, "How does space change perceptions on things and people?"

2. LITERATURE REVIEW

Various related texts have already been published in the social sciences on the subject of perception. One of them is Matthew MacDonald's book "Your Brain: The Missing Manual." This text offers a discussion of the human brain and the process of perception as performed by the brain. Different articles on human perception are also available, such as "The Meaning of Perception" and "The Death of the Cyberflaneur," written by Flemming Funch and Evgeny Morozov, respectively. These texts serve as good reference materials in understanding and

Monday, September 23, 2019

Macroeconomics.....Case Study NEED respond to at least 2 other Essay

Macroeconomics.....Case Study NEED respond to at least 2 other students - Essay Example

It would be incomplete to analyze and predict a nation's history without accepting the political underpinnings of events. While it is true that the government is really accountable for how it spends public funds, balancing a national budget is a complicated task. In fact, ordinary citizens like you and I can't even handle our own budgets. What I am saying is that things have already happened, and the most that we can do as individuals is to reduce our spending and stop blaming the government for all the mess. After all, if we are not part of the solution, then we must be part of the problem.

A. Having a President whose main agenda is war does not make for a good president. Had all the resources been devoted to social services, the country could have survived the crisis. There are just too many issues that a President can prioritize inside one's own country. It is just unfortunate that the political propaganda to project war as a noble cause gripped most Americans, when the fearful thing is not a war that happens thousands of miles away but the fear of failing health insurance.

B. I definitely agree with tax-cut measures too, since leadership is situational and, at certain times, circumstances call for tax cuts. It is true that tax cuts have consequences, but they are a lesser evil. Too much strain on the American people would not only cause financial hardship but could make them resentful of the government. Civil disobedience can be an ugly

Sunday, September 22, 2019

Prioritizing the IT Project Portfolio Essay Example for Free

Prioritizing the IT Project Portfolio Essay

Project portfolio management is the management process tailored to help an organization gather and review important information concerning all its projects, which are then sorted and prioritized according to criteria such as strategic value, cost, effect on resources, and others (Greer, 2009). IT project portfolio management has certain objectives that must prompt its undertaking. However, the evaluation must start with the IT strategy first, not the goals and objectives of the business or organization. The IT strategy should be the linking chain in the business strategy that governs the service or product strategy, which in turn should drive the IT strategy. The following are examples of the strategies of a CRM company. In determining which IT project to undertake, the business strategy must first be analyzed and understood. For example, an organization's business strategy may be to see its customer base increased by a certain percentage within a specific period. In the business strategy, all the necessary requirements for accomplishment of the strategy must be put in place. An IT organization may provide increased functionality through business analytics as well as executive dashboards. What follows should be the product strategy. An IT firm may want to work with a business intelligence software organization to improve the analytical capacity of its CRM software. The identification and undertaking of such strategies should be within a time frame. In this linkage chain, the product strategy should be governed and driven by the organization's business strategies (Greer, 2009). However, the product strategy should be the driving force behind the IT strategy, which should come third in this chain of strategies.
An organization's IT strategy could be to develop a new software platform which would enable easy integration between the organization's software and the business intelligence software company's, as well as with other companies'. The IT strategy should then be the driver of our IT project prioritization (Machevarapu, 2006). As an IT project, a company may undertake to create a Web-service-based platform which provides universal data transmission and exchange between the business intelligence software and the CRM software. This chain should be able to tell us whether the IT projects are in line with our IT strategy and, by extension, our business strategy. However, it would be difficult to verify the specific values our IT projects have for our business. In order to determine the particular values of IT projects to the organization in a hierarchical analysis of the strategies, one needs to look specifically at the four drivers that motivate our strategic analysis. The first and most important of these drivers is the potential reduction in expenses. One of the motivating factors should be the reduction of cost in our business operations. In this case, our CRM integration should be able to offer a new platform that would help us reduce the cost of creating links to other software sellers or vendors. This is because our CRM is built on a Web services standard (Entrekin, 2006). Our second and essential motivation should then be the potential revenue increase our project would bring to the business. As a business outfit, our concern should be how to minimize cost and improve our capital base, and therefore every project undertaken should be aligned with our business strategy of seeing an increase in revenue. According to Entrekin (2006), in our IT project prioritizing, an increase should be expected in our overall revenue, because we would expect a larger client base that would consider our CRM software.
The third driver should be the impact of our IT project on our product as well as on our competitors. The project should not only improve our products but should put us above our competitors. This should be our strategic undertaking, so that our software platform project directly impacts our CRM product and hence improves the organization's competitive position. The final and most important driver of our IT project should be the legality of our undertaking. We have to be aware of the various laws and regulatory measures required of IT projects (Greer, 2009). If the laws are in favor of our project, then we have to move swiftly to accomplish the project that would enhance our business strategies. Security of data is an important component of the new CRM software platform, and because data such as social security numbers are sensitive, the federal laws, for example, permit their storage within the CRM system. Strict compliance with IT laws would enable us to undertake projects that are tenable and in line with our IT strategy and, by extension, our business goals and objectives (Entrekin, 2006). Every IT project must be evaluated against the four discussed drivers in order to determine its value as well as its priority to the organization. Again, it is important to note that the drivers are not, and should not be, analyzed in isolation from each other; they should be intertwined in a meaningful and repeatable prioritization process. Analysis of any IT project must therefore be considered under each and every one of these drivers in order to come up with a comprehensive and exclusive value-based project (Entrekin, 2006). Prioritization management is a process, and creating a prioritization model takes a top-down approach which breaks down every driver into different parameters.
This process requires a concerted effort by business leaders from all departments in order to get insight into the business focus as well as performance measurements. In the example above, the CRM company's leaders undertook to break down the "expense reduction" driver into four parameters: customer service expenses, back-office efficiency gains, customer acquisition and retention, and others (Machevarapu, 2006). This step is followed by scoring every project across all the parameters, in a bottom-up approach, in order to find the overall score of the project. This requires presenting statements to the business leaders and gauging their degree of agreement with particular criteria assigned to a scoring range on a scale of 1-10. For instance, in the CRM company project, the leaders were asked whether they considered the project to be profitable or not in terms of savings. A score of 1 meant no savings, while a score of 10 meant savings in the millions. The bottom-up rating in this case will give us the final scores, which will determine whether or not the project is prioritized. The third step in the prioritization process is to adjust the two prioritization levers by assigning weights to every driver as well as its particular parameters in accordance with current business priorities. The weights are then adjusted correspondingly as priorities change, so that the scores for every IT project remain in line with the business strategies. Such levers must be set in relation to business priorities throughout the project portfolio and never changed between projects (Machevarapu, 2006). After the projects have been scored, sorting may be undertaken to determine those that are feasible. The cutoff points in this case may be related to the total number of such projects a business can absorb, the available funds for investment, or any other constraints the organization may be facing.
The most important thing for every manager to know is that all prioritization models look good on paper. However, there are no perfect ones, and getting accurate results may be the greatest challenge. One cause of this is that most people will try to manipulate the outcomes. It is therefore important that every manager learn some basic steps toward understanding prioritization: for example, learning what constitutes a project, which projects are to be subjected to strategic analysis and which are not, and, finally, learning to limit the number of projects undertaken by a particular department.
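The three-step model described above (break each driver into parameters, score each parameter bottom-up on a 1-10 scale, then weight the drivers top-down by current business priorities) can be sketched in code. The driver names, parameter names, weights, and scores below are illustrative assumptions, not values from the source:

```python
# Hypothetical prioritization model for one IT project.
# Weights reflect current business priorities and must sum to 1.0;
# scores are leaders' 1-10 agreement ratings for each parameter.
drivers = {
    "expense_reduction":  {"weight": 0.4, "scores": {"customer_service_expenses": 8,
                                                     "back_office_efficiency": 6,
                                                     "customer_acquisition_retention": 7}},
    "revenue_increase":   {"weight": 0.3, "scores": {"larger_client_base": 9}},
    "competitive_impact": {"weight": 0.2, "scores": {"product_improvement": 7}},
    "legal_compliance":   {"weight": 0.1, "scores": {"data_security_rules": 10}},
}

def project_score(drivers):
    total = 0.0
    for d in drivers.values():
        avg = sum(d["scores"].values()) / len(d["scores"])  # bottom-up: average the parameter scores
        total += d["weight"] * avg                          # top-down: weight by driver priority
    return total

print(f"overall project score: {project_score(drivers):.1f} / 10")
```

Projects with scores below a cutoff (set by the number of projects the business can absorb, or by available funds) would then be dropped, as the third step above describes. Changing a driver's weight as business priorities shift automatically re-ranks every project in the portfolio without re-scoring the parameters.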

Saturday, September 21, 2019

Drug Addiction Essay Example for Free

Drug Addiction Essay

Drug Addiction

The topic that caught my attention when I was reading my choices was "Drug Addiction." Drug addiction is something that should not be taken lightly by anyone who has a family member or friend with this issue. We often wonder what makes people turn into drug addicts. "Drug addiction is rooted in long-term adaptations within the brain that promote escalating drug use, difficulty quitting, and relapse—all despite the awareness of negative consequences." With that being said, I have always wondered what keeps a person going back to their addiction and why they can't quit. When I read the article and it mentioned how drug addiction is rooted, a light bulb clicked in my head. When something is rooted inside of you, whether for good or bad, it is hard to break. "It was previously hypothesized that addiction was caused in part by an imbalance between an impulsive system that governs appetitive motivation and is driven by immediate rewards on the one hand and a reflective system that regulates and controls impulses according to future pleasurable or aversive consequences." With this study they were able to test the hypothesis and see what causes addiction to happen in some cases. I think that the method used to see how people become addicts was efficient and very precise, which made the experiment valid. It is important to make sure that your study is conducted in a manner that will give you a fairly accurate result when using different research methods. Also, in the study they took a small sample of addicts who quit smoking after a brain injury, and they used criteria that asked them some simple questions; due to such a small sample size, the laterality effects could not be verified statistically, as I mentioned earlier.
The results of this study gave strong evidence that the insula is critical for the psychological processes that maintain addiction to cigarette smoking. I have known people with drug addictions, and it just hurts your heart when you see them all strung out and unable even to help themselves. I have had family members and friends who have battled this addiction. I want to research this addiction so that I can learn how a person can overcome it. Any addiction is hard to break; it is going to take a lot of willpower from the addict in order for them to be set free of the addiction. You never know what could have pushed the addict into becoming a drug addict. I want to be able to relate to an addict, whether it is a friend, a family member, or just someone in general, and be able to help them overcome this addiction. Drug addiction is the most common addiction, and I feel that if we do the proper research on how to treat addicts, then maybe their chances of becoming free will be greater. I just want to be able to talk with them and see what I can do to help them overcome this disease. That is why it is important that we take all necessary measures when trying to aid a person who is addicted to drugs. Drug addiction is something that I want to research so we can come up with a method that can slow it down. I know that it can never be eliminated entirely, but we could at least work on slowing it down. In the studies that I chose, we will be dealing with how addicts can be treated for addiction to prescription pain medication. I hear some people talk about taking other people's medication because they like the way it makes them feel, and they become addicted to it. Addiction is bad whether it is to drugs, food, or alcohol; we all know how serious this can be for anyone. In this article, they are launching a large-scale national study evaluating a treatment for addiction to prescription drugs.
"NIDA's National Drug Abuse Treatment Clinical Trials Network (CTN) is conducting the multi-site study known as the Prescription Opiate Addiction Treatment Study (POATS)." "This study is in response to the growing national problem of prescription drug abuse in this country. According to the 2005 National Survey on Drug Use and Health, the incidence of new nonmedical users of pain relievers is now at 2.2 million Americans aged 12 and older, surpassing the number of new marijuana abusers (2.1 million). In 2005, more than six million Americans reported current (in the past month) nonmedical use of prescription drugs – more than the number abusing cocaine, heroin, hallucinogens, and inhalants combined." This clearly answers my question as to how many people are addicted to prescription drugs versus other drugs. The data collected in this study show that a lot of people are addicted to prescription drugs, and the results show how the problem has increased. In this study, the results showed that there were more people addicted to prescription drugs than to everyday street drugs. I feel that this gives everyone a clear view that prescription pain drugs, which can give you some kind of satisfaction, can be addictive. This study covers a variety of areas, like those addicted to painkillers and those who abuse painkillers for nonmedical reasons. I found this study to be quite interesting because it will give us a general idea of the number of people addicted to prescription drugs, both those that take them for chronic pain and those that are simply addicted. This study will enroll a total of 648 participants and will be carried out at 11 sites around the country. The outcome of this study will be very useful for all to know just how many people are suffering from addiction to some type of drug. The second article lets us know how deeply this disease is rooted inside an addict's brain.
"Drug addiction is rooted in long-term adaptations within the brain that promote escalating drug use, difficulty quitting, and relapse—all despite the awareness of negative consequences." This article will answer my question as to why addicts keep going back to the same drug, trying to get the same high as the first time. This study will also explain the role of the insula in drug addiction. I feel that all the data collected in this article will be informative, and the results of the study will be concise and easy to follow and understand. I just hope that my two studies will provide enough information about drug addiction to give clarity about what steps we can take to stop this trend in drug addiction. When we look back at the studies done on this particular topic, we think back to all the methods that we have learned in this class. It really makes a difference when you read about a research design and are now able to relate to the experimenter when you come across some of the studies. So far, I think that the two studies I have written about could not have been researched any better than with the methods they have chosen. I came across another study entitled "Scientific and political challenges in North America's first randomized controlled trial of heroin-assisted treatment of severe heroin addiction: Rationale and design of the NAOMI study." This particular study targets long-term opioid-dependent people who have tried conventional treatments in the past, who are not currently in treatment, and who currently inject heroin. The sample size was based on the expected response and retention outcomes.
There were said to be two primary outcome variables in this study; they were evaluated with an alpha threshold of 0.025, which determined about 114 evaluable patients per group and yielded about 80% power to detect an increase of 20% in retention and response rates among the experimental groups. I think that with the methods they used, they will have a broader testing ground to determine whether the treatments are helpful to addicts. Another study involves a meta-analysis of predictors of continued drug use during and after treatment for opiate addiction. In this study they used meta-analytic techniques in order to identify the risk factors given in the study. When it came to the design and measurements, a search of the published literature yielded 69 studies that reported some bivariate association between the independent variables and continued drug use. I think that when it comes to this type of research, the clinicians and researchers try to find ways to prevent relapse and the other issues that form when someone becomes an addict. They want to come up with an intervention that will be able to help those in need. That is why these studies are very important to them and are taken very seriously. The studies in this meta-analysis were found through computerized databases said to index published scientific reports, and through many other sources. I think that, as far as the methods being used, they could not have chosen a better design. There were also a total of six criteria that were met for the meta-analysis of this study. "The independent variable clearly specified a single patient-related construct, and not a conglomeration of multiple constructs. An example of a conglomeration is Suffet et al.'s (1978) 'conventionality' variable that is based on four distinct constructs (level of heroin use at intake to treatment, number of arrests, type of residence, and employment).
We excluded from consideration independent variables that referred to treatment program types, policies, practices, personal and setting.† This pretty much shows you the results and how they were able to determine if the treatment helped or not. In this study there were a total of 28 independent variables for which there was said to be two studies with the results on the relationship between the independent variable and continued drug use. This study has to do with cocaine addiction treatments to improve or reduce harm by (CATCH). This study is said to investigate possibilities and problems associated with pharmacological treatments. ‘The methods and designs used in this study by (CATCH) consist of three separate randomized controlled, open label, parallel group feasibility trials,  conducted at three separate addiction treatment institutes in the Netherlands. Patients are either new referral are already in treatment. With a total of 216 eligible outpatients are randomized using pre-randomization double-consent design and received either 12 weeks treatment with oral topiramate (n=36; Brijder Addiction Treatment, The Hague), oral modafinil (n=36; Arkin, Amsterdam), or oral dexamphetamine sustained release (n=36; Bouman GGZ, Rotterdam) as an add on to cognitive behavioral therapy (CBT), or receive a 12 week CBT only (controls n= 3 x 36).† With this design I am sure that they will come out with a pretty good analysis of if cocaine harms or improve those that are in treatment for this addiction. In this review it let us know up front that they are not going to be able to cover the whole clinical addiction arena or the parts dedicated to treatments. Although within this study there was a conflict of interest within the testing methods used inside this review. Personally when you are conducting a study on something with a broader view you will tend to have a conflict of interest with in your results. 
There are so many methods out there that you can use to conduct a study; it just depends on what you are trying to research. A different study aims to describe the derivation of recent status scores for the ASI. "The design was: 118 ASI-6 recent status items were subjected to nonparametric item response theory (NIRT) analysis followed by confirmatory factor analysis (CFA). Generalizability and concurrent validity of the derived scores were determined." The sample was 607 recent admissions to a variety of substance abuse treatment programs. I think that this study will have great validity due to the fact that they drew on a variety of programs for this study. Another interesting study, which really caught my attention, aimed to explore the existential aspects of living with addiction. The design used in this study is called hermeneutic; I had never heard of this type of design, but apparently it was the right one for this study, although I am not familiar with it. The methods used in the first study were based on interviews with addicts who had rich personal experience. The results show that people living with addiction struggle with it as long as it exists within them. The last study talks about giving addicts the drug of their choice, for a variety of reasons. I can't say that I would want to give an addict the drug of their choice, because I would want to try to help them overcome their addiction, not contribute to it. This study mentioned that offering drugs to addicts, and letting them determine whether they can say "yes" or "no" to the drug, does not help them make progress; they need something to undermine the addiction in order to get the results they need. I think that all the studies and reviews mentioned in my paper are really good ones; they all intend to find a solution to helping those who are addicted to drugs.
Drug addiction, as I mentioned earlier in my paper, is a serious disease, and it comes in many forms. I feel like this class has helped us learn that there are many methods and designs out there for us to be able to conduct a study or research and get the validity that we need to complete our study.

References

Cacciola, J. S., Alterman, A. I., Habing, B., & McLellan, A. T. (2011). Recent status scores for version 6 of the Addiction Severity Index (ASI-6). Addiction, 106(9), 1588-1602. Retrieved from http://search.proquest.com
Naqvi, N. H., & Bechara, A. (2010). The insula and drug addiction: An interoceptive view of pleasure, urges, and decision-making. Brain Structure and Function, 214(5-6), 435-50. Retrieved from http://search.proquest.com
Nuijten, M., Blanken, P., Van Den Brink, W., & Hendriks, V. (2011). Cocaine addiction treatments to improve control and reduce harm (CATCH): New pharmacological treatment options for crack-cocaine dependence in the Netherlands. BMC Psychiatry, 11(1), 135. Retrieved from http://search.proquest.com
Nutt, D., & Lingford-Hughes, A. (2008). Addiction: The clinical interface. British Journal of Pharmacology, 154(2), 397-405. Retrieved from http://search.proquest.com
Oviedo-Joekes, E., Nosyk, B., Marsh, D. C., Guh, D., Brissette, S., Gurtry, C., ... Schechter, M. T. (2009). Scientific and political challenges in North America's first randomized controlled trial of heroin-assisted treatment for severe heroin addiction: Rationale and design of the NAOMI study. Clinical Trials, 6(3), 261-71. Retrieved from http://search.proquest.com
Walker, T. (2008). Giving addicts their drug of choice: The problem of consent. Bioethics, 22(6), 314-20. Retrieved from http://search.proquest.com
Wiklund, L. (2008). Existential aspects of living with addiction - Part I: Meeting challenges. Journal of Clinical Nursing, 17(18), 2426-2434.
Retrieved from http://search.proquest.com
Drug abuse: NIDA launches first large-scale national study to treat addiction to prescription pain medications. (2007). Mental Health Weekly Digest, 23. Retrieved from http://search.proquest.com

Friday, September 20, 2019

Quantization effects in digital filters

Quantization effects in digital filters ABSTRACT: Quantization effects in digital filters can be divided into four main categories: quantization of system coefficients, errors due to A-D conversion, errors due to roundoffs in the arithmetic, and a constraint on signal level due to the requirement that overflow must be prevented in the computation. The effects of quantization on implementations of two basic algorithms of digital filtering (the first- or second-order linear recursive difference equation, and the fast Fourier transform (FFT)) are studied in some detail. For these algorithms, the differing quantization effects of fixed point, floating point, and block floating point arithmetic are examined and compared. The ideas developed in the study of simple recursive filters and the FFT are applied to analyze the effects of coefficient quantization, roundoff noise, and the overflow constraint in two more complicated types of digital filters: frequency sampling and FFT filters. Realizations of the same filter design, by means of the frequency sampling and FFT methods, are compared on the basis of differing quantization effects. All the noise analyses in the report are based on simple statistical models for roundoff and A-D conversion errors. Experimental noise measurements testing the predictions of these models are reported, and the empirical results are generally in good agreement with the statistical predictions. INTRODUCTION: Digital filters are widely used in modern signal-transmission systems. First-order filters are used for extracting lower-frequency or upper-frequency signals. Quantization errors due to the finite number of binary digits in the representation of numbers are typical of digital filters. Quantization is the representation of data samples with a certain number of bits per sample after rounding to a suitable level of precision.
Quantization errors in a Digital Signal Processing (DSP) system can be introduced from three sources: one source is input quantization, a second is coefficient quantization, and the third is the finite precision of the arithmetic operations. The quantization error in the arithmetic operations can be controlled by carefully selecting the size of buffer registers according to the input word length. Quantization errors from input and filter samples are considered in this article. The effects of quantization errors and the tradeoffs required between precision and hardware resources are discussed in relation to the implementation of the DSP in a Field Programmable Gate Array (FPGA). This article is divided into three main sections: quantization effects for upconversion, quantization noise due to rounding of arithmetic, and quantization effects for digital beamforming (DBF). Fixed-length samples cause a reduction in the filter dynamic range and gain resolution. Quantization: In digital signal processing, quantization is the process of approximating (mapping) a continuous range of values (or a very large set of possible discrete values) by a relatively small (finite) set of discrete symbols or integer values. For example, rounding a real number in the interval [0,100] to an integer 0, 1, 2, ..., 100. In other words, quantization can be described as a mapping that represents a finite continuous interval I = [a,b] of the range of a continuous-valued signal with a single number c, which is also in that interval. For example, rounding to the nearest integer (rounding 1/2 up) replaces the interval [c - .5, c + .5) with the number c, for integer c. After that quantization we produce a finite set of values which can be encoded by, say, binary techniques. A. QUANTIZATION EFFECTS ON UPCONVERSION: In multirate systems, upconversion can be achieved with oversampling and filtering techniques.
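As a concrete sketch of the rounding mapping just described (Python is used purely for illustration; the article itself targets fixed-point FPGA hardware), a uniform quantizer can be written in a few lines:

```python
import numpy as np

def quantize(x, n_bits, full_scale=1.0):
    """Uniform quantizer: round x to the nearest of 2**n_bits levels
    spanning [-full_scale, +full_scale)."""
    step = 2 * full_scale / 2**n_bits      # quantization step (one LSB)
    return step * np.round(np.asarray(x) / step)

x = np.array([0.1234567, -0.9876543, 0.5])
print(quantize(x, 8))    # coarse: step = 2/256 ~ 0.0078
print(quantize(x, 14))   # fine:   step = 2/16384 ~ 6.1e-5
```

The maximum error of this mapping is half a step, which is the basis for the noise models used later in the article.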
For the proposed digital TIGER system, input Gaussian pulses are upsampled to produce higher-order Nyquist zones. A high-pass FIR filter is employed to acquire a spectral zone at the expanded band edge. In this case, higher efficiency is possible by exploiting filter symmetry. For a higher throughput rate, polyphase implementation of the FIR filters can be employed. Since signal amplification is performed in the analog domain, a high-speed 14-bit DAC is used for digital-to-analog conversion. Finite precision causes similar effects in the input data samples and filter coefficients. Fixed word length effects on filter coefficients, filter length and dynamic range are described in the following sections. 1. Sensitivity of Filter Coefficients to Quantization: Finite precision plays a significant role in the dynamic range of filter gain and DC offset. A large number of quantization levels will decrease the quantization error; on the other hand, it requires more silicon space to implement the design. Quantization affects both the input Gaussian pulse and the filter coefficients. The pole and zero maps in Figure 1 show perturbations when samples are restricted to finite word length. The filter coefficients in the lower parts are constrained to 14-bit quantized samples and the length of the filter is 100 taps. This constraint arises from the fast DAC of 14-bit width used for converting the digital signal into the analog domain. Since the dynamic range of the quantizer is less than that of the filter coefficients, the quantized coefficients are displaced from the unit circle. The gain of the quantized filter response, displayed in Figure 1, is distinctly less than that of the infinite-precision filter. For these simulations, infinite-precision representation is taken to be floating point, which provides significantly better precision than the quantization levels discussed here. The zeros around z = -1 are responsible for passband attenuation and are less displaced.
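The coefficient-perturbation effect described above can be reproduced with a hypothetical stand-in filter. The sketch below uses a 101-tap windowed-sinc FIR (the actual TIGER coefficients are not given in the text) and rounds its taps to the 14-bit grid imposed by the DAC, then compares the frequency responses:

```python
import numpy as np

# Stand-in for the article's 100-tap filter: a windowed-sinc FIR whose
# float ("infinite precision") taps are rounded to 14-bit fixed point.
N = 101
n = np.arange(N) - (N - 1) / 2
h = np.sinc(0.5 * n) * np.hamming(N)   # float coefficients
h /= np.sum(h)                         # unity DC gain

hq = np.round(h * 2**13) / 2**13       # 14-bit signed fixed point (1 sign bit)

H  = np.abs(np.fft.rfft(h, 4096))
Hq = np.abs(np.fft.rfft(hq, 4096))
print("max response deviation:", np.max(np.abs(Hq - H)))  # small but nonzero
```

The deviation is bounded by the sum of the per-tap rounding errors (at most half an LSB each), which is why a 14-bit grid still leaves the passband essentially intact while raising the stopband floor.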
As the dynamic range of the quantizer is increased to match the filter coefficients, the signal-to-quantization-noise ratio (SNR) improves, but at the cost of increased hardware resources. Similar results can be obtained for the input Gaussian pulse when quantized to the specified fourteen-bit word length. Finite precision is hardware efficient, since the system data width is less than in the infinite-precision (or floating point) case. Quantization reduces a few out of the 100 coefficients to zero, which further reduces the memory and arithmetic processing requirements. Quantization also reduces the filter gain compared to infinite-precision samples; however, this reduction is acceptable as long as it remains within an attenuation limit. The fourteen-bit quantizer provides more than 80 dB attenuation, which is better than the 60 dB standard used by many communication systems. 2. Quantization Effects on Filter Order: For direct conversion transmission, a cascaded design performs better than a single stage. This is because quantization errors are reduced with a lower filter order; secondly, a lower-order design requires fewer logic resources. Quantization errors vary with the length of a filter, and we now study the effects of the filter order on the quantization error. A simulated result is shown in Figure 2, where quantization error is plotted against filter order. The quantization is performed by rounding the infinite-precision samples to the closest fixed-point value. The quantization error increases with increased filter order, since the highest power index in the filter polynomial is the most affected by the rounding. When the quantizer precision is increased by one bit, the error is reduced by approximately 6 dB, as would be expected. The lower-order filter provides better dynamic range than the higher-order one for eight- and nine-bit quantizers. This fact is also evident in Figure 2.
At the lower filter order of fifty, the accumulated quantization error is around -43 dB, and at the higher order of 200 it is -31 dB. The 12 dB difference is equivalent to two additional bits of quantization. Non-linear effects of the quantization can be reduced by using a smaller filter order in the modulator. Since the cascaded design comprises filters of lower order compared with the single-stage model, it introduces less quantization error than the single stage. 3. Quantization and Word Length: The dynamic range of the scaled filter depends on the number of bits assigned to the quantizer. For maximum signal power, the quantizer range should be equal to the signal magnitude. An FIR filter with signal variance sigma_f^2 and quantization noise variance sigma_n^2 has a signal-to-noise ratio of SNR = 10*log10(sigma_f^2 / sigma_n^2). This expression can be used to estimate the appropriate word length for the FPGA implementation. A comparison of SNR versus word precision using the above expression has been calculated and is shown in Figure 3. From this graph it is evident that for each bit added to the word length, there is approximately a six-decibel improvement in the SNR. A system can still be implemented at a higher precision level, but at the cost of increased FPGA logic resources. B. QUANTIZATION NOISE DUE TO ROUNDING OF ARITHMETIC: In the polyphase filter, as in any other filter, quantization has to be performed on the result of any arithmetic operation. This is because any such operation requires more bits to represent the result than are required for each of the operands. If the word length were always adjusted to store the data in full precision, this would be impractical, as there would soon be too many bits to store in the available memory. Therefore, the word length of the internal data has to be chosen, and the result of any arithmetic operation has to be constrained back to it using the quantization scheme chosen from the ones shown in the previous section, as appropriate for the given application.
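The roughly six-decibel-per-bit behavior follows directly from the SNR expression above. A minimal sketch, assuming a full-scale sinusoidal signal (variance 1/2) and the usual step^2/12 model for rounding noise:

```python
import numpy as np

# SNR of a B-bit quantizer for a full-scale sine: noise variance is
# step**2 / 12, which yields the classic SNR ~ 6.02*B + 1.76 dB rule.
def snr_db(bits, signal_var=0.5):        # 0.5 = variance of a unit sine
    step = 2.0 / 2**bits                 # quantizer spans [-1, 1)
    noise_var = step**2 / 12
    return 10 * np.log10(signal_var / noise_var)

for b in (8, 10, 12, 14):
    print(b, "bits:", round(snr_db(b), 1), "dB")
# consecutive bit counts differ by 10*log10(4) ~ 6.02 dB
```

Each added bit halves the step, quartering the noise variance, hence the approximately 6 dB improvement per bit seen in Figure 3.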
The quantization operation may disturb the result of the arithmetic operation. For normal filtering operations, such a quantization disturbance can usually be treated as white noise and modeled as an additive noise source at the point of the arithmetic operation, with the quantization step equal to the LSB of the internal data. This certainly is not the case for zero-valued or constant input signals. However, modeling the quantization has, in most cases, the purpose of determining the maximum noise disturbance in the system. Hence, even if the additive quantization noise model gives overestimated values of the noise for very specific signals, this fact does not decrease the usefulness of the approach. After the shape of the quantization noise power spectral density (NPSD) is found, it can be used to identify regions that might cause overloading or loss of precision due to arithmetic noise shaping; the required input signal scaling and the required internal arithmetic word length can also be estimated for a given noise performance. The standard methods of estimating the maximum signal level at a given node are the L1-norm (modulus of the impulse response; the worst-case scenario), the L2-norm (statistical mean-square), and the L-infinity norm (peak in the frequency domain, giving the effect of the input spectral shaping). These norms can be easily estimated for the given node from the shape of the NPSD. The quantization noise injected at each adder and multiplier, originally spectrally flat, is shaped by the noise shaping function (NSF) calculated from the output of the filter to the input of each of the noise sources, i.e., to the output of each of the arithmetic operators. These functions, calculated for all of the allpass filter structures, are shown in Fig. 2. The shapes of the nontrivial NSFs are shown in Fig. 3.
The accumulated quantization NPSD transferred to the output is obtained by shaping the uniform NPSD from each of the quantization noise sources by the square of the magnitude of the NSF corresponding to the given noise injection point. The results show that each structure performs very differently from the others. Structure (a) has the best performance at dc, half-Nyquist, and Nyquist, where the NPSD falls toward minus infinity. Its two maxima are symmetric about half-Nyquist and independent of the coefficient value. The peaks are distant from it for small coefficient values and approach it as the coefficient increases. Structure (b) has a uniform noise spectral distribution, as all the arithmetic operations are either at the filter input (where noise is shaped by the allpass characteristic of the whole filter) or at its output. Structure (d) also has a minimum at v = 0.25. Its average noise power level decreases as the value of the allpass coefficient increases. Structure (c), the best from the point of view of the required guard bits, has its maximum at v = 0.25, going toward infinity for coefficient values approaching one. This effect is a result of the denominator of the Nth-order allpass filter causing the poles of the filter to move toward the unit circle at normalized frequencies of v = 2*pi*k/N, k = 0, ..., N-1, for the coefficient approaching one. If there is no counter effect from the numerator, as for P1(z) in structure (c) and in structure (a), then the function goes to infinity. Even though structure (c) goes to infinity at v = 0.25 for alpha = 1, it has the lowest average noise power of all the structures. This structure has a big advantage in terms of the number of required guard bits and ease of cascading a number of them into higher-order allpass filters. If the filter coefficients approach one, then the increase in quantization noise power can be countered with a few additional bits.
Using other structures would only replace the problem of dealing with an increase in the quantization noise with the problem of having to increase the number of guard bits required to deal with an increase of the peak gains. The NPSD of the quantization noise at the output of the polyphase structure can be calculated as the sum of the NPSDs at the outputs of all the allpass filters in the filter, scaled by the factor 1/N, N being the number of paths. If the filter is cascaded with another filter, the NPSD of the first one will also be shaped by the square of the magnitude of the second filter. The intention was to check the correctness of the theoretical equations by applying white noise sources in place of quantization, and by performing the quantization after addition and multiplication (rounding and truncating), to verify the shaping of the quantization noise and its level both for white input noise sources and for real-life signals. The shape of the output quantization noise accumulated from all arithmetic elements for a wide-band input signal, assuming for simplicity no correlation between the noise sources, is shown for all considered allpass structures in Fig. 4. The solid curve indicates the theoretical NSF, which matches the median of the quantization noise very well (the curves lie on top of each other). The quantization noise power increase calculated for the given coefficient was 8.5 dB for structure (a), 6 dB for structure (c), 7.3 dB for structure (d), and 9 dB for structure (b). It is clear that the quantization noise differs from the assumed white noise characteristic. However, the approximation still holds with an accuracy of around 5-10% depending on the structure of the input signal. An example of more accurate modeling of the quantization noise caused by arithmetic operations can be found in (a). The arithmetic quantization noise certainly decreases the accuracy of the filter output.
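A generic illustration of this noise-shaping idea (not the paper's specific allpass structures, whose topologies appear only in its figures): white roundoff noise e[n] injected at the adder of a first-order recursion y[n] = a*y[n-1] + x[n] reaches the output shaped by 1/(1 - a z^-1), so its output variance grows from sigma_e^2 to sigma_e^2/(1 - a^2):

```python
import numpy as np

# Inject unit-variance white noise into a first-order recursive loop and
# compare the measured output noise power with the shaped prediction.
rng = np.random.default_rng(0)
a, sigma_e = 0.9, 1.0
e = rng.normal(0.0, sigma_e, 200_000)

y = np.zeros_like(e)
for i in range(1, len(e)):
    y[i] = a * y[i - 1] + e[i]           # noise alone through the recursion

print("measured:", np.var(y))            # ~ sigma_e^2 / (1 - a^2) ~ 5.26
print("predicted:", sigma_e**2 / (1 - a**2))
```

The closer the pole sits to the unit circle (a toward 1), the larger the noise gain, which is exactly the behavior attributed to structure (c) above as its coefficient approaches one.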
The value of the arithmetic word length has to be chosen such that the quantization noise power is smaller than the stop-band attenuation of the filter and the stop-band ripples. In certain cases, the design requirements have to be made more stringent to allow for some unavoidable distortion due to arithmetic word length effects. For the case of decimation filters for sigma-delta-based A/D converters, the quantization noise adds to that originating from the modulator. In such a case, each stage of the decimator has to be designed so that it filters out this noise as well. The verification of the peak gain analysis was performed by applying single-tone signals at the characteristic frequencies (where the functions in Fig. 2 have their extremes) and by using wideband signals to make sure that the estimates are accurate. The experimental results confirmed the theoretical calculations. The results of the simulation for a white noise input signal of unity power are given in Fig. 8. The simulation was performed with a white noise input signal of unity power in order to have a uniform gain analysis across the whole range of frequencies. The theoretical shape of the gain is shown by a solid line that very closely matches the median value of the signal at the test points. C. QUANTIZATION EFFECTS ON DIGITAL BEAMFORMING: The quantization of infinite-precision samples into a fixed word length degrades the phased signals. As discussed in the previous section, using more levels for higher precision decreases the quantization error at the expense of larger hardware resources. For a reduced precision level, quantization error is spread to the main beams and to the grating lobes as well. In this section we present the effects of quantization on beam resolution and the associated grating lobes. 1. Quantization Effects on Beam Pattern: Phased signals show quantization effects on main beam resolution similar to those of the filter samples.
However, non-linearity arises in the sidelobes, since the quantizer is not of adequate resolution to represent the small changes that affect the sidelobe levels. In order to investigate the quantization effects, an example is presented with fixed word length delay samples. The coefficients of the time vector are quantized into four and ten bits; an increased number of bits will reduce the quantization effect. For an actual design, the fixed bit width will depend on available hardware resources. The quantized beam in Figure 1 shows that a four-bit fixed number does not adequately represent the beam pattern and thus introduces quantization noise. The ten-bit numbers will also introduce quantization error, but at a lower level, as shown in Figure 1(b). As can be seen from this simple example, the four-bit quantization compromises the sidelobes at the 20 dB level, while the ten-bit quantization provides a reasonably faithful reconstruction of the theoretical sidelobes at this level. Therefore we conclude that for the 14-bit DAC of the proposed system, the sidelobe level will be essentially unaffected by the quantization at the -20 dB level. 2. Sensitivity of Sidelobe Levels to Quantization: Quantization causes gain errors in sidelobe levels. Higher resolution in quantization introduces lower quantization error. The graph in Figure 1 shows that the four-bit samples result in a quantization error which reduces the first sidelobe gain while producing a gain error in the second sidelobe. The quantization error changes the dynamic range of the grating lobes and degrades the adjacent beam resolution for multiple-beam systems. A simulated graph is displayed in Figure 2 to demonstrate the non-linear behavior of the quantizer in the sidelobe resolution. For a lower-order quantizer, the quantization step is not perfectly matched to the sidelobe levels.
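The four-bit versus ten-bit comparison above can be sketched with a hypothetical array (a 16-element uniform line array at half-wavelength spacing with a Hamming taper; none of these parameters come from the paper), rounding the weights to each bit width and reading off the worst sidelobe:

```python
import numpy as np

# Illustrative array factor: 16 elements, half-wavelength spacing,
# Hamming amplitude taper, steered to broadside.
M = 16
w = np.hamming(M)                         # ideal (float) weights in [0, 1]
theta = np.linspace(-np.pi / 2, np.pi / 2, 2048)
steer = np.exp(1j * np.pi * np.outer(np.sin(theta), np.arange(M)))

def pattern_db(weights):
    p = np.abs(steer @ weights)
    return 20 * np.log10(p / p.max() + 1e-12)

def quantize(x, bits):
    step = 1.0 / 2**(bits - 1)            # grid for weights in [0, 1]
    return np.round(x / step) * step

side = np.abs(np.sin(theta)) > 0.3        # region outside the broadened main lobe
for bits in (4, 10):
    sll = pattern_db(quantize(w, bits))[side].max()
    print(f"{bits}-bit weights: worst sidelobe {sll:.1f} dB")
```

Coarse four-bit rounding perturbs the taper enough to raise the sidelobe floor, while ten-bit weights reproduce the theoretical pattern almost exactly, mirroring the behavior reported for Figure 1.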
For the first sidelobe, the quantized resolution is less than in the infinite-precision case, although it approaches the floating point value with increasing quantization levels. Figure 2(a) shows that for a three-bit quantizer, the first sidelobe resolution is at -18 dB, while at ten bits it approaches the infinite-precision value of -13.5 dB. Unlike the first sidelobe, the second sidelobe exhibits higher resolution error at a lower precision level, since the quantizer cannot represent the dynamic range adequately. Again, quantization error reduces with an increase in the number of bits. CONCLUSION: In this paper, the effects of fixed word lengths on signal upconversion, quantization noise due to rounding of arithmetic, and quantization effects on digital beamforming have been discussed. For the digital upconversion process, the quantization error can be described using pole/zero and frequency response plots. Filter resolution and stop-band attenuation are degraded when quantization is introduced. For an increase in filter order, the quantization error increases, as the highest order in the filter polynomial is affected the most. To overcome this limitation, the number of precision levels of a quantizer can be increased; however, this will require increased logic resources for FPGA implementation. Quantization effects in phasing are more complex than in filter quantization, since finite precision degrades the sidelobe resolution. For lower precision levels, the quantization error exhibits non-linear behavior in the second sidelobe. The quantization error is higher for lower precision levels. In order to overcome these non-linear effects, a precision level of more than eight bits is required. Performance of the proposed digital system will be effectively unaffected by the fixed word length limitations, since a system data bus of at least 14 bits is suggested. REFERENCES: A. B. Sripad and D. L.
Snyder, "A necessary and sufficient condition for quantization errors to be uniform and white." P. P. Vaidyanathan, "On coefficient-quantization and computational roundoff effects in lossless multirate filter banks."

Thursday, September 19, 2019

Egon Schiele's Self-Portrait Essay -- Visual Arts Paintings Art

Egon Schiele's Self-Portrait When I look at this portrait, the first thing that strikes me is the way the artist, Egon Schiele, appears to have made himself look animated, like a cartoon. The way in which his right eye is rounded like a cartoon character's and his left eye is squinting and almost shut adds to the idea of the portrait being a cartoon. The squinted left eye makes it look as if he is sneaking around and evaluating his surroundings. If you cover the right side of the face (with the widely opened eye), you realise that the left side with the squinted eye does not look very lifelike, but the two eyes seem to cancel each other out. The over-exaggerated wrinkles on Schiele's face and neck make him look a lot older than he actually is. He was only 20 years old when he painted this portrait, but the way in which he has exaggerated the wrinkles makes him look around 40. Schiele may have done this to portray his feelings at the time he painted it; he may have felt old and tired. But he may also have been simply experimenting with different ways of painting facial features and expressions. If you look at Schiele's hands in the portrait, it looks as though he has deliberately elongated the fingers and made them thinner and more withered. This, again, makes you think that Schiele has deliberately made himself look older and more animated. The clothes that Schiele is wearing in the portrait look too big and baggy for him, and therefore also seem to ...

Wednesday, September 18, 2019

Phosphates and dissolved oxygen :: essays research papers

Phosphates are present in many natural waters, such as lakes and streams. Phosphates are essential to aquatic plant growth, but too much phosphate can lead to the growth of algae and result in an algae bloom. Too much algae can cause a decrease in the amount of dissolved oxygen in the water. Oxygen in water is affected in many different ways by phosphates. Phosphorus is usually present in natural waters as phosphate (Mcwelsh and Raintree, 1998). Phosphates are present in fertilizers and laundry detergents and can enter the water from agricultural runoff, industrial waste, and sewage discharge (Outwater, 1996). Phosphates, like nitrates, are plant nutrients (Phosphates, 1997). When too much phosphate enters a body of water, plant growth flourishes (Phosphates). Phosphates also stimulate the growth of algae, which can result in an algae bloom (World Book Encyclopedia, 1999). Algae blooms are easily recognized as layers of green slime and can eventually cover the water's surface. As the plants and algae grow, they choke out other organisms. These large plant populations produce oxygen in the upper layers of the water, but when the plants die and fall to the bottom, they are decomposed by bacteria which use up much of the dissolved oxygen in the lower layers (Phosphates). Bodies of water with high levels of phosphates usually have high biological oxygen demand (BOD) levels, due to the bacteria consuming the organic plant waste, and subsequently low dissolved oxygen levels (Hooper, 1998). The addition of large quantities of phosphates to waterways accelerates algae and plant growth in natural waters (Hooper), enhancing eutrophication and depleting the water body of oxygen. This can lead to fish kills and the degradation of habitat with loss of species. Large mats of algae can form and, in severe cases, can completely cover small lakes.
Dying plants and algae release phosphates while decaying; as a result, water can become putrid from decaying organic matter (World Book Encyclopedia). When the concentration of phosphates rises above 100 mg/liter, the coagulation processes in drinking water treatment plants may be adversely affected (World Book Encyclopedia). Manmade sources of phosphate include human sewage, agricultural run-off from crops, sewage from animal feedlots, the pulp and paper industry, vegetable and fruit processing, chemical and fertilizer manufacturing, and detergents. Dissolved oxygen is one of the best indicators of the health of a water ecosystem. Dissolved oxygen can range from 0-18 parts per million (ppm), but most natural water systems require 5-6 parts per million to support a diverse population (Phosphates).

Tuesday, September 17, 2019

Concerns Of Gloving Practices Health And Social Care Essay

Globally, the number of people living with HIV continued to grow in 2008, reaching an estimated 33.4 million (33.1 million-33.4 million). The incidence of newly infected people living with the HIV virus has increased more than 20% since 2000; the current rate is 2.7 million people. Related deaths number about 2 million, and the prevalence was roughly threefold higher than in 1990 (UNAIDS). In Asia alone the HIV-positive incidence rate is 4.7 million, and India, being a developing country and among the most populous, has a high HIV prevalence of 3.2 million. HIV cases are more often seen in Asian countries; the main contributing factors for this increased rate of HIV-positive cases are poverty, certain cultural and religious beliefs and practices, and denial by governments due to lack of proper education and means of prevention (T., 1995). As the incidence is higher in these countries, health care professionals' duties will increase, which leads to the question of how educated, aware and ready nurses are to care for HIV and AIDS patients. Nurses are the largest paramedical professional group caring for patients. Nurses play an important role in patient care; in acquired immune deficiency syndrome (AIDS), caused by the human immunodeficiency virus (HIV), nurses have a major role in patient care and treatment. As the prevalence of AIDS is higher among homosexual men and intravenous drug users, the public is concerned with the transmission of this virus, and the same is true of health care professionals, as they are also an at-risk group in the care and treatment of HIV-infected patients. Many evidence-based studies have been published on the concerns of health care workers such as physicians, nurses and paramedical staff, and their negative attitudes and concerns regarding HIV/AIDS.
To be more specific, many of these studies have concentrated on the Western world. In contrast, few studies have been published regarding Indian nurses' attitudes and concerns about HIV/AIDS. The main purposes of this study are: to determine the concerns, attitudes and workplace practices of Indian nurses caring for HIV-positive patients and handling other biological fluids; to ascertain whether these concerns and attitudes are inter-connected; and to find out the nurses' working knowledge of HIV and AIDS. As a result of this study, nurses can improve their knowledge and provide care to HIV- and AIDS-infected patients with proper education, which also helps in reducing anxiety and fear in treating those patients.

LITERATURE REVIEW

Nurses have the main role in providing care for patients with HIV/AIDS, but studies conducted in Western countries indicate that nurses are often unwilling, and that health professionals do not show interest in, or avoid, giving care to patients with HIV/AIDS. This is mainly due to insufficient knowledge and education regarding HIV/AIDS, as many nurses hold homophobic beliefs and fear that HIV-positive patients are contagious (Campbell S., 1991). The mean knowledge and attitude scores of physicians and nurses caring for HIV/AIDS patients turned out to be higher in the Western world (the United States of America and Canada), while the mean scores of nurses are very low in India and Thailand (Brachman P., 1996). Indian and Thai nurses are more uncomfortable treating HIV patients than nurses and physicians from the United States and Canada; this study clearly identifies the lack of proper knowledge and positive attitudes of nurses toward HIV/AIDS patients.
Basic, post-basic and continuing education programmes for nurses on HIV/AIDS in western Pacific countries such as Fiji, Australia, Papua New Guinea, Singapore and the Philippines were evaluated, and most of these countries reportedly have poor or no standards for HIV/AIDS nursing practice; due to inadequate facilities in the workplace environment, they could not meet the agreed protocols of infection control (S.B., 1990). This clearly suggests that the nursing work environment is also at risk, where nurses play a major role and are more susceptible to infection, especially when nursing HIV and AIDS patients. A questionnaire developed to identify nurses' attitudes toward caring for HIV-seropositive patients showed clearly that the nurses' willingness to care for HIV patients had declined. Of a total of 323 nurses, more than half responded that they were not willing to provide care to HIV patients if given the option; the results also showed that more than one-fourth of nurses wanted to have an option in giving care to patients with HIV and AIDS (Wiley K, 1990). A study conducted by (P, 1992) in an English hospital with 717 nurses questioned them regarding their knowledge of HIV and AIDS and their attitudes toward caring for HIV-positive patients. The results show that a third of the nurses responded that they were not ready to give care to patients if they were given an option of caring for patients with HIV and AIDS.
Another study examined nurses' anxiety about caring for HIV-positive patients and their lack of knowledge of HIV and AIDS. The nurses' anxiety was evident as they refused to care for patients, demanding that patients be screened with an HIV test before care was provided, and health workers denied care to patients at risk of being HIV-seropositive (E, 1988). The studies conducted by (EC, 1992), (Flaskerud J H, 1989), (Kelly JA, 1988), (D, 1990) and (Scherer YK, 1989) indicated mainly the nurses' fear and anxiety of HIV transmission from HIV-positive patients to themselves while giving care. The Centers for Disease Control (CDC) estimates that the risk of cross-infection by HIV transmission from seropositive patients to health care workers is 1 in 330, as per the cooperative needlestick surveillance group. Stigma and discrimination act as the main barriers to care for HIV-positive patients in India; this discrimination is seen among medical doctors and nurses in hospitals, according to research conducted in India by UNAIDS in 2001.

Rationale

Nurses play an important role in providing quality care, helping HIV-positive patients both physically and mentally. The literature review clearly states that nurses need much more education and knowledge intervention about caring for HIV/AIDS patients. Research to date has clearly described the attitudes and concerns of nurses regarding HIV and AIDS, but those studies were mainly concentrated in Western countries, where nurses are well equipped with knowledge and proper medical facilities, whereas in Asian countries the statistics show that nurses still have a fear of caring for HIV and AIDS patients. In this research, Indian nurses' knowledge of and attitudes toward HIV and AIDS are taken into consideration.
Nurses in India require specialised skills, training and up-to-date knowledge of all aspects of HIV and AIDS. This study mainly points out the current and future role of professional nurses in providing appropriate care to patients with HIV and AIDS.

MANAGING DATA / TIME RESOURCES

The target sample will be registered Indian nurses working in Mahatma Gandhi Medical Hospital in India; the nurses selected will come from different fields of practice. Non-probability (convenience) sampling will be used to select the nurses. The inclusion criterion is current employment as a nurse in the selected hospital; the exclusion criterion is having no clinical experience. The data collection tool will be a structured questionnaire mailed to respondents. The questionnaire was previously used with health care workers (laboratory staff and nurses) in New Zealand to determine the knowledge, attitudes and concerns of staff dealing with HIV-positive biological fluids in the workplace (Siebers R W L, 1992). The validity of the questionnaire was established in a United Kingdom study of the relationship between nurses' attitudes, knowledge and extent of contact with HIV and AIDS (Robbins I, 1992). The questionnaire consists of five sections. The first section focuses on demographic data:
- Age
- Education
- Professional training
- Major medical area of practice
- Years of work experience
- Whether the nurse has recently attended workshops or seminars on HIV/AIDS

The second section consists of yes/no questions asking nurses about their gloving practice while handling biological fluids. The third section asks for nurses' responses on handling different types of biological fluids and specimens. The fourth section uses a Likert scale, presenting statements with response options ranging from 'strongly agree' to 'strongly disagree'.
The fifth and final section tests nurses' awareness of HIV-infected biological specimens and of methods to destroy the HIV virus. A pilot study will be conducted with some qualified nurses to check the validity and reliability of the questionnaire. A pilot study is a small-scale version of the research; its main function is to scrutinise the research and check for any mistakes, which helps to avoid major errors later in the research (Polit, 1997). Afterwards the questionnaire will be sent to nurses in India, with any changes required by the results of the pilot study. Nurses will be informed about the study and its confidentiality. A covering letter will be given to each nurse regarding confidentiality and briefly describing the purpose of the study. Consent is assumed by completion of the questionnaire. No risks have been identified for this study. Through this study, nurses' knowledge of HIV and AIDS and their attitudes towards the care given to patients can be assessed, so that the care of and attitudes towards HIV and AIDS patients can be improved in the near future.

DATA MANAGEMENT

Data collected through the questionnaire will be analysed and interpreted using variables and relevant statistics. A total of three months will be needed for sending out the questionnaire and receiving the results back from the study group.

PROPOSED RESEARCH METHODOLOGY

A quantitative study (Burns, 2001) will be used to quantify factors identifying nurses' knowledge of and attitudes towards the care given to HIV and AIDS patients. Quantitative research is concerned with numerical facts about people, events or things and with establishing relationships between variables. A descriptive design is useful for this study, as it identifies the current practice problems of nurses caring for HIV and AIDS patients.
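The planned analysis of the questionnaire data using "variables and relevant statistics" could begin, for example, by coding the Likert-scale responses from the fourth section numerically and summarising them with descriptive statistics. The sketch below is illustrative only: the 1-5 coding scheme, the sample responses and the per-item summary are assumptions for demonstration, not part of the proposal.

```python
# Illustrative sketch: coding hypothetical Likert responses numerically
# and summarising them with basic descriptive statistics.
from statistics import mean

# Assumed 5-point coding; the proposal does not fix a scoring scheme.
LIKERT = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def score_responses(responses):
    """Convert one nurse's Likert answers into numeric scores."""
    return [LIKERT[r.lower()] for r in responses]

# Hypothetical answers from three nurses to three attitude statements.
sample = [
    ["strongly agree", "disagree", "agree"],
    ["agree", "neutral", "agree"],
    ["disagree", "strongly disagree", "neutral"],
]

scores = [score_responses(row) for row in sample]
item_means = [mean(col) for col in zip(*scores)]  # mean per statement
overall = mean(s for row in scores for s in row)  # mean across all items
print("Per-item means:", [round(m, 2) for m in item_means])
print("Overall mean:", round(overall, 2))
```

A per-item mean near 1 would indicate broad disagreement with the statement and a value near 5 broad agreement, which is the kind of variable-level summary the descriptive design calls for.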
The main purpose of the descriptive design is to give insight into the respondents' views on the present study (Burns, 2001). A survey method will be used to distribute the questionnaire, by mailing or emailing the questions to the nurses (Polit, 1997). Non-probability (convenience) sampling is useful for this study because the size of the population is impossible to identify (De Vos, 1998:191).

SUMMARY

The incidence of HIV and AIDS is increasing all over the world, and health professionals should be well equipped and knowledgeable enough to face these circumstances. As nurses are the largest group in the paramedical services, they play a major role in caring for HIV-positive patients. This study concentrates on the level of knowledge, attitudes and concerns of nurses treating HIV and AIDS patients, which helps in understanding the barriers to care. Nurses can then be given appropriate in-service education to close the gap in the care given to patients with HIV and AIDS. Nurses, being members of a medical profession, should have up-to-date knowledge of disease conditions and need to be ready to give care to patients.