Thursday, October 31, 2019

International Financial Reporting Essay Example

The notion "true and fair view" originated in British accounting. There are numerous definitions of "true and fair view", as no exact definition of the term has been given by the standard setters, in law, or even through court verdicts. The notion "true and fair view" has never been defined officially, and as the principle is dynamic, it is neither desirable nor possible to give an exact definition for the concept (Evans 2003:312). However, a French case decided in 1994 did address the phrase, terming it a trust in adhering to accounting regulations, which does not automatically guarantee a true and fair view. The IASB demands that financial reports should offer a true and fair view. US accounting regulations demand that accounts should be presented as per generally accepted accounting principles (Walton & Aerts 2006:69). As per IFRS, the general intention of financial reports is to offer a just and fair presentation of the changes in the financial performance and position of a business organisation or company. The Conceptual Framework of the IASB advocates that fair presentation could also be described as offering a "true and fair view". Further, IAS 1.15 specifically states that the publication of financial reports rests on the chief hypothesis that the application of IFRS, with additional disclosure where needed, is presumed to result in financial reports mirroring a "fair presentation." IAS 1 demands in the same tenor that an unreserved and explicit statement of adherence with IFRS be included in the notes to the accounts (Walton & Aerts 2006:69). As per Walton, the three classes of significance of "true and fair view" are a residual legal clause, a generally accepted accounting concept, and an independent concept. Further, under the GAAP view, the proposition for European harmonisation is that before the Fourth Directive, each member nation had its own "true and fair view." Thus, to establish a synchronised "true and fair view" would need a common meaning or GAAP; and it should be noted that the actual words are just signifiers. As regards "true and fair view", Walton's general view is that it has both a probable large political meaning and an operational meaning when accountants are enhancing or defending their professional position. The notion "true and fair view" was formulated in the UK on the footing of the following three fundamentals: an independent concept; a legal residual clause; and generally accepted accounting principles (Evans 2003). A "true and fair view" is needed to accomplish "the goal of financial reports", which is to offer information about the financial position, financial performance, and any change in the financial position of a business or company that is useful to a broad range of stakeholders or users in arriving at financial decisions (IASB: IAS Framework 2001). In financial reporting, the "true and fair view" can be mirrored by four qualitative characteristics, namely understandability, relevance, comparability, and reliability. Financial reporting is the

Tuesday, October 29, 2019

Information and Communication Technologies (ICTs) in University Essay

This essay stresses that traditional approaches initially used in education involved physical classes, live lectures, and individually written notes, as opposed to the softcopy notes currently available. Information and communication technology has enhanced the educational discourse and learning process around the world in a number of ways that the traditional approaches did not. This paper concludes that the change in educational pedagogy brought about by information technology enabled approaches has created significant impacts and enhanced learning in universities across the globe. With the use of technology, university students today have been granted a number of advantages that others did not have several years ago. The traditional educational environment was not well suited to address the emerging and dynamic educational needs of students, and this affected the level of academic growth and maturity. Graduates who were exposed to traditional pedagogical instruction at university level have continued to face challenges in the information technology enabled work environment. While traditional pedagogical approaches used printed books and publications for educational purposes, the introduction of ICT has diversified the sources of information available to students. Tools such as e-learning, e-books and online journals have enabled students and lecturers to access information with ease.

Sunday, October 27, 2019

Cost Estimation and Management Strategies

Introduction

Cost is one of the three pillars supporting project success or failure, the other two being time and performance. Projects that go significantly over budget are often terminated without achieving the construction project goals because stakeholders simply run out of money or perceive additional expenditures as throwing good money after bad. Projects that stay within budget are the exception, not the rule. A construction project manager who can control costs while achieving performance and schedule goals should be viewed as somewhat of a hero, especially when we consider that cost, performance, and schedule are closely interrelated. The level of effort and expertise needed to perform good cost management is seldom appreciated. Too often, there is pressure to come up with estimates within too short a period of time. When this happens, there is not enough time to gather adequate historical data, select appropriate estimating methods, consider alternatives, or carefully apply proper methods. The result is estimates that lean heavily toward guesswork. The problem is exacerbated by the fact that estimates are often not viewed as estimates but more as actual measurements made by some time traveller from the future. Estimates, once stated, have a tendency to be considered facts. Project managers must remember that estimates are the best guesses of estimators working under various forms of pressure and with personal biases. They must also be aware of how others perceive these estimates. This requires an understanding of costs far beyond the concepts of money and numbers. Cost by itself can only be measured, not controlled. Costs are one-dimensional representations of three-dimensional objects travelling through a fourth dimension, time. The real-world things that cost represents are people, materials, equipment, facilities, transportation, and so on. Cost is used to monitor performance or the use of real things, but it must be remembered that management of those real things determines cost, and not vice versa.

Cost Management

Cost management is the process of planning, estimating, coordinating, controlling, and reporting all cost-related aspects from project initiation through operation and maintenance and, ultimately, disposal. It involves identifying all the costs associated with the investment, making informed choices about the options that will deliver best value for money, and managing those costs throughout the life of the project, including disposal. Techniques such as value management help to improve value and reduce costs. Open book accounting, when shared across the whole project team, helps everyone to see the actual costs of the project.

Process Description

The first three cost management processes are completed, with the exception of updates, during the project planning phase. The final process, controlling costs, is ongoing throughout the remainder of the project. Each of these processes is summarized below.

Resource Planning

Cost management begins by planning the resources that will be used to execute the project. Figure 6-2 shows the inputs, tools, and product of this process. All the tasks needed to achieve the project goals are identified by analyzing the deliverables described in the Work Breakdown Structure (WBS). The planners use this along with historical information from previous similar projects, available resources, and activity duration estimates to develop resource requirements.
It is important to get experienced people involved with this activity, as noted by the expert judgment listed under Tools. They will know what works and what doesn't work. In trying to match up resources with tasks and keep costs in line, the planners will need to look at alternatives in timing and choosing resources. They will need to refer back to the project scope and organizational policies to ensure plans meet these two guidelines. Except for very small projects, trying to plan without good project management software is tedious and subject to errors, both in forgetting to cover all tasks and in resource and cost calculations. The output of this process is a description of the resources needed, when they are needed, and for how long. This will include all types of resources: people, facilities, equipment, and materials. Once there is a resource plan, the process of estimating begins.

Estimating Costs

Estimating is the process of determining the expected costs of the project. It is a broad science with many branches and several popular, and sometimes disparate, methods. There are overall strategies for determining the cost of the overall project, as well as individual methods for estimating the costs of specific types of activity. Several of these can be found in the resources listed at the end of the chapter. In most software development projects the majority of the cost pertains to staffing. In this case, knowledge of the pay rates (including overhead) of the people working on the project, and being able to accurately estimate the number of people needed and the time necessary to complete their work, will produce a fairly accurate project cost estimate. Unfortunately, this is not as simple as it sounds. Most project estimates are derived by summing the estimates for individual project elements. Several general approaches to estimating costs for project elements are presented here. [3] Your choice of approach will depend on the time, resources, and historical project data available to you. The cost estimating process elements are shown in Figure 6-3 (Cost Estimating Elements). Cost estimating uses the resource requirements, resource cost rates, and the activity duration estimates to calculate cost estimates for each activity. Estimating publications, historical information, and risk information are used to help determine which strategies and methods would yield the most accurate estimates. A chart of accounts may be needed to assign costs to different accounting categories. A final, but very important, input to the estimating process is the WBS. Carefully comparing activity estimates to the activities listed in the WBS will serve as a reality check and discover tasks that may have been overlooked or forgotten. The tools used to perform the actual estimating can be one or more of several types. The major estimating approaches shown in Figure 6-3 are discussed here. While other approaches are used, they can usually be classed as variations of these. One caution applies to all estimating approaches: if the assumptions used in developing the estimates are not correct, any conclusions based on those assumptions will not be correct either.

Bottom-Up Estimating

Bottom-up estimating consists of examining each individual work package or activity and estimating its costs for labour, materials, facilities, equipment, etc. This method is usually time consuming and laborious, but it usually results in accurate estimates if well prepared, detailed input documents are used.
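The staffing arithmetic described above, and the bottom-up summing of work packages, can be sketched in a few lines of Python. All rates, hours, and the 25% overhead figure below are assumptions made for illustration, not values from the text:

```python
# Bottom-up estimate: cost each work package individually, then sum.
# All figures here are illustrative assumptions.
work_packages = {
    "Site preparation": {"labour_hours": 960,  "rate": 35.0, "materials": 12_000, "equipment": 8_000},
    "Foundation":       {"labour_hours": 1920, "rate": 42.0, "materials": 55_000, "equipment": 15_000},
    "Steel erection":   {"labour_hours": 2000, "rate": 48.0, "materials": 90_000, "equipment": 30_000},
}

OVERHEAD = 0.25  # overhead applied to labour (assumed figure)

def package_cost(p):
    """Labour (with overhead) plus materials and equipment for one package."""
    labour = p["labour_hours"] * p["rate"] * (1 + OVERHEAD)
    return labour + p["materials"] + p["equipment"]

for name, p in work_packages.items():
    print(f"{name}: ${package_cost(p):,.0f}")
total = sum(package_cost(p) for p in work_packages.values())
print(f"Bottom-up project estimate: ${total:,.0f}")
```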
Analogous Estimating

Analogous estimating, also known as top-down estimating, uses historical cost data from a similar project or activities to estimate the overall project cost. It is often used where information about the project is limited, especially in the early phases. Analogous estimating is less costly than other methods, but it requires expert judgment and true similarity between the current and previous projects to obtain acceptable accuracy.

Parametric Estimating

Parametric estimating uses mathematical models, rules of thumb, or Cost Estimating Relationships (CERs) to estimate project element costs. CERs are relationships between cost and measurements of work, such as the cost per line of code. [3] Parametric estimating is usually faster and easier to perform than bottom-up methods, but it is only accurate if the correct model or CER is used in the appropriate manner. (A minimal sketch of a CER-based estimate appears after the list at the end of this section.)

Design-to-Cost Estimating

Design-to-cost methods are based on cost unit goals as an input to the estimating process. Tradeoffs are made in performance and other system design parameters to achieve lower overall system costs. A variation of this method is cost-as-the-independent-variable, where the estimators start with a fixed system-level budget and work backwards, prioritizing and selecting requirements to bring the project scope within budget constraints.

Computer Tools

Computer tools are used extensively to assist in cost estimation. These range from spreadsheets and project management software to specialized simulation and estimating tools. Computer tools reduce the incidence of calculation errors, speed up the estimation process, and allow consideration of multiple costing alternatives. One of the more widely used computer tools for estimating software development costs is the Constructive Cost Model (COCOMO). The software and user's manual are available for download without cost (see COCOMO in the Resources). However, please note that most computer tools for developing estimates for software development use either lines of code or function points as input data. If the number of lines of code or function points cannot be accurately estimated, the output of the tools will not be accurate. The best use of tools is to derive ranges of estimates and gain an understanding of the sensitivities of those ranges to changes in various input parameters. The outputs of the estimating process include the project cost estimates, along with the details used to derive those estimates. The details usually define the tasks by reference to the WBS. They also include a description of how the cost was derived, any assumptions made, and a range for the estimate (e.g., $20,000 +/- $2,000). Another output of the estimating process is the Cost Management Plan. This plan describes how cost variances will be managed, and may be formal or informal. The following information may be considered for inclusion in the plan:

• Cost and cost-related data to be collected and analyzed.
• Frequency of data collection and analysis.
• Sources of cost-related data.
• Methods of analysis.
• Individuals and organizations involved in the process, along with their responsibilities and duties.
• Limits of acceptable variance between actual costs and the baseline.
• The authority and interaction of the cost control process with the change control process.
• Procedures and responsibilities for dealing with unacceptable cost variances.
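As promised in the Parametric Estimating paragraph above, here is a minimal sketch of a CER-based estimate reported as a range. The cost per line of code, the size estimate, and the uncertainty band are all assumed values, chosen only to reproduce the "$20,000 +/- $2,000" style of reporting mentioned above:

```python
# Parametric estimate using a simple CER (cost per line of code).
# The CER value, size estimate, and uncertainty band are assumptions.
cost_per_loc = 1.25      # $ per line, from historical data (assumed)
estimated_loc = 16_000   # expected size of the deliverable (assumed)
uncertainty = 0.10       # +/- 10% band on the size estimate (assumed)

point = cost_per_loc * estimated_loc
low, high = point * (1 - uncertainty), point * (1 + uncertainty)
print(f"Estimate: ${point:,.0f} (range ${low:,.0f} - ${high:,.0f})")
# Prints: Estimate: $20,000 (range $18,000 - $22,000)
```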
Cost Budgeting

Once the costs have been estimated for each WBS task, and all of these are put together into an overall project cost, a project budget or cost baseline must be constructed. The budget is a spending plan detailing how, and at what rate, the project funding will be spent. The budgeting process elements are shown in Figure 6-4. Not all project activities are performed at once, resources are finite, and funding will probably be spread out over time. Cost estimates, WBS tasks, resource availability, and expected funding must all be integrated with the project schedule in a plan to apply funds to resources and tasks. Budgeting is a balancing act: the rate of spending must closely parallel resource availability and funding, without exceeding either. At the same time, task performance schedules must be followed so that all tasks are funded and completed before or by the end of the project schedule. The spending plan forms the cost baseline, which will be one of the primary measures of project health and performance. Deviations from this cost baseline are major warning signs requiring management intervention to bring the project back on track. Various tools and techniques are available to assist in the budgeting process. Most of these are implemented in some form of computer software. Budgeting is usually a major part of project management software.

Cost Control

Cost control is the final step of the cost management process, but it continues through the end of the project. It is a major element of project success and consists of efforts to track spending and ensure it stays within the limits of the cost baseline. The following activities make up the cost control process (a sketch of the baseline variance check appears at the end of this section):

• Monitor project spending to ensure it stays within the baseline plan for spending rates and totals.
• When spending varies from the plan, determine the cause of the variance, remembering that the variance may be a result of incorrect assumptions made when the original cost estimate was developed.
• Change the execution of the project to bring spending back within acceptable limits, or recognize that the original estimate was incorrect and either obtain additional funding or reduce the scope of the project.
• Prevent unapproved changes to the project and cost baseline.

When it is not possible to maintain the current cost baseline, the cost control process expands to include these activities:

• Manage the process to change the baseline to allow for the new realities of the project (or incorrectly estimated original realities).
• Accurately record authorized changes in the cost baseline.
• Inform stakeholders of changes.

The cost control process compares cost performance reports with the cost baseline to detect variances. Guidance on what constitutes unacceptable variance, and how to deal with variance, can be found in the cost management plan developed during the estimation activities. Few projects are completed without changes being suggested or requested. All change requests should run the gauntlet of cost control to weigh their advantages against their impact on project costs. Cost control tools include performance measurement techniques, a working cost change control system, and computer based tools. A powerful technique used with considerable success in projects is Earned Value Management, if used appropriately. It requires a fully defined project up front and bottom-up cost estimates, but it can provide accurate and reliable indication of cost performance as early as 15% into the project.
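A minimal sketch of the variance check referenced in the list above: cumulative actuals are compared month by month against a time-phased baseline, and any variance beyond the plan's limit is flagged for investigation. The monthly figures and the 5% tolerance are assumed for illustration:

```python
# Compare cumulative actual spending against the cost baseline and flag
# variances beyond the limit set in the cost management plan (assumed 5%).
baseline = [40_000, 65_000, 90_000, 80_000, 50_000]   # planned spend per month
actuals  = [42_000, 71_000, 96_000]                   # months completed so far

TOLERANCE = 0.05  # acceptable variance limit (assumed)

cum_plan = cum_actual = 0
for month, planned in enumerate(baseline[:len(actuals)], start=1):
    cum_plan += planned
    cum_actual += actuals[month - 1]
    variance = (cum_actual - cum_plan) / cum_plan
    flag = "INVESTIGATE" if abs(variance) > TOLERANCE else "ok"
    print(f"Month {month}: plan ${cum_plan:,} actual ${cum_actual:,} "
          f"variance {variance:+.1%} [{flag}]")
```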
The outputs of cost control include products which are ongoing throughout the life of the project: revised cost estimates, budget updates, corrective actions, and estimates of what the total project cost will be at completion. Corrective actions can involve anything that incurs cost, or even updating the cost baseline to realign with project realities or changes in scope. Cost data necessary for project closeout are also collected throughout the life of the project and summarized at the end. A final product, extremely important to future efforts, is a compilation of lessons learned during the execution of the project.

Tools for Analyzing/Evaluating Cost Management

Some construction insurance projects don't exceed their budget only because they turn out to be bigger than originally estimated. They often blow the budget because the estimates were badly managed. As a result, the profitability analyses are not well quantified, because the estimates of future return were not accurate. Accurate estimates turn out to be very important, as they are frequently required for three principal reasons. The first is to define the costs/budget of the project. The second is to justify the project: it enables the cost to be compared with the anticipated benefits. The third is to evaluate and control actual costs against estimates, and take corrective action when needed to make the project succeed. Applying Activity-Based Costing (ABC) to construction projects can help insurance companies to better understand their costs and maximize construction resources. Combined with Earned Value Management, construction projects can be tracked and controlled effectively in terms of time and budget.

Activity-Based Costing (ABC)

Activity-Based Costing (ABC) is a method for developing cost estimates in which the project is subdivided into discrete, quantifiable activities or work units. The activity must be definable such that productivity can be measured in units (e.g., number of samples versus man-hours). After the project is broken into its activities, a cost estimate is prepared for each activity. These individual cost estimates contain all labour, materials, equipment, and subcontracting costs, including overhead, for each activity. Each complete individual estimate is added to the others to obtain an overall estimate. Contingency and escalation can be calculated for each activity or after all the activities have been summed. ABC is a powerful tool, but it is not appropriate for all cost estimates. This chapter outlines the ABC method and discusses applicable uses of ABC. ABC methodology is used when a project can be divided into defined activities. These activities are at the lowest function level of a project at which costs are tracked and performance is evaluated. Depending on the project organization, the activity may coincide with an element of the work breakdown structure (WBS) or may combine one or more elements of the WBS. However, the activities must be defined so there is no overlap between them. After the activity is defined, the unit of work is established. All costs for the activity are estimated using the unit of work. The estimates for the units of work can be produced by performing detailed estimates, using cost estimating relationships, obtaining outside quotes for equipment, etc. All costs, including overhead, profit, and markups, should be included in the activity cost.
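A small sketch of the ABC arithmetic just described: each activity is costed from its measured unit of work, marked up for overhead and profit, summed, and then a contingency is applied to the total (the text notes contingency may also be applied per activity). The activities, unit costs, and percentages are assumptions, not data from the text:

```python
# Activity-Based Costing: estimate each activity from a measured unit of
# work, include markups, then sum. All figures are illustrative.
activities = [
    # (activity, units of work, cost per unit incl. labour/materials/equipment)
    ("Soil sampling",  120, 310.0),   # unit: number of samples
    ("Concrete pour",   85, 640.0),   # unit: cubic metres
    ("Steel erection",  60, 925.0),   # unit: tonnes
]

OVERHEAD_AND_PROFIT = 0.18   # markup applied per activity (assumed)
CONTINGENCY = 0.10           # applied after summing, one of the text's options

subtotal = sum(units * unit_cost * (1 + OVERHEAD_AND_PROFIT)
               for _, units, unit_cost in activities)
total = subtotal * (1 + CONTINGENCY)
print(f"ABC subtotal: ${subtotal:,.0f}, with contingency: ${total:,.0f}")
```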
Earned Value Management (EVM)

An interesting phenomenon exists in the construction industry. The industry probably uses parts of Earned Value management about as well as any industry. But what makes it interesting is that in construction work, practitioners rarely use the term Earned Value. The Earned Value Management (EVM) technique is a valuable tool to measure a project's progress, forecast its completion date and final cost, and provide schedule and budget variances along the way. Earned Value management is a technique that can be applied, at least in part, to the management of all capital projects, in any industry, while employing any contracting approach. The employment of Earned Value requires a three-dimensional measurement of project performance, ideally from as early as possible—perhaps as early as 15 percent complete—up to 100 percent final completion. However, two of the three dimensions of Earned Value—the baseline plan and the physical performance measurement—will apply to all capital projects, in any industry, using any contracting method. Using Earned Value metrics, any project can accurately monitor and measure performance against a firm baseline. Using the three dimensions of Earned Value, project management teams can at all times monitor both the cost and the schedule performance status of their projects. EVM provides consistent indicators to evaluate and compare projects and gives an objective measurement of how much work has been accomplished. It lets the project manager combine schedule performance and cost performance to answer the question: what did we get for the money we spent? Using the EVM process, management can easily compare the planned amount of work with what has actually been completed, to determine whether cost, schedule, and work accomplished are progressing as planned. It forces the project manager to plan, budget, and schedule the work in a time-phased plan. The principles of the ABC and EVM techniques provide innovative cost and performance measurement systems, allowing productivity improvements, and therefore can enhance a project's profitability and performance.
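The core EVM calculations can be shown in a few lines. The sketch below uses the standard planned value / earned value / actual cost definitions from the EVM literature; the input figures themselves are invented for illustration:

```python
# Core Earned Value calculations. The three dimensions are planned value
# (PV), earned value (EV), and actual cost (AC). Inputs are assumed.
BAC = 1_000_000   # budget at completion
PV  = 300_000     # planned value of work scheduled to date
EV  = 260_000     # value of work actually performed (physical progress)
AC  = 320_000     # actual cost incurred to date

cv,  sv  = EV - AC, EV - PV    # cost variance, schedule variance
cpi, spi = EV / AC, EV / PV    # cost and schedule performance indices
eac = BAC / cpi                # estimate at completion (CPI method)

print(f"CV ${cv:,}  SV ${sv:,}  CPI {cpi:.2f}  SPI {spi:.2f}  EAC ${eac:,.0f}")
# CPI < 1 means over cost; SPI < 1 means behind schedule.
```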
Quality Management (QM)

Quality management is the process of planning, organizing, implementing, monitoring, and documenting a system of management practices that coordinate and direct relevant project resources and activities to achieve quality in an efficient, reliable, and consistent manner.

Quality Management Plan (QMP)

A Quality Management Plan is a project-specific, written plan prepared for certain projects which reflects the general methodology to be implemented by the Construction Manager during the course of the project to enhance the owner's control of quality through a process-oriented approach to the various management tasks for the program. The Quality Management Plan complements the Construction Management Plan (CMP) and forms a basis of understanding as to how the project team will interrelate in a manner that promotes quality in all aspects of the program, from the pre-design phase through completion of construction. Its purpose is to emphasize the quality goals of the project team in all issues associated with the work. This pertains not only to the traditional QA/QC of constructing elements of the work, but also addresses the quality needs of management tasks such as performing constructability reviews during design, checking estimates, making appropriate decisions, updating schedules, guiding the selection of subcontractors and vendors on a quality-oriented basis, and dealing with the public when applicable. Owners, for certain projects, require that a separate Quality Management Plan be prepared by the Construction Manager. In these cases, the QMP is a project-specific plan which reflects the approach of the CM towards achieving quality in the constructed project. It is developed with heavy reliance on many of the sections included in these Guidelines, and fully supports the Construction Management Plan (CMP). When a separate QMP is prepared, most of the quality-oriented issues and discussion of processes, check lists, audits, etc., are contained in the QMP rather than the CMP. The CMP then addresses the day-to-day performance of the various functions and outlines the methods by which the Construction Manager's forces will perform their services. The QMP typically will include some of the following:

• Overall project organization
• Project QA/QC organization
• QA/QC representatives of design team and contractors
• Management decision flow chart
• Formats for various elements of the CM services (i.e., formats for job meeting minutes, progress payment applications, field observation reports, shop drawing logs, notice of proposal change order, etc.)
• Detailed check lists or audit plans to provide for quality in the practice of CM functions (i.e., check lists for approving contractors' schedules, approving revisions to schedules, reviewing change order costs, obtaining approval within the owner organization for changes, approval to start foundation construction, approval to start concrete pour, approval to start steel erection, preliminary and final acceptance, etc.)
• Project Quality Audit forms

The CM will prepare quality management narratives for the use of his staff for each of the check lists and quality procedures contained in the QMP to provide for an acceptable level of quality at all levels of CM practice.

Inputs to Quality Planning

Quality policy. The quality policy comprises the overall intentions and direction of a construction organization with regard to quality, as formally expressed by top management. The quality policy of the performing organization can often be adopted as is for use by the project. However, if the performing organization lacks a formal quality policy, or if the project involves multiple performing organizations (as with a joint venture), then the project management team will need to develop a quality policy for the project. Regardless of the origin of the quality policy, the project management team is responsible for ensuring that the project clients are fully aware of it.

Scope statement. The scope statement is a key input to quality planning since it documents major project deliverables, as well as the project objectives that serve to define important client requirements.

Project description. Although objectives of the project description may be embodied in the scope statement, the project description will often contain details of technical issues and other concerns that may affect quality planning.

Standards and regulations. The project management team must consider any application-area-specific standards or regulations that may affect the project.

Other process outputs.
In addition to the scope statement and project description, processes in other knowledge areas may produce outputs that should be considered as part of quality planning. For example, procurement planning may identify contractor quality requirements that should be reflected in the overall quality management plan.

Tools and Techniques for Quality Planning

Benefit/cost analysis. The quality planning process must consider benefit/cost tradeoffs. The primary benefit of meeting quality requirements is less rework, which means higher quality, lower costs, and increased client satisfaction. The primary cost of meeting quality requirements is the expense associated with quality management activities. It is axiomatic of the quality management discipline that the benefits outweigh the costs.

Benchmarking. Benchmarking involves comparing actual or planned project practices to those of other projects to generate ideas for improvement and to provide a standard by which to measure performance. The other projects may be within the performing organization or outside of it, and may be within the same application area or in another.

Flowcharting. A flow chart is any diagram that shows how various elements of a system relate. Flowcharting techniques commonly used in quality management include cause-and-effect diagrams and system or process flow charts.

Cause-and-effect diagrams. A cause-and-effect diagram is an analysis tool that provides a systematic way of looking at effects and the causes that create or contribute to those effects. It was developed by Dr. Kaoru Ishikawa of Japan in 1943 and is sometimes referred to as an Ishikawa Diagram or a Fishbone Diagram because of its shape. A cause-and-effect diagram helps identify, sort, and display possible causes of a specific problem or quality characteristic. It graphically illustrates the relationship between a given outcome and all the factors that influence that outcome, and is useful for identifying and organizing the known or possible causes of quality, or the lack of it. At the head of the fishbone is the defect or effect, stated in the form of a question. The major bones are the capstones, or main groupings of causes. The minor bones are detailed items under each capstone.

Applying the cause-and-effect diagram. The structure provided by the diagram helps team members think in a very systematic way. Some of the benefits of constructing a cause-and-effect diagram are that it: helps determine the root causes of a problem or quality characteristic using a structured approach; encourages group participation and utilizes group knowledge of the process; uses an orderly, easy-to-read format to diagram cause-and-effect relationships; indicates possible causes of variation in a process; increases knowledge of the process by helping everyone to learn more about the factors at work and how they relate; and identifies areas where data should be collected for further study.

System or process flow charts. These show how various elements of a system interrelate. A flow chart provides a diagrammatic picture using a set of symbols.
They are used to show all the steps or stages in a process, project, or sequence of events. A flowchart assists in documenting and describing a process so that it can be examined and improved. Analyzing the data collected on a flowchart can help to uncover irregularities and potential problem points. Flowcharts, or process maps, visually represent relationships among the activities and tasks that make up a process. They are typically used at the beginning of a process improvement event; you describe process events, timing, and frequencies at the highest level and work downward. At high levels, process maps help you understand process complexity. At lower levels, they help you analyze and improve the process.

Pareto Analysis

A Pareto Chart is a series of bars whose heights reflect the frequency or impact of problems. The bars are arranged in descending order of height from left to right. This means the categories represented by the tall bars on the left are relatively more significant than those on the right. The chart gets its name from the Pareto Principle, which postulates that 80 percent of the trouble comes from 20 percent of the problems. It is a technique employed to prioritize problems so that attention is initially focused on those having the greatest effect. It was discovered by an Italian economist named Vilfredo Pareto, who observed how the vast majority of wealth (80%) was owned by relatively few of the population (20%). As a generalized rule for considering solutions to problems, Pareto analysis aims to identify the critical 20% of causes and to solve them as a priority.

Using Pareto Charts

You can think of the benefits of using Pareto Charts in economic terms. A Pareto Chart: breaks a big problem into smaller pieces; identifies the most significant factors; and helps us get the most improvement with the resources available by showing where to focus efforts in order to maximize achievements. The Pareto Principle states that a small number of causes accounts for most of the problems. Focusing efforts on the vital few causes is usually a better use of valuable resources.

Applying the Pareto Chart

A Pareto Chart is a good tool to use when the process you are investigating produces data that are broken down into categories and you can count the number of times each category occurs. No matter where you are in your process improvement efforts, Pareto Charts can be helpful: early on, to identify which problem should be studied; later, to narrow down which causes of the problem to address first. Since they draw everyone's attention to the vital few important factors where the payback is likely to be greatest, they can be used to build consensus. In general, teams should focus their attention first on the biggest problems, those with the highest bars. Making problem-solving decisions isn't the only use of the Pareto Principle. Since Pareto Charts convey information in a way that enables you to see clearly the choices that should be made, they can be used to set priorities for many practical applications in your command.
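A Pareto analysis is straightforward to compute: sort the categories by frequency and accumulate percentages until the "vital few" emerge. The defect categories and counts below are hypothetical:

```python
# Pareto analysis: sort categories by frequency and find the "vital few"
# that account for roughly 80% of occurrences. Counts are assumed.
defects = {"rework": 47, "late delivery": 31, "spec deviation": 12,
           "damaged materials": 6, "paperwork errors": 3, "other": 1}

total = sum(defects.values())
cumulative = 0.0
print(f"{'category':<20}{'count':>6}{'cum %':>8}")
for category, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count / total
    marker = " <- vital few" if cumulative <= 0.80 else ""
    print(f"{category:<20}{count:>6}{cumulative:>8.0%}{marker}")
```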

Friday, October 25, 2019

The Death of Bloody Mary Tudor and Good Queen Bess

" 'BLOODY MARY,' a sour, bigoted heartless, superstitious woman, reigned five years, and failed in everything which she attemptcd. She burned in Smithfield hundreds of sincere godly persons, she went down to her grave, hated by her husband, despised by her servants, loathed her her people, and condemned by God. 'Good Queen Bess' followed her, a generous, stout-hearted strong-minded woman, characteristically English, and reigned forty-five years. Under her wise and beneficent rule her people prospered she was tolerant in religion and severe only to traitors, she went down to her grave after a reign of unparalleled magnificence and success, a virgin queen, secure in the loyalty of her subjects, loved by her friends, in favour with God and man. " So we can imagine some modern Englishman summing up the reigns of these two half-sisters who ruled England successively in the sixteenth century -- an Englishman better acquainted with history-books than with history, and in love with ideas rather than facts. It is interesting, therefore, to pursue our investigations a little further, and to learn in what spirit each of these two queens met her end, what was the account given by those about them, what were the small incidents, comments, and ideas that surrounded the moments which for each of them were the most significant of their lives. Death, after all, reveals what life cannot, for at death we take not only a review of our past, but a look into the future, and the temper of mind with which we regard eternity is of considerable importance as illustrating our view of the past. At death too, if at any time, we see ourselves as we are, and display our true characters. There is no use in keeping up a pose any longer. We drop the mask, and show our real faces. We should expect, then, if we took the view of the ordinary Englishman, that Mary Tudor would die a prey to superstition and terror, the memory of her past and the prospect of her future would surely display her as overwhelmed with gloom and remorse, terrified at the thought of meeting God, a piteous spectacle of one who had ruled by fear and was now ruled by it. Elizabeth, on the other hand, dying full of honour and years, would present an edifying spectacle of a true Christian who could look back upon a brilliant and successful past, a reign of peace and clemency, of a life unspotted with superstition and unblameable in its religion, and, forward to the reward of her labours and the enjoyment of heaven.

Thursday, October 24, 2019

Woman as the Other and as the Other Woman

Simone de Beauvoir (1908-1986), French existentialist, writer, and social essayist, passed on just over two decades ago. Putting it this way makes her ideas so much more alive. She did not just write about how she lived. She wrote, and she lived what she wrote about: she refused to be the Other, but she was also, in a manner of putting it, the Other Woman.

Simone's Life and Love(s) in Philosophy

Simone de Beauvoir is now noted and appreciated as a philosopher. She was not always considered a philosopher, however, but a writer, and has only been given the distinction of being a noted philosopher in more recent years. Her works became considered "philosophical" only after her death. Beauvoir was born in France in 1908. She belonged to a bourgeois family, and had one sister. As a teenager, she declared herself an atheist, and devoted her life to feminism and writing (Marvin, 2000). Apparently, her parents' dispositions and stature were a major influence on her. Her father was extremely interested in pursuing a career in theater, but because of his societal position (and with a noble lineage), he became a lawyer (which was expected), and hated it. Her mother, on the other hand, was a strict Catholic. Some authors have noted that Simone struggled between her mother's religious morals and her father's more pagan inclinations, and this purportedly led to her atheism and shaped her philosophical work. As a child, Simone was religious and had a relationship with God. She wrote in early work about her thankfulness that heaven had given her the immediate family that she had, but this feeling (at least the religious aspects of it) dissipated as she aged (Flaherty, 2008). When she was around 15, Simone de Beauvoir decided she would be a famous writer. She did well in many subjects, but was especially attracted to philosophy, which she went on to study at the University of Paris. There she met many other young creative geniuses, including Jean-Paul Sartre, who became her best friend and life-long companion. The group of friends that she spent her time with was considered a "bad" group, a circle of rebels. Such perceptions did not matter, however, for Simone and Sartre, whose fondness for each other only grew over the years. Their works were frequently linked as they read and critiqued each other's writings, and she was, in a sense, considered his 'student': the Other. However, she was not just the Other; she was a significant Other, as it were. Their relationship became intimate and Sartre even proposed to her. She declined the proposal, however, because she felt that marriage was too constricting an institution and that they should, instead, be free to love "others" (Flaherty, 2008). After graduating from the university, Simone lived with her grandmother and taught at a lycée, or high school. She taught philosophy at several schools throughout her life, which allowed her to live comfortably. She spent her free time going to cafés, writing, and giving talks. In Berlin, she spent time with Sartre and they became linked with two female students, the sisters Olga and Wanda Kosakiewicz. Sartre initially pursued Olga but later had an affair with Wanda. Note that he and Simone had agreed that they would be free to love others. During this time, Simone got very sick and spent some time in a sanitarium. By the time she left the sanitarium, Olga was married, and Wanda and Sartre were no longer lovers (Flaherty, 2008). This phase in her life, one could perhaps say, highlighted her journey as the Other Woman.
Simone traveled around the world later in her life, lecturing. She came to the United States in the 1940s and met another man, Algren. He proposed to her, but she opted to stay with Sartre instead. Also during her travels, Simone participated, with Sartre, in the 1967 "Bertrand Russell Tribunal of War Crimes in Vietnam." There she met several noted leaders, including Khrushchev and Castro; however, unlike Sartre, she did not particularly enjoy being in the public spotlight (Gascoigne, 2002). After Sartre died in 1980, Simone wrote a memoir about him. After this, she continued to take drugs and drink alcohol, which contributed to her mental decay. She and Sartre had always taken drugs and alcohol, and Simone frequently became drunk throughout her life. She died in 1986, and was buried beside Sartre's remains (Gascoigne, 2002).

Beauvoir's Views: My Reflections

Beauvoir strictly considered herself a writer, not a philosopher. Others did not see her as a philosopher because, in what may today be described as sexism, she was a woman and thus considered inferior in some ways. Moreover, she was also seen as merely a student of Sartre and not as a philosopher in her own right. On top of it all, she was a woman who wrote about women. It must be pointed out that this field of study was not truly accepted in the academe until very recently; hence, Beauvoir's work was not accepted as being philosophical during her time. She was indeed heavily overshadowed by Sartre, especially because some of her work reflects his (Bergoffen, 2004). Beauvoir's philosophical ideas focused on how truths in life are revealed in literature. She wrote several essays, including "Literature and the Metaphysical Essay" (1946) and "Mon Experience d'Ecrivain" ('My Experience as a Writer') (1956). Her works include both fiction and non-fiction, all concerned with studying literature in relation to human relationships and thoughts (Bergoffen, 2004). Truly life is mirrored by literature, but literature is also a part of life, and life can be shaped by literary work. In the life and works of this trailblazing feminist writer-philosopher, one can see the reality of literature as a potent force not only of self-expression but also of life-changing. Feminism was of primary importance to Beauvoir, and she is considered to be one of the pioneers of the movement. In fact, Beauvoir is best known for her feminist work, "The Second Sex," now a classic of feminist literature (Eiermann). In this work, she looks at the role of women in society, and the advantages and disadvantages that she, herself, faced. It was initially not thought of as a philosophical work because it dealt with sex, which, during the Victorian era, was not a subject openly discussed. In reality, the book closely examines patriarchal society and its impact on women, and calls for women to take action against these oppressions. It fired up women of later generations to fight for political, social, and personal change. The book remains debated to this day because of the way it addresses the issues, but it is still considered a major early book on feminism (Bergoffen, 2004). Here she put an exclamation point on her observations of Woman in society being seen and treated merely as the Other. Beauvoir is also known for an earlier work, Force of Circumstance.
â€Å"Within this piece she discussed vital issues of the day-confusion and rage regarding human freedoms and the French/Algerian War† (Flaherty, 2008).Human freedom was a big issue that was crucial in Beauvoir’s work. She was particularly concerned that people needed to be free. This is reflected in the way she lived her own life, and in the way she lectured others. She walked her talk, and was for some time describable perhaps (albeit from a rather sexist perspective) as being the Other Woman, with no rancor, in Sarte’s life. She Came to Stay (1943) is another work that deals with freedom. This is a novel that deals with â€Å"reflections on our relationship to time, to each other, to ourselves† (Bergoffen, 2004).The work doesn’t fit a traditional philosophical framework, where questions are brought to a close and fully answered. Instead it only explores questions by lookin g at the lives and interactions of the main characters. In this novel, a murder is committed because of a character’s desire for freedom, and the novel examines if the murder was just or not, among other issues surrounding the situation. This work is frequently considered her first true philosophical work (Bergoffen, 2004). How many times have this student been asked this question in real life by friends and particular circumstances: freedom or life?There is something profoundly unsettling in the questions that Beauvoir’s works raises. In She Came to Stay, purportedly a fictionalized chronicle of Beauvoir and Sartre's relationship with the sisters Olga and Wanda, we are treated to an exploration of complex personal relationships. Olga was one of her students in the Rouen secondary school where she taught during the early 30s. In the novel, Olga and Wanda are made into one character with whom fictionalized versions of Beauvoir and Sartre have intimate relationships.The novel delves into Beauvoir and Sartre's complex relationship. She wrote about her life, and she lived her writings. With what she wrote, she pursued her questioning, her philosophizing. Pyrrhus and Cineas (1944) is Beauvoir’s first philosophical essay and a major turning point in her life as a writer. This essay looks at questions like â€Å"What are the criteria of ethical action? † â€Å"How can I distinguish ethical from unethical political projects? † â€Å"What are the principles of ethical relationships? † â€Å"Can violence ever be justified?† The essay looks at the moral, political, and other implications of these questions, and further explores the notion of freedom, relationships, and violence. Simone was not sure if violence was truly justified, but concludes that it is ‘neither evil nor avoidable. ’ The questions are not truly resolved in this work, much like in her previous work (Bergoffen, 2004). Then there is Ethics of A mbiguity (1947), which further looks at ethical questions regarding freedom, and the difference between childhood and adulthood.According to Beauvoir, children ‘live in mystery,’ and they should. However, she posits that children should also be forced to be adults and there could be violations of freedom involved in this. This work expands on the idea of freedom from the previous work, and looks at new dimensions of it (Bergoffen, 2004). Two themes seem to appear most prominently in the work of Beauvoir: Freedom and Feminism. The Feminine is made an agent of freedom and is problematized so in the work of Beauvoir. 
Today, many still turn to her work, for we can see the realities that her work reflects. We still find Woman as the Other, in some societies bearing multiple burdens given her second-class status. Even in the supposedly modern nation that is the U.S., we find gender an unsettling concern in electoral politics. More broadly, freedom remains a problematic ideal in the globalizing world. Many states (e.g., North Korea, China, Cuba, the young republics in Eastern Europe) remain unstable at their core, having had to grapple with forces of change and freedom from within and from outside their societies and territories. At another level, the world is not lacking in individuals and groups with various advocacies aimed at expanding the limits of freedom in civil society. Today the woman question has become the bigger concern that is gender. This student now more fully realizes that gender is a social-psychological matter, while sex is a biological or physical one. The Woman is more than her body, after all. To be Woman is a choice, a matter of freedom. The definition of gender lies not in the body. Gender is the realization of what you think and feel you are, and what you prefer as a lifestyle, to put it broadly.

Wednesday, October 23, 2019

Difference Between Adaptive and Rational Expectations

Working Paper No. 00-01-01

Are Policy Rules Better than the Discretionary System in Taiwan?

James Peery Cover, Department of Economics, Finance, and Legal Studies, University of Alabama. Phone: 205-348-8977. Fax: 205-348-0590. Email: [email protected] ua.edu

C. James Hueng, Department of Economics, Finance, and Legal Studies, University of Alabama. Phone: 205-348-8971. Fax: 205-348-0590. Email: [email protected] ua.edu

Ruey Yau, Department of Economics, Fu-Jen Catholic University, Taiwan. Phone: 619-534-8904. Fax: 619-534-7040. Email: [email protected] csd.edu

Correspondence to: C. James Hueng, Department of Economics, Finance, and Legal Studies, University of Alabama, Box 870224, Tuscaloosa, AL 35487. Phone: 205-348-8971. Fax: 205-348-0590. Email: [email protected] ua.edu

ABSTRACT

This paper investigates whether the central bank of Taiwan would have had a more successful monetary policy during the period 1971:1 to 1997:4 if it had followed an optimal rule rather than the discretionary policies that were actually employed. The paper examines the use of two different instruments—the discount rate and the monetary base—with several different targets: growth of nominal output, inflation, the exchange rate, and money growth. The results show that most of the rules considered would not have significantly improved the performance of the Taiwanese economy. The only rule that is clearly advantageous is one that targets inflation while using the interest rate instrument.

Keywords: monetary policy rule, small open economy, dynamic programming
JEL classification: E52, F41

1. Introduction

How well has the Central Bank of Taiwan implemented monetary policy during the past three decades? With the exception of two inflationary episodes during periods of oil-price shocks (1973-1974 and 1979-1981), as far as inflation is concerned, the historical record suggests that monetary policy in Taiwan has been very successful. Figure 1 shows that during other periods the rate of inflation in Taiwan has typically been relatively low, nearly always between 2% and 7% per year. But could the Central Bank of Taiwan have performed much better than it actually did? That is, could it have achieved a lower and less variable rate of inflation at little or no cost in terms of lost output? Because Taiwanese monetary policy has been discretionary, rather than based on a formal rule, there is a strand of macroeconomic theory that suggests the answer to this question must be yes. If the structure of the Taiwanese economy is such that an unexpected increase in the rate of inflation causes output to increase, then policy makers have an incentive to increase inflation. This implies that a discretionary monetary policy will have an inflationary bias [Kydland and Prescott (1977) and Barro (1986)]. The existence of this inflationary bias makes it difficult for policy makers to lower expected inflation without first earning a reputation for price stability. If the only way to earn this reputation is through actually achieving low inflation, then the cost of reducing inflation is a significant loss of output. A solution to this reputation or credibility problem is for the monetary authority to follow an explicit formal rule that eliminates its discretion to inflate.
It therefore follows that a monetary policy implemented according to a rule will achieve lower inflation than a discretionary monetary policy. For example, Judd and Motley (1991, 1992, 1993) and McCallum (1988) have examined the empirical properties of nominal feedback rules and find that the use of simple feedback rules could have produced price stability for the United States over the past several decades without significantly increasing the volatility of real output. This paper examines whether the central bank of Taiwan would have had a more successful monetary policy if it had followed an explicit rule rather than the discretionary policies it actually implemented. Of the rules considered here, only one yields both an output variance and an inflation variance appreciably lower than those actually realized by the Taiwanese economy. Hence this paper concludes that the discretionary policies implemented by the central bank of Taiwan were very close to being optimal. Svensson (1998) divides proposed rules for monetary policy into two broad groups: instrument rules and targeting rules. Instrument rules require that the central bank adjust its policy instrument in response to deviations between the actual and desired values of one or more variables being targeted by the monetary authority. Examples of this type of rule are those proposed by both Taylor (1993) and McCallum (1988). A rule that requires the Fed to raise the federal funds rate (its instrument of monetary policy) whenever the growth rate of nominal GDP is unexpectedly high (the rate of growth of nominal GDP being the target variable), regardless of other information available to the Fed, is an example of an instrument rule. But because instrument rules do not use all information available to the monetary authority, as shown by both Friedman (1975) and Svensson (1998), they are inferior to monetary policy rules that do use all available information. If a monetary policy rule minimizes a specified loss function while allowing the monetary authority to use all available information, then Svensson (1998) calls it a targeting rule. If the monetary authority is following a targeting rule, then it will respond to all information in a manner that minimizes its loss function. The loss function formalizes how important the monetary authority believes are deviations of its various target variables from their optimal values. The policy rule is derived from the optimal solution of the dynamic programming problem that minimizes the loss function subject to the structure of the economy (a standard form of such a loss function is shown at the end of this section). The resulting rule expresses the growth of the policy instrument as a function of the predetermined variables in the model. That is, the policy instrument responds not only to the target variables but also to all other variables in the model. Hence a targeting rule would not always require the Fed to raise the federal funds rate when the growth rate of nominal GDP is unexpectedly high, because other information might imply that the relatively high rate of growth of nominal GDP is the result of an increase in the growth rate of real GDP (rather than an increase in inflation). Although there appears to be a growing consensus that price stability should be the central long-run objective of monetary policy, there are still continuing debates about the proper selection of the policy instrument and the best target variables. But clearly the choice of the best policy instrument and the best target(s) is an empirical issue.
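The loss function referred to above is conventionally written as a discounted sum of squared deviations of the target variables from their desired values. The form below is the standard one from this literature, shown for orientation only; the paper's own loss function, weights, and target set are not reproduced here:

```latex
% A standard quadratic loss function for a targeting rule (illustrative;
% not the paper's own specification). The monetary authority chooses the
% path of its instrument to minimize:
\[
  L_t = E_t \sum_{j=0}^{\infty} \beta^{j}
        \left[ \lambda_{\pi} \left( \pi_{t+j} - \pi^{*} \right)^{2}
             + \lambda_{y} \left( y_{t+j} - y^{*} \right)^{2} \right]
\]
% subject to the equations describing the structure of the economy.
% Here \beta is a discount factor, \pi^* and y^* are target values, and
% \lambda_\pi, \lambda_y weight deviations of inflation and output.
```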
Furthermore, the best choices can vary from country to country because the controllability of any particular policy instrument and the effectiveness of each target most likely vary across countries. Therefore, this paper examines two different policy instruments and several targets to search for the best policy rule for Taiwan.

The rest of this paper is organized as follows. Section 2 discusses the instruments and the targets of monetary policy that this paper considers. Section 3 describes the method used to derive the policy rules and conduct the simulations. Section 4 describes the data and presents the simulation results, while Section 5 offers some conclusions.

2. Instruments and Targets of Monetary Policies

In discussing how monetary policy should be implemented it is helpful to draw a distinction between the instruments and the targets of monetary policy. The targets of monetary policy are those macroeconomic variables that the monetary authority ultimately desires to influence through its policy actions [Friedman (1975)]. For this reason Svensson (1998) prefers to call target variables only those variables that are important enough to be included in the monetary authority's loss function. The targets of monetary policy therefore are a way to formalize the overall objectives of a monetary authority. On the other hand, the instrument of monetary policy is the variable that the monetary authority chooses to control for the purpose of meeting its overall objectives, i.e., minimizing its loss function.

Monetary policy instruments basically fall into two categories: the monetary base and short-term interest rates. Proponents of using the monetary base as the instrument of monetary policy argue that the base is the variable that determines the aggregate price level, and therefore is a natural instrument for the control of inflation [McCallum (1988)]. But most central banks, including the central bank of Taiwan, use a short-term interest rate as their instrument of monetary policy. Proponents of an interest rate instrument point out that it insulates the economy against instability in the demand for money, that interest rates are part of the transmission channel of monetary policy, and that no useful purpose is served by wide fluctuations in interest rates [Kohn (1994)]. This paper presents simulation results using both types of instruments. The results support the central bank of Taiwan's decision to use an interest rate instrument.

This paper examines four target variables: a monetary aggregate, the exchange rate, nominal income, and the rate of inflation.[1] The targeting of a monetary aggregate often is advocated by those who believe that business cycles largely result from changes in the growth rate of a monetary aggregate [Warburton (1966), M. Friedman (1960)]. Another reason for choosing a monetary aggregate as the target variable for monetary policy is its ability to serve as a nominal anchor that can prevent policies from allowing inflation to increase to an unacceptable level. Although this allows a monetary aggregate to communicate long-run policy objectives to the general public, as Friedman (1975) points out, it is by its very nature an inferior choice as a target variable because the monetary authority is only concerned with monetary aggregates to the extent that they provide information about inflation and output growth.[2]

[1] For a more complete discussion of different target variables, see Mishkin (1999).
[2] That is, monetary aggregates are intermediate targets rather than true targets of monetary policy. Friedman (1975) shows that the use of intermediate targets is not optimal, although Svensson's (1998) idea of using forecasts of the target variable as a synthetic intermediate target is implicit in Friedman's (1975) discussion.
Recent instability in the velocity of money has, for the time being, ended any possibility that a monetary aggregate will be used as a target for monetary policy in the United States.

McKinnon (1984) and Williamson and Miller (1987) argue that monetary policy should target the exchange rate in an open economy. For example, the exchange rate has been the sole or main target in most of the EMS countries. Pegging the domestic currency to a strong currency prevents changes in the exchange rate from having an effect on the domestic price level. But exchange rate targeting results in the loss of an independent monetary policy. The targeting country cannot respond to domestic shocks that are independent of those hitting the anchor country, because exchange rate targeting requires that its interest rate be closely linked to that in the anchor country.

McCallum (1988) suggests a nominal GDP targeting rule because of its close relationship with the price level. The nominal GDP target has intrinsic appeal when instability in velocity makes a monetary target unreliable. As long as the growth rate of real GDP is predictable, there is a predictable relationship between nominal GDP and the price level. However, recent studies of the time series properties of real GDP raise questions about the predictability of real GDP. If real GDP does not grow at a constant rate, then a constant growth rate for nominal GDP does not guarantee a stable price level.

Recently there has been a great upsurge of interest in direct inflation targeting, a policy that has been adopted by the central banks of New Zealand, Canada, the United Kingdom, Sweden, Finland, Australia, and Spain. Although this policy has been implemented with apparent success in the above countries, there are theoretical concerns with inflation targeting. One problem with inflation targeting is that the effect of monetary policy actions on the price level occurs with considerably more delay than their effects on financial variables. The use of a financial variable such as a monetary aggregate or the exchange rate as the target would provide an earlier signal to the public that policy has deviated from its goals. In addition, attempts by the central bank to achieve a predetermined path for prices may cause large movements in real GDP, but only if the price level is sticky in the short run. But the apparent success of inflation targeting, where it has been tried, suggests that these concerns are misplaced.[3] Also, because the effect of monetary policy on long-term trends in output and employment is now considered to be negligible, many economists now advocate that monetary authorities use only inflation (or the price level) as the sole target for monetary policy. According to this view, the main contribution that monetary policy can make to the trend in real output is to create an environment where markets are not distorted by high and volatile inflation. The central bank of Taiwan appears to have accepted this position. It has repeatedly stated that its number one priority is price stability, and the reaction function estimated by Shen and Hakes (1995) confirms that it has behaved as if price stability is an important policy goal.

[3] A careful reading of Friedman (1975) and Svensson (1998) also suggests that these concerns are misplaced.
So what combination of policy instrument and target variable would result in the best rule for monetary policy in Taiwan? Would the adoption of such a rule have improved Taiwanese monetary policy during the past three decades? To answer these questions this paper experiments with two policy instruments (the monetary base and the interest rate) and four target variables (the rate of inflation, the growth rate of nominal GDP, the growth rate of the monetary base, and the change in the exchange rate) in an attempt to find what would have been the best targeting rule for Taiwan during the period 1971:1-1997:4. The historical performance of the Taiwanese economy is then compared with the performance predicted by the "best" targeting rule to evaluate how good Taiwanese monetary policy has been. This comparison is made by comparing the volatility of the relevant variables resulting from the proposed rules with those in the historical data.

As noted above, targeting rules are by their very nature superior to instrument rules, and hence this paper emphasizes targeting rules. But just how much better targeting rules are than instrument rules is an empirical question of some practical importance, because instrument rules are more transparent than targeting rules. Hence, for completeness, this paper also presents results for instrument rules using the rate of interest and the monetary base as instruments and the rate of inflation as the target variable.

3. The Model and Methodology

3.1 The instrument rule

An instrument rule adjusts the growth of the policy instrument in response to deviations between the actual and desired value of the target variable. That is,

    ΔI_t = λ (Δx_{t-1} - Δx*_{t-1}),    (1)

where I_t represents the policy instrument, Δx_t is the target variable, the superscript * denotes the target value desired by the central bank, and λ defines the proportion of a target miss to which the central bank chooses to respond. In this paper, variables are expressed as deviations from their own means. Therefore, there is no loss of generality in setting the targeted growth rate desired by the central bank to zero.

The economy is characterized by an open-economy VARX model that includes five variables: the growth rate[4] of real income (Δy_t), the rate of inflation (Δp_t), the change in the logarithm of the exchange rate (Δe_t), the growth rate of the monetary base (Δm_t), and the change in the interest rate (Δr_t). Since the purpose of this paper only requires a model that fits the Taiwanese economy well during the sample period, we use a general VARX model with a maximum lag length of four and adopt Hsiao's (1981) method to determine the optimal lags for each variable.[5] Specifically, the general VARX model can be written as:

    ΔX_t = A_0 + A_1 ΔX_{t-1} + A_2 ΔX_{t-2} + A_3 ΔX_{t-3} + A_4 ΔX_{t-4} + Σ_{i=0}^{4} a_i ΔI_{t-i} + ε_t,    (2)

where ΔX_t is the 4×1 vector that contains the variables other than the growth of the policy instrument. The policy instrument has immediate effects on the other variables if the 4×1 vector a_0 is not zero. For example, if the instrument is r_t and the target is Δp_t, then X_t = [y_t, p_t, e_t, m_t]' and equations (1) and (2) can be written as:

    Δr_t = λ Δp_{t-1},    (1)'
    ΔX_t = A_0 + A_1 ΔX_{t-1} + A_2 ΔX_{t-2} + A_3 ΔX_{t-3} + A_4 ΔX_{t-4} + Σ_{i=0}^{4} a_i Δr_{t-i} + ε_t.    (2)'

[4] Growth rates in the empirical work are calculated by taking log first differences.
[5] We tried to adapt Ball's (1998) open-economy Keynesian-type model to Taiwan, but this model was not supported by the Taiwanese data.
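To make the mechanics of equations (1)' and (2)' concrete, the following Python sketch generates counterfactual data under the instrument rule. It is illustrative only: the lag coefficient matrices, the instrument-effect vectors, the residual series, and the value of λ (lam) are random or arbitrary placeholders, whereas in the paper they would be the VARX estimates, the historical shocks, and the optimized response parameter.

# --- sketch: counterfactual simulation under the instrument rule (1)'-(2)' ---
import numpy as np

rng = np.random.default_rng(0)

T, k, p = 108, 4, 4      # 1971:1-1997:4 is 108 quarters; 4 non-policy variables; 4 lags
lam = 0.0133             # illustrative response parameter for rule (1)'

# Placeholder parameters: in the paper these come from estimating (2)' on Taiwanese data.
A = [0.1 * rng.standard_normal((k, k)) for _ in range(p)]    # A1, ..., A4
a = [0.05 * rng.standard_normal(k) for _ in range(p + 1)]    # a0, ..., a4
eps = 0.01 * rng.standard_normal((T, k))                     # stand-in for historical residuals

X = np.zeros((T, k))     # columns: dy, dp, de, dm, as deviations from means (so A0 = 0)
dr = np.zeros(T)         # change in the interest-rate instrument

for t in range(p, T):
    dr[t] = lam * X[t - 1, 1]                 # rule (1)': respond to last quarter's inflation
    X[t] = eps[t]
    for i in range(1, p + 1):
        X[t] = X[t] + A[i - 1] @ X[t - i]     # lagged effects of the non-policy variables
    for i in range(p + 1):
        X[t] = X[t] + a[i] * dr[t - i]        # current and lagged effects of the instrument

# Compare counterfactual volatilities with their historical counterparts (Table 1 style).
print("simulated std devs (dy, dp, de, dm, dr):",
      np.round(np.r_[X[p:].std(axis=0), dr[p:].std()], 3))

With the estimated coefficients and the actual residual series in place of the placeholders, the printed standard deviations would be directly comparable to the rows of Table 1.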
Previous studies such as Judd and Motley (1991, 1992, 1993) and McCallum (1988) estimate equation (2) and assume that the economy faces the same set of shocks that actually occurred in the sample period. The estimated equation, the historical shocks, and the policy rule (1) are used to generate the counterfactual data. Statistics calculated from the counterfactual data are then compared with the historical experience. In these studies, the response parameter λ is set arbitrarily and the results from different λ's are compared. However, given the linearity of the model and the variance-covariance matrix of the historical shocks, one can solve analytically for the value of λ that minimizes the variance of the inflation rate. Specifically, substituting (1) into (2) yields a VAR(5) in ΔX_t. For convenience, the VAR(5) system can be written in the more compact form:

    ΔW_t = B_0 + B_1 ΔW_{t-1} + η_t,    (3)

where W_t = [X_t, X_{t-1}, X_{t-2}, X_{t-3}, X_{t-4}]' and η_t = [ε_t', 0']' are both 20×1. Assume that ΔW_t is stationary. Denote by V_ΔW the variance-covariance matrix of ΔW_t and by V_η the variance-covariance matrix of η_t. Equation (3) implies

    V_ΔW = B_1 V_ΔW B_1' + V_η.    (4)

Given the regression results for (2), the variance of Δp_t is a function of λ only. Therefore, the value of λ that minimizes the variance of Δp_t, given the historical shocks, can be calculated.
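Equation (4) is a discrete Lyapunov equation in V_ΔW, so the minimization over λ is straightforward to carry out numerically. A minimal sketch follows, using a deliberately small one-lag, five-variable stand-in for the 20-dimensional companion system of (3); the transition matrix and the shock variances are placeholders, not the paper's estimates.

# --- sketch: the value of lambda that minimizes var(dp), via equation (4) ---
import numpy as np
from scipy.linalg import solve_discrete_lyapunov
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 5                                        # stand-in state: (dy, dp, de, dm, dr)

B_base = 0.2 * rng.standard_normal((n, n))   # placeholder dynamics
V_eta = np.diag([1.0, 0.8, 0.5, 1.2, 0.0])   # shock variances; the rule row has no shock

def inflation_variance(lam):
    """Stationary var(dp) implied by (4) when dr_t = lam * dp_{t-1}."""
    B = B_base.copy()
    B[4, :] = 0.0
    B[4, 1] = lam                            # embed the instrument rule in the transition
    if np.abs(np.linalg.eigvals(B)).max() >= 1.0:
        return 1e12                          # penalize non-stationary choices of lam
    V = solve_discrete_lyapunov(B, V_eta)    # solves V = B V B' + V_eta
    return V[1, 1]                           # variance of the inflation element

res = minimize_scalar(inflation_variance, bounds=(-2.0, 2.0), method="bounded")
print(f"optimal lambda = {res.x:.4f}, minimized var(dp) = {res.fun:.4f}")

In the paper's setting the same computation would run on the 20×20 companion matrix B_1(λ) and the residual variance matrix estimated from (2), yielding optimal λ values of the kind reported in Panel A of Table 1.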
The advantages of an instrument rule include its simplicity, its transparency to the public, and the fact that it is always operational. The central bank responds to observed deviations from the target and does not need to base its policy actions on forecasts that require knowledge of the structure of the economy. However, as noted above, instrument rules are not optimal in the sense that they do not use all available information. The policy instrument responds only to the target variables, which is usually inefficient compared with rules that allow the instrument to respond to all the variables in the model. The following section uses an optimal control problem to derive the optimal policy rule, instead of specifying the rule in advance.

3.2 The targeting rule

A targeting rule is derived from the minimization of a loss function. This loss function reflects the policymaker's desired path for the target variable. A commonly used one is a quadratic loss function, which penalizes deviations of the target variable from its target value. The policymaker's optimization problem can be solved with knowledge of the dynamics of the economic structure, which is equation (2). That is, equation (2) is used as the constraint in the dynamic programming problem. To simplify the analysis, equation (2) is written as a first-order system,

    Z_t = b + B Z_{t-1} + C ΔI_t + v_t,    (5)

where Z_t = [ΔX_t, ΔX_{t-1}, ΔX_{t-2}, ΔX_{t-3}, ΔI_t, ΔI_{t-1}, ΔI_{t-2}, ΔI_{t-3}]'. The constant vector b is 20×1, B is 20×20, C is 20×1, v_t is 20×1, and their arguments should be obvious. Therefore, the central bank's control problem is to minimize the stream of expected quadratic losses

    (1/T) E_0 Σ_{t=1}^{T} Z_t' K Z_t,    (6)

subject to (5), where the expectation E_0 is conditional on the initial condition Z_0. Again, without loss of generality, the target value is set to zero since all the variables are expressed as deviations from their means.

The elements of the matrix K are weights that represent how important to the central bank deviations of the target variables from their target values are. For example, if the central bank wants to target the inflation rate, then the [2,2] element of K is 1 and the other elements are all zeros; the loss function is equivalent to (1/T) E_0 Σ_{t=1}^{T} Δp_t². If the central bank wants to target nominal GDP, then the 2×2 block in the upper left corner of K is a matrix of ones and the other elements are all zeros; the loss function in this case is (1/T) E_0 Σ_{t=1}^{T} (Δy_t + Δp_t)².

Now the problem is to choose the policy instrument ΔI_1, ..., ΔI_T that minimizes (6), given the initial condition Z_0. By using Bellman's (1957) method of dynamic programming the problem is solved backward. That is, the last period T is solved first, given the initial condition Z_{T-1}. Having found the optimal I_T, we solve the two-period problem for the last two periods by choosing the optimal I_{T-1}, contingent on the initial condition Z_{T-2}, and so on. Letting T → ∞, the optimal policy rule can be expressed as [see Chow (1975, ch. 8) for derivation details]:

    ΔI_t = G Z_{t-1} + f,    (7)

with

    G = -(C'HC)^{-1} (C'HB),
    f = -(C'HC)^{-1} C'(Hb - h),
    H = K + (B + CG)' H (B + CG), and
    h = [I - (B + CG)']^{-1} [-(B + CG)' Hb].

The rule defines the policy instrument as a function of the predetermined variables in the model.
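The expressions for G and H above form a fixed point, so in practice the infinite-horizon rule can be computed by iterating them to convergence. The sketch below does this for a small placeholder system; because all variables are demeaned, it sets b = 0, which makes f = 0 as well.

# --- sketch: iterating the fixed point behind the targeting rule (7) ---
import numpy as np

rng = np.random.default_rng(2)
n = 5                                   # stand-in state dimension (20 in the paper)

# Placeholder first-order system Z_t = B Z_{t-1} + C dI_t + v_t, demeaned so b = 0.
B = 0.3 * rng.standard_normal((n, n))
C = rng.standard_normal((n, 1))
K = np.zeros((n, n))
K[1, 1] = 1.0                           # loss penalizes only inflation, the 2nd state

# Iterate G = -(C'HC)^(-1)(C'HB) and H = K + (B+CG)'H(B+CG) to their fixed point.
H = K.copy()
for _ in range(10_000):
    G = -np.linalg.solve(C.T @ H @ C, C.T @ H @ B)      # 1 x n feedback coefficients
    H_next = K + (B + C @ G).T @ H @ (B + C @ G)
    if np.max(np.abs(H_next - H)) < 1e-12:
        break
    H = H_next

print("targeting rule dI_t = G Z_{t-1}; G =", np.round(G, 3))

The resulting G lets the instrument respond to every predetermined variable, which is exactly the sense in which a targeting rule uses more information than the instrument rule (1).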
The economy is assumed to face the same set of shocks that actually occurred in the historical period. Therefore, the estimated equations, the policy rule, and the historical shocks are used to generate the counterfactual data, and the resulting statistics are compared.

Even though it is usually more efficient to let the instrument respond to all the relevant variables than to let it respond only to the target variables, ad hoc instrument rules are more widely discussed in the literature. The reason for the preference for simple instrument rules may be that the targeting rule is more sensitive to model specification. For example, the assumption of full information is generally maintained in the computation of an optimal rule. This tends to make the targeting rule less robust to model specification errors than simple instrument rules. In addition, the optimal rule may require larger adjustments of the instrument because it responds to more variables. This would in turn yield undesired higher volatility of the other variables, such as output growth. Therefore, again, the choice between the instrument rule and the targeting rule cannot be determined by theory alone and is an empirical issue.

4. Empirical Results

4.1 Data

This paper uses Taiwanese national quarterly time series data for the period 1971:1-1997:4. The sample starts in 1971:1 because of data availability. All data are taken from two databanks: the National Income Accounts Quarterly and the Financial Statistical Databank. The rediscount rate is used as r_t because it most directly indicates the policy intentions of the central bank of Taiwan. The monetary base m_t is defined as reserve money. The exchange rate target is the NT/US dollar rate. The variable y_t is real GDP in millions of 1991 NT dollars, and p_t is the GDP deflator. Except for the interest rate, all variables are in logarithms. All variables are in first-difference form and expressed as deviations from their means. The Augmented Dickey-Fuller (ADF) test is used to ensure that the variables are transformed into stationary processes.[6]

[6] The lag lengths in the ADF regressions are determined by the Akaike Information Criterion (AIC) and Schwarz's (1978) criterion. The maximum length is set to 12. A time trend is included in the y_t, p_t, and m_t regressions. All results indicate that the original time series are integrated of order one. The results of the tests are available from the authors upon request.

The top row of Table 1 presents the historical standard deviations of the variables in the model in order to allow comparison with the values obtained from the simulations.

4.2 Estimation results under instrument rules

Panel A of Table 1 presents the standard deviations obtained using an instrument rule with inflation as the target variable. The first row of Panel A presents simulation results under an interest rate instrument, while the second row presents results under a monetary base instrument. The simulations using an interest rate instrument yielded standard deviations for output growth, the change in the exchange rate, and money growth that are only slightly higher than those in the historical data, while the standard deviation of inflation is slightly lower than its historical value. The only standard deviation in the first row of Panel A that differs substantially from the historical data is that for the change in the interest rate, which is much lower in the simulation. These results indicate that actual policy in Taiwan achieved results almost as good as those that would have been obtained under an optimal interest-rate instrument rule, with the exception that the optimal rule would have yielded a more stable rate of interest.

The simulation using the monetary base as the instrument yielded slightly higher standard deviations for all variables except the rate of inflation. Those for output growth, the change in the exchange rate, and the rate of interest were only slightly higher than the historical values, while the standard deviation of the growth rate of the monetary base was much higher than its historical value. The standard deviation of the inflation rate is slightly lower than the historical value but is higher than that under the interest rate instrument rule. These results suggest that the discretionary policy implemented in Taiwan was superior to an optimal monetary base instrument rule. They also indicate that an instrument rule using the rate of interest would have been superior to one employing the monetary base as instrument, though not by a large margin.

4.3 Estimation results under targeting rules

Panel B of Table 1 presents standard deviations of the variables under the various targeting rules considered here. The first four rows of Panel B present results obtained using an interest rate instrument. In the first row of Panel B the standard deviation of nominal GDP growth is minimized; in the second row the standard deviation of inflation is minimized; and so on. The last three rows of Panel B present results under a monetary base instrument. Notice that for both instruments, if nominal GDP is the target, then the standard deviations of all variables are higher than their historical values. This implies that the growth rate of nominal GDP would not have been a suitable target variable for Taiwan. Furthermore, notice that for all targets under the monetary base instrument the standard deviation of output growth is much higher than its historical value. This effectively rules out consideration of the monetary base as the instrument of monetary policy under a targeting rule for Taiwan.
Now notice from the fourth row of Panel B that if the monetary base is the target under an interest rate instrument, the standard deviations of output growth and inflation are both higher than their historical values. This effectively rules out the use of the monetary base as an appropriate target for monetary policy in Taiwan. Finally, by comparing the rows "Δp_t Target" and "Δe_t Target" of Panel B, one sees that if the rate of inflation is the target, then the standard deviations of output growth and inflation are lower than if the exchange rate is the target. Also, if inflation is the target, the simulated standard deviations of inflation and output are lower than their historical values. Hence it is concluded that Taiwanese monetary policy would have been better than its historical performance if it had used an optimal targeting rule with the rate of interest as the instrument and inflation as the target.

5. Conclusion

Taiwan has been very successful in using discretionary monetary policies. This paper asks whether there exist policy rules that could have improved the performance of the Taiwanese economy over the past several decades. It evaluates several monetary policy rules using Taiwanese quarterly data from 1971:1 to 1997:4. Two types of policy rules are examined. Instrument rules adjust the growth of the policy instrument in response to deviations between the actual and desired values of the target variable. Unlike previous studies, in which arbitrary instrument rules are proposed, this paper solves analytically for the optimal instrument rules that minimize the standard deviation of the rate of inflation. Targeting rules are derived from the solution to the dynamic programming problem that minimizes a loss function subject to the structure of the economy. The resulting rule expresses the growth of the policy instrument as a function of all the predetermined variables in the model. Two policy instruments (the interest rate and the monetary base) and four target variables (nominal GDP growth, the inflation rate, changes in exchange rates, and the money growth rate) are examined in the paper. Simulations of a simple VARX model and the policy rules suggest that, compared with the historical policy, the use of a policy rule in Taiwan would not have substantially reduced the volatility of the inflation rate. The only policy rule that would appeal to the authority is the direct inflation targeting rule with the interest rate as the instrument. This rule would have reduced the standard deviation of the inflation rate in Taiwan by 0.7 percentage points while maintaining volatility in the other variables similar to that in the historical data.

References

Ball, L. (1998), "Policy Rules for Open Economies," NBER Working Paper 6760.

Barro, Robert J. (1986), "Recent Developments in the Theory of Rules Versus Discretion," The Economic Journal Supplement, 23-37.

Bellman, R. E. (1957), Dynamic Programming, Princeton, N.J.: Princeton University Press.

Chow, G. C. (1975), Analysis and Control of Dynamic Economic Systems, New York: John Wiley & Sons.

Friedman, Benjamin (1975), "Rules, Targets, and Indicators of Monetary Policy," Journal of Monetary Economics 1, 443-473.

Friedman, Milton (1960), A Program for Monetary Stability, New York: Fordham University Press.

Hsiao, C. (1981), "Autoregressive modelling and money-income causality detection," Journal of Monetary Economics 7, 85-106.
Judd, J. P. and Motley, B. (1991), "Nominal feedback rules for monetary policy," Federal Reserve Bank of San Francisco Economic Review (Summer), 3-17.

Judd, J. P. and Motley, B. (1992), "Controlling inflation with an interest rate instrument," Federal Reserve Bank of San Francisco Economic Review 3, 3-22.

Judd, J. P. and Motley, B. (1993), "Using a nominal GDP rule to guide discretionary monetary policy," Federal Reserve Bank of San Francisco Economic Review 3, 3-11.

Kohn, D. L. (1994), "Monetary aggregates targeting in a low-inflation economy: Discussion," in J. C. Fuhrer, ed., Goals, Guidelines, and Constraints Facing Monetary Policymakers, 130-135, Federal Reserve Bank of Boston.

Kydland, F. E. and Prescott, E. C. (1977), "Rules rather than discretion: The inconsistency of optimal plans," Journal of Political Economy 85, 473-491.

McCallum, B. T. (1988), "Robustness properties of a rule for monetary policy," Carnegie-Rochester Conference Series on Public Policy 29, 173-204.

McKinnon, Ronald (1984), An International Standard for Monetary Stabilization, Washington: Institute for International Economics.

Mishkin, F. S. (1999), "International experiences with different monetary policy regimes," NBER Working Paper 6965.

Schwarz, G. (1978), "Estimating the Dimension of a Model," Annals of Statistics 6, 461-464.

Shen, C. H. and Hakes, D. R. (1995), "Monetary policy as a decision-making hierarchy: The case of Taiwan," Journal of Macroeconomics 17, 357-368.

Svensson, Lars E. O. (1998), "Inflation Targeting as a Monetary Policy Rule," NBER Working Paper 6790.

Taylor, John B. (1993), "Discretion versus Policy Rules in Practice," Carnegie-Rochester Conference Series on Public Policy 39, 195-214.

Warburton, Clark (1966), "Introduction," Depression, Inflation, and Monetary Policy: Selected Papers, 1945-1953, Baltimore: Johns Hopkins Press.

Williamson, John and Miller, Marcus (1987), Targets and Indicators, Washington: Institute for International Economics.

Table 1: Standard Deviations of the Variables (in percent)

                                          Δy_t     Δp_t     Δe_t     Δm_t     Δr_t
Historical Data                          3.185    2.793    2.415    4.315    0.162

Simulated Data
(A) Instrument Rules
  Interest Rate Instrument:
    Δp_t Target (optimal λ = 0.0133)     3.201    2.633    2.601    4.454    0.035
  Monetary Base Instrument:
    Δp_t Target (optimal λ = -2.38)      3.308    2.748    2.718    6.540    0.178
(B) Targeting Rules
  Interest Rate Instrument:
    Δ(y_t + p_t) Target                  4.348    4.314    3.076    5.421    0.485
    Δp_t Target                          2.993    2.092    2.469    4.473    0.175
    Δe_t Target                          3.047    3.064    2.361    4.281    0.332
    Δm_t Target                          4.446    6.880    2.771    4.058    0.431
  Monetary Base Instrument:
    Δ(y_t + p_t) Target                  5.346    4.964    2.767    14.63    0.185
    Δp_t Target                          3.862    1.972    5.950    27.781   0.198
    Δe_t Target                          3.798    3.449    2.139    6.794    0.159

Notes: The sample period is 1971:1 to 1997:4. The variable Δy_t is the real GDP growth rate, Δp_t the inflation rate, Δe_t the change in the exchange rate, Δm_t the monetary base growth rate, and Δr_t the change in the interest rate. All data are from the National Income Accounts Quarterly and the Financial Statistical Databank. The response parameter λ in the instrument rules defines the proportion of a target miss to which the central bank chooses to respond.

Figure 1: Inflation Rate (annual rate, %). [Time-series plot of Taiwan's inflation rate over 1970-1998; vertical axis "Inflation Rate (% per year)" running from -10 to 70, horizontal axis "Year"; not reproduced here.]