Tuesday, October 29, 2019

History of fast food restaurants in America Research Paper

Fast foods include tacos, ice cream, hot dogs, fried chicken, juices, chicken nuggets, meat pies, pizzas, sausages, chips and sandwiches. Other foods often served in fast food restaurants are mashed potatoes, salads and chilli. One of the main characteristics of fast food restaurants is that they often maintain a limited menu, with or without seating space. This paper will analyse the history of fast food restaurants in the US, tracing their development from the 1920s to date.

Before fast food restaurants gained ground, foods such as hamburger sandwiches and hot dogs were already big business in the early 1900s, their popularity bolstered by the St Louis World's Fair. The first pizzeria in the United States opened in 1905, setting stable ground for the establishment of fast food restaurants (Famouswhy, 2010). White Castle, founded in Wichita, Kansas, in 1921, predates what is today known as the fast food restaurant in the US (Howstuffworks, 2010). At the time, most people assumed that the burgers sold in circuses, lunch counters, carts and fairs were of low quality, a belief based on the assumption that hamburgers were made of spoiled meat and scraps from slaughterhouses. Taking note of this damning misconception, White Castle's owners endeavoured to destroy it. They prepared their hamburgers in a manner customers would appreciate: their restaurants were arranged so that clients could see how the ingredients were mixed and the food cooked (Howstuffworks, 2010). They also painted their restaurants white and gave them names suggesting high levels of hygiene. With time, the popularity of the restaurant chain grew, especially in the East and Midwest of the US. The

Sunday, October 27, 2019

Alternative Volatility Forecasting Method Evaluation

For many financial market applications, including option pricing and investment decisions, volatility forecasting is crucial. The study of volatility forecasting has therefore been an active area of research for many years, and the emergence of many financial time series methods for volatility forecasting underlines the importance of understanding the nature of volatility in any financial instrument. People often think of price as the indicator of stock market performance, but because price series are non-stationary, most researchers actually work with series of price changes (returns) or absolute price changes (absolute returns). Return and volatility are distinct concepts: volatility, used as a crude measure of the total risk of financial assets, is the standard deviation or variance of returns, whereas return is merely the change in prices.

An increasingly commonly adopted tool for measuring the risk exposure associated with a particular portfolio of assets, known as Value at Risk (VaR), involves calculating the expected losses that might result from changes in the market prices of particular securities (Jorion, 2001; Bessis, 2002). The VaR of a particular portfolio is defined as the maximum loss on the portfolio occurring within a specified time and with a given (small) probability. Under this approach, the validity of a bank's internally modelled VaR is backtested by comparing actual daily trading gains or losses with the estimated VaR and noting the number of exceptions occurring, that is, days when the VaR estimate was insufficient to cover actual trading losses. Concerns naturally arise where such exceptions occur frequently, and that can result in a range of penalties for the financial institution concerned (Saunders & Cornett, 2003). A crucial parameter in the implementation of parametric VaR calculation methods is an estimate of the volatility parameter that describes the asset or portfolio, or more accurately a forecast of that volatility where the simplifying assumption of constancy is relaxed and time-varying volatility is acknowledged.

While it has long been recognized that returns volatility exhibits clustering, such that large (small) returns follow large (small) returns of random sign (Mandelbrot, 1963; Fama, 1965), it is only following the introduction of the autoregressive conditional heteroskedasticity (ARCH) model and its generalized form, GARCH (Engle, 1982; Bollerslev, 1986), that financial economists have modeled and forecast these temporal dependencies using econometric techniques, and a variety of adaptations of the basic GARCH framework are now widely used in modeling time-varying volatility. In particular, the significance of asymmetric effects in stock index returns has been widely documented, such that equity return volatility increases by a greater amount following negative shocks. This is usually associated with the leverage effect, whereby a firm's debt-to-equity ratio increases when equity values decline and holders of that equity perceive the firm's future income streams as more risky (Black, 1976; Christie, 1982). Such variance asymmetry has been successfully modeled and forecast in a variety of market contexts (Henry, 1998), using the threshold-GARCH (TGARCH) model (Glosten et al., 1993) and the exponential-GARCH (EGARCH) model (Nelson, 1991) in particular.
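To make the exception-counting backtest concrete, here is a minimal Python sketch. It assumes a parametric (normal) VaR; the function names, the 99% confidence level and the data layout are illustrative assumptions, not details taken from the text.

```python
import numpy as np
from scipy.stats import norm

def count_var_exceptions(returns, sigma_forecasts, confidence=0.99):
    """Backtest a VaR model by counting exception days: days on which
    the actual trading loss exceeded the VaR estimate."""
    z = norm.ppf(confidence)               # ~2.33 at the 99% level
    var = z * np.asarray(sigma_forecasts)  # parametric (normal) daily VaR
    losses = -np.asarray(returns)          # a loss is a negative return
    return int(np.sum(losses > var))

# At the 99% level over roughly 250 trading days, only two or three
# exceptions are expected; substantially more suggests the volatility
# forecasts feeding the VaR are systematically too low.
```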
Problem Statement

While risk management practices in financial institutions often rely on simpler volatility forecasting approaches based on heuristics and moving average, smoothing or RiskMetrics techniques, symmetric and asymmetric GARCH models have also recently begun to be considered in the VaR context. However, the standard GARCH model and variants within that class impose rapid exponential decay in the effect of shocks on conditional variance. In contrast, empirical evidence suggests that volatility tends to change slowly and that shocks take considerable time to decay (Ding et al., 1993). The fractionally integrated GARCH (FIGARCH) model (Baillie et al., 1996; Chung, 1999) has provided a popular means of capturing and forecasting such non-integrated but highly persistent long memory dynamics in volatility in the recent empirical literature, as has its exponential (FIEGARCH) variant (Bollerslev & Mikkelsen, 1996), which parallels the EGARCH extension of the basic GARCH form and therefore provides a generalization capable of capturing both the volatility asymmetry and the long memory in volatility that are potential characteristics of emerging equity markets.

Research Objectives

This paper therefore seeks to extend previous research concerned with the evaluation of alternative volatility forecasting methods under VaR modeling, in the context of the Basle Committee criterion for determining the adequacy of the resulting VaR estimates, in two ways. First, by broadening the class of GARCH models under consideration to include more recently proposed models such as the FIGARCH and FIEGARCH representations described above, which are capable of accommodating potential fractional integration and the associated long memory characteristics of return volatility, as well as the simpler and computationally less intensive methods commonly used in financial institutions. Second, by extending the scope of previous research through evaluative application of these methods to daily index data for nine stock market indices.

Significance of this study

Research on volatility forecasting plays an important role in investment, financial risk management, security valuation and business decision-making. Without proper forecasting tools and research in this field, many financial decisions would be difficult and risky to implement. The positive contribution of volatility forecasting to the field of finance is beyond doubt, as it has given practitioners guidelines for estimating and managing risk in areas such as option pricing, hedging and investment risk estimation. It is therefore crucial to study the performance of different forecasting approaches and models to determine the most suitable practical application for each situation. The most common financial instrument is the stock market, and stock indices consist of a particular country's most prominent stocks. Thus, the aim of this study is to forecast the volatility of nine different stock indices, which provide the ability to test the forecast approaches. Quite a number of forecast models have appeared in recent years; the newer concern is the performance of these models when incorporated with higher frequency data through the realised volatility method. There is still a gap in research on the effect of intra-day data on forecasting models, which is comparatively new relative to daily data volatility forecasting.
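Before moving on, the long-memory point in the problem statement can be made concrete. FIGARCH-type models build on the fractional differencing operator (1 - L)^d, whose expansion weights decay hyperbolically rather than exponentially. The sketch below computes those weights with the standard recursion; the value d = 0.4 is purely illustrative and not taken from the text.

```python
def frac_diff_weights(d, n):
    """Coefficients of the expansion (1 - L)^d = sum_k w_k L^k,
    via the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# For 0 < d < 1 the magnitudes |w_k| decay hyperbolically (roughly k^(-d-1)),
# so distant shocks retain a small but persistent influence on conditional
# variance, unlike the exponential decay imposed by standard GARCH models.
print(frac_diff_weights(0.4, 8))
```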
A further significant element of this study is whether intra-day data can genuinely improve the performance of forecast models in estimating volatility for a stock index.

Review of Chapters

This proposal is subdivided into three chapters. Chapter 1 gives an overview of the research, including the background of the study, the research objectives, the problem statement and the significance of the study. Chapter 2 presents the literature review of volatility forecasting, GARCH models, exponential smoothing and realised volatility. Chapter 3 presents the data and methodology.

CHAPTER 2: LITERATURE REVIEW

2.1 Volatility forecasting

Volatility forecasts are produced by either market-based or time-series methods. Market-based forecasting involves calculating implied volatility from current option prices by solving the Black and Scholes option pricing model for the volatility that results in a price equal to the market price. In this paper, our focus is on time series methods. These methods provide estimates of the conditional variance, σ²_t = var(r_t | I_{t-1}), of the log return, r_t, at time t conditional on I_{t-1}, the information set of all observed returns up to time t-1. This can be viewed as the variance of an error (or residual) term, ε_t, defined by ε_t = r_t - E(r_t | I_{t-1}), where E(r_t | I_{t-1}) is a conditional mean term, often assumed to be zero or a constant. ε_t is often referred to as the price "shock" or "news".

2.2 Overview of standard volatility forecast models

2.2.1 GARCH model

GARCH models (Engle, 1982; Bollerslev, 1986) are the most widely used statistical models for volatility. They express the conditional variance as a linear function of lagged squared error terms and lagged conditional variance terms. For example, the GARCH(1,1) model is given by: σ²_t = ω + α ε²_{t-1} + β σ²_{t-1}, where ω, α and β are parameters. The multiperiod variance forecast, σ̂²_{t,k}, is calculated as the sum of the variance forecasts for each of the k periods making up the holding period, σ̂²_{t,k} = Σ_{i=1..k} σ̂²_{t+i}, where σ̂²_{t+1} is the one-step-ahead variance forecast. Empirical results for the GARCH(1,1) model have often shown that β ≈ 1 - α. The model in which β = 1 - α is termed integrated GARCH (IGARCH) (Nelson, 1990). Exponential smoothing has the same formulation as the IGARCH(1,1) model with the additional restriction that ω = 0. The IGARCH(1,1) multiperiod forecast is written as σ̂²_{t,k} = k σ̂²_{t+1} + ω k(k-1)/2.

Stock return volatility is often found to be greater following a negative return than a positive return of equal size. This leverage effect has prompted the development of a number of GARCH models that allow for asymmetry. The first asymmetric formulation was the exponential GARCH (EGARCH) model of Nelson (1991). In this log formulation for volatility, the impact of lagged squared residuals is exponential, which may exaggerate the impact of large shocks. A simpler asymmetric model is the GJR-GARCH model of Glosten et al. (1993). The GJR-GARCH(1,1) model is given by: σ²_t = ω + α ε²_{t-1} I[ε_{t-1} > 0] + γ ε²_{t-1} I[ε_{t-1} ≤ 0] + β σ²_{t-1}, where ω, α, γ and β are parameters, and I[·] is the indicator function. Typically, it is found that γ > α, which indicates the presence of the leverage effect.
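To make these recursions concrete, here is a minimal Python sketch of the GARCH(1,1) one-step-ahead variance and the k-period holding-period sum. The function names and parameter values are hypothetical; in practice the parameters come from maximum likelihood estimation, as discussed below.

```python
import numpy as np

def garch_one_step(residuals, omega, alpha, beta, sigma2_0):
    """GARCH(1,1) recursion: sigma2_t = omega + alpha*eps2_{t-1} + beta*sigma2_{t-1}."""
    sigma2 = np.empty(len(residuals) + 1)
    sigma2[0] = sigma2_0
    for t, eps in enumerate(residuals):
        sigma2[t + 1] = omega + alpha * eps**2 + beta * sigma2[t]
    return sigma2                      # sigma2[t] is the forecast for period t

def holding_period_forecast(omega, alpha, beta, sigma2_next, k):
    """Sum of the i-step-ahead forecasts over a k-period holding period,
    using E[sigma2_{t+i}] = omega + (alpha + beta) * E[sigma2_{t+i-1}]."""
    total, s = 0.0, sigma2_next
    for _ in range(k):
        total += s
        s = omega + (alpha + beta) * s
    return total
```

With alpha + beta = 1 (the IGARCH case), the loop above reduces to the closed form k*sigma2_next + omega*k*(k-1)/2 quoted in the text.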
The assumption that the median of the distribution of ε_t is zero implies that the expectation of the indicator function is 0.5, which enables the derivation of the multiperiod forecast expression: σ̂²_{t,k} = Σ_{i=1..k} σ̂²_{t+i}, with σ̂²_{t+i} = ω + ((α + γ)/2 + β) σ̂²_{t+i-1} for i > 1. GARCH parameters are estimated by maximum likelihood, which requires the assumption that the standardized errors, ε_t / σ_t, are independent and identically distributed (i.i.d.). Although a Gaussian assumption is common, the distribution is often fat-tailed, which has prompted the use of the Student-t distribution (Bollerslev, 1987) and the generalized error distribution (Nelson, 1991). Stochastic volatility models provide an alternative statistical volatility modelling approach (Ghysels et al., 1996); however, estimation of these models has proved difficult and, consequently, they are not as widely used as GARCH models. Andersen et al. (2003) show how daily exchange rate volatility can be forecast by fitting long-memory, or fractionally integrated, autoregressive and vector autoregressive models to the log realised daily volatility constructed from half-hourly returns. Although results for this approach are impressive, such high frequency data are not available to many forecasters, so there is still great interest in methods applied to daily data. A useful review of the volatility forecasting literature is provided by Poon and Granger (2003).

2.2.2 Exponential smoothing

The Exponentially Weighted Moving Average (EWMA) is a simple and well-known volatility forecasting method based on a weighted average of past squared residuals, in which the latest observations carry a stronger weight in the volatility forecast than older data. Written as exponential smoothing in recursive form, the EWMA equation is: σ̂²_t = α ε²_{t-1} + (1 - α) σ̂²_{t-1}, where α is the smoothing parameter. There is no universally accepted statistical model underlying exponential smoothing; the literature generally suggests choosing the parameter by minimising the sum of in-sample one-step-ahead estimation errors (Gardner, 1985, cited in Taylor, 2004). RiskMetrics (1996) recommends choosing the smoothing parameter with the following minimisation: min over α of Σ_t (ε²_t - σ̂²_t)². In this expression, the in-sample squared residual ε²_t acts as the proxy for the actual variance, which is not itself observable; as a proxy, the squared residual is very noisy. Andersen and Bollerslev (1998) showed that variance forecasts can be evaluated using realised volatility as a more accurate proxy; the next section discusses the realised volatility literature in more detail. High frequency data can likewise be applied in parameter estimation for exponential smoothing, with the minimisation becoming: min over α of Σ_t (RV_t - σ̂²_t)², where RV_t is the daily realised volatility.

2.2.3 Realised volatility

Recent research interest in alternative volatility estimators has produced a significant literature on volatility models that incorporate high frequency data. One of the emerging approaches is the so-called realised volatility: volatility calculated from a short-period time series, that is, from higher frequency returns. Andersen and Bollerslev (1998) showed that high frequency data can be used to compute daily realised volatility, which provides a much better measure of the true variance than the usual daily return variance.
This concept was adopted by Andersen, Bollerslev, Diebold and Labys (2003) to forecast daily stock volatility, and they found that the additional intraday information provides better results in forecasting on low-volume and up-market days. Realised volatility has also been employed by Taylor (2004) in parameter estimation for weekly volatility forecasting, using realised volatility derived from daily data. Encouraging results were obtained with the smooth transition exponential smoothing method, which that study compared against other GARCH models for the weekly volatility forecasts of eight stock indices (Taylor, 2004).

The concept of realised volatility has been employed by many researchers in forecasting many other financial assets, such as foreign exchange rates, individual stocks and stock indices. One early application used spot Deutschemark-US dollar and Japanese yen-US dollar exchange rates to show the superiority of intraday data as a realised volatility measure: the sum of squared five-minute high frequency returns incorporated in the forecasting model outperformed daily squared returns as a volatility measure (Andersen et al., 1998). A similar study by Martens (2001) adopted realised volatility in forecasting daily exchange rate volatility using intraday returns, showing that using the highest available frequency of intraday returns leads to superior daily volatility forecasts.

Furthermore, the realised volatility approach has been extended to studies of the risk-return trade-off using high frequency data. Bali et al. (2005) found a strong positive correlation between risk and return in the stock market using high frequency data: daily realised measures incorporating valuable information from intraday returns produce a more accurate measure of market risk. In addition, Tzang et al. (2009) applied the realised volatility approach as a proxy for market volatility, rather than squared daily returns, to assess the efficiency of various model-based volatility forecasts. Finally, Andersen, Bollerslev, Diebold and Labys (2001) showed that realised volatility is, under certain conditions, free of measurement error and an unbiased estimator of return volatility. These findings have prompted much recent work on forecasting intra-day volatility to apply realised volatility, as can be observed in McMillan and Garcia (2009), Fuertes et al. (2009), Frijns et al. (2008) and Martens (2001). Many researchers exploit realised volatility both as an unbiased estimator for intra-day data and as a simple way to incorporate additional information into other forecast models. McMillan et al. (2009) used realised volatility to capture intraday volatility itself, as opposed to the daily realised approach used by most researchers, and found the Hyperbolic Generalized Autoregressive Conditional Heteroskedasticity (HYGARCH) model to be the best forecast model of intra-day volatility.
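As a concrete illustration of the realised volatility measure running through this literature, the sketch below sums squared intraday returns within each trading day. The data layout (a mapping from dates to arrays of intraday returns) and the sample values are assumptions made for illustration.

```python
import numpy as np

def daily_realised_variance(intraday_returns):
    """Realised variance for each day: the sum of squared intraday
    (e.g. five-minute or hourly) returns within that trading day."""
    return {day: float(np.sum(np.asarray(r) ** 2))
            for day, r in intraday_returns.items()}

# Hypothetical hourly returns for one seven-hour trading day:
rv = daily_realised_variance({"2011-03-15": [0.002, -0.004, 0.001,
                                             0.003, -0.002, 0.000, 0.001]})
```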
2.3 Forecast models used in this study

The forecast models presented in this study are:

- Random Walk (RW)
- 30-day Moving Average (MA30)
- Exponentially Weighted Moving Average with α = 0.06 (RiskMetrics) (EWMA)
- Exponential smoothing with α optimised (ES)
- Integrated GARCH using daily data (IGARCH)
- Exponentially Weighted Moving Average (RiskMetrics) on daily realised volatility calculated from intraday data (EWMA-RV)
- Exponential smoothing with α optimised on daily realised volatility calculated from intraday data (ES-RV)
- GARCH with intraday data using the realised volatility approach (INTRA-GARCH)
- Integrated GARCH with intraday data using the realised volatility approach (INTRA-IGARCH)
- GARCH with daily realised volatility (RV-GARCH)

CHAPTER 3: DATA AND METHODOLOGY

3.1 Sample selection and description of the study

Various comparative forecast models are used in order to evaluate the performance of incorporating intraday data. This study uses datasets from nine stock indices: Malaysia (FTSE-BMKLCI), Singapore (STI), Frankfurt-Germany (DAX30), Hong Kong (Hang Seng Index), London-United Kingdom (FTSE100), France (CAC40), Shanghai-China (SSE), Shenzhen-China (SZSE) and United States (S&P 100). The series consist of daily closing prices and the intraday hourly last prices of the respective indices. The daily closing prices were retrieved using DataStream Advance 4.0 and from Yahoo Finance (http://finance.yahoo.com), while the hourly intraday last prices were retrieved from a Bloomberg Terminal at Bursa Malaysia. Each stock index has its own trading hours, which produce a different number of observations for each series, as the total number of trading hours per day differs across indices. The sample period spans approximately 300 trading days, from 15 October 2009 to 15 March 2011. To simplify the study, the focus is on one-step-ahead volatility forecasts. The log returns of the first 200 trading days were used to estimate the parameters of the various forecast models (the in-sample period), and the log returns of the remaining 100 trading days were used for post-sample evaluation. The study aims to forecast the volatility of daily log returns with the various forecasting methods, using daily realised volatility as the proxy for actual volatility. The next subsections present the data description and the ten forecast methods considered in the study.

3.2 Data Analysis

3.2.1 Forecasting Methods

This subsection describes the methodology used to assess the in-sample and post-sample performance of the various forecast models, which include the Random Walk (RW), moving average, GARCH models and exponential smoothing techniques.

3.2.1.1 Standard volatility forecast models using daily returns

This project adopts, as performance benchmarks, the simple moving average of squared residuals over the most recent 30 daily observations, labelled MA30, and the Random Walk (RW). The 30-day simple moving average is given by: σ̂²_t = (1/30) Σ_{i=1..30} ε²_{t-i}, where ε²_t = (r_t - μ)² as shown in the previous section.
The moving average smooths out short-run fluctuations and emphasizes long-run trends or cycles by averaging successive subsets of the data. The Random Walk (RW), by contrast, takes the forecast to be the actual value of the most recent period; the actual value used in this study is the squared residual, ε²_t, so that tomorrow's forecast value equals yesterday's actual value: σ̂²_{t+1} = ε²_t.

3.2.1.2 GARCH models for hourly and daily returns

There are many different GARCH models for forecasting volatility that could be included in this research; for practicality, consideration is limited to two, GARCH and IGARCH, both with GARCH(1,1) specifications. The three resulting forecast models are labelled IGARCH, INTRA-IGARCH and INTRA-GARCH. The IGARCH model is estimated using daily residuals, as daily data are easily obtained from the sources mentioned above. The general IGARCH(1,1) forecast model is: σ²_t = ω + α ε²_{t-1} + β σ²_{t-1} with the restriction α + β = 1, so the parameter estimates generated by EViews 7 use the expression: σ²_t = ω + α ε²_{t-1} + (1 - α) σ²_{t-1}.

The INTRA-IGARCH and INTRA-GARCH models use hourly residual data to estimate the forecast of daily realised volatility: the forecast of volatility over a span of N trading hours is taken as the forecast of daily volatility, where N depends on the trading hours of the particular stock index. The daily realised volatility for N trading hours in a day for a particular stock index is given by (equation 3): RV_t = Σ_{i=1..N} ε²_{t,i}, where i indexes the higher frequency hourly periods and ε²_{t,i} is the squared residual of the particular hour. For example, if the KLCI index has 7 trading hours per day, its realised daily volatility is calculated as the sum of the squared residuals of those 7 hours. The INTRA-IGARCH and INTRA-GARCH models apply equation 3 to obtain the daily realised volatility, replacing the squared residuals ε²_{t,i} with values forecast by these models.

3.2.1.3 GARCH model using realised volatility

The GARCH model can also be estimated on the daily realised volatility derived from the hourly squared residuals via equation 3. To apply RV in the GARCH forecast model, equation 3 is modified by taking the square root, RV_t^(1/2) = (Σ_{i=1..N} ε²_{t,i})^(1/2), so that the required parameter estimates can be obtained using EViews 6. In this project, the GARCH model that uses daily realised volatility as input data is labelled RV-GARCH.

3.2.1.4 Exponential smoothing and EWMA methods

The exponential smoothing forecast method is implemented in two ways. The first optimises the smoothing parameter through the minimisation against daily squared residuals given in section 2.2.2, and is labelled ES; here the actual value (the squared residual, ε²_t) is obtained from the daily data. The second approach, said to give a better variance proxy, applies the realised volatility version of that minimisation; this model is termed ES-RV and adopts the daily realised volatility computed from hourly data.
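To make the two smoothing variants concrete, here is a minimal Python sketch: the recursion is the one given in section 2.2.2, and α is chosen by grid search to minimise the in-sample squared forecast errors against either the squared daily residuals (ES) or the daily realised volatility (ES-RV). The grid and the initialisation are illustrative assumptions, not choices documented in the text.

```python
import numpy as np

def smooth_forecasts(eps2, alpha, init):
    """Recursion: sigma2_hat_t = alpha*eps2_{t-1} + (1-alpha)*sigma2_hat_{t-1}."""
    f = np.empty(len(eps2) + 1)
    f[0] = init
    for t in range(len(eps2)):
        f[t + 1] = alpha * eps2[t] + (1 - alpha) * f[t]
    return f[:-1]                      # f[t] is the forecast of period t

def optimise_alpha(eps2, target=None, grid=np.linspace(0.01, 0.99, 99)):
    """ES: score against eps2 itself; ES-RV: pass daily realised volatility."""
    eps2 = np.asarray(eps2)
    target = eps2 if target is None else np.asarray(target)
    init = eps2.mean()                 # simple illustrative initialisation
    sse = lambda a: np.sum((target - smooth_forecasts(eps2, a, init)) ** 2)
    return min(grid, key=sse)
```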
In addition, the study considers fixing the smoothing parameter at α = 0.06, as recommended by RiskMetrics (1996), for the model using daily data and the model using daily realised volatility derived from hourly data; these are termed EWMA and EWMA-RV respectively. Using the recursive smoothing equation of section 2.2.2, the EWMA takes the daily squared residual as the ε²_{t-1} input, while the EWMA-RV takes the daily realised volatility as the ε²_{t-1} input.

3.3 Research Design (Gantt Chart)

The project schedule runs from July to March, covering in sequence: literature review, methodology, research proposal, data collection, data analysis, and discussion and conclusion.
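Finally, to make the benchmark models and the post-sample comparison of this chapter concrete, here is a hedged Python sketch of the RW and MA30 forecasts scored by mean squared error against the realised volatility proxy. The rolling scheme shown is one reasonable reading of the 200/100-day split, not necessarily the paper's exact procedure.

```python
import numpy as np

def rw_forecast(eps2_history):
    """Random walk: tomorrow's variance forecast is today's squared residual."""
    return eps2_history[-1]

def ma30_forecast(eps2_history, window=30):
    """Simple average of the most recent 30 squared daily residuals."""
    return float(np.mean(eps2_history[-window:]))

def post_sample_mse(eps2, rv, n_in=200):
    """Roll one-step-ahead forecasts over the post-sample days and score
    them against daily realised volatility as the proxy for actual variance."""
    eps2, rv = np.asarray(eps2), np.asarray(rv)
    scores = {"RW": [], "MA30": []}
    for t in range(n_in, len(eps2)):
        scores["RW"].append((rv[t] - rw_forecast(eps2[:t])) ** 2)
        scores["MA30"].append((rv[t] - ma30_forecast(eps2[:t])) ** 2)
    return {model: float(np.mean(s)) for model, s in scores.items()}
```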

Friday, October 25, 2019

What are the effects of bad parenting

What are the effects of poor parenting? I have always believed that you can see the effects of bad parenting by comparing the youth of today with the youth of sixty years ago. The effects of bad parenting can be measured in many different ways. One of the things that we all forget about is "lead by example": what we as adults teach our children is what our future generations will be as people. Another way to observe the results is by looking at our prisons and jails. How many of the inmates really had an idealistic life, as opposed to the ones that had a hard time growing up? Would their lives be any different today if, for example, mom hadn't worked or dad didn't drink? Who's to say what works and what doesn't. Kids learn by watching adults and other children do the things that they do. You're not going to be too convincing if you tell impressionable children not to do something when you are doing it yourself. I have talked to a few people about this subject, and these are some of the responses I have gotten. "If you're not taught right and wrong at home, how are you supposed to learn?" said Brian, twenty-three, who has no children. Maria, thirty-six, with two children, says, "You have to listen to what your children are saying, and not talk at them." Finally, Ken, fifty-one, with one son, said, "I remember when my parents weren't around, if I was doing something I shouldn't have been doing, my neighbors had the right to correct my actions in place of my parents. Today people turn a blind eye for fear of negative ramifications, whether that is angry parents or social services; today people just aren't involved like they used to be."

Some people blame the school system, their kids' friends, society, television, video games, the Internet, or being from a different culture, but they never blame themselves for the poor behavior their children grow up to have. Raising children anywhere has to be a full time job; being a positive influence just doesn't seem to mean as much to people anymore. I could go on and on about this subject, listing the reasons why and what happens when bad

Thursday, October 24, 2019

Pennsylvania Organization Essay

The overall initiative to increase equity and funding in public educational institutions in Pennsylvania brought about the creation of the Good Schools Pennsylvania (GSP) organization. Since its creation, the organization has undergone many changes and lobbied for an environment in which important actors and legislation coincide with the needs of the growing student population. These struggles brought about new trends that both opened avenues for change and encouraged new parameters in which goals and objectives are integrated with mandated standards.

Recognizing the relevant contribution that GSP has made to the citizenry and its target group, this proposal aims to widen the organization's capability to address student needs. The programs prescribed for 2010 under three (3) categories seek to intensify the approach, both examining the possibilities and scenarios that may arise during that time and appropriating the strategies that can deliver the desired outcomes for change. Likewise, by allowing these diverse program alternatives, the proposal seeks to compensate for the loopholes that may arise in the process of planning, implementation and facilitation. At the same time, these proposals take into consideration the needs of all relevant actors who serve as potential members and benefactors in the success of GSP's initiatives. By taking these people into account, GSP can extend its reach and foster commitment toward community building and empowerment. Such ideals clearly allow each party to recognize its individual capabilities in the process of participation.

With all of this, the organizers seek to intensify the efforts of GSP. It is in this light that the organization can remain committed to addressing the increasing challenges and trends of 21st century education.

I. Introduction and Background

The pursuit of an environment of equal opportunities among students, via appropriate delegation and funding, has been the foundation for the creation of Good Schools Pennsylvania. Since its inception in 2001, the organization has continued to recognize the needs of students by making sure students in the district get the sufficient, quality education needed to help them meet the demands of today's society (Good Schools Pennsylvania, 2008). These efforts have paved the way for new alternatives in addressing public education and opened arenas for every actor in the community to take part in collaboration and change. The initiative of Good Schools Pennsylvania can be described in one sentence: it involves seeking new ways for legislators to provide equitable funding in public education, ensuring that students get the quality of education they need, while fostering increased accountability and responsibility among the actors involved (Good Schools Pennsylvania, 2008). With this renewed interest in improving budget appropriation and education, the door has opened to greater cooperation with the state and to the integration of vibrant members who have driven the organization's growth. More specifically, the role of GSP is to encourage organizers to come up with new approaches that can motivate different sectors of society to take a stand toward achieving equal education for all.
Good Schools Pennsylvania notes that "among the constituents who have stood with us are students and retirees, clergy and lay leaders, parents and teachers, school board members and superintendents, and business and civic leaders" (p. 1). With these continuing initiatives to seek out new members committed to changing the level of education in Pennsylvania, GSP has been vibrant in addressing the needs of public education in the community. These objectives coincide with our purpose: to find alternatives that can strengthen and improve GSP's capability to facilitate, and press for, new ways to increase accountability and equity in public education. By elaborating several new strategies and outlining them in three possible scenarios, our group can maximize GSP's potential and implement ideas that expand its scope of practice, intensify commitment, and adapt to the state's current trends.

II. Research

For the 2010 initiative to be fully realized, it is essential that the organizers understand the developments happening within GSP. First, we need to know the facts surrounding education and the current legislation that governs public education funding; this helps us integrate new policies in line with the specific standards and objectives mandated by law. Another significant element is the cost of study in Pennsylvania, which matters because it underpins budget estimates in proposed initiatives to create adequate funding to support students in public education. Lastly, there needs to be an understanding of the programs GSP currently provides the citizenry; by synthesizing these programs, an active environment can be created in which they can be changed, renewed or developed to suit the needs of members, potential members and target contributors.

On the first facet, Pennsylvania has undergone several legislative changes that have improved the way the budget is allocated, driven by the realities that public education has faced in the region. Good Schools Pennsylvania reports that in "2005, nearly 50 percent of Pennsylvania's eleventh graders scored below proficient in math and 35 percent of eleventh grade students scored below proficient in reading on the state's standardized tests" (p. 2). At the same time, there have been disparities in how schools have adhered to the objectives mandated by NCLB of 2001. These statistics also denote increased risk among students of getting pregnant, being imprisoned or engaging in substance abuse (Good Schools Pennsylvania, 2006). In response, the government has adopted several policies and amendments to address these realities. For example, in 2004, Governor Rendell "appropriated funding for the first time to support early childhood education - both through the first ever state funding to expand the General Head Start pre-kindergarten program, and through an Accountability Block Grant Program that allowed school districts to target money to educational practices with a huge track record of helping students to achieve academic standards" (p. 3). This was considered a first step toward realizing the state's role in pursuing public education that is equitable and responsible in nature.
Another significant change came in 2006, when a formal budget was introduced and implemented together with a definition of the costs associated with public education. Good Schools Pennsylvania mentions that the "2006-2007 budget includes a first-time appropriation of $650,000 to fund a comprehensive study of the educational resources and associated costs of providing each student an education that is line with academic standards" (p. 3). This is significant because it supports an equal measurement of how much each student needs in order to achieve education under mandated standards.

On the second element, it is crucial to decipher the numbers associated with public education, because they determine the budget to be allocated per district, depending on the ratio and student population within a specific area. Current research shows considerable progress in the costing-out study of Pennsylvanian students. Good Schools Pennsylvania mentions that "by understanding these costs the state can adjust its funding system to close the gap between high-spending and low-spending school districts" (p. 1). Upon careful consideration and research, the study finds that for a student to achieve the given state standards, an average amount of $11,926 must be provided (Good Schools Pennsylvania, 2007). This formula has been instrumental in determining the appropriation needed to sustain students' further needs.

Lastly, the projects GSP is engaged in comprise different models geared toward the needs of its target audiences. These initiatives center on (1) engaging in legislative awareness and debate, (2) fostering community involvement, and (3) speaking out in as many ways as possible (Good Schools Pennsylvania, 2007). These three facets cover a myriad of initiatives and programs outlining the arenas in which each actor can take part. By integrating these diverse ways of addressing public education and funding, everyone can contribute and help in their own capacity. In light of all these facets, the development of the 2010 programs revolves around intensifying these three ideas. Though these findings have supported the objectives of previous years, further study of these issues is still necessary, because it can reveal the potential challenges that public education may face amid these new developments. For example, the formula for computing the cost per student may change within a year, since it depends on factors such as inflation, increasing expenditures and other elements relevant to its computation. That is why further research on these topics remains an important concern.

III. Opposition Research

GSP has made significant progress since its inception in building consensus around drastic changes at the legislative level as far as public education is concerned. Though this is a fair assessment, setbacks continue to hinder the organization from functioning according to its prescribed goals and objectives. It is therefore essential that the organization recognize its shortcomings and incorporate new methods that increase its ability to adapt to the trends of 21st century education.
One institutional obstacle is the presence of other organizations with the same objectives and purpose. Though at a glance this may seem helpful in elaborating the needs of public education in Pennsylvania, it also exposes the limited functions associated with GSP: being unable to synchronize its goals and actions with those of its counterparts would mean cutting itself off from the potential for further collaboration and cooperation. Examples of institutions with the same agenda as GSP include the Education Law Center and ACCESS, which have overlapping objectives and ideals similar and related to GSP's standpoint.

Another relevant setback is GSP's dependency on contributors and benefactors. Even if its members have shown significant improvement and effort through the years in attracting potential donors, the organization cannot stake its existence primarily on this. It must have a foundation of support that keeps it adaptable even in times of little contribution or monetary support. In such a scenario, it would not depend solely on contributions but could continue to develop new ways of promoting its goals and objectives.

Related to the previous setback, GSP is also susceptible to economic downturns. Since its continued existence revolves around contributions from benefactors, a slow economy can hamper its capability to realize its objectives. The organization therefore has to constantly redouble its efforts and adopt new strategies that can implement new approaches for change. Similarly, during periods of slowdown, the organization refocuses on programs with limited budgets but increasing results, which leads to a limited scope in both application and practice.

IV. Our Plan

After reviewing the relevant history and studies associated with GSP, we now outline the scope and objectives of the plan for 2010. The core elements of this proposal are to (1) effectively get our message out to prospective and current members as well as potential contributors in the community, (2) facilitate active communication among the different parties, and (3) intensify the efforts brought about by the Who-Ville Presentation. These three objectives can be recognized and incorporated by including them within the scope and parameters of each specified initiative. Under these objectives, the proposal outlines detailed ways in which these approaches can be realized. The first objective concerns ways in which advertising can be made to carry the cause and show how it can contribute to the needs of the organization. The second outlines the communicative patterns that can maximize the potential of each project and provides new approaches for realizing the specified goals and objectives. The last part deals with seeking new opportunities that will further recognize the contribution of Who-Ville in addressing the needs of today's public education as well as the tenets promoted by GSP. With these objectives in place, the next part points out the programs that can be applied in the year 2010.
Under this framework, three proposals shall be given, denoted by the trends expected in the target year: (1) a period of net gain, (2) a period of stasis, and (3) a period of net loss. Forming these three initiatives is important to make the program feasible in any scenario the state may face during the prescribed year, to adapt to whatever trends come along in the selected time frame, and to minimize the setbacks that may be incurred within the timeline provided.

Good Schools Pennsylvania (GSP) is a non-profit organization based on membership as well as on the employment of competent personnel who fit the requirements of each specific position. GSP is also regarded as a grassroots campaign affiliated with other non-governmental organizations such as the Public Education Network, the National Council of Churches and the Children's Defense Fund. Even though GSP is not an organization based solely on membership, this factor still plays an important role in its overall operation. GSP has an active membership, having devised many ways in which people all over the state can participate in attaining the organization's objectives. Among the means the organization has implemented to get people to participate are convincing them to take action by writing to their legislators, speaking out through various media such as essay and photo contests, and connecting to their communities in ways that further GSP's cause (Good Schools Pennsylvania). Active membership in this organization involves helping in offices throughout the state: GSP needs active volunteers to help with mailings, database maintenance and other administrative functions. The organization also benefits greatly from members who can help in the next phase of its work, which involves sustaining and expanding state policy reforms that enhance the quality and equality of education among the students of Pennsylvania. Furthermore, active membership entails aiding local stakeholders in holding schools accountable for the equal and effective distribution and utilization of resources (Good Schools Pennsylvania).

Wednesday, October 23, 2019

The life of Leopold and Loeb

Chicago teenagers Nathan Leopold and Richard Loeb attempted to commit the perfect crime. They kidnapped 14-year-old Bobby Franks, bludgeoned him to death in a rented car, and then dumped the boy's body in a distant culvert. Although they thought their plan was foolproof, Leopold and Loeb made a number of mistakes that led police right to them within days. The trial, which featured the famous Chicago attorney Clarence Darrow, made headlines and was referred to as "the trial of the century."

Who Were Leopold and Loeb?

Nathan Leopold was an extremely brilliant young adult. He had an IQ of over 200 at the age of only 19, had already graduated from college, and was in law school. Despite his brilliance, however, Leopold was very socially awkward and spent a lot of time by himself. Richard Loeb was also very intelligent, but not of the same calibre as Leopold. Loeb, who had been pushed and guided by a strict governess, had also been sent to college at a young age. Once there, however, Loeb did not excel; instead, he gambled and drank. Unlike Leopold, Loeb was considered very attractive and had impeccable social skills. It was at college that Leopold and Loeb became close friends. Their relationship was both stormy and intimate: Leopold was obsessed with the attractive Loeb, while Loeb liked having a loyal companion on his risky adventures. The two teenagers, who had become both friends and lovers, soon began committing small acts of theft, vandalism and arson. Eventually, the two decided to plan and commit the "perfect crime."