The Status and Future of Evaluation in Turkish Educational Decision Making: An Introduction

Evaluation Series – I



In 2012, the Republic of Turkey’s Ministry of National Education (MoNE) launched the “School Milk Project” in cooperation with the Ministry of Health and the Ministry of Food, Agriculture, and Livestock. The project aimed to support students’ engagement and learning in school by improving their nutritional habits. A few months after the project started, hundreds of students were admitted to hospitals with food poisoning from spoiled milk. Major newspapers and network news in Turkey covered these cases, and the incidents were considered a public disgrace, all the more humbling because Turkey was considered to be on the road to European Union (EU) membership. The incident revived a longstanding debate about the design and implementation of government programs and policies in Turkey. Lurking behind this debate were three common questions: Did the government do the right thing in designing this project? Did it implement the project correctly? What could it have done better?

These questions are linked to a broader debate about the role and utility of evaluation (or the lack thereof) in decision making by Turkish educational officials. Evaluation, simply put, is the assessment of the process and outcome(s) of a public policy, program, and/or project against a set of criteria and standards. In a broader sense, evaluation helps decision makers in the public realm answer the following questions: What works, and what does not, for whom and under what circumstances? Scholars suggest that evaluations play a strategic role in the decision-making process by generating a continuous flow of solid information about the merit, shortcomings, and outcomes of public programs and policies, contributing to their effectiveness and betterment (Weiss, 1998; Patton, 2012; Mark & Henry, 2004; Fitzpatrick, Sanders, & Worthen, 2004). In spite of evaluation’s apparent resonance with improved governmental decisions, Turkey has lagged behind countries with comparable levels of development (e.g., Brazil, Korea) in establishing evaluative praxis as an integral part of educational decision-making. The available evidence suggests that formal policy evaluations are rare in Turkey (Education Reform Initiative, 2009; Russon & Russon, 2000). As a country that aims to cope with competitive pressure within the EU and the global knowledge economy, Turkey may as a result frequently lose the opportunity to use evaluation to improve educational policies and programs. To this end, Turkish scholars have increasingly called for homegrown evaluations of national programs and policies grounded in Turkey’s social, economic, and cultural context (Aydagul, 2008).

This article is the first in a series of research briefs and papers, to be published via ResearchTurkey, that will illuminate the current status of evaluation as a field and practice in Turkish governmental life and civil society. The purpose of the present article is to shed light on the status and potential future of evaluation specifically in the educational decision-making context. The article provides a preliminary, descriptive context for the topic by borrowing from the evaluation literature developed in the global North. Major definitions, terms, and concepts are introduced throughout to guide the reader in understanding the field and practice of evaluation and its application to decision-making domains in Turkey. Although the article focuses on the educational context, there is no obvious reason why its conclusions could not illuminate the utility of evaluation in other decision domains (e.g., health, transportation, employment, development).

The Field and Practice of Evaluation: Theoretical Background

Inferences about the role and utility of evaluation in the Turkish educational context will be much stronger when backed by some deeper thinking about the definition, purposes, and uses of evaluation as a field of practice. Thus, this section underlines the significance of evaluation in decision-making contexts, or, more broadly, seeks to answer why we should bother with evaluation at all.

Western evaluation scholars and practitioners alike have provided many definitions of, and discussed several purposes for, evaluation (e.g., King & Stevahn, 2012; Patton, 2012; Stufflebeam & Shinkfield, 2007; Rossi, Lipsey, & Freeman, 2004).  The most widely used definition of evaluation is “the systematic process of determining the merit, worth, or value of something” (Scriven, 1991, p. 139).  Many practitioners in low- and middle-income countries favor the definition of program/policy evaluation offered by Carol Weiss, a distinguished U.S. policy and evaluation scholar who passed away recently (see UNDP, 2011): “the systematic assessment of the operation and/or outcomes of a program or policy, compared to a set of explicit or implicit standards as a means of contributing to the improvement of the program or policy” (Weiss, 1998, p. 4).  Here, it is vital to distinguish program evaluation from other social science research.  Weiss (1998) provided a comprehensive comparison of these two inquiry traditions.  For our purposes—evaluation for educational decision-making—her insight about utility and clients is the most immediately relevant.  Weiss (1998) posits that evaluations are conducted with use in mind for a specific client (i.e., policy makers, managers, staff, etc.) who has decisions to make.  In sum, evaluations—unlike other social science research—are intended to be used by policy or program communities who need information on which to base their decisions.

Notwithstanding these several definitions, the existing literature demonstrates growing interest in using evaluation as a decision-making instrument for designing, implementing, and improving organizational goals at the local, national, and international levels (Fitzpatrick, Sanders, & Worthen, 2004).  Western researchers have long argued that evaluations are influential forces for improving public services, programs, and policies (Segone, 2008; Mark, Henry, & Julnes, 2000; Scriven, 1991).  They agree that evaluations contribute to institutional learning and the effectiveness of decisions: by building a knowledge base, evaluation processes and findings can change thinking about programs’ and policies’ design, implementation, logic, and desired outcomes, and ultimately shift action (Weiss, 1998; Preskill, 2008; Patton, 2012).  Fitzpatrick et al. (2004) summarized the significance of evaluations in decision making as follows:

Evaluation serves to identify strengths and weaknesses, highlight the good, and expose the faulty, but it cannot singlehandedly correct problems, for that is the role of management and other stakeholders, using evaluation findings as one tool that will help them in that process. (p. 27)

Patton (2012), an evaluation pioneer in both the global Southern and Northern contexts, summarizes the purposes of program evaluation in six categories (see Table 1) and argues that evaluation’s purpose and use are defined by the intended users’ information needs and priorities.

Table 1. Primary purposes of evaluation

Purpose of Evaluation | Focus | Primary Users of Evaluation Results
Summative, Judgment | To determine the overall value of the program or policy | Funders and policy-makers
Formative improvement, Learning | To improve the program or policy | Program administrators and staff
Accountability | To demonstrate the efficient use of resources | Executive, legislative authority
Monitoring | To provide data for program management | Program managers
Developmental | To adapt the program or policy in complex environments | Social innovators
Knowledge Generation | To identify patterns of effectiveness | Program designers

Source: Adapted from Patton (2012, pp. 129-132)

In sum, this utilitarian view assumes that evaluations can provide decision makers with useful information that helps guide program design, implementation, and improvement, which is why evaluations can be appealing to managers, administrators, and even policymakers (Fitzpatrick, Sanders, & Worthen, 2004).

Uses of Evaluation for Decision-Making

The agreed-upon, prevalent uses of evaluation noted in the literature are three-tiered (Johnson et al., 2009). First, instrumental use refers to using evaluation findings for immediate decision making to modify, expand, or terminate the evaluation object (product, program, policy, or personnel) (Mark & Henry, 2004; Johnson et al., 2009).  This use assumes a rational decision-making process in which policymakers have access to, and a desire for, scientific evidence to inform their decisions (Almeida & Bascolo, 2006).  Second, conceptual use refers to the indirect use of evaluation findings that illuminate policy problems and solutions in a new way and change our understanding.  Carol Weiss (1998) calls this enlightenment, whereby evaluation findings build knowledge and become part of the policy discourse over time. This view assumes that evaluation information affects policy decisions in more subtle and indirect ways over time, becoming part of the discursive dialogue and hence the “new common wisdom” in the policy arena (Weiss, Graham, & Birkeland, 2005, p. 13). Third, symbolic use refers to using evaluation findings to justify existing practices, persuade others of certain positions, or delay action in political arenas (Almeida & Bascolo, 2006).

Of these three major uses, many scholars and practitioners have questioned the instrumental use of evaluation findings and processes for decision making, claiming that decision makers have many sources of evidence available to them and that it is therefore unrealistic to expect evaluations to have a direct influence on decisions (Weiss, 1998; Chelimsky, 2006).  These scholars mainly question the direct use of scientific, sociological knowledge in the political arena, arguing that knowledge utilization does not take the form of immediate, direct application, but rather of a longer, indirect transformation through various mechanisms (Balthasar & Rieder, 2000).

The debate about the instrumental use of evaluation information is linked to a larger discussion of evidence-based decision-making.  Changes in public administration culture in Western democracies over the last century have highlighted effectiveness and efficiency as common denominators in providing public services (Clarke, 2008).  This shift triggered the wide adoption of results-based management and accountability mechanisms to demonstrate value for money and ultimately stipulated rigorous, scientific evaluations to produce credible, empirical evidence for decision-making (Donaldson, 2008).  Proponents of evidence-based policy and practice assumed that scientific evaluation evidence – presumably obtained from randomized controlled trials or quasi-experimental designs – would have a direct impact on the design and implementation of programs and policies, contributing to their betterment (Shadish, Cook, & Campbell, 2002).

Nevertheless, scholars have often highlighted the limitations of the direct use of evaluation evidence for decision making due to the political context.  For example, Weiss (1998) submits that evaluations may not have a direct effect on policy decisions for various reasons.  First and foremost, she posits that evaluation evidence competes against many other sources of information available to policy makers; evaluation is not the primary source of evidence in the policy arena (Weiss & Bucuvalas, 1980; Chelimsky, 2006).  The following poem summarizes the realist approach of Weiss and her colleagues (Weiss, Graham, & Birkeland, 2005) to the use of evaluations for decision-making:

Evaluation is fallible

Evaluation is but one source of evidence

Evidence is but one input into policy

Policy is but one influence on practice

Practice is but one influence on outcomes. (pp. 12-13)

Second and more importantly, Weiss argues that there is no single decision maker who easily welcomes evaluative evidence without reservations and makes decisions without any conflict. To Weiss (1987), the policy decision-making context is indeed unstable, involving many decision makers who have different opinions, conflicting interests, and opposing needs that make it harder for evaluative evidence to directly inform decisions.  Cook (1997) also explicated this point years ago:

The politician’s prime goal is to be reelected rather than to respect technical evidence; that personal and party political ideology often entail that evidence is used in markedly selective ways; and that politicians experience a greater need to be a part of budget allocation rather than of program review. (pp. 40-41)

These studies point to a prevalent argument in the evaluation literature: evaluations take place in political contexts (Weiss, 1998; Mark, Henry, & Julnes, 2000; Greene, 2006; Datta, 2011).  Weiss (1987) submits that the programs evaluations examine are themselves the products of politics, and that evaluations make political statements about the value of these programs and help determine their fate.

Although Western scholars reached a consensus long ago that regular evaluative activities improve public policies – and many developing countries (e.g., South Africa, Nigeria, Uganda) have already caught up with this notion – the political nature of knowledge creation and use in the governmental arena hinders evaluation’s smooth evolution, not only in the Turkish context but elsewhere. Carol Weiss’ realist approach to the uses of evaluation in decision domains is perhaps better aligned with a Turkish political landscape splintered by many fault lines. Specifically, Turkey’s educational context – often a contested ground of players with differing opinions, personal whims, and interests – may better accommodate the enlightenment use of evaluations as a means of gradually improving educational policies. While efforts to bridge the gap between scientific information – largely obtained from social science research and evaluations – and public policy constitute a multi-million-dollar enterprise worldwide, it would be naïve to expect Turkish educational officials to regularly conduct evaluations and use their results directly in their decisions while these actors are bombarded with multiple sources of information influencing their policy decisions, a major one being the dominant political agenda.

Evaluation in Turkish Educational Context

Nevertheless, geographically located in the Middle East yet institutionally closer to the global North, Turkey provides a unique case for exploring the value of evaluation in decision-making contexts. While the global North and the global South have inherited distinct features of evaluation in line with their distinct cultures of governmental decision-making, Turkey may be the testing ground for illuminating future directions of the field of evaluation in middle-income country contexts.

Given evaluation’s significance, a concerted effort by many global Northern institutions and evaluators to build evaluation systems and practice in developing countries has contributed to the expansion of the field of evaluation beyond the global North. Numerous sessions, workshops, and conferences have been organized to build evaluation capacity in developing country governments, and many national evaluation organizations and associations have been established (Mertens & Russon, 2000).  EvalPartners, an international partnership initiative to strengthen civil society’s evaluation capacity to influence public policy with evidence, mapped existing Voluntary Organizations for Professional Evaluation (VOPEs) around the world and found information on a total of 158 VOPEs, 135 of which operate at the national level and 23 at the regional or international level (e-mail communication, Segone, January 2013).  Some low- and middle-income countries (LMICs) have established government-wide evaluation systems to improve their public programs and policies (e.g., Brazil, Korea, Mexico) (see UNDP, 2011).  Most recently, interest has emerged in developing evaluation as a profession in developing countries beyond development assistance (Carden, 2010).  As a result, the field of evaluation in the twenty-first century is characterized by its international and cross-cultural expansion (Patton, 2010).

In the most thorough study to date of evaluation cultures, spanning twenty-one countries, Furubo, Rist, and Sandahl (2002) recognized that evaluation approaches and models were disseminated by the larger aid organizations and added, “Latecomers have adopted these ideas, perhaps to show that they also subscribe to the modern and rational public management school of thought.  But the conclusion here is that adherence to these ideas in most cases has been mainly lip service” (p. 17).  There is a growing body of research investigating ways to increase country ownership of evaluation processes and findings, but relatively little attention has been paid to an equally important topic: how can Northern-based evaluation capacity building become a part of national decision making beyond development assistance? Capacity building alone does not guarantee that evaluation will become a routine part of daily decision-making processes (Sanders, 2002).  While Bamberger (1991) calls donor imposition in the field of evaluation “cultural imperialism” (p. 337), Picciotto (2007) describes it as “business-as-usual” whereby resources to enhance evaluation capacity at the country level remain embedded in donor agencies; thus, donors’ imposition of one-sided accountability continues (p. 512).

This brief account of evaluation history in low- and middle-income countries is informative for the Turkish case as well. Historically, international donors’ evaluations of educational development programs have largely informed and shaped the evolution of evaluation systems (or the lack thereof) in Turkey, whereby the country mostly measured the performance of its educational programs against donors’ needs and criteria (e.g., USAID, 2001; OECD, 2005; World Bank, 2011; UNDP, 2011).  This poses a challenge for the evolution of evaluation systems and practice in Turkey because, as some scholars argue, the dominance of Northern institutions’ values and priorities can impede learning from evaluation for in-country decision-making (Hay, 2010; Conlin & Stirrat, 2008). Indeed, evaluations of donor-led educational programs may have led Turkey toward program evaluation that checks and monitors (audit review) rather than evaluation that probes and improves programs and policies (formative inquiry) (see Wadsworth, 2001; Gasper, 2000).

Nevertheless, the nature of Turkey’s involvement with Northern aid institutions is changing as Turkey’s economic and political power in its region grows. Turkey has been a member of the Organisation for Economic Co-operation and Development (OECD) since 1961; associated with the European Economic Community (EEC) since 1963; in European Union accession negotiations since 2005; ranked among the 20 largest economies in the world in 2012 (CIA Factbook, 2012); and placed 38th in the 2012 World Competitiveness Scoreboard. The country plays a pre-eminent role in its politically volatile region (Ozturk, 2002).  In addition, once solely an aid recipient, Turkey is now an emerging donor: “In 2010, Turkish net ODA (Official Development Assistance) reached USD 967 million, an increase of 24.8% over 2009 in real terms” (Atwood, 2012, para. 5).  Thus, calls for improving national educational policies and programs to compete in the global knowledge economy have rightfully increased.

Indeed, the historical development of evaluation culture in Korea and Brazil suggests the potential for utilizing program evaluation as a decision-making tool in Turkey to provide useful information about programs’ effectiveness in improving education outcomes.  With the launch of “Government for People” in 1998 in response to the severe economic crisis in Asia, Korea developed government-wide evaluation systems to create and implement national reform packages based on national needs and priorities (Lee, 2002).  Despite an insufficient number of evaluators, Korea today conducts several major evaluations, ranging from evaluations of ministries’ major programs and policies to meta-evaluations of each institution’s policy making and evaluation capacity (Furubo, Rist, & Sandahl, 2002).  Similarly, the evaluation field in Brazil has grown dramatically in recent years, with 453 post-graduate evaluation courses and a 90% increase in publicity on government evaluations, fostering better programming and budgeting (UNDP, 2011).  Although the association between increased evaluation activity and better development outcomes is not empirically documented, anecdotal evidence suggests a positive relationship (Segone, 2008, 2009).  Turkey’s Ninth National Development Plan (2007–2013) also praised the social and economic developments taking place in Korea and Brazil and argued that their influence on international decisions will increase considerably in the coming decades (Ministry of Development, 2006).

In contrast to Korea and Brazil, little is known about the implementation and impact of many educational policies, programs, and projects in Turkey, although the continuous improvement of educational practices is of the utmost importance to the country’s long-term aspirations (Education Reform Initiative, 2009; Erguder, 2013).  The Ninth Development Plan (2007–2013) envisions Turkey as an information society that will assume a more competitive, global role and complete her coherence with the European Union (Ministry of Development, 2006).  The Plan underlined quality education services as a prerequisite to realizing the country’s vision; hence, the share of public investment going to education rose to 21.9% in 2012 (compared to 14% in the base year of 2006).  Although the Plan stipulates monitoring and evaluation (M&E) of all government services, it does not clarify how M&E information will be obtained and used, limiting our understanding of the role of evaluation in decision-making.  In the absence of evaluative information, MoNE’s internal research studies constitute one source of information for educational decision-making. Turkish scholars, however, consider these research activities too unsystematic to have an impact on decision-making (Aydagul, 2008).

Indeed, there is a recent, emergent interest within civil society and MoNE in strengthening the connection between evaluation and decision making, although little is known about the overall value of evaluation.  First and foremost, the salient lack of knowledge about the impact and shortcomings of educational policies and programs motivated the Istanbul Policy Center at Sabanci University, one of the leading research universities in Turkey, to launch the Education Reform Initiative (hereafter ERI) in 2003.  ERI aims to improve educational decision making and cultivate a new policy-making culture in the country through research, advocacy, and training.  This initiative is based on the premise that “it is of critical importance that decisions are based on data and evaluation, and on a transparent and participatory interaction among the state, civil society organizations and citizens” (Education Reform Initiative, 2009, p. 6).  Thus, the initiative aims to facilitate a participatory, democratic public dialogue about educational policies and programs by bringing together representatives from civil society organizations, academia, schools, and public and private organizations.  It stipulates the importance of informed, evidence-based decision-making, best practices, and creative and transparent solutions for alleviating pressing educational problems, and it hopes to influence decision makers’ priorities and practices in order to help Turkey achieve its long-term global aspirations by providing quality education for all (Education Reform Initiative, 2010, 2011).  Yet the impact of this initiative on governmental decision-making is unknown to date.

Another leading actor in bridging the gap between evaluative information and education policy is SETA (Foundation for Political, Economic, and Social Research), a nonprofit, nonpartisan think-tank located in Ankara, Turkey. Unlike ERI, SETA covers a broad range of public policy issues from international security to energy, one of which is education. Similar to ERI, the Foundation aims to produce accurate knowledge through research to better inform policy makers and the public at large. SETA seeks to adhere both to international standards of equity, rule of law, and justice and to the national context and cultural contours underlying the Turkish political arena. The Foundation’s comprehensive report on the national education system in Turkey (Gur & Celik, 2009) outlines the most pressing educational issues in the country and offers potential policy solutions. As with ERI, despite the widespread dissemination of its research findings, SETA’s influence on educational decisions is yet to be tested.

Parallel to ERI and SETA, a policy window has opened within the Turkish government to utilize program evaluation as a decision-making tool in educational programming.  In light of EU laws and regulations, and in response to increasing calls for effective public administration, the Turkish Grand National Assembly enacted the Public Financial Management and Control Law (PFMC) No. 5018 in 2003, requiring every public institution—ministries and public universities—to prepare and implement a strategic plan to improve administrative performance (Ministry of Finance, 2006).  This increased emphasis on accountability in government attempts to link performance measurement to budgeting decisions.  As a result, MoNE created a strategic plan as a tool to design, implement, and improve its institutional goals, principles, policies, and programs.  The Ministry’s first and only strategic plan introduced monitoring and evaluation as essential to improving organizational learning and strengthening accountability (Turk, Yalcin, & Unsal, 2006).  Institutional objective No. 17.4 of this strategic plan clearly indicates that the Ministry needs to build institution-wide monitoring and evaluation systems and practices to improve strategic planning and decision-making (MoNE Activity Report, 2012, p. 20).  Since strategic planning was a new construct for Turkish public administration, Turk, Yalcin, and Unsal (2006) surveyed 134 senior officers at the Ministry about their perceptions of its feasibility. Almost half of the participants indicated that they did not have enough knowledge about the strategic planning process; even so, they believed that strategic planning would improve institutional learning and management, contributing to the betterment of educational policies and programs.  Despite this common belief in the usefulness and value of strategic planning, the Ministry’s Activity Report (2012) revealed that the tools and mechanisms for assessing the achievement of educational targets did not go beyond performance-based budgeting.

Following the spirit of the PFMC, MoNE has undergone a serious institutional restructuring through Statutory Decree No. 652, issued during the era of former Minister Omer Dincer, who is also known as the mastermind of this dramatic change. The decree envisioned a less bureaucratic Ministry with a smoother, faster, and more effective policy-making process. The resulting structure houses only 17 general directorates – as opposed to 32 previously – whose roles and responsibilities have been clarified to reduce overlap and duplication in the policy process. Within this new structure, MoNE now has Monitoring and Evaluation (M&E) units under almost every directorate, intended to assess the effectiveness and efficiency of the Ministry’s educational activities. Although the long-term impact of the decree on the fate of educational policies is perceived to be significant (Dincer, personal communication, October 3, 2013), it is not yet known whether these M&E units will help or hinder the evolution of evaluation practice in Turkish governmental life.

In addition to the national context, Turkey’s bid for European Union (EU) membership has been a significant external force informing and shaping educational policies and programs in Turkey since the Helsinki Summit in 1999.  The EU’s educational policies aim at strengthening mutual understanding and cultural ties between the peoples of Europe; cultivating an educated, competitive European citizenry; and encouraging technological innovation and development (Barkcin, 2002).  The Lisbon Strategy (2000) underscores the EU’s overarching goal of becoming the most competitive player in the global knowledge economy and invites all members and candidates to align their educational programs and policies accordingly.  To this end, MoNE received a €3.7 million capacity building grant in 2006 to embrace new modalities of decision making so that Turkey’s educational system would better harmonize with EU policies and regulations (European Commission, 2006).  The EU’s Capacity Building Support for the Ministry of National Education (2008–2010) created another opportunity for the Ministry to design and implement better policies and programs based on evaluative information.  The aim of this pre-accession assistance, totaling $4.9 million, was to improve the efficiency and effectiveness of the Turkish education system by developing MoNE’s planning, implementation, and monitoring capacity so that educational policies and programs would be harmonized with EU priorities.  One of the grant’s central objectives was to strengthen human resources capacity in the educational system, through a series of training courses and workshops on topics including data collection, analysis, and protection; problem solving and decision making; performance management; monitoring and evaluation; and the use and interpretation of statistics in education (European Commission, 2006).  Still, information about whether and how the Ministry has taken concrete action to enhance decision-making based on this pre-accession assistance is limited (Education Reform Initiative, 2011).

Additionally, the discourse in government documents of the last decade related to education (e.g., Government Action Plans, 2008, 2011; National Education Councils, 2006, 2010; National Development Plans, 2000, 2006) indicates that evaluation has gradually infused the decision-making context from various sources, yet the perceived utility and role of evaluation in educational programming remain undocumented.

First of all, Government Action Plans provide the primary guidance for the design and implementation of all public programs and policies in Turkey.  The need for a well-educated citizenry is now much more pronounced in policy discussions due to rapid economic growth, as reflected in the current government’s agenda.  The most recent plans, namely the 60th and 61st Government Action Plans (2008, 2011), emphasized the quality of education as a prerequisite to realizing national goals, including full EU membership. To actualize this commitment, both plans assigned the largest number of activities and allocated the highest public spending to MoNE.  The Ministry’s allocation from the national budget in 2011 was the highest in its history, totaling almost 35 billion TL (approximately 20 billion US$) and constituting 3.8% of GDP (Ministry of National Education, 2011).

In light of the Government Action Plans, the Ministry of Development (previously the State Planning Organization) prepares national development plans that operationalize the government's overall vision for public organizations, providing a foundation for their programs and policies over a specified period. The plans are prepared with the participation of many government officials, academics, and experts from the public and private sectors; approved by the Grand National Assembly; and supervised by the Ministry of Development. They cover almost all sectors and industries (e.g., economy, transportation, health, education, culture, energy, the welfare system, and agriculture), providing "a long-term perspective and unity in objectives not only for the public sector, but also for the society" (Ninth Development Plan, 2006, p. 12). Both the 8th and the 9th Development Plans attempted to secure and justify Turkey's place in a rapidly changing, globalized world in which the importance of knowledge, competition, efficiency, and effectiveness is underlined. For example, the Eighth Development Plan (2001-2005) stated:

Countries adapting themselves to the faster change in the world, equipping their individuals with the capabilities required by this new environment, having access to, producing and using information shall have an impact and will be successful in the 21st century. (p. 244)

Both plans consider quality education a priority area and a prerequisite for enhancing Turkey's international competitiveness. To this end, the Ninth Development Plan envisions increasing the share of public investment in education from 14% in 2006 to 21.9% in 2013 (Ministry of Development, 2006). The documents commonly identify access to and quality of education, vocational and technical schools, curriculum issues, life-long learning, and administrative/structural issues as both target areas and the top challenges in Turkey. Consequently, the Ninth Plan sets the following educational targets: schooling rates will increase to 50% in pre-school, 100% in primary and secondary school, and 33% in higher education.

To achieve these goals, both plans make some references to program evaluation activities and their cognate terms (i.e., performance measurement, quality assurance, monitoring). The Eighth Development Plan demands that "an effective monitoring and evaluation system at project level as well as national level shall be established for a prompt identification of changing conditions and bottlenecks incurred" in order to increase efficiency in public investments (p. 228). The Ninth Development Plan has an explicit section on monitoring and evaluation activities, presumably for the first time in a Turkish governmental document. The Implementation, Monitoring, Evaluation and Coordination section of the Plan (pp. 113-120) envisages informing the public about progress in development and aims to harmonize Turkey's evaluation activities with EU norms. Yet the section does not specify how these evaluation activities will be performed, on what criteria they will be based, or how the results will be used to improve educational practices.

Moreover, another influential force on educational programs and policies is the National Education Council. According to MoNE's by-laws, the National Education Council is the Ministry's highest advisory body, informing and shaping national educational policies and programs in Turkey. The Council embraces a national participatory process whereby elected politicians, appointed bureaucrats, academics, civil society organizations, school principals, and teachers (sometimes students) gather to discuss the past, present, and future of education in the country; identify areas of consideration in moving forward; and propose changes and action steps. The Council does not have legislative power; its decisions are enforced only if and when the Board of Education and Discipline under the Ministry checks their appropriateness and applicability against educational laws and regulations and then presents them to the Education Minister for approval.

The Council's lack of legislative power has ignited debate about inadequate implementation of decisions taken through civic participation (Deniz, 2001). Aydin (1996) surveyed participating members of the 15th National Education Council (1996) regarding their opinions about the impact of Council decisions on education policies and programs. A majority of the survey participants indicated that the influence of Council decisions on education policies is limited and noted that the Council's place in the Ministry's hierarchy should be strengthened for its decisions to carry greater weight. Yet the Council has led to significant changes in the Turkish educational system since 1939. For example, the duration of compulsory basic education was raised to 8 years by Law No. 4306, which entered into force in 1997 (Deniz, 2001).

The Council decisions are important venues for exploring the discourse around evaluation activities. A few recommendations during the 16th National Education Council (1999) clearly stated that evaluation systems need to be established and used to improve the quality and quantity of vocational and technical training based on changing contexts and needs. Seven years later, the 17th National Education Council convened in 2006 with the participation of 850 elected and appointed members. Unlike the 16th Council, the recommendations made during the 17th Council covered a variety of issues ranging from special education to testing and examination systems. Several touched upon the importance of evidence-based practices for improving educational quality and quantity, and some specifically addressed the monitoring and assessment of educational practices. In one case, the 17th Council recommended establishing accreditation systems to ensure quality in educational institutions. Compared to the 16th and 17th Councils, references to evidence-based practices, performance monitoring, and evaluation were much more limited during the 18th Council (2010), where some recommendations made an explicit case for bringing national context and values to the foreground in improving the national education system in a globalized world.

In conclusion, the current policy discourse around international competitiveness and the global knowledge economy highlights the significance of effective policies and programs in cultivating an educated and competitive citizenry in Turkey. The Ministry of National Education, the Ministry of Development, and the National Education Councils all emphasize addressing today's educational challenges with improved planning, programming, and monitoring. Although the need for improved decision making implicitly points to the need for evaluation systems in the country, the perceived value of evaluation as a decision-making tool from the primary Turkish stakeholders' perspectives has not yet been explored.

Despite these recent developments in the Turkish educational decision arena, few studies have explored the value of program evaluation as a decision-making tool in Turkey. Although there is considerable anecdotal evidence about how some low- and middle-income countries use program evaluation in governmental decisions, systematic studies of this phenomenon in Turkey are undocumented. Indeed, during the last two decades the evaluation community has witnessed dramatic growth of the field in contexts outside the global North (Chelimsky & Shadish, 1997). Many Western evaluation scholars had projected the global expansion of the practice, arguing that evaluations are essential in any society (Patton, 2010; Fitzpatrick, Sanders, & Worthen, 2004). Some scholars and practitioners have probed the meaning and boundaries of evaluation systems and practice in low- and middle-income countries (LMICs) (Carden & Alkin, 2011; Furubo, Rist, & Sandahl, 2002; Russon & Russon, 2000). Yet evaluation remains a fairly new construct in many LMICs, including Turkey, which calls for further investigation into the perceived utility of program evaluation within the developing-country decision-making context from country stakeholders' perspectives.

Conclusion: Significance of Evaluation for Turkish Educational Decision Making

This article argues that evaluations can significantly contribute to educational policy making in Turkey. Despite the many education reforms Turkey has enacted over the past few decades, educational policies and programs have largely fallen short of remedying educational problems. Certainly there are numerous reasons why education reforms and programs are not working, but one challenge that prevents Turkey from effectively addressing educational problems is the gap in knowledge about which policies work best to improve educational programs, for whom, and under what circumstances (see Yasar, 1998; Court & Young, 2004). This is a significant problem because decisions based on inadequate information about policies' merit may lead to poor use of social resources (Weiss, 1998). Because the economic return on educational investment is large (i.e., welfare savings, reduction in poverty rates, increased work-life earnings, less crime) (Yeh, 2009; U.S. Census Bureau, 2002), it is worthwhile to provide decision makers with systematic information as to whether various educational policies are worth the money they cost, whether they should be continued, and how they can be improved to meet societal needs (Bamberger, Rugh, & Mabry, 2012).

In addition, this article is a first modest step toward expanding the knowledge base surrounding efforts to build evaluation systems and practice in low- and middle-income countries to improve national decision making. Thus far, only a few studies have addressed the value of evaluation systems and practice in decision-making contexts from the developing-country perspective. As Hay (2010) notes: "Evaluation research, innovation, and leadership should not remain exclusive to northern based institutions. We need to examine how evaluation research is developing and the role southern evaluators and organizations are playing in this process" (p. 226). Without a clear understanding of how a developing country views the value of program evaluation as a decision-making tool to improve its educational practices, the field lacks direction on how to contribute to social betterment worldwide. This paper argues that Turkey, located between the global North and the global South, offers a compelling case for investigating this phenomenon.

More broadly, Turkey appears to follow a pattern of evolving national evaluation systems and practice similar to that of other middle-income countries, although it has significantly lagged behind in starting the process. After the introduction of evaluation as a decision-making tool by Northern-based aid organizations (the OECD, for example, invited Turkey to prepare a Turkish evaluation glossary; see Kocaman & Guven, 2008), the country appears to be searching for its own niche in the field of evaluation, which calls for empirical study to refine theoretical and practical alternatives (Cakici, 2014, in preparation). The recent ministerial, governmental, and intergovernmental initiatives discussed earlier suggest that evaluation's time has come to improve educational policies in Turkey. Yet there is a gap in our knowledge as to who will conduct national evaluations of policies, how, and to what end. These questions will be analyzed further in the following papers in ResearchTurkey's Evaluation Series.

Hanife Çakıcı, Project Assistant, Centre for Policy and Research on Turkey (ResearchTurkey)

Please cite this publication as follows:

Çakıcı, Hanife (January 2014), "The Status and Future of Evaluation in Turkish Educational Decision Making: An Introduction", Research Turkey, Vol. III, Issue 1, pp. 6-24, Centre for Policy and Research on Turkey (ResearchTurkey), London.

References


Atwood, J.B. (2012). Development co-operation report 2012. Retrieved November, 2012, from

Aydagul, B. (2008). No shared vision for achieving Education for All: Turkey at risk. Prospects, 38(3), 401-407.

Aydin, A. (1996). Milli Egitim Politikaları ve Suralar. Ankara: Ministry of National Education.

Bamberger, M. (1991). The politics of evaluation in developing countries. Evaluation and Program Planning, 14(4), 325-339.

Bamberger, M., Rugh, J., & Mabry, L. (2012). RealWorld evaluation: Working under budget, time, data and political constraints. Thousand Oaks, CA: Sage Publications, Inc.

Barkcin, F. (2002). Avrupa Birligine giris sureci icinde VII. ve VIII. bes yillik kalkinma planlarinda egitim politikalari. Istanbul: Alman Liseliler Kultur ve Egitim Vakfi.

Carden, F. (2010). Introduction to the forum on evaluation field building in South Asia. American Journal of Evaluation, 31(2), 219-221.

Carden, F., & Alkin, M. C. (2011). Evaluation roots: An international perspective. Journal of MultiDisciplinary Evaluation, 8(17), 102-118.

Chelimsky, E., & Shadish, W. R. (Eds.). (1997). Evaluation for the 21st century: A handbook. Thousand Oaks, CA: Sage Publications, Inc.

CIA (2012). The world factbook: Turkey. Retrieved February 2013, from,

Clarke, A. (2008). Evidence-based evaluation in different professional domains: Similarities, differences and challenges. In I. Shaw, J. C. Greene, & M. M. Mark (Eds.), Handbook of evaluation: Policies, programs and practices (pp. 559-581). Sage Publications.

Conlin, S., & Stirrat, R. (2008). Current challenges in development evaluation. Evaluation, 14, 193-208.

Cook, T. D. (1997). Lessons learned in evaluation over the past 25 years. In E. Chelimsky, & W. R. Shadish (Eds.), Evaluation for the 21st century: A handbook (pp. 30-52). Thousand Oaks, CA: Sage Publications, Inc.

Court, J. & Young, J. (2004). Research impact on policy: What can researchers do? Newsletter of the Economic Research Forum for the Arab Countries, Iran & Turkey, 11(1), 18-25.

Deniz, M. (2001). Milli eğitim şuralarının tarihçesi ve eğitim politikalarına etkileri. (Unpublished thesis). Anadolu University, Eskisehir.

Donaldson, S. I. (2008). In search of the blueprint for an evidence-based global society. In S. I. Donaldson, C. A. Christie, &  M. M. Mark (Eds), What counts as credible evidence in applied research and evaluation practice? (pp. 2-18). Thousand Oaks, California: Sage.

Education Reform Initiative (ERI). (2009). Education monitoring report 2009. Retrieved February 2013, from,

Education Reform Initiative (ERI). (2011). Education monitoring report 2010. Retrieved February 2013, from

Erguder, U. (2013). The indispensable role of education for the centennial goals of the Turkish Republic. Turkish Policy Quarterly, 12 (2), 49-63.

European Commission. (2006). Capacity building support for the Ministry of National Education (Project number TR 06 03 11). Retrieved January 2013, from,

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation. Alternative approaches and practical guidelines (3rd Ed.). New York: Pearson Education, Inc.

Furubo, J.E., Rist, R., & Sandahl, R. (Eds.). (2002). International atlas of evaluation. New Brunswick, London: Transaction Publishers.

Gasper, D. (2000). Evaluating the logical framework approach: towards learning-oriented development evaluation. Public Administration and Development, 20, 17–28.

Government Action Plan, Republic of Turkey. (2008). 60th government action plan. Retrieved January 2013, from,

Government Action Plan, Republic of Turkey. (2011). 61st government action plan. Retrieved January 2013, from,

Greene, J. C. (2006). Evaluation, democracy, and social change. In I. Shaw, J. C.  Greene, & M. M. Mark (Eds.), The Sage handbook of evaluation.  (pp. 118-140). Thousand Oaks, CA: Sage Publications Inc.

Gür, B. S., & Çelik, Z. (2009). Turkiye'de milli egitim sistemi: Yapisal sorunlar ve oneriler [National educational system in Turkey: Structural problems and suggestions]. Ankara: Foundation for Political, Economic and Social Research.

Hay, K. (2010). Evaluation field building in South Asia: Reflections, anecdotes, and questions. American Journal of Evaluation, 31(2), 222-231.

King, J. A., & Stevahn, L. (2012). Interactive evaluation practice: Mastering the interpersonal dynamics of Program Evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Kocaman, O., & Guven, N. (2008). Glossary of key terms in evaluation and results-based management. Organisation for Economic Cooperation and Development. Retrieved on April 13, 2013, from

Lee, Y. (2002). Public policy evaluation in Korea: In search for new direction. In J. E. Furubo, R. C. Rist, & R. Sandahl (Eds.), International atlas of evaluation (pp. 191- 205). Transaction Publishers.

Mark, M. M., & Henry, G. T. (2004). The mechanisms and outcomes of evaluation influence. Evaluation, 10(1), 35-57.

Mark, M. M., Henry, G. T., & Julnes, G. (2000). Evaluation: An integrated framework for understanding, guiding, and improving policies and programs. San Francisco, CA: Jossey-Bass.

Mertens, D. M., & Russon, C. (2000). A proposal for the International Organization for Cooperation in Evaluation. The American Journal of Evaluation, 21(2), 275-283.

Ministry of Development, Republic of Turkey. (2001). Eighth (8th) five-year development plan (2001-2005). Retrieved February 25, 2013, from

Ministry of Development, Republic of Turkey. (2006). Ninth (9th) national development plan (2007-2013). Retrieved February 25, 2013, from

Ministry of National Education, Republic of Turkey. (1999). 16th National Council decisions. Retrieved March 5, 2013, from

Ministry of National Education, Republic of Turkey. (2002). National education at the beginning of 2002. Retrieved November 2012, from,

Ministry of National Education, Republic of Turkey. (2006). 17th National Council decisions. Retrieved March 2013, from,

Ministry of National Education, Republic of Turkey. (2010). 18th National Council decisions. Retrieved March 2013, from

Ministry of National Education, Republic of Turkey. (2011). Activity report. Retrieved January, 2013, from

Ministry of National Education, Republic of Turkey. (2011). National education report (to be submitted to the European Commission). Retrieved February 15, 2013, from

Organization for Economic Co-operation and Development. (2005). Basic education in Turkey: Background report. Paris, France: OECD Publishing. Retrieved October, 2012, from

Ozturk, A. (2002). The domestic context of Turkey’s changing foreign policy towards the Middle East and the Caspian Region. German Development Institute.

Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. The Guilford Press.

Patton, M. Q. (2012). Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Picciotto, R. (2007). The new environment for development evaluation. American Journal of Evaluation, 28(4), 509-521.

Preskill, H. (2008). Evaluation's second act: A spotlight on learning. American Journal of Evaluation, 29(2), 127-138.

Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (2003). Evaluation: A systematic approach. Thousand Oaks, CA: Sage Publications, Inc.

Russon, C., & Russon, K. (Eds.). (2000). The annotated bibliography of international program evaluation. Dordrecht, The Netherlands: Kluwer.

Scriven, M. (1991). Evaluation thesaurus. Thousand Oaks, CA: Sage Publications, Inc.

Segone, M. (Ed.). (2008). Bridging the gap: The role of monitoring and evaluation in evidence-based policy making. Geneva, Switzerland: United Nations Children's Fund.

Segone, M. (Ed.). (2009). Country-led monitoring and evaluation systems: Better evidence, better policies, better development results. Geneva, Switzerland: United Nations Children's Fund.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. New York: Houghton Mifflin Company.

Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications (Vol. 3). San Francisco: Jossey-Bass.

Turk, E., Yalcin, M., & Unsal, N. (2006). Milli Egitim Bakanligi yoneticilerinin goruslerine dayali stratejik planlama arastirmasi. Ankara: Milli Egitim Bakanligi Strateji Gelistirme Baskanligi. Retrieved October, 2012, from

United Nations Development Programme. (2010). Assessment of development results: Turkey. Evaluation of UNDP contribution to development results in Turkey. Washington, D.C.: Evaluation Office. Retrieved December, 2012, from

United Nations Development Programme. (2011). Proceedings from the international conference 2011: National evaluation capacities. Washington, D.C.: Evaluation Office. Retrieved September 2012, from

USAID. (2001). Best practices in monitoring and evaluation: Lessons learned from the USAID Turkey population program. Retrieved September, 2012, from

U.S. Census Bureau. (2002). The big payoff: Educational attainment and synthetic estimates of work-life earnings. Retrieved October, 2012, from

Wadsworth, Y. (2001). Becoming responsive and some consequences for evaluation as dialogue across distance. In J. C. Greene and T. A. Abma (Eds.), Responsive Evaluation. New Directions for Evaluation, 92. San Francisco: Jossey-Bass.

Weiss, C. H. (Ed.). (1977). Using social research in public policy making. Lexington, MA: Lexington Books.

Weiss, C. H., & Bucuvalas, M. J. (1980). Social science research and decision-making. New York: Columbia University Press.

Weiss, C. H. (1987). Evaluating social programs: What have we learned? Society, 25(1), 40-45.

Weiss, C. H. (1998). Evaluation (2nd Ed). New Jersey: Prentice Hall.

Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21-33.

Weiss, C. H., Murphy-Graham, E., & Birkeland, S. (2005). An alternate route to policy influence: How evaluations affect D.A.R.E. American Journal of Evaluation, 26(1), 12-30.

World Bank. (2007). Turkey: Higher education policy study. Volume 1: Strategic directions for higher education in Turkey. Retrieved October, 2012, from

World Bank. (2011). Improving the quality and equity of basic education in Turkey: Challenges and opportunities. Retrieved October, 2012, from

Yasar, S. (1998, April). Evaluation of educational programmes in Turkey. Unpublished paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA.

Yeh, S. (2009). Shifting the bell curve: The benefits and costs of raising student achievement. Evaluation and Program Planning, 32, 74-82.


