About the Author(s)


Dominique E. Uwizeyimana
School of Public Management, Governance and Public Policy, College of Business and Economics, University of Johannesburg, Soweto Campus, Johannesburg, South Africa

Citation


Uwizeyimana, D.E., 2020, ‘The logframe as a monitoring and evaluation tool for government interventions in a chaotic and complex environment’, Africa’s Public Service Delivery and Performance Review 8(1), a328. https://doi.org/10.4102/apsdpr.v8i1.328

Original Research

The logframe as a monitoring and evaluation tool for government interventions in a chaotic and complex environment

Dominique E. Uwizeyimana

Received: 10 June 2019; Accepted: 05 Oct. 2019; Published: 30 Jan. 2020

Copyright: © 2020. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: The logical framework approach (or logframe) as a tool for the monitoring and evaluation (M&E) of government interventions has gained popularity in M&E scholarly research. The term ‘interventions’ as used in this article refers to the public policies, strategies, programmes and projects that governments implement to address the socio-economic problems affecting their citizens.

Aim: The aim of this article was to assess the logframe as an M&E tool for government interventions in Africa and to close the knowledge gap in the current literature.

Setting: The logframe is currently used (in one form or another) by most multilateral and bilateral aid agencies operating as development partners across Africa. Its popular use ignores the fact that the success and failure of government interventions do not follow a predetermined, logical sequence of events.

Methods: This research is qualitative and is based on a robust review of the existing literature on the use of the logframe in M&E and the theory of chaos and complexity.

Results: This article shows that government interventions are generally implemented and evaluated in a dynamic, ever-changing, complex and often chaotic environment.

Conclusion: Because of the dynamic, complex nature of the environment in which government interventions are implemented and evaluated, the logframe should be continuously adjusted to accommodate changes in the environment. In addition, its use in M&E should be limited to the level at which the changes in the evaluand and in the environment allow a certain level of stability, predictability and logic.

Keywords: logframe; logic model; monitoring and evaluation; public policy; government interventions.

Introduction

The logframe is now a relatively ‘middle-aged’ procedure, having entered development practice around the 1960s (Woodhill 2005:5–6). Since its introduction as an analytical and management tool for policies, programmes and projects, it has steadily gained popularity (Ile, Eresia-Eke & Allen-Ile 2012). There is hardly any development project funded by international development agencies and financial institutions, anywhere in the world, that does not use the logframe as a tool for the monitoring and evaluation (M&E) of the projects and programmes being funded (Woodhill 2005:5–6). As a monitoring tool, the logframe is used to demonstrate the link between the inputs, processes (actions), outputs, outcomes and impact of policies, programmes and projects (i.e. government interventions) (Woodhill 2005:5–6); as an evaluation tool, it is relied upon to judge the success or failure of those interventions. However, the literature suggests that government interventions are implemented and evaluated in a dynamic environment that is ever-changing, complex and often chaotic (Cloete 2006:1; Heider 2015:1–2; Kayuni 2010:7; Overman 1996:490). If this is the case, how can the logframe remain logical, and how can it serve as an evaluation tool, in such an environment? The remainder of this article focusses on a conceptual orientation to M&E, the theoretical and contextual framework of M&E, and the suitability and consequences of using the logframe to evaluate government interventions in a changing, complex and chaotic environment.

Monitoring and evaluation: A conceptual orientation

According to Rabie and Goldman (2014):

[S]ystematic planning, design and implementation for the purpose of improving policy outputs and outcomes will generally come to no avail if one is not able to evaluate whether the people responsible for policy, programmes and projects implementation have hit the intended target, or whether they have missed it – by what margin and why? (p. 1)

To assess whether a policy, programme or project has achieved its intended objectives, one must systematically and objectively monitor and evaluate its performance (Saunders 2015:3). Monitoring is about systematically tracking progress against an adopted plan to ensure compliance with the aspects contained in the plan (Ho 2003:68–70; Ijeoma 2010). Rabie and Goldman (2014:4–6) state that the Latin root of the term ‘evaluation’ is Valere, which simply means working out or determining the value of something. ‘Evaluation is [therefore] the systematic and objective assessment of an ongoing or completed policy, programme, or project, including its design, implementation and results’ (Kusek & Rist 2004:12). The Organisation for Economic Co-operation and Development (OECD 2002) argues that:

[T]he aim of evaluation is to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors. Evaluation also refers to the process of determining the worth or significance of an activity, policy or programme; an assessment, as systematic and objective as possible, of a planned, on-going, or completed development intervention. Evaluation in some instances involves the definition of appropriate standards, the examination of performance against those standards, an assessment of actual and expected results and the identification of relevant lessons. (p. 21)

Examining the above argument, it can be argued that the aim of evaluation is to use the data collected through systematic monitoring to objectively ‘determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact, and sustainability’ (Rabie & Goldman 2014:4–6). The literature shows that evaluation can be formal or informal. Informal evaluation is non-systematic and is conducted day to day to inform routine decisions (Rabie & Goldman 2014:4–6). Formal evaluation is conducted systematically, using rigorous methods, with appropriate control measures to ensure the reliability and validity of the conclusions (Rabie & Goldman 2014:4–6).

The literature suggests that there are different types of evaluation. The different types of evaluation found in the literature can be classified into two major categories based on ‘the time evaluation is conducted’ and the ‘focus of the evaluation on the logframe’.

Types of evaluation on the basis of the time evaluation is conducted

According to Bhikhoo and Louw-Potgieter (2014):

[I]f we are to improve our performance, we have to reflect on what we are doing, what we are achieving against what we set out to achieve, and why unexpected results are occurring. We cannot advance without making mistakes on the way, but we must evaluate and learn from our success and/or mistakes. (p. 152)

Without evaluation, ‘we cannot improve’ (Department of Performance Monitoring and Evaluation [DPME] 2011:ii). Evaluation results can be used to demonstrate the effectiveness of programmes, identify ways to improve programmes, modify programme planning, demonstrate accountability and justify programme funding. The results of a well-designed and well-executed evaluation therefore help to achieve the following objectives (Centers for Disease Control and Prevention [CDC] 2012):

To demonstrate to legislators or other stakeholders that resources are being well spent and that the program is effective, to aid in forming budgets and to justify the allocation of resources, to compare outcomes with those of previous years, to compare actual outcomes with intended outcomes, to suggest realistic intended outcomes, to support annual and long range planning, to focus attention on issues important to your program, to promote your program, to identify partners for collaborations, to enhance the image of your program, to retain or increase funding, to provide direction for program staff, and to identify training and technical assistance needs. (p. 1)

Systematic evaluation requires appropriate evaluation indicators. According to Rabie (2014:205), indicators are measurement instruments used to track, document and assess progress in the attainment of interventions’ (policies, programmes and projects) objectives and outcomes. Evaluation focusses on different elements and uses different indicators (in nature and type); it is therefore important that the selected indicators are appropriate for the evaluation perspective. According to the Public Service Commission (PSC 2008):

[T]he subject of an evaluation (the topic, the entity to be evaluated) may be the Public Service, a system, policy, programme, several programmes, a service, project, a department or unit within a department, a process or practice. The subject of an evaluation may also be the whole of government or the country. These entities are complex, or multi-dimensional. (p. 21)

Cloete (2009:296) defines evaluation as ‘gap analysis’, and identifies and outlines three types of evaluation: ongoing evaluation, formative evaluation and summative evaluation.

Formative evaluation

‘Formative evaluation’ (Cloete 2009:296) is similar to the ‘ex-ante evaluation’ of the OECD (2002:22). Cloete (2009) argues that formative evaluation:

[I]s frequently required at a very early stage in the policy planning process to undertake a formal assessment (or appraisal) of the feasibility of the different policy options that one can choose from. (p. 296)

This is why Scriven (1967) defines formative evaluation (sometimes referred to as internal evaluation) as ‘a method for judging the worth of a program while the program activities are forming (in progress)’.

Formative evaluation (Stufflebeam & Shinkfield 2007):

[A]ssesses and assists with the formulation of goals and priorities; provides direction for planning by assessing alternative courses of action and draft plans; and guides program management by assessing implementation of plans and interim results. (p. 12)

Formative evaluation is the opposite of ‘ex-post evaluation’, which is defined by the OECD (2002:21–22) as ‘the evaluation conducted (directly or long) after the actual implementation of the policy, programme and projects has been completed’.

The objective of ex-ante evaluation is to determine or assess the gap between the status quo before the implementation and the desired future situation (at some stage during or after the implementation). As the desired future lies at a distant point in time, there is a high level of uncertainty. This explains why formative evaluations ‘are undertaken using statistical analyses and other trend-projection techniques such as modelling, scenario building, and cost–benefit analyses’ (Cloete 2009:296). It can therefore be argued that formative evaluation is the most difficult to execute and possibly the least accurate because it relies on trend analysis and predictions. Formative evaluation is conducted to determine the policy outcomes of a generally unknown future and relies on complex technology-based trend-projection techniques that are not necessarily known to all evaluators.
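To illustrate the kind of trend-projection reasoning on which formative (ex-ante) evaluation relies, the sketch below fits a simple least-squares linear trend to hypothetical historical indicator values and extrapolates it forward. The indicator and all figures are invented purely for illustration, and real formative evaluations would use far richer modelling, scenario-building and cost–benefit techniques than this minimal example.

```python
# Minimal sketch of a trend-projection technique of the kind formative
# (ex-ante) evaluations rely on: a least-squares linear trend fitted to
# hypothetical historical indicator values and extrapolated forward.
# The further ahead the projection, the greater the uncertainty.

years = [2015, 2016, 2017, 2018, 2019]
values = [52.0, 54.5, 56.0, 59.5, 61.0]   # hypothetical indicator (e.g. % coverage)

n = len(years)
mean_x = sum(years) / n
mean_y = sum(values) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values)) / \
        sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

for target_year in (2020, 2025):
    projection = intercept + slope * target_year
    print(f'{target_year}: projected {projection:.1f} '
          f'({target_year - years[-1]} years out; uncertainty grows with horizon)')
```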

Ongoing or process performance evaluation

According to Rabie and Cloete (2009:11), ongoing or process performance evaluation is done at different intervals ‘when a policy project or programme is still being implemented’. This type of evaluation is used to establish what has actually been accomplished at a particular time during the implementation process. Ongoing or process performance evaluation keeps track of the timeframe and the spending patterns on the programme. It also assesses whether there is sufficient progress towards objectives, and whether the quality and quantity of outputs have been achieved in economic, efficient and effective ways (Cloete 2009:297). Looking at the type of questions that guide an ongoing evaluation, it can be argued that ‘this type of evaluation focuses primarily on the effectiveness, efficiency and levels of public participation in the implementation process’ (Cloete 2009:296). Depending on the timing of an evaluation (i.e. when it is conducted), what has actually been accomplished at a particular point during implementation could be a milestone on a project or programme. The milestone being evaluated could take the form of output in terms of goods or services (for which tangible indicators are most appropriate). Depending on the timing of the evaluation, its objective could be to assess part or all of the output, the immediate or short-term outcomes, or the long-term impact.

Summative evaluation

Cloete (2009) argues that:

[S]ummative evaluation takes place after the completion of the policy, project or programme (e.g. at the end of the financial year or the term for which the policy was planned). (p. 206)

As an end-result evaluation, summative evaluations ‘are done to assess either the progress made towards achieving policy objectives, if those objectives can be determined, or to assess the general results or impacts of the policy’ (Cloete 2009:296). The results or impacts assessed at the end stage of the implementation process ‘include any positive or negative changes to the status quo (i.e. the status before the policy was implemented), if it is known’. ‘Summative evaluation focuses on both the short-term end products (outputs), the medium-term sectoral outcomes and the long-term inter-sectoral impacts or changes that the end product brought about’ (Cloete 2009:296). Whether an evaluation is ongoing or conducted at the end of the implementation process, it requires data about (Cloete 2017a/b):

[T]he status quo ante (so-called baseline data: before the policy project was initiated), and data at the cut-off point, which signals the end of the evaluation period (so-called end or culmination data). (p. 17)

The better the quality of these different sets of data, the more accurately evaluations can be conducted. Cloete summarises the above types of analysis and foci, as shown in Figure 1.

FIGURE 1: Cloete’s model of evaluation as gap assessment.
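To make the idea of evaluation as gap assessment concrete, the following minimal sketch (in Python) compares baseline data (the status quo ante) with culmination data at the cut-off point for a set of indicators, in the spirit of Cloete’s model. The indicator names, figures and targets are hypothetical and are used purely for illustration.

```python
# Minimal sketch of evaluation as 'gap analysis' (after Cloete 2009):
# compare baseline data (status quo ante) with culmination data at the
# cut-off point. Indicator names and values are hypothetical.

baseline = {'households_with_water': 4200, 'clinics_built': 3}   # before the intervention
end_data = {'households_with_water': 6100, 'clinics_built': 5}   # at the cut-off point
targets  = {'households_with_water': 7000, 'clinics_built': 6}   # desired future situation

for indicator, target in targets.items():
    achieved = end_data[indicator] - baseline[indicator]   # observed change
    planned = target - baseline[indicator]                 # intended change
    gap = target - end_data[indicator]                     # remaining gap
    print(f'{indicator}: achieved {achieved} of {planned} planned '
          f'({achieved / planned:.0%}); remaining gap: {gap}')
```

As the quotation above suggests, the usefulness of such a comparison depends entirely on the quality of the baseline and culmination data.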

However, in addition to the fact that an evaluation can be formative, ongoing or summative, the literature also suggests that the evaluation can be classified on the basis of what the evaluators focus on (the evaluand) according to the logframe model. The following paragraphs focus on M&E and the logframe as an M&E tool.

Types of evaluation based on the evaluand on the logframe

It is hardly possible to find any multilateral or bilateral development agency or foreign donor that has not adopted the logframe as a project planning, implementation and evaluation tool for the programmes or projects they fund in developing countries (Government of the Republic of Serbia 2011:6). It is argued that even when ‘different agencies and donors modify the formats, terminology and tools used in their logframe, the basic analytical principles have remained the same’ (Government of the Republic of Serbia 2011:6). Some organisations call this approach the ‘logframe’ when it is shown as a matrix (hence logframe matrix) (Ile et al. 2012:104), while others call it a ‘logic model’ when it is shown as a flow chart (Brown 2017:3). The terms ‘logframe’, ‘logframe matrix’, ‘logic model’, ‘logical model’ and ‘programme logic’ refer to one and the same thing in the literature and are sometimes used interchangeably (Brown 2017:3). Irrespective of whether the logframe is represented as a matrix or as a chart, it always depicts the relationship between the inputs (money, time, people and skills), activities (processes), outputs, outcomes (short- and medium-term results) and impacts (long-term results) (Auriacombe 2011:42). Stein and Valters (2012:7) define the term ‘logframe’ as a schematic explanation of which inputs and activities (processes) will achieve the outputs, the outcomes (short- and medium-term results) and the impacts (long-term results) (Auriacombe 2011:42). The PSC (2008:43) defines inputs as ‘all the resources that contribute to production and the delivery of outputs and “what we use to do the work” such as finances, personnel, equipment and buildings’. Activities are ‘the processes or actions that use a range of inputs to produce the desired outputs and ultimately outcomes; in short, activities are “what we do”’ (PSC 2008:43). Outputs are ‘the final products, or goods and services, produced for delivery; that is, “what we produce or deliver”’ (PSC 2008:43). Outcomes are (PSC 2008):

[T]he medium-term results for specific beneficiaries that are a logical consequence of achieving specific outputs, outcomes are ‘what we wish to achieve’ and impacts are the long-term results of achieving specific outcomes, such as reducing poverty, creating jobs and general improvement of the wellbeing of the target communities. (p. 43)

A careful analysis of the logic model shows that its logic is similar to that of a production process. For example, according to the PSC (2008):

[B]oth the logframe and production process use inputs and resources like staff, equipment and materials to yield the product or service. They also both consist of the tasks required to transform the resources (inputs) in order to produce the output (goods or services), and they both apply knowledge and technologies that have been developed over time. (p. 43)

While there are different varieties (forms and shapes) of logframes, a simplified logic model that consists of the hierarchy of inputs, activities, outputs, outcomes and impacts is presented by the PSC (2008:42), as shown in Figure 2.

FIGURE 2: The logframe model.
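As a rough illustration of this hierarchy, the sketch below represents a simplified logframe as a small data structure whose fields mirror the PSC (2008) input–activities–outputs–outcomes–impacts chain. The example project and its entries are hypothetical and are not drawn from any of the sources cited here.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Logframe:
    """Simplified logic model: the PSC (2008) hierarchy of
    inputs -> activities -> outputs -> outcomes -> impacts."""
    inputs: List[str]      # what we use to do the work (money, time, people, skills)
    activities: List[str]  # what we do (processes that transform inputs)
    outputs: List[str]     # what we produce or deliver (goods and services)
    outcomes: List[str]    # what we wish to achieve (short/medium-term results)
    impacts: List[str]     # long-term results (e.g. reduced poverty)

# Hypothetical example: a rural water-supply project.
water_project = Logframe(
    inputs=['budget', 'engineers', 'drilling equipment'],
    activities=['drill boreholes', 'train maintenance committees'],
    outputs=['40 working boreholes'],
    outcomes=['reduced time spent fetching water'],
    impacts=['improved community health and wellbeing'],
)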

Clearly, based on the above discussion and the fact that evaluation can focus on different parts of the logframe model, evaluation can also be classified as input evaluation, process/activity evaluation, output evaluation, outcome evaluation and impact evaluation. The object on which an evaluation focusses is called, in the literature, the evaluand: the concept ‘evaluand’ refers to ‘the object of an evaluation’. Based on the analysis in this article, the evaluand could be any form of government intervention, especially a policy, programme or project, but it could also be an entire organisation, a department within the organisation or individual persons. Classifying evaluation by evaluand should thus give us at least six major types of evaluation. This is because policies are often implemented through programmes (which may be divided into sub-programmes); programmes and sub-programmes are implemented through projects (which may be divided into subprojects); projects are implemented by organisations (which comprise many departments); and departments are made up of units staffed by people. Each of these components is an evaluand because it can be evaluated. Thus, even though Auriacombe (2011:42) argues that the ‘programme logic model is an analytical tool that is used to plan, monitor and evaluate projects’, it can be argued that the logframe model applies to all government interventions (policies, programmes and projects). As the objective of this article is to assess the logframe as a tool for policy, programme and project M&E, the analysis focusses on the different components of the logframe (i.e. the input, which must be processed [actioned] to produce the output, which, in turn, produces outcomes and ultimately the impact, as presented in the logframe model). In addition, while the logframe was first designed in the early 1960s to assist the United States Agency for International Development (USAID) to improve its project planning, management and evaluations, the discussion presented here shows that projects are the main components of programmes, and that policies are often implemented through a number of programmes, suggesting that the logframe model can also be applied at programme and policy level. Finally, the logframe approach (LFA) was introduced and designed because (Woodhill 2005):

[P]lanning was too vague, without clearly defined objectives that could be used to monitor and evaluate the success (or failure) of a project; management responsibilities were unclear; and finally, evaluation was often an adversarial process, because there was no common agreement as to what the project was really trying to achieve. (pp. 5–6)

It can be logically argued that because projects are subcomponents of programmes, and programmes are subcomponents of policies, problems in projects also lead to problems in programmes, and problems in projects and programmes in turn lead to problems in policies, in that logical order. The fact that problems in one component of the logframe affect the subsequent parts (input, action [process], output, outcomes and impact) in that order has given rise to input evaluation, process or action evaluation, output evaluation, outcome evaluation and impact evaluation. Following is a brief overview of these five types of evaluation, which are based on the logframe.

Input evaluation

Inputs comprise all the resources needed to accomplish something. Therefore, input evaluation ‘assesses alternative approaches, competing action and staffing plans, and budgets for their feasibility and potential cost-effectiveness in meeting targeted needs and achieving goals’ (Stufflebeam & Shinkfield 2007:12).

Process (action) evaluation

The purpose of process evaluation is to describe how well the programme is being implemented. This type of evaluation determines the extent to which the programme has adhered to the initial plan. It also provides information on the progress of the implementation process. Such information can be used as evidence to make necessary changes to the output and to ensure that the quality of the products is improved. Process evaluation data are important, as they help to interpret the outcome and to identify any errors within the process so that they may be corrected. Early detection of errors is of great advantage, as it helps keep the programme consistent with the standards, the design and the initial programme plan. In short, process evaluation provides an overview of why a programme is or is not working (McDavid & Hawthorn 2006:31).

Output evaluation

Outputs are the direct ‘products, capital goods and services’ which result from programme and project activities (INTRAC 2015:1). Outputs are ‘quantitative measurements to monitor and report in your evaluation report’ (INTRAC 2015:1). Based on these definitions, it can be argued that output evaluation verifies the existence of tangible evidence proving that ‘the grant-funded program’s activities were performed as planned’ (Arts and Humanities Research Council [AHRC] n.d.:1–2).

Outcome evaluation (also called product evaluation)

Outcome evaluation examines whether there are changes in the people participating in a programme. It assesses how big the changes are and whether they are positive or negative in nature. This type of evaluation also seeks to link the observed changes to specific aspects of the programme, and it indirectly examines whether the rationale for the programme is still valid. An outcome evaluation is used to justify the continuation of the programme on the basis of its short-term effectiveness (McDavid & Hawthorn 2006:31).

Impact evaluation

The purpose of impact evaluation is to assess the programme on the basis of the long-term changes it brings about, and to determine whether it has achieved its set long-term goals. Outcome and impact are, in most cases, treated as having the same meaning; however, outcome focusses on short-term and medium-term changes, whereas impact focusses on long-term changes. An impact evaluation is a more comprehensive assessment, which examines the holistic changes that the programme has brought about (McDavid & Hawthorn 2006:33). The above types of evaluation are summarised in Table 1.

TABLE 1: Classification of the major types of evaluation found in the literature.

However, in addition to the fact that evaluation can be classified on the basis of the time it is conducted (timing) and the element of evaluation on the logframe, each of these types of evaluation can also be formal or informal. A ‘formal evaluation’ is ‘an evaluation that is relevant, rigorous, designed and executed to control bias, kept consistent with appropriate professional standards, and otherwise made useful and defensible’, while ‘informal evaluation is an evaluation that is unsystematic, lacks rigor, and may be biased’ (Stufflebeam & Shinkfield 2007:10–11). Irrespective of the type of evaluation and the time it is conducted, the purpose of any evaluation is to measure whether there has been any change from the undesirable conditions in the past to the desirable conditions at present, and to attempt to find the causal relationship between the observed change and the government interventions that are being or have been implemented (Uwizeyimana 2018:1).

Monitoring and evaluation: A theoretical and contextual framework

As indicated above, the central argument in this article is that the lack of stability, predictability and logic in the way changes take place during the implementation and evaluation of government interventions limits the usefulness of the logframe as a tool to evaluate them. This argument is based on the consensus among authors such as Rabie and Goldman (2014:4–6) and Kushner and Rotondo (2012:1), who argue that evaluation has developed as a response to the growing chaos and complexity of ‘bourgeoning economies, geopolitical changes, and a commitment to the social programme as a means of governance and social change’. What this means is that logframes are ‘embedded in a particular context’ and their use as a tool for evaluating government interventions should consider that context (Woodrow & Oatley 2013:4). The literature on the nature and frequency of change in the environment in which government interventions are implemented and evaluated agrees on the following key characteristics: the environment is dynamic and constantly changing, and change in it is inevitable, unpredictable, complex and often chaotic (Cloete 2006:2; Kayuni 2010:30). Change is the result of the open system in which government interventions are implemented, and it affects how they are implemented and evaluated (Auriacombe & Ackron 2015:15). As the Independent Evaluation Group/World Bank Group (2015:2) put it, ‘complexity is part of our life, and mindsets and methods of evaluators need to match that reality’.

If we accept that M&E is one such social science phenomenon to which chaos and complexity theories apply, then it can be argued that the ability of evaluators to view organisations as complex, dynamic, self-organising systems can improve their ability to manage change in times of apparent chaos and transitions to new orders of being (Cloete 2006:45).

Because change is inevitable and likely to be unpredictable, the professionalisation of evaluation means codifying M&E professional standards so that they correspond to chaos and complexity rather than to linear models (Heider 2015:1–2). Following is a brief discussion of how the chaos and complexity theory debates affect the logframe as a tool for M&E.

Effect of the theory of chaos and complexity on the logframe as a tool for monitoring and evaluation

Chaos and complexity in evaluation are caused by the uncertainty of the future. Bernhardt (2018) argues that:

[U]ncertainty as a factor in policymaking and implementation refers to both determining the decisive influence of different socio-economic variables on the nature of issues and the outcome of policy decisions, forecasting, and predicting the future situation in which these policies must be implemented. (p. 47)

Bernhardt (2018:47) further states that ‘[p]olicy is future orientated and decisions are made based on predictions and implemented in a multifaceted and open environment that can change rapidly in a non-linear way’. The concepts of risk and uncertainty are directly applicable to this research because both policymakers and policy implementers face ‘unexpected, surprising and counterintuitive political, economic and social changes exacerbated by unprecedented technological development’ (Facer 2011:iv), which must be taken into account by evaluation scholars and practitioners.

Social problems are so complex and are shaped by so many variables that it is rarely possible to explain them, to find policies as a remedy for them or to make accurate predictions about the impact of proposed policies (Bernhardt 2018:48).

For example (Bernhardt 2018):

[I]s poverty, unemployment and a lack of education the cause for violence against women and children? If that is so, why does it also occur in higher echelons in society? Controversy has also developed about the relative influence of political and socio-economic variables on policy. (p. 48)

Finally (Bernhardt 2018):

[A]part from the complexity of social problems, different actors – including government institutions in different government spheres, businesses, political parties, the media, civil society organisations and foreign governments – are constantly busy changing society in divergent and conflicting ways. It is impossible to identify each of these participants and to gauge and predict exactly how they will influence policy. (p. 48)

The influence and uncertainty resulting from both the internal and external factors are unavoidable because we operate in an open system. According to Evan (1993), the key concepts of the open system model refer to how:

[O]rganisational inputs from the environment, organisational processing by the organisation, organisational output and feedback to the environment, are always accompanied by new input from the environment providing support or making new demands on the organisation. (p. 5)

Cloete (2006:2) and Kayuni (2010:30) agree that ‘attempts to interpret, analyse, assess, or expand on the relevance of chaos and complexity for aspects of public management have largely been undertaken in the early 2000s’. However, while ‘there are common features between a complex and a chaotic situation’, a careful analysis shows that ‘the two concepts are very different’ (Rickles, Hawe & Shiell 2007:933). ‘Complexity’ is ‘the generation of rich, collective, dynamical behaviour from simple interactions between large numbers of subunits in a complex system’ (Rickles et al. 2007:933). A complex system or situation is beyond what is considered a simple or ‘normal’ system, that is, one ‘where requirements are known and the execution follows a predictable and controllable path’ (Oehmen et al. 2015:6). Smart-words.org (n.d.:2) states that the synonyms of the term ‘normal’ include ‘usual, standard, regular, ordinary, typical, customary, common or average’. Complexity also differs from chaos: while the interactions in a ‘complex system generate emergent properties in the unit system that cannot be reduced to the subunits (and that cannot be readily deduced from the subunits and their interactions)’ (Rickles et al. 2007:933; Morgan & McMahon 2017:17), such interactions are not necessarily chaotic (Rickles et al. 2007:933). Even though ‘complex systems carry a heightened level of complexity, they might be following webs of predictable patterns that can be identified and studied in a systematic manner’ (Uwizeyimana 2020:9; see also Cloete 2006:2).

In his article, entitled ‘Chaos and quantum complexity approaches to public management: Insights from “the new sciences”’, Cloete (2006:2) argues that chaos is not the same as complexity. Rickles et al. (2007:933) define chaos as ‘the generation of complicated, aperiodic, seemingly random behaviour from the iteration of a simple rule’.

Chaos is when everything seems to be on the verge of collapse at a particular moment (let us say today), yet somehow and for some reason emerges later (the next day, next week, next month or some years later) in a new form, with new structures or relationships (Cloete 2006:1).

According to Muthan (2015:15–17), ‘[c]haos theory is concerned with non-linear systems – systems in which an external change causes disproportionate effects’. This phenomenon is popularly known as the ‘butterfly effect’ (Cronje 2014:21). With reference to the chaos theory, the butterfly effect refers to ‘the phenomenon whereby a minute localized change in a complex system can have large effects elsewhere’ (Basu 2017:1). Schneider and Somers (2006:351) contend that Edward Lorenz first encountered what is now known in the literature as ‘the butterfly effect while studying weather patterns, which pointed to the inherent nonlinearity of such systems due to the high degree of inter-relatedness between its parts’.
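Although none of the authors cited here uses this example, the logistic map is a standard textbook illustration of both Rickles et al.’s definition of chaos (seemingly random behaviour generated by iterating a simple rule) and the butterfly effect. The sketch below, a minimal illustration with invented starting values, shows how two initial conditions differing by one part in a million diverge completely within a few dozen iterations.

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is a standard textbook
# example of deterministic chaos (not drawn from the authors cited here).
# At r = 4 it is fully chaotic: two starting values differing by one part
# in a million diverge completely within a few dozen iterations, which is
# the 'butterfly effect' in miniature.

def logistic(x0: float, r: float = 4.0, steps: int = 40) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))  # iterate the simple rule
    return xs

a = logistic(0.200000)
b = logistic(0.200001)  # a 'minute localized change' in the initial condition

for n in (0, 10, 20, 30, 40):
    print(f'step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})')
```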

Thus, while the effect of one unit on the others in a complex system can be identified and isolated using systematic methods, the parts (units) of a chaotic system, and the effects of each unit on the others, are difficult to isolate and study, simply because their interactions are random and highly unpredictable (Muthan 2015:15–16). The synonyms of the concept ‘random’ are ‘chance, accidental, haphazard, arbitrary, casual, unsystematic, indiscriminate and unplanned’ (Oxford English Dictionary 2018).

A closer examination of Cloete’s (2006) argument, however, also suggests the existence of two different types of chaos, namely what he calls ‘deterministic chaos’ and what he calls ‘quantum chaotic’ or ‘random chaotic’ theories, situations or phenomena. According to Cloete (2006:1), while ‘both so-called chaos (deterministic chaos) and quantum (randomly chaotic) approaches are regarded as examples of the functioning of complex systems’, the two approaches differ. Cloete (2006) describes ‘quantum chaos’ as ‘un-deterministic and therefore more difficult to predict’ than either a complex situation (discussed above) or ‘deterministic chaos’. That is, ‘a deterministic chaotic situation or phenomenon is less complex’ and has a ‘certain level of more order and predictability’ than a ‘quantum chaotic situation or phenomena’, which is truly ‘randomly chaotic’ (Cloete 2006:2).

Cloete’s (2006:1) argument that quantum complexity is totally different and ‘is truly chaotic in the lay sense of the term: totally random and indeterministic’, and is ‘replete with puzzling paradoxes and contra-intuitive characteristics’, is supported by Thornhill (2016:47), who argued that ‘this type of chaos takes place at the quantum or molecular level of the system’. He noted that the size of an atom, as a constituent of a molecule, is estimated at one ten-millionth of a millimetre (Thornhill 2016:48, in Uwizeyimana 2020:11). At the quantum level, ‘the study involves the anomalous behaviour of particles within an atom’ (Thornhill 2016:48). However, the fact that chaos takes place at the quantum level does not diminish its impact on the whole system, or even on other systems far away in the environment. For example, according to Cloete (2006), ‘[o]ne of the most fundamental features of quantum theory is the so-called Heisenberg (1925) uncertainty principle’, which entails that objective observation of experimental phenomena is impossible because any act of observation is inextricably bound up in, and influences, the event being observed (Weizmann Institute of Science 1998:1). There is a relationship between the ‘butterfly effect’ described by Muthan (2015:15–17) and the ‘quantum chaos theory’ discussed by Cloete (2006) and Thornhill (2016).

Surprisingly, some authors, such as Overman (1996) and Cloete (2006), believe that both deterministic chaos and quantum chaos reveal patterns of order out of seemingly chaotic behaviour. This is because both types of chaos-complexity theory give an (Overman 1996, in Cloete 2006):

[A]ppreciation, not distrust, of chaos and of uncertainty and stressful times, and further stress that real change and new structures are found in the very chaos they [managers or policymakers] try to prevent. (p. 1)

Kayuni (2010:7) argues that both carry a certain amount of logical order in themselves. This is, for example, the point Kayuni (2010:5) makes when he argues that despite the chaotic and confusing scenario that underpins chaos theory in most complex systems, one can still find patterns of order in the zone he calls the ‘edge of chaos’. Synonyms for the term ‘edge’ in this context are the ‘verge’ or ‘brink’ of chaos. The problem with Kayuni’s (2010:7) argument is that he does not indicate where exactly the so-called ‘edge of chaos’ is located on the normal–complex–deterministic chaos–quantum chaos spectrum.

The fact that both Kayuni (2010) and Cloete (2006) firmly believe that the edge of chaos exists in every chaotic situation, and that there seems to be a place of stability, or at least relative stability, order and logic, in chaotic phenomena, suggests that it is the duty of policy, programme and project evaluators (i.e. evaluation practitioners, experts and even scholars) to first recognise what constitutes it and where it is located. Finding the ‘edge of chaos’ is important because that is where government interventions tend to ‘best deliver’ (Kayuni 2010:5), and it is where the logframe can be used as an evaluation tool. Finally, while Cloete (2006) suggests that quantum chaos systems do not have any level of order or predictability (i.e. they are totally random and unpredictable), both Thornhill (2016) and Muthan (2015) agree with Kayuni (2010) when they suggest that even quantum chaotic systems may be ‘carrying a certain level of internal stability at least at the micro-level’. As Muthan (2015:16) puts it, while ‘certain systems … appear at a macro-level to be random and without order’, the very same systems ‘are found to display micro-levels of order when they are simulated by myriad iterations’. Muthan’s (2015:16) argument suggests that ‘[s]ystems that display random results may yet be carrying out simple rules which, when iterated several times, generate chaotic effects’. If one takes into consideration Muthan’s (2015) and Thornhill’s (2016:18) arguments, it can be argued that the logframe can also be applicable in ‘quantum chaos’ situations, but only at the micro-level. The only problem with applying the logframe at the micro-level is that such an evaluation would generate multiple small-scale evaluations with little or no use for policy evaluators (Auriacombe & Ackron 2015:15). According to Auriacombe and Ackron (2015:15), ‘the evaluation of tiny particles of a bigger and complex (open) system would [not] be able to fulfil the objectives of an evaluation’ because the ‘evaluation of the whole system is not the same or equal to the sum of multiple micro-level evaluations of the same system’. The evaluation of the whole system, group of individuals or organisation is far greater than the sum of the evaluations of its parts (Bergoeing, Loayza & Piguillem 2015:268) (Figure 3).

FIGURE 3: The normal–complex–deterministic chaos–quantum chaos spectrum.

The suitability and consequences of using the logframe to evaluate policies, programmes and projects in a changing, complex and chaotic environment

‘Over time, and now present under various guises and evolutions, the logframe has become close to a universal tool for development planning’ (Woodhill 2005:5–6). There seems to be agreement between scholars that the ‘programme logic model’ serves as a useful tool to review progress before, during and after policy implementation (Auriacombe 2011). However, while ‘the logframe has become central to the story of M&E in development over the past three decades, there has also been fierce debate about its advantages and disadvantages’ (Woodhill 2005:5–6). A critical analysis of the logframe with regard to M&E shows that it introduced some significant difficulties for those planning and implementing development initiatives (Woodhill 2005:5–6). For example, the logframe as an M&E tool has been criticised for assuming a sequential, linear cause-and-effect relationship between its different parts (the inputs, activities [processes], outputs, outcomes and impacts of interventions) and for ignoring the fact that change, which is part and parcel of every environment, causes chaotic changes in government interventions. In other words, the assumed simple linear cause-and-effect relationship ignores the fact that government interventions are implemented in a very complex, chaotic and generally unpredictable environment, the effects of which are not always known or predictable. For example, while A (the inputs) is known and can be determined by the policy implementers, and B (the actions taken to process the inputs) is generally under the policy implementers’ control, there is no guarantee that C (the output) that comes out of B (the process) will be exactly as hoped for or expected by the policy implementers. How, then, can the logframe be used to evaluate something that has the potential of producing unpredictable results, or ‘in the case of long impact chains, where causes and effects are rather distant from each other, either in time or in their functional relations?’ (Fugita 2010:5). According to Lomofsky (2016), complexity must be acknowledged to recognise that change:

[I]s often beyond the control of our project; is dynamic and multidimensional; is cumulative, with tipping points; is emergent and often unexpected; involves people who behave in ways that we cannot predetermine and have agency (we cannot control what they do or how they think); necessitates basing our programme design on evidence of what works; and does not take place in isolation and happens at different levels of the system. (p. 9)
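To make the A (inputs), B (process), C (output) argument above concrete, the hypothetical sketch below simulates a logframe-style linear chain in which the inputs and the processing step are fully controlled but the environment perturbs the result. The ‘disturbance’ term and all figures are assumptions introduced purely to illustrate why C cannot be guaranteed, not a model drawn from any of the sources cited here.

```python
import random

# Hypothetical sketch of the A -> B -> C argument above: inputs (A) and the
# processing step (B) are under the implementers' control, but the environment
# perturbs the output (C), so C is not guaranteed to match the plan.

def implement(inputs: float, efficiency: float = 0.8,
              environment_noise: float = 0.3) -> float:
    planned_output = inputs * efficiency              # the logframe's linear logic
    disturbance = random.uniform(-environment_noise,  # unpredictable environmental
                                 environment_noise)   # effects (illustrative only)
    return planned_output * (1 + disturbance)

random.seed(1)
planned = 100 * 0.8
for trial in range(3):
    actual = implement(100)
    print(f'trial {trial}: planned output {planned:.1f}, actual {actual:.1f}')
```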

In spite of the almost universal consensus in the literature that government interventions that are implemented to bring about social or economic change in society are implemented and evaluated in a complex, chaotic environment, there is a growing body of literature that advocates the use of the logframe as an evaluation tool. For example, the logframe is currently used almost everywhere as a starting point for the M&E of public and private interventions (policies, programmes and projects) (OECD 2002:21).

Aitken (2013:62) argues that ‘this assumption of linear relationships between causes and effects has a restricted utility’ because it ignores the unintended tangible and intangible consequences and the complex nature of policymaking and policy implementation. Considering the above discussion, it can be argued that the logframe has been criticised for the following:

  • Failure to consider the characteristics of the complex and chaotic nature of policy, programme and project evaluation.
  • Lack of flexibility (Woodhill 2005):

    In theory, a logframe can be modified and updated regularly. However, once a development initiative has been enshrined in a logframe format and funding has been agreed on this basis, development administrators wield it as an inflexible instrument. Further, it may be the case that while the broad goals and objectives of an initiative can be agreed on ahead of time, it is not possible or sensible to focus on defining specific outputs and activities, as demanded by the approach. (pp. 5–6)

  • Lack of attention to relationships of multiple stakeholders (multi-actors) in an open system (Woodhill 2005):

    As any [experienced] development practitioner very well knows, it is the relationships between different actors and the way these relationships are facilitated and supported that ultimately determines what will be achieved. The logframe’s focus on output delivery means that often too little attention is given to the processes and relationships that underpin the achievement of development objectives. (pp. 5–6)

‘The outcome mapping methodology developed by IDRC [International Development Research Centre] has been developed to respond to this issue’ (Woodhill 2005:5–6).

  • Problem-based planning (Woodhill 2005):

    The logframe approach begins with clearly defining problems and then works out solutions to these problems. Alternative approaches to change emphasise much more the idea of creating a positive vision to work towards rather than simply responding to current problems. Furthermore, experience shows that solving one problem often creates a new problem; the logframe approach is not well-suited to iterative problem solving. (pp. 5–6)

  • Insufficient attention to outcomes (Woodhill 2005):

    For larger-scale development initiatives, the classic four-level logframe offers insufficient insight into the crucial ‘outcomes’ level, critical to understanding the link between delivering outputs and realising impact. (pp. 5–6)

  • Oversimplification of M&E (Woodhill 2005):

    The logframe implies that M&E is simply a matter of establishing a set of quantitative indicators (means of verification) and associated data-collection mechanisms. In reality, much more detail and different aspects need to be considered if an M&E system is to be effective. (pp. 5–6)

  • Inappropriateness at programme and organisational levels (Woodhill 2005):

    The logframe presupposes a set of specific objectives and a set of clear linear cause and effect relationships to achieve these objectives. While this model may be appropriate for certain aspects of projects, at the programme and organisational level, there is mostly a more complex and less linear development path. For programmes and organisations there are often cross-cutting objectives best illustrated using a matrix approach rather than a linear hierarchy. (pp. 5–6)

For example, as Woodhill (2005:5–6) argues, ‘an organisation may be interested in its gender or policy advocacy work in relation to a number of content areas such as watershed management planning and local economic development’.

  • The other problem is that (Woodhill 2005):

    [W]hile the core ideas behind the logframe approach can be used in flexible and creative ways, this is very rarely the practice and even the basic mechanical steps are often poorly implemented. Consequently, the dominance of its use and poor application have become a significant constraint to more creative and grounded thinking about M&E and the way development initiatives are managed. (pp. 5–6)

This article has strongly argued that chaos and complexity are inevitable because government interventions are implemented and evaluated in organisations that are open systems. Because organisations are open systems, policy evaluators must contend with the inter-relationships between multiple actors and with the fact that the implementation and evaluation of policies, programmes and projects have become multidimensional and multisectoral. It is difficult to see how the logframe can be used as an M&E tool in a complex, chaotic and generally unknown and unpredictable environment. Such dynamism and the complex nature of public organisations require flexible, innovative solutions (Bogason & Toonen 1998; Berman 1980; Cloete 2006; Goldin & Reinert 2006; Overman 1996). Complex problems and chaos in the environment create uncertainties that require not only that the problem be defined and redefined throughout the implementation process, but also that the policy be interpreted and reality-checked throughout its lifespan (Alesch & Petak 2001:2–3). If policies are interpreted and reinterpreted in the process of implementation, then the programmes and projects through which they are implemented must also be continually reinterpreted and aligned with the changed policies. Hence, policies must be interpreted and reinterpreted because (Hofstede 1978, cited in Uwizeyimana 2011):

[A]ttempts at enforcing cybernetic paradigms, such as the Program-Planning Budgeting System and Management by Objectives of the early 1940s and 50s, are bound to fail if applied in a highly complex and ever-changing, non-industry-like process. (p. 117)

The chaotic and complex nature of the environment in which policies are formulated and implemented has an impact on the policy being implemented, on the policy implementers and on the evaluators themselves. According to the above discussion, policy implementation models need to be adaptable and flexible, and policy implementers themselves need to be analytical and ingenious in their decision-making to increase their chances of success within the environmental conditions in which they find themselves (Fox, Schwella & Wissink 2000:13). The failure of policy implementers and evaluators to adapt appropriately to environmental forces will impair the implementation and evaluation processes (Fox et al. 2000:13). It therefore makes perfect sense that the effects of environmental factors be taken into account when evaluators conduct evaluations. That evaluators should take them into account to produce valid and useful recommendations was emphasised by Auriacombe’s (2011:37) argument that, to produce credible reports on policy performance and effectiveness, ‘the evaluation design and methodology must comply with the minimum criteria of validity’. The first and, arguably, most compelling reason why it could be problematic to apply a logframe to explain the success or failure of chaotic, unpredictable and ‘unilineal’ processes, such as policy, programme and project implementation, is that ‘the present is challenging and the future is certainly not certain’ (Auriacombe 2016:8), yet the logframe remains static. As the conditions that led to the initial overall objective change, the policy objectives and the strategies for policy implementation are also adjusted. The fact that change in the environment determines the type and level of adjustment required at a particular place, time and moment explains Auriacombe’s (2016:8) advice that policymakers should strive to ‘do the best they can … to achieve the objective in the fluid circumstances of the real world’. Auriacombe’s (2016) argument supports Okecha (2009:2), who earlier argued that ‘the political world is complex’, and Okumus (2003:871), who used the theory of chaos and complexity to argue that it is difficult and misleading to require standard factors applicable to each and every situation and circumstance. According to Overman (1996:490), if we are able to view organisations as complex, dynamic and self-organising systems, and the environment in which policies are implemented as complex and chaotic, then ‘we will be able to improve the evaluators’ ability to take appropriate action in order to manage change in times of chaos’ (cited in Uwizeyimana 2011:117). The fact that ‘it is impossible to have and maintain a certain pattern of factors in these circumstances’ (Hanf & O’Toole 1992:165) explains why policy implementation is about policy implementers doing the best that can be done. It therefore follows that evaluation should be about determining whether the best that was done by policy implementers was done in an effective and efficient manner, and whether it achieved the intended output, outcomes and impact because of, or despite, the prevailing circumstances. The logframe, however, is based on predetermined cause–effect relationships between the inputs, processes, outputs, outcomes and impacts, and ignores the fact that policy implementation is about implementers doing the best they can to respond to the requirements of the environment and the context facing them during the implementation process.

If one takes into account the argument on chaos and complexity in this article, it would be difficult to demonstrate that using the logframe to conduct ex-ante, ongoing and ex-post evaluations can produce valid and credible findings and recommendations. The logframe’s rigid structure and strong emphasis on predetermined processes limit the implementers’ ability to do their best, given the prevailing circumstances, and also limit evaluators’ ability to judge whether ‘the best that was done by policy implementers’ can actually be considered ‘the best’, given the circumstances they were facing at that particular moment. Hofstede (1978, cited in Uwizeyimana 2011:104) argues that evaluators must understand that unpredictable, complex and non-industry-like processes require ‘non-political, flexible paradigms’. Complexity is therefore part of our life, and our mindsets and methods as evaluators (professionals and scholars) will need to match that reality (Heider 2015:1–2). Professional standards must correspond to complexity rather than to linear models (Heider 2015:1–2). If one takes into account the argument provided in this article, then it is easy to conclude that the theory of change (ToC) would be a better tool for M&E in an environment riddled with chaos and complexity. The ToC would deal more effectively with the challenges posed by the chaotic and complex nature of the ever-changing environment in which government interventions are implemented and evaluated because it emphasises uncertainties in the present and the future. It also emphasises that success in a chaotic environment is achieved ‘not by blind and rigid adherence to plans, but by flexibility and adaptation’, while ‘keeping the overall goal or purpose of the policy being implemented clearly in view’ (Auriacombe & Ackron 2015:8). A more significant contribution would be to indicate which dimensions or elements of the ToC should be incorporated into the logframe to make it more suitable as an M&E tool in dynamic contexts.

Conclusion and recommendations

This article assessed the logframe as an M&E tool for government interventions. Through the analysis of Cloete’s model of evaluation as gap assessment and of the evaluand on the logframe, the article showed that there are many types of evaluation. In addition, the purpose of this article was to demonstrate that, because of the ever-changing, complex and often chaotic nature of the environment, the implementation and evaluation of government interventions rarely, if ever, follow a predictable, linear and logical order. The question at the core of the debate was how logical the logframe can be in the ever-changing, complex and chaotic environment in which government interventions are implemented. It was argued that the policy environment is not static but changes constantly in response to changes in the political, social and economic environments facing the organisations in which interventions are implemented. The bottom line of the discussion is that changes in the environment are inevitable, and they will inevitably affect every aspect of the logframe (inputs, processes, outputs, outcomes and results [impact]) of government interventions. That is, change in one component of the logframe will have different types of effects on one or all of the other parts, and each effect will have ripple effects on every other individual element of the logframe, as well as compounded effects on all the other components simultaneously. Such change, and the nature of its effects on the logframe, must be taken into account when designing tools for evaluation and when conducting actual evaluations. Thus, unless the environment in which government interventions are implemented contains a certain level of logic, such as the ‘edge of chaos’ discussed by Cloete (2006) and Kayuni (2010) earlier in this article, the logframe is not likely to produce credible results when used to evaluate these policies, programmes and projects. Therefore, the logframe should be constantly updated to accommodate this ever-changing, complex and chaotic environment, and its use in the evaluation of government interventions should be limited to contexts in which the level of stability and predictability makes it possible to produce valid results and recommendations on the evaluand. Furthermore, the level at which the changes in the evaluand and in the environment allow a certain level of stability, predictability and logic should be clearly determined and incorporated in the logframe to make it more suitable as an M&E tool in dynamic contexts.

Acknowledgements

Competing interests

The author declares that no competing interests exist.

Author’s contributions

I declare that I am the sole author of this research article.

Ethical considerations

This article followed all ethical standards for carrying out research.

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability statement

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any affiliated agency of the author.

References

Aitken, W.A., 2013, ‘Attribution analysis: A critique of the policy paradigm with a case study of the Northern Territory emergency response’, PhD thesis, University of Tasmania.

Alesch, D.J. & Petak, W.J., 2001, ‘Overcoming obstacles to implementing earthquake hazard mitigation policies’, paper presented at the First Annual IIASA-DPRI Meeting on Integrated Disaster Risk Management: Reducing Socio-Economic Vulnerability, IIASA, Laxenburg, pp. 1–4, viewed 03 March 2018, from http://www.iiasa.ac.at/Research/RMS/dpri2001/Papers/petak.ps.

Arts and Humanities Research Council (AHRC), n.d., Logic models for programme planning and evaluation, viewed 09 January 2020, from https://ahrc.ukri.org/documents/guides/logic-models-for-programme-planning-and-evaluation/.

Auriacombe, C.J., 2011, ‘Role of theories of change and programme logic models in policy evaluation’, African Journal of Public Affairs 4(2), 36–53.

Auriacombe, C.J., 2016, PMG 2B: Theories about public administration, public management & governance 2015 internal handbook, University of Johannesburg, Johannesburg.

Auriacombe, C.J. & Ackron, J., 2015, PLG 3A section A: Semester 1 integrated development planning and LED internal handbook, University of Johannesburg, Johannesburg.

Bogason, P. & Toonen, T., 1998, ‘Introduction: Networks in public administration’, Public Administration 76(1), 205–227. https://doi.org/10.1111/1467-9299.00098

Basu, A., 2017, What is the real meaning of ‘butterfly effect’ in 13 reasons why?, viewed 29 August 2018, from https://www.quora.com/What-is-the-real-meaning-of-butterfly-effect-in-13-reasons-why.

Bergoeing, R., Loayza, N. & Piguillem, F., 2015, ‘The whole is greater than the sum of its parts: Complementary reforms to address microeconomic distortions’, The World Bank Economic Review 30(2), 268–305. https://doi.org/10.1093/wber/lhv052

Berman, P., 1980, ‘Thinking about programmed and adaptive implementation: Matching strategies to situations’, in H. Ingram & D. Mann (eds.), Why policies succeed or fail, pp. 205–227, Sage, Beverley Hills, CA.

Bernhardt, Y., 2018, UJ internal handbook (APK & Sow) PMG3A1, PMG3AA3 & PGM3A11: Programme: Public management and governance (third year), University of Johannesburg, Johannesburg.

Bhikhoo, A. & Louw-Potgieter, J., 2014, ‘Case: Do managers use evaluation reports? A case study of a process evaluation for a grant-making organisation’, in F. Cloete, B. Rabie & C. de Coning (eds.), Evaluation Management in South Africa and Africa, n.p., Sun Press Imprint, Stellenbosch.

Brown, A.M., 2017, Theory of change vs. the logic model, viewed 04 April 2018, from https://www.annmurraybrown.com/single-post/2016/03/20/Theory-of-Change-vsThe-Logic-Model-Never-Be-Confused-Again.

CDC, 2012, Introduction to program evaluation for public health programs: A self-study guide: Step 6: Ensure use of evaluation findings and share lessons learned, viewed n.d., from https://www.cdc.gov/eval/guide/step6/index.htm.

Choo, C.W., 2007, The knowing organization: How organizations use information to construct meaning, create knowledge and make decisions, viewed 25 March 2018, from http://www.oxfordscholarship.com/oso/public/content/management/9780195176780/toc.html.

Cloete, F., 2006, ‘Chaos and quantum complexity approaches to public management: Insights from the new sciences’, Administratio Publica 14(1), 45–83.

Cloete, F., 2009, ‘Evidence-based policy analysis in South Africa: Critical assessment of the emerging governmentwide monitoring and evaluation system’, South African Journal of Public Administration 44(2), 293–311.

Cloete, F., 2017, ‘Evidence-based policy making and policy evaluation’, 3rd International Conference on Public Policy (ICPP3), Singapore, 28–30 June 2017, pp. 1–29, viewed 09 January 2020, from https://www.ippapublicpolicy.org/file/paper/59356882ccfd2.pdf.

Cronje, F., 2014, A time traveller’s guide: Our next ten years, Tafelberg, Cape Town.

Department of Performance Monitoring and Evaluation (DPME), 2011, National evaluation policy framework, DPME, Pretoria.

Evan, W.E., 1993, Organization theory: Research and design, Macmillan, New York.

Facer, K., 2011, Learning futures: Education, technology and social change, Routledge, London.

Fox, W., Schwella, E. & Wissink, H., 2000, Public management, Juta, Kenwyn.

Fujita, N., 2010, Beyond logframe: Using systems concepts in evaluation, viewed 26 June 2018, from https://www.fasid.or.jp/_files/publication/oda_21/h21-3.pdf.

Goldin, I., Reinert, K. & the World Bank, 2006, Globalisation for development: Trade, finance, aid, migration, and policy, World Bank, New York.

Government of the Republic of Serbia, 2011, Guide to the logical framework approach, viewed 21 August 2018, from http://www.evropa.gov.rs/Evropa/ShowDocument.aspx?Type=Home&Id=525.

Hanf, K. & O’Toole, L., 1992, ‘Revisiting old friends: Networks, implementation structures, and the management of inter-organizational relations’, European Journal of Political Research 21(1), 163–180. https://doi.org/10.1111/j.1475-6765.1992.tb00293.x

Heider, C., 2015, Evaluation 2030: What does it mean for professionalization?, viewed 04 May 2016, from https://www.linkedin.com/pulse/evaluation-2030-what-does-mean-professionalization-caroline-heider.

Heisenberg, W., 1925, ‘Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen [On the quantum-theoretical reinterpretation of kinematical and mechanical relationships]’, Zeitschrift für Physik 33, 879–893.

Ho, S.Y., 2003, Evaluating urban policy, ideology, conflict and compromise, Ashgate, Surrey.

Ijeoma, E.O.C., 2010, ‘An eye on impact assessment: Mainstreaming government-wide monitoring and evaluation policy in South Africa’, Journal of Public Administration 45(2), 344–360.

Ile, I.U., Eresia-Eke, C.E. & Allen-Ile, C., 2012, Monitoring and evaluation of policies, programmes and projects, Van Schaik, Pretoria.

Independent Evaluation Group/World Bank Group, 2015, Professionalization with a view to Eval2030, viewed 04 May 2016, from https://ieg.worldbankgroup.org/blog/professionalization-view-eval2030.

INTRAC, 2015, Outputs, outcomes and impact, viewed 09 January 2020, from https://www.intrac.org/wpcms/wp-content/uploads/2016/06/Monitoring-and-Evaluation-Series-Outcomes-Outputs-and-Impact-7.pdf.

Kayuni, H., 2010, ‘Chaos-complexity theory and education policy: Lessons from Malawi’s community day secondary schools’, Bulgarian Journal of Science and Education Policy (BJSEP) 4(1), 1–31.

Kusek, J. & Rist, R., 2004, Ten steps to a results-based monitoring and evaluation system, World Bank, Washington, DC.

Kushner, S. & Rotondo, E., 2012, ‘Evaluation voices from Latin America: Paradigms and practices’, New Directions for Evaluation 143(1), 7–16. https://doi.org/10.1002/ev.20014

Lomofsky, D., 2016, Theory of change: A way of thinking, a way of doing and a way of learning, viewed 21 April 2017, from http://www.samea.org.za/index.php?module=Pagesetter&type=file&func=get&tid=17&fid=file1&pid=37.

McDavid, J.C. & Hawthorn, L.R.L., 2006, Program evaluation & performance measurement: An introduction to practice, Sage, London.

Morgan, T. & McMahon, C., 2017, Constructivism and complexity: A philosophical basis for experiential learning models in engineering design education?, viewed 24 June 2018, from https://www.itas.kit.edu/downloads/veranstaltung_2017_philosophy_of_models_morgan.pdf.

Muthan, V.M., 2015, ‘Using chaos and complexity theory to design robust leadership architecture for South African technology businesses’, Master’s Research, University of the Witwatersrand, Johannesburg.

Oehmen, J., Thuesen, C., Ruiz, P. & Geraldi, J., 2015, Complexity management for projects, programmes, and portfolios: An engineering systems perspective, viewed 26 June 2018, from http://orbit.dtu.dk/files/108586258/Complexity_Management.pdf.

Okecha, K., 2009, Regime politics and service delivery in the Cape Town UniCity Council, viewed 27 August 2017, from http://ufh.netd.ac.za/jspui/bitstream/10353/216/3/Full%20dissertation.pdf.

Okumus, F., 2003, ‘A framework to implement strategies in organizations’, Management Decision 41(9), 871–882. https://doi.org/10.1108/00251740310499555

Organisation for Economic Co-operation and Development (OECD), 2002, Evaluation and aid effectiveness: Glossary of key terms in evaluation and results-based management, OECD, Paris.

Overman, S., 1996, ‘The new sciences of administration: Chaos and quantum theory’, Public Administration Review 56(5), 487–491. https://doi.org/10.2307/977050

Oxford English Dictionary, 2018, Definition of normal in English, viewed 08 June 2018, from https://en.oxforddictionaries.com/definition/normal.

Public Service Commission (PSC), 2008, Basic concepts in monitoring and evaluation, viewed 20 March 2017, from http://www.psc.gov.za/documents/docs/guidelines/PSC%206%20in%20one.pdf.

Rabie, B., 2011, ‘Improving the systematic evaluation of local economic development results in South African local government’, PhD thesis, Stellenbosch University.

Rabie, B., 2014, ‘Chapter 6: Indicators for evidence-based evaluation’, in F. Cloete, B. Rabie & C. De Coning (eds.), Evaluation management in South Africa and Africa, pp. 204–251, Sun Press, Stellenbosch.

Rabie, B. & Cloete, F., 2009, ‘A new typology of monitoring and evaluation approaches’, Administratio Publica 17(3), 76–97.

Rabie, B. & Goldman, I., 2014, ‘Chapter 1: The context of evaluation management’, in F. Cloete, B. Rabie & C. De Coning (eds.), Evaluation management in South Africa and Africa, pp. 3–27, Sun Press, Stellenbosch.

Rickles, D., Hawe, P. & Shiell, A., 2007, ‘A simple guide to chaos and complexity’, Journal of Epidemiology and Community Health 61(11), 933–937. https://doi.org/10.1136/jech.2006.054254

Saunders, R., 2015, Implementation monitoring and process evaluation, Sage, Singapore.

Schneider, M. & Somers, M., 2006, ‘Organizations as complex adaptive systems: Implications of complexity theory for leadership research’, The Leadership Quarterly 17(1), 351–365. https://doi.org/10.1016/j.leaqua.2006.04.006

Scriven, M., 1967, ‘The methodology of evaluation’, in R.W. Tyler, R.M. Gagne & M. Scriven (eds.), Perspectives of curriculum evaluation, pp. 39–83, Rand McNally, Chicago, IL.

Smart-words.org, n.d., List of synonyms: A list of synonyms & antonyms for the 100 most often used words in the English language, viewed 24 June 2018, from http://www.smart-words.org/list-of-synonyms/list-of-synonyms-and-antonyms.pdf.

Stein, D. & Valters, C., 2012, Understanding theory of change in international development, Justice and Security Research Programme, London.

Stufflebeam, D.J. & Shinkfield, A.J., 2007, Evaluation theory, models & applications, Jossey-Bass, San Francisco, CA.

Thornhill, C., 2016, ‘Quantum physics, cosmology and public administration: Compatibility in theory construction?’, Administratio Publica 24(1), 45–58.

Uwizeyimana, D.E., 2011, ‘The effects of party-political interests on policy implementation effectiveness: Low-cost housing allocation in the Cape Town Unicity, 1994–2008’, PhD thesis, University of Johannesburg, Johannesburg.

Uwizeyimana, D.E., 2019, ‘Progress made towards achieving Rwanda’s vision 2020: Key indicators’ targets’, International Journal of Management Practice 12(1), 4–46. https://doi.org/10.1504/IJMP.2019.096676

Uwizeyimana, D.E., 2020, ‘Monitoring and evaluation in a chaotic and complex government interventions’ environment’, International Journal of Business and Management Studies 12(1), 1309–8047.

Weizmann Institute of Science, 1998, Quantum theory demonstrated: Observation affects reality, viewed 28 June 2018, from www.sciencedaily.com/releases/1998/02/980227055013.htm.

Woodhill, J., 2007, ‘M&E as learning: Rethinking the dominant paradigm’, in J. de Graaff, J. Cameron, S. Sombatpanit, C. Pieri & J. Woodhill (eds.), Monitoring and evaluation of soil conservation and watershed development projects, viewed 09 January 2020, from http://www.capfida.mg/km/atelier/wageningen/download.

Woodrow, P. & Oatley, N., 2013, Change in conflict, security & justice programmes: Part I: What they are, different types, how to develop and use them, Department for International Development, UK Aid from the British People, London.


 
