# 15.2: Other Measures For Making Decisions

The previous section explored the three main types of decisions and introduced various techniques for making smart investment decisions. This section revisits Decision #1 (making one choice from multiple options) and Decision #2 (pursuing one course of action) and introduces new characteristics:

• Decision #1: Examine situations where the timelines are not equal in length.
• Decision #2: Determine another profit-focused method of reaching the same decision.

Consider the situation in which you need to buy only one of two $50,000 machines that solve the same problem. Machine #1 produces savings of $20,000 per year, while machine #2 produces savings of $14,000 per year. However, machine #1 has a life expectancy of four years, while machine #2 has a life expectancy of eight years. Calculating the net present value of these two machines at a 15% cost of capital reveals that the (NPV) of machine #1 equals $7,100 while the (NPV) of machine #2 equals $12,823.

Is this a fair comparison? Should machine #2 be selected based on its higher (NPV)? The answer is no. The (NPV) analysis does not factor in that if you chose machine #1, it must be replaced after four years, at which point you would once again have the opportunity to produce more savings over the subsequent four years, thereby offsetting the original difference in (NPV). To fairly choose between the alternatives, you need a calculation that can equate two (or more) timelines of different lengths. This requires you to convert the net present value for each alternative into its equivalent annual cash flow: an annual annuity payment amount that, when present valued using the cost of capital, arrives at the same (NPV) as all of the original cash flows.

## The Formula

To arrive at the equivalent annual cash flow, you need to apply two formulas:

1. Formula 15.1 calculates the net present value for each alternative you are considering.
2. To convert each calculated (NPV) into an annual cash flow payment, convert the net present value into an annuity possessing an annual cost of capital and annual end-of-interval payments, which is an ordinary simple annuity! Formula 11.4 is reprinted below to illustrate how you can adapt this formula to the purposes of the equivalent annual cash flow.

\[PV_{ORD}=PMT\left[\dfrac{1-\left[\dfrac{1}{(1+i)^{\frac{CY}{PY}}}\right]^{N}}{(1+i)^{\frac{CY}{PY}}-1}\right]\]
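As a numeric check, the two formulas can be combined in a short Python sketch using the machine numbers from this section. The function names are ours, not part of the text; only the formulas come from Formulas 15.1 and 11.4.

```python
def annuity_pv_factor(rate: float, n: int) -> float:
    """Present-value factor of an ordinary simple annuity of n payments."""
    return (1 - (1 + rate) ** -n) / rate

def npv(investment: float, payment: float, rate: float, n: int) -> float:
    """Formula 15.1: present value of all future cash flows minus the initial investment."""
    return payment * annuity_pv_factor(rate, n) - investment

def equivalent_annual_cash_flow(npv_value: float, rate: float, n: int) -> float:
    """Formula 11.4 rearranged for PMT: the annuity payment whose PV equals the NPV."""
    return npv_value / annuity_pv_factor(rate, n)

rate = 0.15
npv1 = npv(50_000, 20_000, rate, 4)   # machine #1, four-year life
npv2 = npv(50_000, 14_000, rate, 8)   # machine #2, eight-year life
eacf1 = equivalent_annual_cash_flow(npv1, rate, 4)
eacf2 = equivalent_annual_cash_flow(npv2, rate, 8)
print(round(npv1, 2), round(npv2, 2))    # approximately 7099.57 and 12822.50
print(round(eacf1, 2), round(eacf2, 2))  # approximately 2486.7 and 2857.5
```

The equivalent annual cash flows differ from the text's $2,486.88 and $2,857.61 only in the last cents, because the text rounds each (NPV) to the nearest dollar before converting it to a payment.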
## How It Works

Follow these steps to calculate the equivalent annual cash flow:

Step 1: For each alternative project, draw a timeline and calculate the net present value using Formula 15.1 and the techniques discussed in Section 15.1.

Step 2: For each alternative project, calculate the periodic interest rate using Formula 9.1 and the number of annuity payments using Formula 11.1. Then solve Formula 11.4 for (PMT).

Step 3: Compare the equivalent annual cash flow of each alternative and make the best decision.

## Important Notes

Factoring in the unequal life expectancies of projects is important for situations in which you can choose only one out of many mutually exclusive projects. This is the basis for Decision #1. With respect to the other decisions:

• There is only one timeline to consider in Decision #2 (pursuing one course of action), so the issue of unequal life expectancies does not apply.
• In Decision #3 (making multiple choices under constraints), the net present value ratio provides an adequate means of equating different timelines. You do not need the equivalent annual cash flow.

Exercise (PageIndex{1}): Give It Some Thought

For each of the following decisions where alternative projects solve the same problem and only one can be chosen (a Decision #1 situation), indicate whether the decision should be made through the comparison of (NPV) or equivalent annual cash flows.

1. Project #1 with a seven-year life; Project #2 with a seven-year life
2. Project #1 with a five-year life; Project #2 with a seven-year life

Answer

1. (NPV); each alternative has the same time frame.
2. Equivalent annual cash flow; each alternative has a different time frame.

Example (PageIndex{1}): Which Machine to Purchase when Timelines Are Unequal

Recall the earlier situation in which you can buy only one of two equal $50,000 machines. The cost of capital is 15%. Determine which machine should be purchased and the annual benefit of your choice.

Solution

Notice that you are in a Decision #1 situation and need to choose from these two alternatives. Also notice that the cost of capital is known but the timelines are different. You need to use the equivalent annual cash flow to make the decision.

Step 1:

The timelines for machine #1 and machine #2 appear below, respectively.

Machine #1: (PV) = −$50,000, (IY) = 15%, (CY) = 1, (PMT) = $20,000, (PY) = 1, (FV) = $0, Years = 4

Machine #2: (PV) = −$50,000, (IY) = 15%, (CY) = 1, (PMT) = $14,000, (PY) = 1, (FV) = $0, Years = 8

How You Will Get There

Step 1 (continued):

Each timeline represents an ordinary simple annuity. Calculate the net present value by applying Formulas 9.1, 11.1, 11.4, and Formula 15.1 to each alternative.

Step 2:

For each alternative, calculate the equivalent annual cash flow using Formula 11.4 (rearranging for (PMT)).

Step 3:

Make the decision.

Step 4:

Determine on an annual basis how much better your decision is by taking the equivalent annual cash flow of your chosen alternative and subtracting the equivalent annual cash flow of the other alternative.

Perform

Machine #1:

Step 1:

\(i = 15\%/1 = 15\%\); \(N = 1 \times 4 = 4\) payments

\[PV_{ORD}=\$20,000\left[\dfrac{1-\left[\dfrac{1}{(1+0.15)^{\frac{1}{1}}}\right]^{4}}{(1+0.15)^{\frac{1}{1}}-1}\right]=\$57,099.56725\]

\[\begin{aligned}NPV&=\$57,099.56725-\$50,000\\&=\$7,099.56725\\&\Rightarrow \$7,100\end{aligned}\]

Step 2:

\[\begin{aligned}PMT&=\dfrac{\$7,100}{\left[\dfrac{1-\left[\dfrac{1}{(1+0.15)^{\frac{1}{1}}}\right]^{4}}{(1+0.15)^{\frac{1}{1}}-1}\right]}\\&=\$2,486.883996\\&\Rightarrow \$2,487\end{aligned}\]

Machine #2:

Step 1:

\(i = 15\%/1 = 15\%\); \(N = 1 \times 8 = 8\) payments

\[PV_{ORD}=\$14,000\left[\dfrac{1-\left[\dfrac{1}{(1+0.15)^{\frac{1}{1}}}\right]^{8}}{(1+0.15)^{\frac{1}{1}}-1}\right]=\$62,822.50111\]

\[\begin{aligned}NPV&=\$62,822.50111-\$50,000\\&=\$12,822.50111\\&\Rightarrow \$12,823\end{aligned}\]

Step 2:

\[\begin{aligned}PMT&=\dfrac{\$12,823}{\left[\dfrac{1-\left[\dfrac{1}{(1+0.15)^{\frac{1}{1}}}\right]^{8}}{(1+0.15)^{\frac{1}{1}}-1}\right]}\\&=\$2,857.606699\\&\Rightarrow \$2,858\end{aligned}\]

Step 3:

The best choice is machine #2 because it has a higher equivalent annual cash flow of $2,858.

Step 4:

Annual benefit = Machine #2 − Machine #1 = $2,858 − $2,487 = $371

Calculator Instructions

Cash Flows

| Cash Flow | Machine #1 Amount ((CXX)) | Machine #1 Frequency ((FXX)) | Machine #2 Amount ((CXX)) | Machine #2 Frequency ((FXX)) |
|---|---|---|---|---|
| (CF0) | -50000 | N/A | -50000 | N/A |
| C01 & F01 | 20000 | 4 | 14000 | 8 |

(NPV)

|  | Machine #1 | Machine #2 |
|---|---|---|
| (I) | 15 | 15 |

The smart decision is to purchase machine #2 because it produces the higher equivalent annual cash flow of $2,858, which represents savings of $371 more per year than machine #1.

## Internal Rate of Return

Another method of reaching a decision when choosing whether to pursue a single course of action (Decision #2) involves percentages. While the (NPV) calculations in Section 15.1 provide an exact monetary magnitude of the project, the common mindset in business focuses on profitability as a percentage and not a dollar amount. Thus, decisions are based on the internal rate of return for a project, or IRR for short. The IRR is the annual percentage rate of return on the investment being made such that the net present value of all cash flows in a particular project equals zero.

To interpret the IRR, examine the (NPV) decision criteria and the relationship to the IRR:

1. If the net present value is greater than or equal to $0, pursue the project.
   1. If the (NPV) is more than zero, the definition of IRR requires you to find a rate of return such that your present value becomes zero. Mathematically, this means that a higher discount rate must be used to calculate your present value. In other words, the IRR is greater than the cost of capital.
   2. If the (NPV) equals zero, by definition the cost of capital and the IRR are the same value.
2. If the net present value is less than $0, do not pursue the project. The IRR requires you to find a rate of return where the present value becomes zero. Mathematically, this means that a lower discount rate must be used to calculate your present value. In other words, the IRR is less than the cost of capital.

This table summarizes how to decide whether to pursue a single course of action using the IRR method instead of the (NPV) method.

| If... | So... | Then... | Decision |
|---|---|---|---|
| (NPV>0) | IRR > Cost of Capital | It is profitable since it makes enough money to cover the costs | Pursue the project |
| (NPV=0) | IRR = Cost of Capital | It breaks even and just pays the bills | This is the minimum financial condition to pursue the project |
| (NPV<0) | IRR < Cost of Capital | It is unprofitable and does not make enough money to cover the costs | Do not pursue the project |

## The Formula

Solving for the internal rate of return requires you to calculate the annually compounded interest rate for the project. For annuities, substituting and rearranging Formula 15.1 produces:

\[NPV=\text{(Present Value of All Future Cash Flows)}-\text{(Initial Investment)}\]

\[\$0=PMT\left[\dfrac{1-\left[\dfrac{1}{(1+i)^{\frac{CY}{PY}}}\right]^{N}}{(1+i)^{\frac{CY}{PY}}-1}\right]-\text{(Initial Investment)}\]

\[\text{Initial Investment}=PMT\left[\dfrac{1-\left[\dfrac{1}{(1+i)^{\frac{CY}{PY}}}\right]^{N}}{(1+i)^{\frac{CY}{PY}}-1}\right]\]

The only algebraic method to solve this general formula for the periodic interest rate is through trial and error, which is time consuming and inefficient. The same algebraic problem exists if your cash flows consist of multiple lump-sum amounts at different points in time. Assume you have inflows of $15,000 and $10,000 at the end of years one and two, respectively. Taking Formula 15.1, you have:

\[NPV=\text{(Present Value of All Future Cash Flows)}-\text{(Initial Investment)}\]

\[\$0=\left(\$15,000/(1+i)^{1}+\$10,000/(1+i)^{2}\right)-\text{(Initial Investment)}\]

\[\text{Initial Investment}=\$15,000/(1+i)^{1}+\$10,000/(1+i)^{2}\]

It is algebraically difficult to solve this formula for the periodic interest rate.

Therefore, using the same process as in Section 11.6, you should let the BAII Plus calculator perform the trial and error and arrive at the solution. Note that Excel can also perform this calculation, and the formula is pre-programmed into the chapter template.
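The trial-and-error search the calculator performs can be sketched as a simple bisection in Python. This is an illustration, not the BAII Plus algorithm; the text leaves the initial investment in the two-inflow example unspecified, so a hypothetical $20,000 is assumed here purely for demonstration.

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Find the rate where NPV = 0 by bisection (systematic trial and error).

    cash_flows[t] is the cash flow at the end of year t; index 0 is the
    initial investment (entered as a negative number).
    """
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    # For a conventional project (outflow first, inflows after), NPV falls
    # as the discount rate rises, so bisection homes in on the single root.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid   # NPV still positive: the IRR is higher than mid
        else:
            hi = mid   # NPV negative: the IRR is lower than mid
    return (lo + hi) / 2

# Inflows of $15,000 and $10,000 from the example above; the $20,000
# initial investment is an assumption for illustration only.
rate = irr([-20_000, 15_000, 10_000])
print(f"{rate:.4%}")  # roughly 17.54%
```

Each loop iteration is one "guess" of the kind a manual trial-and-error solution would make; bisection simply organizes the guesses so the interval containing the IRR halves every time.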

## How It Works

Follow these steps to solve for the internal rate of return:

Step 1: Draw a timeline to illustrate the cash flows involved in the project.

Step 2: If using manual trial and error, set up the appropriate algebraic formula to arrive at an (NPV) of $0 and start the sequence of iterations to generate the solution. Alternatively, use technology such as the BAII Plus calculator by entering the cash flows and solving for the IRR.

Step 3: Compare the IRR to the cost of capital and make a decision.

## Important Notes

Using the IRR Function on the BAII Plus Calculator. Use the IRR function in conjunction with the CF (cash flow) function. Once you have entered all cash flows, activate the IRR function by pressing the IRR key followed by the CPT button to perform the calculation. The output is in percent format. To exit the window, press 2nd Quit. Because of the trial-and-error method required, the calculator may briefly hesitate before displaying the solution.

## Things To Watch Out For

When making decisions, use the internal rate of return only to decide whether one particular course of action should be pursued (Decision #2). The internal rate of return should not be used when making one choice from multiple options (Decision #1) or when making multiple choices under constraints (Decision #3). This rule is in place for two reasons:

1. The Cost of Capital Is Ignored. The IRR does not factor the cost of capital into its computations. Recall that the fundamental concept of time value of money requires all money to be on the same date using an appropriate rate of interest, the cost of capital, before any decision can be made. Therefore, if you have not factored in the cost of capital, your analysis is incomplete, and choosing between different alternatives based solely on the IRR is flawed.
2. The Magnitude of the Decision Is Ignored. It can easily happen that an alternative has a high IRR but a low (NPV). For example, using a cost of capital of 10%, consider two alternatives. Alternative A invests $1 and one year later returns $1.50. The IRR is 50%, while the (NPV) is $0.36. Alternative B invests $1,000 and one year later returns $1,250. The IRR is 25%, while the (NPV) is $136.36. If you choose between these two options based on the IRR, you select Alternative A, resulting in a net present value that is $136 lower than for Alternative B.

Give It Some Thought

In each of the following situations, determine whether the project should be pursued or not.

1. Cost of capital = 15%; IRR = 17%
2. Cost of capital = 12%; IRR = 9%
3. Cost of capital = 14%; IRR = 14%

Answer

1. Pursue it; IRR > cost of capital.
2. Do not pursue it; IRR < cost of capital.
3. Minimum condition to pursue it; IRR = cost of capital (breaks even).

Example (PageIndex{2}): Pursuing a Project Using the IRR Criterion

Tim Hortons has purchased the lease on a three-year onsite concession space in the cafeteria at a local college for $750,000. The franchise is expected to earn $400,000, $500,000, and $600,000 in profits per year for the first three years, respectively.

1. What is the investment's internal rate of return?
2. If the cost of capital is 20%, did Tim Hortons make a smart financial decision?

Solution

You need to calculate the internal rate of return (IRR) for this project. Once calculated, you can compare it to the provided cost of capital to evaluate the decision.

Step 1:

The timeline for this project appears below.

(PV) = −$750,000, (CY) = 1

(C01) = $400,000, Years = 1
(C02) = $500,000, Years = 2
(C03) = $600,000, Years = 3

How You Will Get There

Step 2:

This project involves multiple lump-sum cash flows, so apply Formulas 9.2 and 9.3 (rearranged for (PV)). Substitute into the rearranged Formula 15.1. Algebraically, this must be solved through trial and error for i (note that (i = IY) since the compounding frequency is 1). Alternatively, use the cash flow and internal rate of return function on your calculator.

Step 3:

Compare the IRR to the cost of capital to make the decision.

Perform

Step 2:

Cash Flow 1: \(N = 1 \times 1 = 1\) compound; \(PV = \$400,000 \div (1+i)^{1}\)
Cash Flow 2: \(N = 1 \times 2 = 2\) compounds; \(PV = \$500,000 \div (1+i)^{2}\)
Cash Flow 3: \(N = 1 \times 3 = 3\) compounds; \(PV = \$600,000 \div (1+i)^{3}\)

\[\$750,000 = \$400,000 \div (1+i)^{1} + \$500,000 \div (1+i)^{2} + \$600,000 \div (1+i)^{3}\]

Through trial and error or by using the calculator (see instructions below), the calculated solution is: (IY) = 40.9235%

Step 3:

40.9235% > 20% ==> smart decision

Calculator Instructions

| Cash Flow | Amount ((CXX)) | Frequency ((FXX)) |
|---|---|---|
| CF0 | -750000 | N/A |
| C01 & F01 | 400000 | 1 |
| C02 & F02 | 500000 | 1 |
| C03 & F03 | 600000 | 1 |

IRR

Since the internal rate of return on this project is 40.9235%, which far exceeds the cost of capital of 20%, Tim Hortons made a very smart financial decision in pursuing this project.
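The reported IRR can be sanity-checked directly from its definition: discounting the cash flows at 40.9235% should drive the (NPV) to (approximately) zero, while discounting at the 20% cost of capital should leave a healthy positive (NPV). A small Python sketch of that check:

```python
flows = [-750_000, 400_000, 500_000, 600_000]  # index = end of year

# NPV at the reported IRR of 40.9235%: by definition, close to zero
# (a few dollars off due to rounding the rate to four decimal places).
npv_at_irr = sum(cf / (1 + 0.409235) ** t for t, cf in enumerate(flows))
print(round(npv_at_irr, 2))

# NPV at the 20% cost of capital: positive, confirming the decision.
npv_at_cost = sum(cf / (1 + 0.20) ** t for t, cf in enumerate(flows))
print(round(npv_at_cost, 2))  # approximately 277777.78
```

Both conditions (NPV zero at the IRR, NPV positive at the cost of capital) are two views of the same conclusion: the IRR exceeds the cost of capital, so the project is worth pursuing.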

## Mathematical Modeling and Models for Optimal Decision-Making in Health Care

Many aspects of healthcare system management are quantitative. The amount of data within health care increases by the minute, which makes it difficult for healthcare systems to identify insights into what is most valuable for patients. Data-driven approaches to health (or health economic) outcome assessment, artificial intelligence, and mathematical, computational, methodological, and technological advances are at the core of effective healthcare system management [1–3].

Modeling in medicine is a valuable tool in the planning and evaluation of interventions, especially when a clinical trial is ethically or logistically impossible [4, 5]. The development of mathematical models to simulate medical outcomes is a growing area of medicine. Mathematical modeling is known by various names, such as predictive modeling, simulation, and decision analysis. In general, modeling techniques are used for health service planning, effectiveness and outcome assessment, healthcare financing and budget impact assessment, health economic assessments, infectious disease surveillance, prediction of health service outcomes, and other applications in health care. Mathematical modeling is also helpful when limitations such as the rarity of an event prohibit randomized controlled trials (RCTs) and similar studies, or when research on actual patients cannot be expanded due to time, ethical, legal, financial, technical, or other constraints [6, 7].

With this special issue, we add to the literature by providing case studies and practical examples of mathematical modeling and models for optimal decision-making in health care. We aim to address questions from data analytics, solving problems in predicting outcomes for clinical medicine and public health.

Blood pressure (BP) is one of the indispensable elements of physiological health characteristics and a significant indicator for predicting and diagnosing hypertension and cardiovascular diseases. Q. Wang et al. proposed a two-domain fusion model to estimate BP continuously from pulse wave acquired with a pressure sensor. In more detail, the optimal external pressure applied on the pressure sensor was first determined in order to capture pulse wave in the radial artery. The captured pulse wave was then processed in both the time and frequency domains via filtering and fast Fourier transform. A set of features were extracted from these two domains and input into a neural network along with blood pressure values measured by a commercial sphygmomanometer for training. Finally, the model was tested on new data for accuracy evaluation, and the proposed two-domain fusion method achieved a high degree of accuracy in measuring blood pressure.

H. Shang et al. proposed an improvement for ECG analysis, namely, improved sliding window area method for T wave detection. It allows better detection of T wave onset and offset, which allows improving clinical diagnosis as well as daily heart monitoring.

A. K. Heikhmakhtiar and K. M. Lim proposed computational prediction of the combined effect of CRT and LVAD on cardiac electromechanical delay in the failing ventricle with left bundle branch blocked (LBBB) and right bundle branch blocked (RBBB) conditions. The subjects were normal sinus rhythm, LBBB, RBBB, LBBB with CRT-only, RBBB with CRT-only, LBBB with CRT + LVAD, and RBBB with CRT + LVAD. The results showed that the CRT-only shortened the total electrical activation time (EAT) in the LBBB and RBBB conditions by 20.2% and 17.1%, respectively. The CRT-only reduced the total mechanical activation time (MAT) and electromechanical delay (EMD) of the ventricle under LBBB by 21.3% and 10.1%, respectively. Furthermore, the CRT-only reduced the contractile adenosine triphosphate (ATP) consumption by 5%, increased left ventricular (LV) pressure by 6%, and enhanced cardiac output (CO) by 0.2 L/min under LBBB condition. However, CRT-only barely affected the ventricle under RBBB condition. Under the LBBB condition, CRT + LVAD increased LV pressure and CO by 10.5% and 0.9 L/min, respectively. CRT + LVAD reduced ATP consumption by 15%, shortened the MAT by 23.4%, and shortened the EMD by 15.2%. In conclusion, they computationally predicted and quantified that the CRT + LVAD implementation is superior to CRT-only implementation particularly in HF with LBBB condition.

Magnetic resonance imaging (MRI) has been widely used in diagnostic imaging for general checkups in clinical practice, especially in the detection and diagnosis of brain diseases. However, brain MR imaging has shortcomings such as noise, intensity inhomogeneity, low contrast, and the partial volume effect, which pose serious obstacles to segmenting brain MR images. The study of J. Song and Z. Zhang presented a novel method that is more robust to noise in brain magnetic resonance imaging, together with a more effective estimation method for the bias field.

Automatic segmentation of different images is one of the most important topics in medicine. L. Cao et al. discuss the application of Random Forest Stacks for automatic segmentation of pathological glomerular basement membranes in TEM images. It allows faster observation of morphological changes, reducing the manual and laborious work of specialists. Another exercise in image processing is discussed by J. Song and Z. Zhang: improvements for brain tissue segmentation and bias field correction of MR images are proposed and evaluated. The results are promising and can potentially deal with noise in brain MR images.

In conventional radiotracer and drug development, poor bench-to-bedside translation often results from the differences between in vitro and in vivo conditions [8]. The study by Y.-H. Nai and H. Watabe evaluated the feasibility of extending the amyloid-validated screening methodology to support the development of tau PET radiotracers, where additional challenges such as off-target binding exist. This is the first in silico method investigated that uses the physicochemical and pharmacological properties of compounds to support tau PET radiotracer development. Twenty-two PET radiotracers reported to bind to tau proteins were investigated, including 9 clinically applied and tau-focused radiotracers. The study supported the use of the screening methodology in radiotracer development by allowing comparison of candidate radiotracers with clinically applied radiotracers based on SUVR, with respect to binding to a single target, and it provides some insights to guide the development of in silico models supporting the development of tau radiotracers.

Automatic identification of relevant biomarkers is one of the important steps towards personalized treatment. B. Haller et al. evaluate the applicability of a number of methods for biomarker identification, for example, Cox regression with linear interaction, Multivariable Fractional Polynomials for Interaction (MFPI), Local Partial Likelihood Bootstrap (LPLB), and the Subpopulation Treatment Effect Pattern Plot (STEPP). Experiments on randomized clinical trials show that Cox regression works best when interactions are monotonic and the number of events is low. When complexity increases, MFPI and LPLB outperform the other methods. The authors recommend applying statistical methods developed for assessing interactions between continuous biomarkers and treatment instead of arbitrary or data-driven categorization of continuous covariates.

The study of D. Liu et al. applies similarity measures of single-valued and interval-valued neutrosophic sets, based on Euclidean distance, to diagnostics. A novel theoretical model is developed in the paper, and its effectiveness is demonstrated on a generalized diagnosis, showing that it performs well in solving a multiple criteria decision process. The proposed similarity measures were applied to medical diagnosis decision problems, and a number of examples were used to illustrate their feasibility and effectiveness.

Coreference resolution is a challenging part of natural language processing (NLP) with applications in machine translation, semantic search, and other information retrieval and decision support systems. V. Žitkus et al. presented a method for coreference resolution in the Lithuanian language and its application for processing e-health records from a hospital reception. The novelty of their proposed method is the ability to process coreferences with minimal linguistic resources, which is important in linguistic applications for rare and endangered languages. Their experimental results have shown that coreference resolution is applicable to the development of NLP-powered online healthcare services in Lithuania.

Computer-aided models for mammographic breast cancer diagnosis (MBCD) have been explored for over thirty years [9]. The study of L. Zou et al. is dedicated to the technique of convolutional neural networks (CNNs) applied to MBCD, and it aims to provide clues on how to use CNNs for intelligent diagnosis. The study is restricted to peer-reviewed journal publications so that the technical details and the pros and cons of each model can be presented. Furthermore, based on how CNN techniques are used, the MBCD models are broadly categorized into three groups: the first designs shallow models or modifies existing models to decrease the time cost and the number of medical instances needed for training; the second makes the best use of a pretrained CNN model through transfer learning and parameter fine-tuning; and the third takes advantage of CNN models for feature extraction, while the differentiation between malignant and benign lesions is based on machine learning classifiers. Finally, findings, challenges, and limitations are summarized, and some clues for future work are given. At present, the design and use of CNN-based MBCD is at an early stage and result-oriented. The ultimate goal of using deep learning tools is to facilitate clinical practice. This review provides benefits to scientific researchers, industrial engineers, and those devoted to intelligent cancer diagnosis.

The past application of mathematical models in medicine has also proven useful for cardiovascular diseases (CVDs). The study of O. Saidi et al. aimed to describe a comprehensive Markov model, based on both a probabilistic multivariate approach and simple linear regression metamodeling, and to use the model to evaluate the effects of increased uptake of stroke treatments, lifestyle changes, and primary prevention among the Tunisian population aged 35–94 years in 2025. It examined three interventions: improved medical treatments in the acute phase, secondary prevention of stroke by increased prescribing of statins, and primary prevention aimed at reducing salt intake.

Type-1 diabetes is a condition caused by the lack of the insulin hormone, which leads to an excessive increase in blood glucose level. The glucose kinetics process is difficult to control due to its complex and nonlinear nature and its state variables that are difficult to measure. P. D. Ngo et al. proposed a method for automatically calculating the basal and bolus insulin doses for patients with type-1 diabetes using reinforcement learning with a feedforward controller. The proposed controller also improved the blood glucose responses and prevented hypoglycemia. Simulation of the control system under different uncertain conditions provided insights into how inaccuracies in carbohydrate counting and meal-time reporting affect the performance of the control system. In conclusion, the proposed controller is an effective tool for reducing postmeal blood glucose rise, countering the effects of known external events such as meal intake, and maintaining blood glucose at a healthy level under uncertainty.

In the paper by B. H. Lichae et al., a fractional-order differential model of HIV-1 infection of CD4+ T-cells with the effect of drug therapy is introduced. There are three components: uninfected CD4+ T-cells, x; infected CD4+ T-cells, y; and the density of virions in plasma, z. The aim is to obtain a numerical solution of this fractional-order HIV-1 model by the Laplace Adomian decomposition method (LADM), with fractional derivatives taken in the Caputo sense. The solution of the proposed model is obtained in series form. Moreover, to illustrate the ability and efficiency of the proposed approach, the solution is compared with the solutions of some other numerical methods.

Beds are key, scarce medical resources in hospitals. The study of L. Luo et al. aimed to balance the utilization of existing beds in a large tertiary hospital in China. The authors developed a data-driven hybrid three-stage framework incorporating data analysis, simulation, and mixed integer programming to minimize the gaps in bed occupancy rates (BOR) among different departments. The first stage calculates the length of stay (LOS) and BOR of each department and identifies the departments that need bed reallocation. The second stage uses a fitted arrival distribution and the median LOS as inputs to a generic simulation model. The third stage builds a mixed integer programming model using the results obtained in the first two stages to generate the optimal bed allocation strategy for the different departments. The case study demonstrated the effectiveness of the proposed framework and provides hospital bed policy makers with a feasible solution for bed allocation.

Mathematical models are often used and prove their applicability for optimal decision-making. They are also useful to derive estimates of rare or future events from recorded intermediate points. When developing models, decisions are needed about the appropriate level of complexity to be represented and about model structure and assumptions.

#### Conflicts of Interest

The editors declare no conflicts of interest.

#### Acknowledgments

We would like to express our gratitude to Prof. Katarzyna Kolasa for participating in the preparation of the call materials and in guest editing some of the manuscripts. We would also like to thank all the researchers who submitted their work and the reviewers whose comments and suggestions helped the authors improve their manuscripts.

Giedrius Vanagas
Tomas Krilavičius
Ka Lok Man

#### References

1. A. X. Costa, S. A. Ridley, A. K. Shahani, P. R. Harper, V. De Senna, and M. S. Nielsen, “Mathematical modelling and simulation for planning critical care capacity,” Anaesthesia, vol. 58, no. 4, pp. 320–327, 2003.
2. R. B. Chambers, “The role of mathematical modeling in medical research: “research without patients?”,” The Ochsner Journal, vol. 2, no. 4, pp. 218–223, 2000.
3. M. Calder, C. Craig, D. Culley et al., “Computational modelling for decision-making: where, why, what, who and how,” Royal Society Open Science, vol. 5, no. 6, Article ID 172096, 2018.
4. G. P. Garnett, S. Cousens, T. B. Hallett, R. Steketee, and N. Walker, “Mathematical models in the evaluation of health programmes,” The Lancet, vol. 378, no. 9790, pp. 515–525, 2011.
5. W. Crown, N. Buyukkaramikli, M. Y. Sir et al., “Application of constrained optimization methods in health services research: report 2 of the ISPOR Optimization Methods Emerging Good Practices Task Force,” Value in Health, vol. 21, no. 9, pp. 1019–1028, 2018.
6. Y. Zheng, F. Pan, and S. Sorensen, “Modeling treatment sequences in pharmacoeconomic models,” PharmacoEconomics, vol. 35, no. 1, pp. 15–24, 2017.
7. L. Echazu and D. Nocetti, “Priority setting in health care: disentangling risk aversion from inequality aversion,” Health Economics, vol. 22, no. 6, pp. 730–740, 2013.
8. Y.-H. Nai, M. Shidahara, C. Seki, and H. Watabe, “Biomathematical screening of amyloid radiotracers with clinical usefulness index,” Alzheimer’s & Dementia: Translational Research & Clinical Interventions, vol. 3, no. 4, pp. 542–552, 2017.
9. N. I. R. Yassin, S. Omran, E. M. F. El Houby, and H. Allam, “Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: a systematic review,” Computer Methods and Programs in Biomedicine, vol. 156, pp. 25–45, 2018.



## High School: Statistics and Probability

One of the main reasons for collecting data is so it can be compared to other data. Sounds like a dream come true, doesn't it?

Comparing data allows us to make big statements. After all, how can we be sure that using Shmoop increases test scores if we don't have two sets of data, pre-Shmoop and post-Shmoop, to compare?

Rather than comparing entire data sets, however, we can summarize the data and compare these summaries. That way, rather than comparing long and seemingly never-ending lists, we can compare two very basic factors that tell us a lot about the data: the center and spread of the data.

The center of the data is exactly what it sounds like: a representation of the middle of the data, or a typical value. It gives us a good first guess as to where on the number line the data will fall. Students should know the two types of centers of data: mean and median. The mean, or average, is the sum of all the data points divided by the number of data points, while the median is the middle value that splits the ordered data into two equal halves.
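A quick way to see the difference between the two centers is to compute both on a small data set with Python's `statistics` module (the numbers here are invented for illustration):

```python
import statistics

# A hypothetical data set: four typical values plus one outlier
data = [1, 2, 2, 3, 100]

mean = statistics.mean(data)      # (1 + 2 + 2 + 3 + 100) / 5 = 21.6
median = statistics.median(data)  # middle of the sorted list: 2
```

The outlier drags the mean far above the typical value while the median stays put, which is one reason the median is often the better "typical value" for skewed data.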

Students should know that the center of data can give us a good sense of the data set overall. For instance, we'll know that the heights of buildings are more closely represented by an average of 100 feet than by an average of 100,000 feet. Still, the center of data doesn't tell us the whole story. Let's say we have the following two sets of data:

Set 1: 4, 5, 6, 4, 6, 5
Set 2: 1, 9, 2, 8, 0, 10

Both of these data sets have an average of 5, but the first set only has values between 4 and 6, while the second data set has values between 0 and 10, a much wider range. This "wideness" or "breadth" of the data is represented by the spread of the data, and that's the second aspect students should consider when summarizing data.

Students should know how to use the interquartile range and standard deviation to describe the spread of data. The interquartile range (IQR) is the range that spans the middle fifty percent of the data. To determine the IQR, the lower quartile (Q1) and upper quartile (Q3) need to be determined. Once that is done, IQR = Q3 − Q1.
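As a sketch, the quartiles and IQR of the wider data set above (Set 2) can be computed with Python's `statistics` module; the `inclusive` method interpolates linearly between data points, which is one common convention (NumPy's default behaves the same way):

```python
import statistics

data = [1, 9, 2, 8, 0, 10]  # Set 2 from the example above

# quantiles(n=4) returns the three cut points [Q1, Q2, Q3]
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1  # the range spanning the middle fifty percent
```

Note that different quartile conventions ("exclusive" vs. "inclusive") give slightly different numbers on small data sets, so students comparing answers should agree on a method first.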

The standard deviation, denoted by σ, measures how far the data spread away from the mean. If you could simultaneously move away from the mean in both directions, then when you had traveled the distance of the standard deviation in each direction, 68% of the data would lie between you and your clone (in a normal distribution, anyway).

Students should know that the mean has the formula

$$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

and the standard deviation has the formula

$$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2}$$

Often, the mean and standard deviation are used together, and the median and interquartile range are used together. Students should know that the mean and standard deviation are most frequently used when the distribution of data follows a bell curve (normal distribution).

Students should understand that the larger the values of the IQR or standard deviation, the larger the spread of the data is. If students are struggling with why this is so, show them mathematically using the formulas (since the quartiles are further apart, or the differences between the data points and the mean are further apart). Now, rather than comparing tables of dozens or even hundreds of numbers, we just need to compare two.
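The two data sets from earlier make the point concrete; a minimal sketch using Python's `statistics` module:

```python
import statistics

set1 = [4, 5, 6, 4, 6, 5]
set2 = [1, 9, 2, 8, 0, 10]

# Both sets have the same center...
mean1 = statistics.mean(set1)  # 5
mean2 = statistics.mean(set2)  # 5

# ...but very different spreads (population standard deviation)
spread1 = statistics.pstdev(set1)  # about 0.82
spread2 = statistics.pstdev(set2)  # about 4.08
```

Identical centers, spreads differing by a factor of five: exactly the comparison of "just two numbers" the text describes.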


## Key Points

Decision Matrix Analysis helps you to decide between several options, where you need to take many different factors into account.

To use the tool, lay out your options as rows on a table. Set up the columns to show the factors you need to consider. Score each choice for each factor using numbers from 0 (poor) to 5 (very good), and then allocate weights to show the importance of each of these factors.

Multiply each score by the weight of the factor, to show its contribution to the overall selection. Finally add up the total scores for each option. The highest scoring option will be the best option.
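The scoring procedure above is easy to sketch in code. The options, factors, scores, and weights below are invented purely for illustration:

```python
# Scores from 0 (poor) to 5 (very good), invented for illustration
options = {
    "Supplier A": {"cost": 4, "quality": 2, "delivery": 3},
    "Supplier B": {"cost": 2, "quality": 5, "delivery": 4},
}

# Weights reflecting how important each factor is to the decision
weights = {"cost": 5, "quality": 3, "delivery": 2}

def weighted_total(scores, weights):
    # Multiply each factor's score by its weight, then add them up
    return sum(scores[factor] * weights[factor] for factor in weights)

totals = {name: weighted_total(scores, weights) for name, scores in options.items()}
best = max(totals, key=totals.get)  # the highest-scoring option wins
```

In this made-up example Supplier A scores 4×5 + 2×3 + 3×2 = 32 and Supplier B scores 2×5 + 5×3 + 4×2 = 33, so B edges ahead once quality is weighted in.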

Decision Matrix Analysis is the simplest form of Multiple Criteria Decision Analysis (MCDA), also known as Multiple Criteria Decision Aid or Multiple Criteria Decision Management (MCDM). Sophisticated MCDA can involve highly complex modeling of different potential scenarios, using advanced mathematics.

A lot of business decision making, however, is based on approximate or subjective data. Where this is the case, Decision Matrix Analysis may be all that's needed.


In the current context of the electronic governance of society, both administrations and citizens demand greater participation by all of the actors involved in decision-making about the governance of society. In addition, design, planning, and operations management rely on mathematical models whose complexity depends on the level of detail of the models and on the complexity and characteristics of the problems they represent. Unfortunately, human decision-making is often suboptimal in ways that can be reliably predicted. Furthermore, the process industry seeks to minimize not only cost but also adverse environmental and social impacts. To respond appropriately to these new challenges, the decision-making process can apply different methods and tools and pursue different objectives. In real-life problems, formulating decision-making problems and applying optimization techniques to support decisions are particularly complex, and a wide range of optimization techniques and methodologies are used to minimize risk or improve quality in the resulting decisions. In addition, a sensitivity analysis should be performed to validate and analyze the influence of uncertainty on decision-making.

Prof. Víctor Yepes
Prof. José M. Moreno-Jiménez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

## Abstract

The concept of teacher noticing provides a powerful methodological tool for understanding teachers' decision-making and professional judgment. In this line of research, researchers typically try to identify which aspects of classroom practice are salient to teachers. Data on teacher noticing and teacher decision-making can be collected at scale using scenario-based instruments; however, the literature regards these instruments as inherently ambiguous insofar as they do not provide all of the information needed to make a decision. To help connect teacher noticing with teachers' professional judgment, we characterize this ambiguity by introducing the concept of "conditional interpretations": moments when teachers need more context to determine whether a pedagogical action is required. By identifying linguistic markers in a large corpus of responses concerning scenario-relevant aspects, we uncover such conditional interpretations. Locating these moments allows us to analyze the type of reasoning (analytic or intuitive) these scenarios evoke and gives us insight into the information teachers need in order to make their decisions.

## Decision Making: Factors that Influence Decision Making, Heuristics Used, and Decision Outcomes

Every day, people are inundated with decisions, big and small. Understanding how people arrive at their choices is an area of cognitive psychology that has received attention. Theories have been generated to explain how people make decisions, and what types of factors influence decision making in the present and future. In addition, heuristics have been researched to understand the decision making process.

Several factors influence decision making. These factors, including past experience (Juliusson, Karlsson, & Gärling, 2005), cognitive biases (Stanovich & West, 2008), age and individual differences (de Bruin, Parker, & Fischoff, 2007), belief in personal relevance (Acevedo & Krueger, 2004), and an escalation of commitment, influence what choices people make. Understanding the factors that influence the decision making process is important to understanding what decisions are made. That is, the factors that influence the process may impact the outcomes.

Heuristics serve as a framework in which satisfactory decisions are made quickly and with ease (Shah & Oppenheimer, 2008). Many types of heuristics have been developed to explain the decision making process; essentially, individuals work to reduce the effort they need to expend in making decisions, and heuristics offer them a general guide to follow, thereby reducing that effort. Together, heuristics and the factors influencing decision making are a significant aspect of critical thinking (West, Toplak, & Stanovich, 2008). There is some indication that this can be taught, which benefits those learning how to make appropriate decisions in various situations (Nokes & Hacker, 2007).

People make decisions about many things. They make political decisions; personal decisions, including medical choices; romantic decisions; and career and financial decisions, which may also include some of the other kinds of decisions and judgments. Quite often, the decision making process is fairly specific to the decision being made. Some choices are simple and seem straightforward, while others are complex and require a multi-step approach.

The present paper will address decision making in the context of the types of decisions people make, the factors that influence decision making, and several heuristics commonly researched and utilized in the process of decision making. Further, the paper will explore what happens after the decision is made, as well as how present decisions impact future behavior and decision making. Finally, summary comments will be offered, with implications for future research and practical application of teaching decision making skills to teens.

### Factors that Influence Decision Making

There are several important factors that influence decision making. Significant factors include past experiences, a variety of cognitive biases, an escalation of commitment and sunk outcomes, individual differences, including age and socioeconomic status, and a belief in personal relevance. These things all impact the decision making process and the decisions made.

Past experiences can impact future decision making. Juliusson, Karlsson, and Gärling (2005) indicated that past decisions influence the decisions people make in the future. It stands to reason that when something positive results from a decision, people are more likely to decide in a similar way, given a similar situation. On the other hand, people tend to avoid repeating past mistakes (Sagi & Friedland, 2007). This is significant to the extent that future decisions made based on past experiences are not necessarily the best decisions. In financial decision making, highly successful people do not make investment decisions based on past sunk outcomes, but rather by examining choices with no regard for past experiences; this approach conflicts with what one may expect (Juliusson et al., 2005).

In addition to past experiences, there are several cognitive biases that influence decision making. Cognitive biases are thinking patterns based on observations and generalizations that may lead to memory errors, inaccurate judgments, and faulty logic (Evans, Barston, & Pollard, 1983; West, Toplak, & Stanovich, 2008). Cognitive biases include, but are not limited to: belief bias, the overdependence on prior knowledge in arriving at decisions; hindsight bias, the tendency to readily explain an event as inevitable once it has happened; omission bias, the propensity to omit information perceived as risky; and confirmation bias, in which people observe what they expect in observations (Marsh & Hanlon, 2007; Nestler & von Collani, 2008; Stanovich & West, 2008; see also West et al., 2008).

In decision making, cognitive biases influence people by causing them to over-rely on, or lend more credence to, expected observations and previous knowledge, while dismissing information or observations that are perceived as uncertain, without looking at the bigger picture. While this influence may sometimes lead to poor decisions, cognitive biases also enable individuals to make efficient decisions with the assistance of heuristics (Shah & Oppenheimer, 2008).

In addition to past experiences and cognitive biases, decision making may be influenced by an escalation of commitment and sunk outcomes, which are unrecoverable costs. Juliusson, Karlsson, and Gärling (2005) concluded that people make decisions based on an irrational escalation of commitment; that is, individuals invest larger amounts of time, money, and effort into a decision to which they feel committed. Further, people will tend to continue to make risky decisions when they feel responsible for the sunk costs, that is, the time, money, and effort already spent on a project. As a result, decision making may at times be influenced by 'how far in the hole' the individual feels he or she is (Juliusson et al., 2005).

Some individual differences may also influence decision making. Research has indicated that age, socioeconomic status (SES), and cognitive abilities influence decision making (de Bruin, Parker, & Fischoff, 2007; Finucane, Mertz, Slovic, & Schmidt, 2005). Finucane et al. established a significant difference in decision making across age; that is, as cognitive functions decline as a result of age, decision making performance may decline as well. In addition, older people may be more overconfident regarding their ability to make decisions, which inhibits their ability to apply strategies (de Bruin et al., 2007). Finally, with respect to age, there is evidence to support the notion that older adults prefer fewer choices than younger adults (Reed, Mikels, & Simon, 2008).

Age is only one individual difference that influences decision making. According to de Bruin et al. (2007), people in lower SES groups may have less access to education and resources, which may make them more susceptible to experiencing negative life events, often beyond their control; as a result, low-SES individuals may make poorer decisions based on past decisions.

Over and above past experiences, cognitive biases, and individual differences, another influence on decision making is the belief in personal relevance. When people believe what they decide matters, they are more likely to make a decision. Acevedo and Krueger (2004) examined individuals' voting patterns and concluded that people will vote more readily when they believe their opinion is indicative of the attitudes of the general population, as well as when they have a regard for their own importance in the outcomes. People vote when they believe their vote counts. Acevedo and Krueger pointed out that this voting phenomenon is ironic: the more people who vote, the less each individual vote counts in electoral math.

## Measuring What Counts: A Conceptual Guide for Mathematics Assessment (1993)

Whether a mathematics assessment comprises a system of examinations or only a single task, it should be evaluated against the educational principles of content, learning, and equity. At first glance, these educational principles may seem to be at odds with traditional technical and practical principles that have been used to evaluate the merits of tests and other assessments. In recent years, however, the measurement community has been moving toward a view of assessment that is not antithetical to the positions espoused in this volume. Rather than view the principles of content, learning, and equity as a radical break from past psychometric tradition, it is more accurate to view them as gradually evolving from earlier ideas.

Issues of how to evaluate educational assessments have often been discussed under the heading of "validity theory." Validity has been characterized as "an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment." 1 In other words, an assessment is not valid in and of itself; its validity depends on how it is interpreted and used. Validity is a judgment based on evidence from the assessment and on some rationale for making decisions using that evidence.

Validity is the keystone in the evaluation of an assessment. Unfortunately, it has sometimes been swept aside by other technical matters, such as reliability and objectivity. Often it has been thought of in narrow terms ("Does this assessment rank students in the same way as another one that people consider accurate?"). Today, validity is being reconceived more broadly and given greater emphasis in discussions of assessment. 2 Under this broader conception,

validity theory can provide much of the technical machinery for determining whether the educational principles are met by a mathematics assessment. One can create a rough correspondence between the content principle and content validity, 3 between the learning principle and consequential or systemic validity, 4 and between the equity principle and criteria of fairness and accessibility that have been addressed by Silver and Lane. 5

Although every mathematics assessment should meet the three principles of content, learning, and equity, that alone cannot guarantee a high-quality assessment. Technical considerations, including generalizability, evidence, and costs, still have a place. The educational principles are primary and essential but they are not sufficient.

### THE CONTENT PRINCIPLE

The contexts in which assessment tasks are administered and the interpretations students make of them are critical in judging the significance of the content.

What is the mathematical content of the assessment?

What mathematical processes are involved in responding?

Applying the content principle to a mathematics assessment means judging how well it reflects the mathematics that is most important for students to learn. The judgments are similar to early notions of content validity that were limited to asking about the representativeness and relevance of test content. The difference lies in a greater concern today for the quality of the mathematics reflected in the assessment tasks and in the responses to them.

Procedures for evaluating the appropriateness of assessment content are well developed and widely used. Most rely heavily on expert judgment. Judges are asked how well the design of the assessment as a whole captures the content to be measured and how well the individual tasks reflect the design. The two sets of judgments determine whether the tasks sufficiently represent the intended content.

New issues arise when the content principle is applied:

the nature of the important mathematics content leads to some types of tasks that have not been common in educational assessment,

the emphasis on thinking processes leads to new forms of student performance, and

the characteristics of today's important mathematics lead to a broader view of curricular relevance.

Because mathematics has been stereotyped as cut and dried, some assessment designers have assumed that creating high-quality mathematics tasks is simple and straightforward. That assumption is false. Because mathematics relies on precise reasoning, errors easily creep into the words, figures, and symbols in which assessment tasks are expressed.

The Ironing Board

The diagram shows the side of an ironing board.

Use the information in the diagram to calculate the angle x°. Give your answer to the nearest degree.

Difficulties arise when attempts are made to put mathematics into realistic settings. The setting may be so unfamiliar that students cannot see mathematics in it. Or, the designer of the task may have strained too hard to make the mathematics applicable, ending up with an artificial reality, as in the example above. 7 As a practical matter, the angle between

the legs of the ironing board is not nearly so important as the height of the board. As Swan notes, 8 the mathematical content is not incorrect, but mathematics is being misused in this task. A task designer who wants to claim the situation is realistic should pose a genuine question: Where should the stops be put under the board so that it will be convenient for people of different heights?

The thinking processes students are expected to use are as important as the content of the assessment tasks.

The thinking processes students are expected to use in an assessment are as important as the content of the tasks. The process dimension of mathematics has not merited sufficient attention in evaluations of traditional multiple-choice tests. The key issue is whether the assessment tasks actually call for students to use the kind of intellectual processes required to demonstrate mathematical power: reasoning, problem solving, communicating, making connections, and so on. This kind of judgment becomes especially important as interesting tasks are developed that may have the veneer of mathematics but can be completed without students' ever engaging in serious mathematical thinking.

To judge the adequacy of the thinking processes used in an assessment requires methods of analyzing tasks to reflect the steps that contribute to successful performance. Researchers at the Learning Research and Development Center (LRDC) at the University of Pittsburgh and the Center for Research, Evaluation, Standards, and Student Testing (CRESST) at the University of California at Los Angeles are beginning to explore techniques for identifying the cognitive requirements of performance tasks and other kinds of open-ended assessments in hands-on science and in history. 9

Mixing Paint

To paint a bathroom, a painter needs 2 gallons of light blue paint mixed in a proportion of 4 parts white to 3 parts blue. From a previous job, she has 1 gallon of a darker blue paint mixed in the proportion of 1 part white to 2 parts blue. How many quarts of white paint and how many quarts of blue paint (1 gallon = 4 quarts) must the painter buy to be able to mix the old and the new paint together to achieve the desired shade? How much white paint must be added and how much blue paint?

Discuss in detail how to model this problem, and then use your model to solve it.

The analysis of task demands, however, is not sufficient. The question of what processes students actually use in tackling the tasks must also be addressed. For example, could a particular problem designed to assess proportional reasoning be solved satisfactorily by using less sophisticated operations and knowledge? A problem on mixing paint, described at left, was written by a mathematics teacher to get at high-level understanding of proportions and to be approachable in a variety of ways. Does it measure what was intended?

Such questions can be answered by having experts in mathematics education and in cognitive science review tasks and evaluate student responses to provide information about the cognitive processes used. (In the mixing paint example, there are solutions to the problem that involve computation with complicated fractions more than proportional reasoning, so that a student who finds a solution has not necessarily used the cognitive processes that were intended by the task developer.) Students' responses to the task, including what they say when they think aloud as they work, can suggest what those processes might be. Students can be given part of a task to work on, and their reactions can be used to construct a picture of their thinking on the task. Students also can be interviewed after an assessment to detect what they were thinking as they worked on it. Their written work and videotapes of their activity can be used to prompt their recollections.
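As an aside, the mixing-paint task can indeed be modeled as two linear equations and ground through with exact fraction arithmetic rather than proportional reasoning per se. The sketch below is one possible model, not the task designer's intended solution path; the variable names are invented here:

```python
from fractions import Fraction as F

# On hand: 1 gallon = 4 quarts, mixed 1 part white : 2 parts blue
white_old, blue_old = F(4, 3), F(8, 3)

# Needed: 2 gallons = 8 quarts, mixed 4 parts white : 3 parts blue.
# Let w and b be the quarts of white and blue paint bought. Then:
#   volume:  w + b = 8 - 4 = 4
#   shade:   (white_old + w) / (blue_old + b) = 4/3
# Substituting b = 4 - w into the shade equation and solving for w:
w = (F(4, 3) * (blue_old + 4) - white_old) / (1 + F(4, 3))
b = 4 - w  # w = 68/21 quarts of white, b = 16/21 quarts of blue
```

The awkward twenty-firsts in the answer illustrate the point above: a student can reach the correct result through fraction manipulation without ever engaging the proportional reasoning the task was meant to assess.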

None of these approaches alone can convey a complete picture of the student's internal processes, but together they can help clarify the extent to which an assessment taps the kinds of mathematical thinking that designers have targeted with various tasks. Researchers are beginning to examine the structure of complex performance assessments in mathematics, but few studies have appeared so far in which labor-intensive tasks such as projects and investigations are used. Researchers at LRDC, CRESST, and elsewhere are working to develop guidelines for gauging whether appropriate cognitive skills are being engaged by an assessment task.

Innovative assessment tasks are often assumed to make greater cognitive demands on students than traditional test items do. Because possibilities for responses to alternative assessment tasks may be broader than those of traditional items, developers must work harder to specify the type of response they want to evoke from the task. For example, the QUASAR project has developed a scheme for classifying tasks that involves four dimensions: (1) cognitive processes (such as understanding and representing problems, discerning mathematical relationships, organizing information, and justifying procedures); (2) mathematical content (categories that span the curriculum); (3) mode of representation (words, tables, graphs, symbols, etc.); and (4) task content (realistic or nonrealistic). By classifying tasks along these four dimensions, the QUASAR researchers can capture much of the richness and complexity of high-level mathematical performance.

The QUASAR project has also developed a Cognitive Assessment Instrument (QCAI) to gather information about the program itself rather than about individual students. The QCAI is a paper-and-pencil instrument for large-group administration to individual students. At each school site, several dozen tasks might be administered, but each student might receive only 8 or 9 of them. A sample task developed for use with sixth-grade students is shown below.

The table shows the cost for different bus fares.

Yvonne is trying to decide whether she should buy a weekly bus pass. On Monday, Wednesday and Friday she rides the bus to and from work. On Tuesday and Thursday she rides the bus to work, but gets a ride home with her friends.

Should Yvonne buy a weekly bus pass?
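The fare table referred to above is not reproduced here, so any numeric answer depends on assumed prices. The comparison the task calls for can be sketched as follows; both prices are hypothetical placeholders, not the task's actual values.

```python
# The fare table is not reproduced above, so both prices below are
# hypothetical placeholders, not the task's actual values.
single_fare = 1.50   # assumed cost of one ride
weekly_pass = 10.00  # assumed cost of a weekly pass

# Mon/Wed/Fri: to and from work (6 rides); Tue/Thu: to work only (2 rides).
rides_per_week = 3 * 2 + 2 * 1   # 8 rides

pay_per_ride_total = rides_per_week * single_fare
print(f"Paying per ride: ${pay_per_ride_total:.2f}; pass: ${weekly_pass:.2f}")
print("Buy the pass" if weekly_pass < pay_per_ride_total else "Pay per ride")
```

Whatever the actual fares, the structure of the solution is the same: count Yvonne's 8 weekly rides and compare the two totals.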

The open-ended tasks used in the QCAI are in various formats. Some ask students to justify their answers; others ask students to show how they found their answers or to describe data presented to them. The tasks are tried out with samples of students and the responses are analyzed. Tasks are given internal and external reviews.

Internal reviews are iterative, so that tasks can be reviewed and modified before and after they are tried out. Tasks are reviewed to see whether the mathematics assessed is important, the wording is clear and concise, and various sources of bias are absent. Data from pilot administrations, as well as interviews with students thinking aloud or explaining their responses, contribute to the internal review. Multiple variants of a task are pilot tested as a further means of making the task statement clear and unbiased.

External reviews consist of examinations of the tasks by mathematics educators, psychometricians, and cognitive psychologists. They look at the content and processes measured, clarity and precision of language in the task and the directions, and fairness. They also look at how well the assessment as a whole represents the domain of mathematics.

The scoring rubrics are both analytic and holistic. A general scoring rubric (similar to that used in the California Assessment Program) was developed that reflected the scheme used for classifying tasks. Criteria for each of the three interrelated components of the scheme were developed at each of the five score levels from 0 to 4. A specific rubric is developed for each task, using the general scoring rubric for guidance. The process of developing the specific rubric is also iterative, with students' responses and the reactions of reviewers guiding its refinement.

Each year, before the QCAI is administered for program assessment, teachers are sent sample tasks, sample scored responses, and criteria for assigning scores that they use in discussing the assessment with their students. This helps ensure an equitable distribution of task familiarity across sites and gives students access to the performance criteria they need for an adequate demonstration of their knowledge and understanding.

#### CURRICULAR RELEVANCE

The mathematics in an assessment may be of high quality, but it may not be taught in school or it may touch on only a minor part of the curriculum. For some purposes that may be acceptable. An external assessment might be designed to see how students approach a novel piece of mathematics. A teacher might design an assessment to diagnose students' misconceptions about a single concept. In such cases, questions of relevance may be easy to answer.


Other purposes, however, may call for an assessment to sample the entire breadth of a mathematics curriculum, whether of a course or a student's school career. Such purposes require an evaluation of how adequately the assessment treats the depth and range of curriculum content at which it was aimed. Is each important aspect of content given the same weight in the assessment that it receives in the curriculum? Is the full extent of the curriculum content reflected in the assessment?

The term alignment is often used to characterize the congruence that must exist between an assessment and the curriculum. Alignment should be looked at over time and across instruments. Although a single assessment may not be well aligned with the curriculum because it is too narrowly focused, it may be part of a more comprehensive collection of assessments.

The question of alignment is complicated by the multidimensional nature of the curriculum. There is the curriculum as it exists in official documents, sometimes termed the intended curriculum; there is the curriculum as it is developed in the classroom by teachers through instruction, sometimes termed the implemented curriculum; and there is the curriculum as it is experienced by students, sometimes termed the achieved curriculum. Depending on the purpose of the assessment, one of these dimensions may be more important than the others in determining alignment.

Consider, for example, a curriculum domain consisting of a long list of specific, self-contained mathematical facts and skills. Consider, in addition, an assessment made up of five complex open-ended mathematics problems to which students provide multi-page answers. Each problem might be scored by a quasi-holistic rubric on each of four themes emphasized in the NCTM Standards: reasoning, problem solving, connections, and communication. The assessment might be linked to an assessment framework that focused primarily on those four themes.

Better methods are needed to judge the alignment of new assessments with new curricula.

An evaluator interested in the intended curriculum might examine whether and with what frequency students actually use the specific content and skills from the curriculum framework list in responding to the five problems. This examination would no doubt require a reanalysis of the students' responses because the needed information would not appear in the scoring. The assessment and the intended curriculum would appear to be fundamentally misaligned. An evaluator interested in the implemented curriculum, however, might be content with the four themes. To determine alignment, the evaluator might examine how well those themes had been reflected in the instruction and compare the emphasis they received in instruction with the students' scores.

## Cuisenaire Rod Fractions: Level 3

This unit introduces the idea that fractions come from equi-partitioning of one whole. The fraction name for a given length can therefore only be determined with reference to that one. When the size of the referent whole varies, so does the name given to a given length.

• Name the fraction for a given Cuisenaire rod with reference to one (whole).
• Find the one (whole) when given a Cuisenaire rod and its fraction name.
• Create a number line showing fractions related to a given one (whole).
• Identify equivalent fractions.

‘Fractions as measures’ is arguably the most important of the five sub-constructs of rational number (Kieren, 1994), since it identifies fractions as numbers and is the basis of the number line. Fractions are needed when ones (wholes) are inadequate for a given purpose. This purpose is usually some form of division. In measurement, lengths are defined by referring to some unit that is named as one. When the size of another length cannot be accurately measured by a whole number of ones, fractions are needed.

For example, consider the relationship between the brown and orange Cuisenaire rods. If the orange rod is defined as one (an arbitrary decision) then what number is assigned to the brown rod?

Some equal partitioning of the one is needed to create unit fractions with one as the numerator. For the size of the brown rod to be named accurately those unit fractions need to fit into it exactly. We could choose to divide the orange rod into tenths (white rods) or fifths (red rods), either would work. By aligning the unit fractions we can see that the brown rod is eight tenths or four fifths of the orange rod.

Note that eight tenths and four fifths are equivalent fractions and the equality can be written as 8/10 = 4/5. These fractions are just different names for the same quantity and share the same point on a number line. The idea that any given point on the number line has an infinite number of fraction names is a significant change from what occurs with whole numbers. For the set of whole numbers each location on the number line matches a single number. Some names are more privileged than others by our conventions. In the case of four fifths, naming it as eight tenths aligns to its decimal (0.8) and naming it as eighty hundredths aligns to its percentage (80/100 = 80%).

#### Specific Teaching Points

Understanding that fractions are always named with reference to a one (whole) requires flexibility of thinking. Lamon (2007) described re-unitising and norming as two essential capabilities if students are to master fractions. By re-unitising she meant that students could flexibly define a given quantity in multiple ways by changing the units they attended to. By norming she meant that the student could then act with the new unit. In this unit of work Cuisenaire rods are used to develop students’ skills in changing units and thinking with those units.

Consider this relationship between the dark green and blue rods. Which rod is one? Either could be defined as one and the other rod could be assigned a fraction name.

If the blue rod is one then the dark green rod is two thirds, as the light green rod is one third. If the dark green rod is one then the blue rod is three halves since the light green rod is now one half.

Re-unitising and norming are not just applicable to defining a part-to-whole relationship like this. In this unit students also consider how to use re-unitising to find the referent one and to name equivalent fractions. For example, suppose the crimson rod is named as two fifths. Which rod is the one (whole)? If the crimson rod is two fifths, then the red rod is one fifth. Five fifths (red rods) form the whole. Therefore, the orange rod is one.

What other names does two fifths have? If the red rods were split in half they would be the length of white rods, and be called tenths since ten of them would form one. The crimson rod is equal to four white rods which is a way to show that 2/5 = 4/10. If the red rods were split into three equal parts the new rods would be called fifteenths since 15 of them would form one. The crimson rod would be equal to six of these rods which is a way to show 2/5 = 6/15. The process of splitting the unit fraction, fifths in this case, into equal smaller unit fractions, produces an infinite number of fractions for the same quantity.
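The splitting process described above can be checked with exact fraction arithmetic. A short Python sketch (the loop bound is arbitrary; the process continues without end):

```python
from fractions import Fraction

two_fifths = Fraction(2, 5)

# Splitting each fifth into k equal parts renames the same length
# with denominator 5*k: 4/10, 6/15, 8/20, ... without bound.
for k in range(2, 5):
    renamed = Fraction(2 * k, 5 * k)
    print(f"2/5 = {2 * k}/{5 * k}:", renamed == two_fifths)
```

Because `Fraction` reduces automatically, each renamed fraction compares equal to 2/5, mirroring the idea that equivalent fractions are different names for the same quantity.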

The learning opportunities in this unit can be differentiated by providing or removing support for students and by varying the task requirements. Ways to support students include:

• providing Cuisenaire rods for students to manipulate when solving problems
• modelling how to record fraction symbols and drawing attention to the meaning of numerator and denominator
• drawing diagrams to clarify the unit of comparison and the one (whole) in problems
• encouraging students to work collaboratively, especially where some students are affected by colour blindness.

Tasks can be varied in many ways including:

• altering the complexity of the rod relationships that students work with. Working with halves and quarters tends to be easier than working with thirds and fifths
• providing 1 cm² grid paper and coloured felt pens to ease the recording demands (Cuisenaire rods are based on that scale).

The contexts for this unit can be adapted to suit the interests and cultural backgrounds of your students. Cuisenaire rods (rākau) are a medium often used in the introduction of te reo Māori, so some students may already have encountered them. Knowing the relationships between rods of different colours, without having assigned number names to the rods, is very helpful in easing cognitive load. Other contexts involving fractions of lengths might also be engaging for your students. For example, the fraction of a race or journey that has been covered at different points is practically useful. Consuming foods that are linear, such as submarine sandwiches, bananas, or sausages, might motivate some learners. Board games that have a particular number of steps from start to finish provide opportunities to look at a fraction as an operator.

#### Prior Experience

Students are unlikely to have previous experience with using Cuisenaire rods since the use of these materials to teach early number has been mostly abandoned. Their lack of familiarity with the rods is a significant advantage for students as they will need to imagine splitting the referent one to solve problems.

#### Session One

1. Use Cuisenaire rods or the online tool to introduce the relative size of Cuisenaire rods in the following way.
Relative to the orange rod, how long is the yellow rod? How do you know? Justify your answer.

The relationship between the yellow and orange rods can be expressed in two ways:
“The yellow rod is one half of the orange rod.”
“The orange rod is two times the length of the yellow rod.”
So if the orange rod was one then the yellow rod would represent one half. What fraction would the red rod and dark green rod represent? Justify. Convince us you are right.

2. Encourage the students to express the relationships in various ways, such as:
“The red rod is one fifth of the orange rod because five of it fit into the whole (one)”
“The orange rod is five times longer than the red rod.”
“So the dark green rod must be three fifths of the orange rod because three red rods make one dark green rod.”
​A more complex question is “How many dark green rods (three-fifths) fit into the orange rod (one)?” While the correct answer is five-thirds, or one and two thirds, students will be unlikely to name the relationship that precisely. Expect answers like “Almost two but not quite.”
3. Introduce Investigation One using Slide 1 of the PowerPoint. Encourage students to record both their names for each rod (relative to the brown rod) and their reason for naming it that way. Provide sets of Cuisenaire rods or access to the online tool. Let the students work in small teams. Look for the following:
• Do the students refer back to the brown rod as the one?
• Do they name each rod with reference to how many times it fits into one?
• Do they use the relationship between rods to name them? (For example, if pink is one half then red must be one quarter and white must be one eighth).
• Can they name a rod larger than one as an improper fraction or mixed number? (For example, the orange rod now represents one and one quarter, 1 1/4 or 5/4.)
4. All of the points above can be raised in discussion as a whole class. Extend the conversation to which rods were hardest to name and why that was so. For example, the light green rod does not fit into the brown rod an exact number of times but the white rod (one eighth) can be used as a reference.
5. Also discuss equivalence. The diagram below shows 1/2 = 2/4 = 4/8. Note that equivalent fractions are different names for the same quantity.
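The naming activity in this session can be tabulated directly, since Cuisenaire rods have the standard lengths white = 1 through orange = 10 (in white-rod units). A minimal Python sketch names every rod relative to the brown rod as one:

```python
from fractions import Fraction

# Standard Cuisenaire rod lengths in white-rod units.
rods = {"white": 1, "red": 2, "light green": 3, "pink": 4, "yellow": 5,
        "dark green": 6, "black": 7, "brown": 8, "blue": 9, "orange": 10}

one = rods["brown"]   # the referent one for Investigation One

for colour, length in rods.items():
    # Fraction reduces automatically, so pink prints as 1/2, not 4/8.
    print(f"{colour}: {Fraction(length, one)}")
```

The automatic reduction reflects the equivalence point above (pink is 4/8 = 1/2), and the orange rod comes out as the improper fraction 5/4.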

#### Session Two

1. Revise the key points from the previous session using the blue rod as one.

If the blue rod is one what do we call the light green and white rods? Justify your answers.
What statements can you make about the relative size of the rods?
Are there equivalent fractions in the picture (1/3 = 3/9)? So what fraction is equivalent to… two thirds? (2/3 = 6/9), … to three thirds? (3/3 = 9/9).
2. Students might notice some patterns in the symbols, such as the same multiplier between numerators and denominators in the equalities.
3. Reflect back on fractions where the rod was larger than one. Ask: If blue is one then what fraction is the orange rod?

Thinking that fractions are restricted to less than one is a common constraint that students learn, so opportunities to name fractions greater than one are important. In this case students will recognise that the white rod fills the gap. Good questions are:
• Remember, which rod is one?
• So what fraction is the white rod? (1/9)
• How many white rods fit into the blue rod? (nine)
• How many white rods fit into the orange rod? (ten)
• So what shall we call the orange rod? (1 1/9 or 10/9)
4. Ask students to attempt Investigation Two of the PowerPoint (Slide 2). Remind them of the necessity for recording their solutions and justifications.
5. As they investigate in small teams look for:
• Do they accept the new imaginary rod as one?
• Do they name the other rods as unit fractions in terms of how many of that rod fit into one?
• Do they know how to name non-unit fractions using copies of unit fractions? E.g. Three quarters (blue rod) is three copies of one quarter (light green).
• Do they realise that equivalent fractions are different names for the same quantity?
6. Share the results as a class attending to the points above.
7. Construct a fraction wall with the gold rod as one. Name each unit fraction (1/2 ,1/4 ,1/12 ,1/3 ,1/6). Ask if these are the only unit fractions that are possible and why that is so. Students may note that the denominators are all factors of 12. Look for equivalence in the fractions within the wall. Encourage students to find non-unit fraction equivalence as well, e.g. 2/3 = 8/12 and 3/4 = 9/12.

8. Use the wall to create a number line as shown. Ask:
How much more three quarters is than two thirds?
How much less one half is than two thirds?

9. Let students work on Investigation Three from the PowerPoint. Their work will extend into Session Three. Look for the following:
• Can students name the fractions for the rods that are being joined?
• Can they record the combinations as sums like, 1/2 + 1/3 + 1/6 = 1?
• Can they use equivalence, particularly referring to twelfths, to explain why the combinations add to one?
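Steps 7 and 9 above can be checked computationally. Assuming the referent one is 12 white rods long (consistent with the unit fractions 1/2, 1/3, 1/4, 1/6, 1/12 named in the fraction wall), a Python sketch that lists the possible unit fractions and then searches for combinations of distinct unit fractions summing to one:

```python
from fractions import Fraction
from itertools import combinations

# Assume the referent one is 12 white rods long.
one_length = 12

# A unit fraction 1/d can be built from rods only when d divides 12 exactly,
# which is why the wall's denominators are all factors of 12.
denominators = [d for d in range(2, one_length + 1) if one_length % d == 0]
print([f"1/{d}" for d in denominators])   # ['1/2', '1/3', '1/4', '1/6', '1/12']

# Which combinations of distinct unit fractions sum to exactly one?
unit_fractions = [Fraction(1, d) for d in denominators]
for r in range(2, len(unit_fractions) + 1):
    for combo in combinations(unit_fractions, r):
        if sum(combo) == 1:
            print(" + ".join(str(f) for f in combo), "= 1")
```

The search turns up 1/2 + 1/3 + 1/6 = 1 (the combination named in Investigation Three) and 1/2 + 1/4 + 1/6 + 1/12 = 1.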

#### Session Three

In this session the purpose is to reconstruct the one rod from a given part. Students connect from part to whole, as opposed to whole to part.

1. Begin by going over previous ideas in the context of this model.

2. Ask: What are the size relationships between the yellow and black rods?
The students might use the white rod as a reference to say, “The yellow rod is five sevenths of the black rod.” It is more difficult to recognise that “The black rod is seven fifths of the yellow rod.” The key idea is to establish the referent one. If a comparison ‘of a given rod’ is being made then that rod becomes the one.
3. Ask: So if you were told that the yellow rod was five sevenths of the one rod, what colour would the one rod be? (black).
If you were told that the black rod was seven fifths of the one rod, what colour would the one rod be? (yellow)
4. Provide another scenario. If you were told that the pink rod was one half of the one rod, what colour would the one rod be?

Students might easily recognise that two halves make one so the rod colour of one is brown. This is an easy scenario as a unit fraction is given. Therefore, ask a harder problem like this:
If you were told that the dark green rod was two thirds of the one rod, what colour would the one rod be?
The dark green rod does not fit exactly into the mystery one but half of it does. That half of the green rod is the light green rod (one third). So the one rod must be blue.

5. Ask the students to complete Copymaster One in pairs or threes. Point out the need to justify their decisions about which rod is one in each case. Students need to use Cuisenaire rods or the online tool for this activity. They should not rely on the pictures being to scale. Look for:
• Do they adjust to the variable one in each case?
• Do they use the given rod as a unit of measure?
• Do they subdivide the visible rod to find a unit fraction they can measure with? For example if told the rod is two thirds, do they divide the rod equally into two parts to create a one third measure?
6. If students complete Copymaster One, ask them to create similar part to whole problems for other students.
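The part-to-whole reasoning in this session amounts to dividing a rod's length by its fraction name. A minimal Python sketch, using the standard rod lengths in white-rod units:

```python
from fractions import Fraction

# Standard rod lengths in white-rod units.
rods = {"pink": 4, "dark green": 6, "brown": 8, "blue": 9}

def find_one(part_length, fraction_name):
    # part = fraction_name * one, so one = part / fraction_name.
    return part_length / fraction_name

# If pink is one half, the one is 8 units long: the brown rod.
print(find_one(rods["pink"], Fraction(1, 2)))        # 8
# If dark green is two thirds, the one is 9 units long: the blue rod.
print(find_one(rods["dark green"], Fraction(2, 3)))  # 9
```

Dividing by 2/3 is the same as halving to get one third (the light green rod, 3 units) and then iterating it three times, which is exactly the reasoning modelled with the rods above.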

#### Session Four

The aim of this session is to develop students’ mental number line for fractions. Inclusion of fractions with whole numbers on the number line requires some significant adjustments. These adjustments include:

• A point on the number line can have an infinite number of names, called equivalent fractions; for example, 2/3, 4/6, 6/9, … all ‘live’ at the same point.
• Between any two fractions are an infinite number of other fractions (this is known as ‘density’ of the number line).
1. Begin by building up a number line for quarters in this way.
If the brown rod is one (mark zero and one on the number line) where would one quarter be?

Students may now know that the red rod is one quarter of the brown rod. Ask: What fractions could be marked on the number line using one quarter? Look for them to explain that quarters can be ‘iterated’ (place end on end) to form non-unit fractions. Make sure you push the iteration past one and include the fraction and mixed number ways to represent the amount (see below). Also encourage renaming in equivalent form where this is sensible, for example, 2/4 = 1/2, 4/4 = 1.

2. Look at the space between one quarter and one half. Ask, “Are there any fractions that belong in this space?” Students may recognise from previous work that white rods are one eighth of a brown rod. Three eighths will work. Note that three eighths lies exactly halfway between one quarter and one half. Ask, “What fraction would belong halfway between one quarter and three eighths?” (five sixteenths). The last question requires students to use their imagination, as there is no rod that is half the length of a white rod.

3. Show the students the diagram on Slide 4 of the PowerPoint. It shows zero and two fractions (the orange rod as two thirds and the blue rod as three fifths) placed on a number line. Ask, “How could we find the length of the one rod?” From the part-whole task in the previous session students should reply that one third or one fifth needs to be located, by equally partitioning the orange rod into two parts to get the yellow rod or by equally partitioning the blue rod into three parts to get the light green rod. These unit fractions can then be iterated to get the referent one (three yellow rods or five light green rods).
4. Ask students to create a number line with the orange rod as two thirds and the blue rod as three fifths. They must include one and any other fractions they can find. For early finishers of this potentially endless task, provide the following challenge fractions to locate on the number line: 4/3, 9/5, 3/15, 14/15, 2/9. Look for the following:
• Do the students use fifths and thirds as unit fractions to locate other non-unit fractions, like four fifths?
• Do the students find fractions greater than one by iterating unit fractions?
• Do the students record equivalent fractions in the same location, for example five fifths and three thirds at one?
• Do the students subdivide unit fractions to form other units, for example, dividing thirds into two equal parts to form sixths?
• Do the students attempt to identify fractions between fractions, for example, which fractions lie between two thirds and four fifths?
5. Bring the class together after a suitable period of investigation to share results. You will need a large number line on the whiteboard. By the time the discussion is over you should have saturated the number line with fractions. You may like to ask if it is ever possible to complete this task. Students may already realise that there are an infinite number of fractions that could be located.
6. After discussing the fractions that can be located on the above number line ask the students to make up a similar number line problem for someone else. The problem must include enough fractions already placed to locate the referent one and at least six other fractions to be located on the line. They are free to choose whatever rod they want as the one and may even create a rod that is not in the set.
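Several of the numeric claims in this session can be verified with exact fractions. A short Python sketch, assuming the standard rod lengths (orange = 10 and blue = 9, in white-rod units):

```python
from fractions import Fraction

def midpoint(a, b):
    # The fraction halfway between two fractions is their average.
    return (a + b) / 2

# Halfway between one quarter and one half is three eighths...
print(midpoint(Fraction(1, 4), Fraction(1, 2)))   # 3/8
# ...and halfway between one quarter and three eighths is five sixteenths.
print(midpoint(Fraction(1, 4), Fraction(3, 8)))   # 5/16

# Slide 4 cross-check: the orange rod (10 units) as two thirds and the
# blue rod (9 units) as three fifths both give a referent one of 15 units.
print(10 / Fraction(2, 3), 9 / Fraction(3, 5))    # 15 15
```

Since `midpoint` can be applied again to any pair it produces, it also demonstrates the density idea: the process of finding fractions between fractions never terminates.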

Dear parents and caregivers,

This week students will be learning about fractions, like three quarters and two thirds. We will be using some materials called Cuisenaire rods which are lengths of plastic or wood. They look like this:

Your child should be able to name fractions of a given rod. For example, they might say that the light green rod is three fifths of the yellow rod.

There is an online tool that lets you play with Cuisenaire rods on this page: