
Mathematics 2 - Calculus 2 - Summary

1. Linear System

1.1 Linear Equations and Systems

  • [[Linear Equations and Systems & Augmented Matrices]]

Linear Equation

  • Variables like $x$, $y$, or $z$ must have power 1 and constant coefficients: $a_1x_1+a_2x_2+\dots+a_nx_n = b \quad \text{or} \quad a_1x+a_2y+\dots+a_nz = b$
  • No variables multiplying each other.

Linear System and Its Solution

  • System of equations: $\begin{cases} a_{11}x_1+a_{12}x_2+\dots+a_{1n}x_n = b_1 \\ a_{21}x_1+a_{22}x_2+\dots+a_{2n}x_n = b_2 \\ \vdots \\ a_{m1}x_1+a_{m2}x_2+\dots+a_{mn}x_n = b_m \end{cases}$
  • A solution satisfies all equations.
  • Solutions:
    • No solution → parallel lines/planes
    • One solution → intersection point
    • Infinite solutions → equations are dependent

Augmented Matrices

Transform system into matrix of coefficients and constants:

$$\begin{cases} a_{11}x_1+a_{12}x_2+\dots+a_{1n}x_n = b_1 \\ a_{21}x_1+a_{22}x_2+\dots+a_{2n}x_n = b_2 \\ \vdots \\ a_{m1}x_1+a_{m2}x_2+\dots+a_{mn}x_n = b_m \end{cases} \xrightarrow{\text{augmented matrix}} \left(\begin{array}{cccc|c} a_{11} & a_{12} & \dots & a_{1n} & b_1 \\ a_{21} & a_{22} & \dots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} & b_m \end{array}\right)$$

1.2 Elementary Row Operations & Row-Reduced Matrices

  • [[Elementary Row Operations & Row-Reduced Matrices]]

Elementary Row Operations

Used to simplify matrices for solving:

  • Only operate by row (not by column).
  • Matrix example: $\left(\begin{array}{ccc|c} a & b & c & P \\ d & e & f & Q \\ g & h & i & R \end{array}\right)$

Three operations:

  1. Multiply a row by a nonzero constant: $R_i \leftarrow kR_i$
  2. Swap two rows: $R_i \leftrightarrow R_j$
  3. Add a multiple of one row to another: $R_i \leftarrow R_i + kR_j$

Row-Reduced Matrices

Transform into row-echelon or reduced row-echelon form:

  • Final form (as system): $\begin{cases} 1x + 0 + 0 = \Delta P \\ 0 + 1y + 0 = \Delta Q \\ 0 + 0 + 1z = \Delta R \end{cases}$
  • Final form (as matrix): $\left(\begin{array}{ccc|c} 1 & 0 & 0 & \Delta P \\ 0 & 1 & 0 & \Delta Q \\ 0 & 0 & 1 & \Delta R \end{array}\right)$

Properties of Reduced Row-Echelon Form:

  1. First nonzero element in each row is 1 (leading 1).
  2. All-zero rows are at the bottom.
  3. Each leading 1 is to the right of any leading 1 above it.
  4. Each leading 1 is the only nonzero entry in its column.
  • If only 1–3 are satisfied → row-echelon form
  • If all 1–4 are satisfied → reduced row-echelon form

1.3 Gauss-Jordan Elimination

  • [[Gauss-Jordan Elimination]]

Used to solve:

$$\begin{cases} ax + by + cz = P \\ dx + ey + fz = Q \\ gx + hy + iz = R \end{cases} \Rightarrow \left[\begin{array}{ccc|c} a & b & c & P \\ d & e & f & Q \\ g & h & i & R \end{array}\right] \Rightarrow \left[\begin{array}{ccc|c} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \end{array}\right]$$

Performing

  • Change one row while keeping the others: $\left[\begin{array}{ccc|c} a & b & c & P \\ d & e & f & Q \\ g & h & i & R \end{array}\right] \xrightarrow{2R_1 - R_2} \left[\begin{array}{ccc|c} \Delta a & 0 & \Delta c & \Delta P \\ d & e & f & Q \\ g & h & i & R \end{array}\right]$

Then scale rows:

$$\left[\begin{array}{ccc|c} \Delta a & 0 & 0 & \Delta P \\ 0 & \Delta e & 0 & \Delta Q \\ 0 & 0 & \Delta i & \Delta R \end{array}\right] \xrightarrow{\text{multiply each row by } \frac{1}{\Delta}} \left[\begin{array}{ccc|c} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \end{array}\right]$$

Where:

  • $x = \Delta P / \Delta a$
  • $y = \Delta Q / \Delta e$
  • $z = \Delta R / \Delta i$
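The row-operation steps above can be sketched in pure Python. This is a minimal illustration (not a production solver): it assumes the system is square with a unique solution, and uses `Fraction` so the arithmetic stays exact.

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row-echelon form.

    Assumes a square system with a unique solution; Fractions keep
    the arithmetic exact."""
    rows = [[Fraction(v) for v in row] for row in aug]
    n = len(rows)
    for col in range(n):
        # Find a row with a nonzero entry in this column and swap it up (R_i <-> R_j).
        pivot = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        # Scale the pivot row so the leading entry becomes 1 (R_i <- k R_i).
        rows[col] = [v / rows[col][col] for v in rows[col]]
        # Eliminate this column from every other row (R_i <- R_i + k R_j).
        for r in range(n):
            if r != col:
                k = rows[r][col]
                rows[r] = [v - k * p for v, p in zip(rows[r], rows[col])]
    return rows

# x + y = 3, 2x - y = 0  ->  x = 1, y = 2
solution = gauss_jordan([[1, 1, 3], [2, -1, 0]])
```

Each pass picks a pivot column, normalizes it to a leading 1, and clears the rest of the column, which is exactly the three elementary row operations listed earlier.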

Types of Solutions

  • One solution: intersection point
    ![[Screenshot 2567-06-12 at 10.17.51.png|300]]
  • Infinite solutions: same plane
    ![[Screenshot 2567-06-12 at 10.19.08.png|300]]
  • No solution: $\left(\begin{array}{ccc|c} 1 & 0 & 0 & 14 \\ 0 & 1 & -2 & 2 \\ 0 & 0 & 0 & 1 \end{array}\right)$ (the last row reads $0 = 1$, a contradiction)



2. Matrix Algebra

2.1 Matrices and Operations

  • A matrix has $m$ rows and $n$ columns → its size is $m \times n$
  • Column and row matrices:
    • 1 row and $n$ columns: called a row vector
    • $m$ rows and 1 column: called a column vector

Square matrices

  • If $m = n$, it is a square matrix

Diagonal matrices

  • A square matrix whose off-diagonal entries are all zero is called a diagonal matrix
  • The entries on the main diagonal may take any values, including zero

Zero matrices

  • A matrix where all entries are 0

Identity matrices

  • A diagonal matrix with all 1s on the diagonal
  • Denoted by $I_n$

2.2 Operations of Matrices

Equality

  • Two matrices are equal if they have the same size and corresponding entries are equal

Addition

  • Add corresponding elements; both matrices must have the same size

Scalar Multiplication

  • Multiply each entry of a matrix $A$ by a scalar $k$ to get $kA$

Matrix Multiplication

  • Only valid if the number of columns of $A$ equals the number of rows of $B$
  • Not commutative: in general $AB \ne BA$

Power of Square Matrix

  • For integer $n \geq 0$:
    $A^0 = I$,
    $A^n = \underbrace{A \cdot A \cdots A}_{n \text{ times}}$

Transpose

  • Switch rows with columns: $A^T$
  • Tips:
    • Focus on the columns
    • When transposing powers: $(A^n)^T = (A^T)^n$
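The operations above can be checked with small pure-Python helpers. `mat_mul` and `transpose` are hypothetical names introduced only for this sketch, which assumes matrices stored as lists of rows.

```python
def mat_mul(A, B):
    """Multiply A (m x n) by B (n x p); columns of A must match rows of B."""
    assert len(A[0]) == len(B), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    """Switch rows with columns."""
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
# Matrix multiplication is not commutative:
AB = mat_mul(A, B)   # [[2, 1], [4, 3]]
BA = mat_mul(B, A)   # [[3, 4], [1, 2]]
# Reverse-order transpose rule: (AB)^T = B^T A^T
assert transpose(AB) == mat_mul(transpose(B), transpose(A))
```

Note how $AB \ne BA$ even for these small matrices, while the transpose rule holds in reversed order.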

2.3 Properties of Matrix Operations

  • $A + B = B + A$  (Commutative law of addition)
  • $A + (B + C) = (A + B) + C$  (Associative law of addition)
  • $A(BC) = (AB)C$  (Associative law of multiplication)
  • $A(B + C) = AB + AC$  (Distributive law)
  • $(B + C)A = BA + CA$  (Distributive law)
  • $A + 0 = 0 + A = A$,  $A - A = 0$
  • $0A = 0$,  $A0 = 0$
  • $(A^T)^T = A$,  $(AB)^T = B^T A^T$

2.4 Matrix Inversion

Inverse Matrix

  • Only square matrices can have inverses
  • For a square matrix $A$, if $AA^{-1} = A^{-1}A = I$, then $A^{-1}$ is the inverse of $A$

Inverse of a 2×22 \times 2 Matrix

If $ad - bc \ne 0$, then:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

Finding Inverse using Gauss-Jordan

  • Augment matrix $A$ with identity matrix $I_n$:
    $(A \mid I_n) \longrightarrow (I_n \mid A^{-1})$
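The $[A \mid I_n] \to [I_n \mid A^{-1}]$ procedure can be sketched directly. This minimal version assumes $A$ is invertible and uses exact `Fraction` arithmetic.

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1].

    Assumes A is invertible (a pivot always exists)."""
    n = len(A)
    # Augment each row of A with the matching row of the identity matrix I_n.
    rows = [[Fraction(v) for v in A[i]] + [Fraction(int(i == j)) for j in range(n)]
            for i in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        rows[col] = [v / rows[col][col] for v in rows[col]]
        for r in range(n):
            if r != col:
                k = rows[r][col]
                rows[r] = [v - k * p for v, p in zip(rows[r], rows[col])]
    # The right half of the reduced matrix is A^-1.
    return [row[n:] for row in rows]

# For a 2x2 matrix this matches the 1/(ad - bc) formula:
A = [[1, 2], [3, 4]]     # ad - bc = -2
inv = inverse(A)         # [[-2, 1], [3/2, -1/2]]
```

For the $2 \times 2$ case, $\frac{1}{-2}\begin{pmatrix}4 & -2 \\ -3 & 1\end{pmatrix}$ gives the same entries, as the formula in the previous section predicts.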

2.5 Solving Linear System using Inverse

  • For a system $AX = B$, solve as:
    $AX = B \Rightarrow (A^{-1}A)X = A^{-1}B \Rightarrow IX = A^{-1}B \Rightarrow \color{tomato}{X = A^{-1}B}$

  • Visual reference:

$$\begin{aligned} &\left\{ \begin{array}{ccccc} a_{11}x_1 & + a_{12}x_2 & + \cdots & + a_{1n}x_n &= b_1 \\ a_{21}x_1 & + a_{22}x_2 & + \cdots & + a_{2n}x_n &= b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1}x_1 & + a_{m2}x_2 & + \cdots & + a_{mn}x_n &= b_m \end{array} \right. \\[1em] \Rightarrow\quad &\left( \begin{array}{cccc} a_{11}x_1 & + a_{12}x_2 & + \cdots & + a_{1n}x_n \\ a_{21}x_1 & + a_{22}x_2 & + \cdots & + a_{2n}x_n \\ \vdots & \vdots & & \vdots \\ a_{m1}x_1 & + a_{m2}x_2 & + \cdots & + a_{mn}x_n \end{array} \right) = \left( \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_m \end{array} \right) \\[1em] \Rightarrow\quad &\left( \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{array} \right) \left( \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right) = \left( \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_m \end{array} \right) \end{aligned}$$
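A worked $2 \times 2$ instance of $X = A^{-1}B$, using the explicit inverse formula from section 2.4. The function name `solve_2x2` is just for this sketch; it assumes $ad - bc \ne 0$.

```python
def solve_2x2(A, B):
    """Solve AX = B for a 2x2 system via X = A^-1 B (requires ad - bc != 0)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "A is not invertible"
    # Explicit 2x2 inverse: (1 / (ad - bc)) * [[d, -b], [-c, a]]
    inv = [[d / det, -b / det], [-c / det, a / det]]
    # X = A^-1 B
    return [inv[0][0] * B[0] + inv[0][1] * B[1],
            inv[1][0] * B[0] + inv[1][1] * B[1]]

# x + y = 3, 2x - y = 0
x, y = solve_2x2([[1, 1], [2, -1]], [3, 0])   # x = 1.0, y = 2.0
```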

3. Determinants

3.1 Determinants by Cofactor Expansion

  • 2x2 Determinant for Inverse Matrix
    If

    $$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
  • Determinant of $n \times n$ Matrix:

    • Use Minor, Cofactor, and optionally Cross Product
    • Minor $M_{ij}$ is the determinant of the submatrix obtained by removing the $i$th row and $j$th column
    • Cofactor $C_{ij} = (-1)^{i+j} M_{ij}$
    • Trick:
      • You can perform cofactor expansion along any row or column
      • Use row/column with the most zeros to simplify
      • For a 3×3 matrix: $|A| = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$
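The cofactor expansion translates directly into a short recursive function. This sketch always expands along the first row (any row or column would work, as noted above).

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j; the cofactor carries the (-1)^(0+j) sign.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2                       # ad - bc
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24      # diagonal: product of diagonal
```

Recursive expansion costs $O(n!)$, which is why row reduction (next section) is preferred for larger matrices.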

3.2 Determinants by Row and Column Reduction

  • Triangular Matrices (Upper/Lower/Diagonal):

    • Determinant is product of diagonal: $|A| = a_{11} a_{22} \cdots a_{nn}$
  • Elementary Row/Column Operations:

    • Row swap ($R_i \leftrightarrow R_j$) → sign change
    • Multiply a row by a scalar $k$ → determinant is multiplied by $k$
    • Safe operation: $R_i \leftarrow R_i + kR_j$ (no change in determinant)

3.3 Cramer's Rule

  • Solve $AX = B$ using determinants: $x_i = \frac{|A_i|}{|A|}$, where $A_i$ is $A$ with column $i$ replaced by $B$
  • Effective only for small systems (2×2 or 3×3)
  • For $n = 2$ and $n = 3$, substitute column-wise and solve using determinant rules
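A minimal sketch of Cramer's rule for the $3 \times 3$ case; `cramer_3x3` is a hypothetical helper name, and it assumes $|A| \ne 0$.

```python
def cramer_3x3(A, B):
    """Solve AX = B by Cramer's rule: x_i = |A_i| / |A| (assumes |A| != 0)."""
    def det3(M):
        # 3x3 determinant by cofactor expansion along the first row.
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det3(A)
    xs = []
    for i in range(3):
        # A_i: A with column i replaced by B.
        Ai = [[B[r] if c == i else A[r][c] for c in range(3)] for r in range(3)]
        xs.append(det3(Ai) / d)
    return xs

# x + y + z = 6, y + z = 5, z = 3  ->  x = 1, y = 2, z = 3
solution = cramer_3x3([[1, 1, 1], [0, 1, 1], [0, 0, 1]], [6, 5, 3])
```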

3.4 Volume of a Tetrahedron

  • Given vertices $(x_1, y_1, z_1), \dots, (x_4, y_4, z_4)$: $V = \pm \frac{1}{6} \begin{vmatrix} x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \\ x_4 & y_4 & z_4 & 1 \end{vmatrix}$
  • Always take the positive value of the result as the volume
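The volume formula can be checked on a known solid. This sketch reuses a small recursive determinant and takes the absolute value, as the note above prescribes.

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def tetra_volume(p1, p2, p3, p4):
    """V = |det| / 6, with rows (x_i, y_i, z_i, 1)."""
    rows = [list(p) + [1] for p in (p1, p2, p3, p4)]
    return abs(det(rows)) / 6

# Corner tetrahedron of the unit cube: volume 1/6
v = tetra_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
```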

4. Functions of Several Variables 1

4.1 Introduction, Domains, and Graphs

Domain and Range

  • Check for square roots, denominators, or other conditions that could lead to undefined expressions.
  • To find the range, observe the min/max values the expression can take.

4.2 Level Curves and Contour Maps

Level Curves

  • Level curves are the 2D projections (bird's-eye view) of surfaces where $z = f(x, y)$ is constant.
  • Set $z = k$ (where $k$ is a constant) and solve for the relation between $x$ and $y$.

Contour Maps

  • A contour map consists of multiple level curves at different heights $z = a, b, c, \dots$.
  • Each curve corresponds to the same function value, representing height.


4.3 Partial Derivatives

Concept

  • For $f(x, y)$:
    • $\frac{\partial f}{\partial x}$: treat $y$ as constant
    • $\frac{\partial f}{\partial y}$: treat $x$ as constant
  • For $f(x, y, z)$: hold two variables constant while differentiating with respect to the third.

Geometrical Interpretation

  • $\frac{\partial f}{\partial x}$: change in $z = f(x, y)$ in the $x$-direction while $y$ is fixed
  • $\frac{\partial f}{\partial y}$: change in $z = f(x, y)$ in the $y$-direction while $x$ is fixed


4.4 Chain Rules

Case 1: Simple Parametric Chain Rule

If $z = f(x, y)$ and $x = g(t)$, $y = h(t)$:

$$\frac{dz}{dt} = \frac{\partial z}{\partial x}\frac{dx}{dt} + \frac{\partial z}{\partial y}\frac{dy}{dt}$$
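The chain rule can be verified numerically. This sketch uses a hypothetical example, $z = x^2 y$ with $x = \cos t$, $y = \sin t$, and compares the chain-rule derivative against a central-difference estimate.

```python
import math

# z = f(x, y) = x^2 * y with x = cos t, y = sin t.
# Chain rule: dz/dt = z_x dx/dt + z_y dy/dt = 2xy(-sin t) + x^2 cos t
def dz_dt(t):
    x, y = math.cos(t), math.sin(t)
    return 2 * x * y * (-math.sin(t)) + x ** 2 * math.cos(t)

def z(t):
    # z written directly as a function of t, for checking.
    return math.cos(t) ** 2 * math.sin(t)

t = 0.7
numeric = (z(t + 1e-6) - z(t - 1e-6)) / 2e-6   # central-difference estimate
assert abs(dz_dt(t) - numeric) < 1e-6
```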

Case 2: Multivariable Parametric Chain Rule

If $z = f(x, y)$ and $x = g(s, t)$, $y = h(s, t)$:

  • $\frac{\partial z}{\partial s} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial s}$
  • $\frac{\partial z}{\partial t} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial t}$
  • $s, t$ are independent variables
  • $x, y$ are intermediate variables
  • $z$ is the dependent variable

General Chain Rule for Many Variables

Let $w = f(x, y, z, t, \dots)$, where each of $x, y, z, t, \dots$ depends on the independent variables $u, v, \dots$:

  • $\frac{\partial w}{\partial u} = \frac{\partial w}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial w}{\partial y}\frac{\partial y}{\partial u} + \frac{\partial w}{\partial z}\frac{\partial z}{\partial u} + \frac{\partial w}{\partial t}\frac{\partial t}{\partial u} + \dots$
  • $\frac{\partial w}{\partial v} = \frac{\partial w}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial w}{\partial y}\frac{\partial y}{\partial v} + \frac{\partial w}{\partial z}\frac{\partial z}{\partial v} + \frac{\partial w}{\partial t}\frac{\partial t}{\partial v} + \dots$
  • Extend for more independent variables as needed.


5. Functions of Several Variables 2

5.1 Directional Derivatives

Concept

  • Directional derivative:
    $D_{\vec{u}}f(x_0, y_0)$ denotes the directional derivative of $f$ at $(x_0, y_0)$ in the direction of unit vector $\vec{u}$.
  • Partial derivatives consider only one direction ($x$ or $y$), but the directional derivative considers any direction:
$$D_{\vec{u}}f(x, y) = \nabla f(x, y) \cdot \vec{u} = \color{tomato}{f_x(x, y) \cdot a + f_y(x, y) \cdot b}$$

Unit Vector

  • Given vector $v$: $\vec{u} = \frac{v}{|v|} = \langle a, b \rangle$
  • Given an angle $\theta$ with the x-axis:
    • $a = \cos \theta$, $b = \sin \theta$
    • Since $|\vec{u}| = 1$, $\vec{u} = \langle \cos \theta, \sin \theta \rangle$
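A small sketch of the angle form $D_{\vec{u}}f = f_x\cos\theta + f_y\sin\theta$, using the hypothetical example $f(x, y) = x^2 + y^2$ (so $f_x = 2x$, $f_y = 2y$).

```python
import math

# f(x, y) = x^2 + y^2, so the gradient is (2x, 2y).
def directional_derivative(x, y, theta):
    """D_u f = f_x cos(theta) + f_y sin(theta) for the unit vector (cos, sin)."""
    fx, fy = 2 * x, 2 * y
    return fx * math.cos(theta) + fy * math.sin(theta)

# At (1, 0), moving along the x-axis (theta = 0), we recover f_x = 2:
assert abs(directional_derivative(1, 0, 0.0) - 2.0) < 1e-12
```

Setting $\theta = 0$ or $\theta = \pi/2$ recovers the two plain partial derivatives, which matches the remark that partials are the special one-direction cases.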

5.2 Gradient Vector

Gradient Definition

  • The gradient vector $\nabla f(x, y, z)$ is:
$$\nabla f(x, y, z) = \left\langle f_x, f_y, f_z \right\rangle = \frac{\partial f}{\partial x} \, \mathbf{i} + \frac{\partial f}{\partial y} \, \mathbf{j} + \frac{\partial f}{\partial z} \, \mathbf{k}$$
  • It is the vector of partial derivatives, pointing in the direction of steepest increase.

5.3 Tangent Planes

Equation of Plane

  • Let $p(x_0, y_0, z_0)$ be the point of tangency.
  • General form (based on gradient and point):
$$z = a(x - x_0) + b(y - y_0) + z_0$$
  • Since $a = f_x(x_0, y_0)$ and $b = f_y(x_0, y_0)$:
$$\color{tomato}{P(x, y) = f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0) + f(x_0, y_0)}$$

Derivative with Respect to Each Variable

  • Let $P(x, y)$ be the tangent plane function.

  • Partial with respect to $x$:

$$P_x(x, y) = a \quad \Rightarrow \quad P_x(x_0, y_0) = a = \color{tomato}{f_x(x_0, y_0)}$$
  • Partial with respect to $y$:
$$P_y(x, y) = b \quad \Rightarrow \quad P_y(x_0, y_0) = b = \color{tomato}{f_y(x_0, y_0)}$$
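A concrete instance of the tangent-plane formula, under the assumed example $f(x, y) = x^2 + y^2$ at the hypothetical point $(x_0, y_0) = (1, 2)$:

```python
# f(x, y) = x^2 + y^2, tangent plane at (x0, y0) = (1, 2)
x0, y0 = 1.0, 2.0
f = lambda x, y: x ** 2 + y ** 2
fx, fy = 2 * x0, 2 * y0            # f_x(x0, y0) = 2, f_y(x0, y0) = 4

def P(x, y):
    """Tangent plane: P = f_x (x - x0) + f_y (y - y0) + f(x0, y0)."""
    return fx * (x - x0) + fy * (y - y0) + f(x0, y0)

# The plane matches f exactly at the point of tangency...
assert P(x0, y0) == f(x0, y0)
# ...and approximates f well nearby (linear approximation):
assert abs(P(1.01, 2.01) - f(1.01, 2.01)) < 1e-3
```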

5.4 Analogy to 1D Tangent Lines

  • For a 1D function $f(x)$ and tangent line $L(x)$:
$$\begin{cases} L(x_0) = f(x_0) \\ L'(x_0) = f'(x_0) \end{cases}$$
  • For a 2D surface $f(x, y)$ and tangent plane $P(x, y)$:
$$\begin{cases} P(x_0, y_0) = f(x_0, y_0) \\ P_x(x_0, y_0) = f_x(x_0, y_0) \\ P_y(x_0, y_0) = f_y(x_0, y_0) \end{cases}$$



6. Double Integrals 1

6.1 Double Integrals over General Regions

Concept

  • When the region is not a rectangle, use directional slicing (horizontal or vertical strips).
  • If two bounding curves are given, find their intersection points first to determine the limits of integration.
  • Points of intersection will define:
    • $x$-range: $x \in [a, b]$
    • $y$-range: $y \in [c, d]$

Choosing the Order of Integration

  • You may integrate with respect to $y$ first or $x$ first; choose the easier order.

Example:

  • With respect to $y$ (vertical strips):

    $\int_a^b \int_{x^2}^{2x} f(x, y) \, dy \, dx$
  • With respect to $x$ (horizontal strips):

    $\int_c^d \int_{y/2}^{\sqrt{y}} f(x, y) \, dx \, dy$

General Form (Vertical Strip):

  • If integrating in the $y$ direction first, use: $V = \int_a^b \int_{g_1(x)}^{g_2(x)} f(x, y) \, dy \, dx$

General Form (Horizontal Strip):

  • If integrating in the $x$ direction first, use: $V = \int_c^d \int_{h_1(y)}^{h_2(y)} f(x, y) \, dx \, dy$
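The vertical-strip form can be approximated numerically with a midpoint rule, which makes the role of the variable bounds $g_1(x), g_2(x)$ concrete. A minimal sketch, using the region between $y = x^2$ and $y = 2x$ from the example above (they intersect at $x = 0$ and $x = 2$):

```python
def double_integral(f, a, b, g1, g2, n=400):
    """Midpoint-rule approximation of the vertical-strip form
    V = ∫_a^b ∫_{g1(x)}^{g2(x)} f(x, y) dy dx."""
    hx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx            # midpoint of the i-th x slice
        lo, hi = g1(x), g2(x)             # variable y-bounds for this strip
        hy = (hi - lo) / n
        inner = sum(f(x, lo + (j + 0.5) * hy) for j in range(n))
        total += inner * hy * hx
    return total

# Area between y = x^2 and y = 2x: exact value ∫_0^2 (2x - x^2) dx = 4/3
area = double_integral(lambda x, y: 1.0, 0, 2, lambda x: x ** 2, lambda x: 2 * x)
```

Integrating $f = 1$ gives the area of the region; replacing $f$ gives the volume under the surface over the same region.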


6.2 Reversing the Order of Integration

Concept

  • Used when the current order of integration is hard to compute.
  • Unlike double integrals over rectangles (with constant bounds), these often have variable bounds.
  • You can switch from $dy\,dx$ to $dx\,dy$ or vice versa, depending on the region.

Strip Visualization

  • Vertical Strip ($dy$ first): fix $x$, integrate over $y$
  • Horizontal Strip ($dx$ first): fix $y$, integrate over $x$


Rewriting Limits

  • Given:

    $\int_c^d \int_{g(x)}^b f(x, y) \, dy \, dx$
  • First analyze the bounds:

    • $x \in [c, d]$
    • $y \in [g(x), b]$
  • Sketch the region: include $x = c$, $x = d$, $y = g(x)$, and $y = b$.

  • Then reverse the order:

    $\int_m^l \int_k^{h(y)} f(x, y) \, dx \, dy$
    • New bounds are derived from the region in the graph.



Mathematics 2 - Statistics - Summary

1. Introduction to Statistics

1.1 Introduction to Statistics & Classification of Data

Concept

  • Definition of statistical methods:

    • Population: data collected from the whole population (usually not practical).
      • Parameter: a numerical summary of a population.
    • Sample: data collected from a subset of the population (more feasible).
      • Statistic: a numerical summary of a sample.
    • Simple random sample: every member of the population is equally likely to be selected.
  • Accuracy check for a sample:

    $|\text{sample average} - \text{population average}| \leq n$
    • Big difference: inaccurate
    • Small difference: accurate
  • Confidence interval: interval where the true value is likely to lie.

Types of Data

  • Qualitative Nominal: Categorical, no order (e.g., Gender, State).
  • Qualitative Ordinal: Categorical with order (e.g., Grades, Survey responses).
  • Quantitative Discrete: Numeric, countable (e.g., Number of cars).
  • Quantitative Continuous: Numeric, measurable (e.g., Exam time, Resistance).

1.2 Frequency Distribution & Histograms

Frequency Distribution

  • Used to handle large amounts of data by grouping into classes.
  • Classes must be non-overlapping and contain all data.

Frequency Distribution Table

$$\begin{array}{|c|c|c|} \hline \textbf{Class Interval} & \textbf{Frequency} & \textbf{Relative Frequency} \\ \hline 83 \leq x < 85 & 3 & 0.04 \\ 85 \leq x < 87 & 4 & 0.05 \\ 87 \leq x < 89 & 17 & 0.21 \\ 89 \leq x < 91 & 23 & 0.28 \\ 91 \leq x < 93 & 21 & 0.26 \\ 93 \leq x < 95 & 10 & 0.12 \\ 95 \leq x < 97 & 2 & 0.02 \\ 97 \leq x < 99 & 1 & 0.01 \\ 99 \leq x < 101 & 1 & 0.01 \\ \hline \textbf{Total} & \textbf{82} & \textbf{1.00} \\ \hline \end{array}$$

Histogram

  • Visual bar graph based on class intervals and frequencies.
  • X-axis: Class intervals
  • Y-axis: Frequency
  • Bar height = frequency
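Grouping raw data into non-overlapping classes can be sketched in a few lines. `frequency_table` is a hypothetical helper, and the small data set below is made up for illustration (not the 82-value data behind the table above).

```python
def frequency_table(data, start, width, n_classes):
    """Group data into non-overlapping classes [start + k*width, start + (k+1)*width)."""
    counts = [0] * n_classes
    for x in data:
        k = int((x - start) // width)    # which class this value falls into
        if 0 <= k < n_classes:
            counts[k] += 1
    total = len(data)
    # Each row: (lower bound, upper bound, frequency, relative frequency)
    return [(start + k * width, start + (k + 1) * width, c, c / total)
            for k, c in enumerate(counts)]

data = [83.5, 84.2, 85.1, 86.9, 88.0, 88.5, 90.3, 91.7]
table = frequency_table(data, 83, 2, 5)
# Frequencies per class [83,85), [85,87), [87,89), [89,91), [91,93): 2, 2, 2, 1, 1
```

The frequency column of such a table is exactly the bar heights of the histogram.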

2. Descriptive Statistics


2.1 Measures of Central Tendency

Mean

$$\overline{X} = \frac{\sum X_i}{n} = \frac{\sum xf}{\sum f}$$

Median

  • The middle value, or average of the two middle values, in an ordered dataset.

Mode

  • The value that appears most frequently in the data.
  • Modal class: the class interval with the highest frequency density.

Skewness

  • Symmetric: Mode = Median = Mean
  • Positively skewed: Mean > Median > Mode
  • Negatively skewed: Mean < Median < Mode

2.2 Measures of Dispersion

  • [[Range, Variance, Standard Deviation, The Coefficient of Variation]]
  • [[The Interquartile Range, A Box-and-Whisker Plot]]

Range

$$\text{Range} = x_{\text{max}} - x_{\text{min}}$$

Variance and Standard Deviation

  • Sample Variance
    $s^2 = \frac{\sum (x - \bar{x})^2}{n - 1} = \frac{\sum x^2 - \frac{(\sum x)^2}{n}}{n - 1}$

  • Sample Standard Deviation
    $s = \sqrt{s^2}$

Coefficient of Variation

  • A normalized measure of dispersion:
    $\text{CV} = \frac{s}{\bar{x}} \times 100$
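The variance shortcut formula and the CV can be checked on a small made-up sample. `sample_stats` is a hypothetical helper name for this sketch.

```python
import math

def sample_stats(xs):
    """Sample mean, variance (n - 1 denominator), standard deviation, and CV."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    s = math.sqrt(var)
    cv = s / mean * 100
    return mean, var, s, cv

xs = [2, 4, 4, 4, 5, 5, 7, 9]
mean, var, s, cv = sample_stats(xs)   # mean = 5.0
# Shortcut form gives the same variance: (sum x^2 - (sum x)^2 / n) / (n - 1)
shortcut = (sum(x * x for x in xs) - sum(xs) ** 2 / len(xs)) / (len(xs) - 1)
assert abs(var - shortcut) < 1e-12
```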

Interquartile Range (IQR)

x_min ─── Q₁ ─── Q₂ (Median) ─── Q₃ ─── x_max

Each of the four intervals contains 25% of the data.

$$IQR = Q_3 - Q_1$$

Box-and-Whisker Plot

  • Box plot includes: min, max, lower quartile (Q1), median (Q2), upper quartile (Q3).
  • Whiskers:
    • Lower whisker: larger of lower limit or minimum value.
    • Upper whisker: smaller of upper limit or maximum value.

3. Basic Probability

3.1 Sample Space & Events

Concepts

  • Sample Space ($S$): set of all possible outcomes of the experiment.
    • $P(S) = 1$
  • Event ($E$): a subset of $S$
  • Complement of event $E$ ($\overline{E}$ or $E^c$): the event that $E$ does not occur; $P(\overline{E}) = 1 - P(E)$

AND, OR

  • AND ($E \cap F$): both $E$ and $F$ occur
  • OR ($E \cup F$): $E$, $F$, or both occur

Mutually Exclusive and Subset

  • Mutually Exclusive: $E \cap F = \emptyset$
  • Subset: $E \subseteq F$

Axioms of Probability

  1. $0 \leq P(E) \leq 1$
  2. $P(S) = 1$
  3. For mutually exclusive events: $P(E_1 \cup E_2 \cup \dots \cup E_n) = P(E_1) + P(E_2) + \dots + P(E_n)$

Algebra of Events

  • Complement Rule: $P(\overline{E}) = 1 - P(E)$
  • Addition Rule: $P(E \cup F) = P(E) + P(F) - P(E \cap F)$

Extended Concepts

  • Mutually Exclusive: Cannot occur together (e.g., coin toss: head or tail)
  • Independent: One does not affect the other (e.g., YouTube like and share)

Probability of $A$ and not $B$

  • $P(A \cap \overline{B}) = P(A) - P(A \cap B)$

3.2 Conditional Probability

  • Conditional probability: $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$

Total Probability Rule

  • If $B_1, B_2, \dots, B_n$ are mutually exclusive and exhaustive events: $P(A) = \sum_{i=1}^{n} P(A \mid B_i)P(B_i)$

3.3 Independent Events

  • $E$ and $F$ are independent if: $P(E \cap F) = P(E)P(F)$
  • Check independence: $P(E) = P(E \mid F) \iff P(E \cap F) = P(E)P(F)$
  • Independent ≠ Mutually Exclusive

3.4 Bayes' Theorem

  • Formula: $P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B)}$
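Bayes' theorem and the total probability rule fit together in a few lines. The prevalence, sensitivity, and false-positive numbers below are hypothetical values chosen only to make the arithmetic concrete.

```python
# Hypothetical diagnostic-test numbers (not from the notes):
p_d = 0.01          # P(D): disease prevalence
p_pos_d = 0.99      # P(+ | D): sensitivity
p_pos_not_d = 0.05  # P(+ | not D): false-positive rate

# Total probability rule: P(+) = P(+|D)P(D) + P(+|not D)P(not D)
p_pos = p_pos_d * p_d + p_pos_not_d * (1 - p_d)

# Bayes' theorem: P(D|+) = P(+|D) P(D) / P(+)
p_d_pos = p_pos_d * p_d / p_pos
```

With these numbers a positive result still leaves $P(D \mid +) \approx 1/6$, a standard illustration of why the prior $P(D)$ matters.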



4. Discrete Random Variables

4.1 Introduction to Random Variables

Concept

  • Sample Space (S): Set of all possible outcomes of a statistical experiment.
  • Element/Sample point: Each outcome that belongs to the sample space.
  • Random Variable: a function that assigns a numerical value to each element of the sample space; probabilities are then computed over these values.
Sample Space to Random Variable Mapping

4.2 Discrete and Continuous Random Variables

Random Variable Types

Concept

  • Discrete Sample Space: Countable outcomes.
    • Discrete Random Variable: A random variable defined over a discrete sample space.
  • Continuous Sample Space: Uncountable outcomes.
    • Continuous Random Variable: A random variable defined over a continuous sample space.

4.3 The Binomial and Poisson Distributions

Binomial Distribution

Conditions

  • Experiment consists of $n$ trials.
  • Each trial has two possible outcomes (success/failure).
  • Probability of success $p$ remains constant.
  • Trials are independent.

Distribution

$$X \sim B(n, p)$$

  • Formula:
    $P_r = \binom{n}{r} p^r q^{n-r}$
    where $q = 1 - p$

  • Expected Value:
    $E(X) = n \cdot p$

  • Variance:
    $Var(X) = \sigma^2 = n \cdot p \cdot q$
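The binomial formula and its mean can be verified directly with the standard library; `binom_pmf` is a hypothetical helper name, and $n = 10$, $p = 0.3$ are arbitrary illustrative values.

```python
import math

def binom_pmf(n, p, r):
    """P_r = C(n, r) p^r q^(n-r) with q = 1 - p."""
    return math.comb(n, r) * p ** r * (1 - p) ** (n - r)

n, p = 10, 0.3
# The probabilities over r = 0..n sum to 1:
assert abs(sum(binom_pmf(n, p, r) for r in range(n + 1)) - 1) < 1e-12
# The mean computed from the pmf matches E(X) = np:
mean = sum(r * binom_pmf(n, p, r) for r in range(n + 1))
assert abs(mean - n * p) < 1e-12
```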

Poisson Distribution

Conditions

  • Describes number of events over an interval (time, area, volume).
  • Occurrences happen:
    • Randomly
    • Independently
    • At a constant average rate

Distribution

$$X \sim Po(\lambda)$$

  • Formula:
    $P(X = x) = \frac{e^{-\lambda} \lambda^x}{x!}, \quad \lambda > 0$

  • Expected Value and Variance:
    $E(X) = \mu = \lambda, \quad Var(X) = \lambda$

  • Probability of $r$ occurrences:
    $P(X = r) = \frac{e^{-\lambda} \lambda^r}{r!}$
    Expected number of intervals (out of $n$ observed) with exactly $r$ occurrences: $n \cdot P(X = r)$

Notes:

  • Discrete values
  • No upper bound
  • Mean and variance are equal (both $\lambda$)