Copyright | (c) Alexander Ignatyev 2016-2018
---|---
License | BSD-3
Stability | experimental
Portability | POSIX
Safe Haskell | None
Language | Haskell2010
MachineLearning.Regression
Synopsis
- class Model a where
- data LeastSquaresModel = LeastSquares
- data MinimizeMethod
- minimize :: Model a => MinimizeMethod -> a -> R -> Int -> Regularization -> Matrix -> Vector -> Vector -> (Vector, Matrix)
- normalEquation :: Matrix -> Vector -> Vector
- normalEquation_p :: Matrix -> Vector -> Vector
- data Regularization
Documentation
class Model a where Source #

Minimal complete definition

hypothesis, cost, gradient
Methods
hypothesis :: a -> Matrix -> Vector -> Vector Source #
Hypothesis function, a.k.a. score function (for classification problems). Takes X (m x n) and theta (n x 1), returns y (m x 1).
cost :: a -> Regularization -> Matrix -> Vector -> Vector -> R Source #
Cost function J(Theta), a.k.a. loss function. It takes the regularization parameter, matrix X (m x n), vector y (m x 1) and vector theta (n x 1).
gradient :: a -> Regularization -> Matrix -> Vector -> Vector -> Vector Source #
Gradient function. It takes the regularization parameter, X (m x n), y (m x 1) and theta (n x 1). Returns a vector of gradients (n x 1).
Instances

Model LeastSquaresModel Source #

data LeastSquaresModel Source #

Constructors

LeastSquares
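To make the three methods concrete, here is a minimal sketch of what a least-squares model computes for each of them, written as stand-alone functions. It assumes hmatrix types (Matrix, Vector, R) and a plain scalar lambda in place of the Regularization value; for brevity it regularizes the bias term too, which real implementations usually skip.

```haskell
import Numeric.LinearAlgebra

-- Sketch of a least-squares hypothesis: h(X, theta) = X * theta.
lsHypothesis :: Matrix R -> Vector R -> Vector R
lsHypothesis x theta = x #> theta

-- Sketch of the regularized cost:
-- J(theta) = (||X*theta - y||^2 + lambda * ||theta||^2) / (2m).
lsCost :: R -> Matrix R -> Vector R -> Vector R -> R
lsCost lambda x y theta =
  let m = fromIntegral (rows x)
      e = x #> theta - y                      -- residuals (m x 1)
  in (e <.> e + lambda * (theta <.> theta)) / (2 * m)

-- Sketch of the gradient: (X'(X*theta - y) + lambda*theta) / m.
lsGradient :: R -> Matrix R -> Vector R -> Vector R -> Vector R
lsGradient lambda x y theta =
  let m = fromIntegral (rows x)
      e = x #> theta - y
  in scale (1 / m) (tr x #> e + scale lambda theta)
```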
data MinimizeMethod Source #
Constructors
GradientDescent R | Gradient descent, takes alpha. Requires feature normalization. |
MinibatchGradientDescent Int Int R | Minibatch Gradient Descent, takes seed, batch size and alpha. |
ConjugateGradientFR R R | Fletcher-Reeves conjugate gradient algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine). |
ConjugateGradientPR R R | Polak-Ribiere conjugate gradient algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine). |
BFGS2 R R | Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, takes size of first trial step (0.1 is fine) and tol (0.1 is fine). |
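For illustration, two ways one might construct a MinimizeMethod value, using the rules of thumb quoted above (the bindings gd and bfgs are just example names):

```haskell
gd :: MinimizeMethod
gd = GradientDescent 0.01      -- alpha = 0.01; requires feature normalization

bfgs :: MinimizeMethod
bfgs = BFGS2 0.1 0.1           -- first trial step and tol; 0.1 is fine per the docs
```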
minimize Source #

Arguments
:: Model a | |
=> MinimizeMethod | |
-> a | model (Least Squares, Logistic Regression etc.) |
-> R | epsilon, desired precision of the solution |
-> Int | maximum number of iterations allowed |
-> Regularization | regularization parameter |
-> Matrix | X |
-> Vector | y |
-> Vector | initial solution, theta |
-> (Vector, Matrix) | solution vector and optimization path |
Returns the solution vector (theta) and the optimization path. Each row of the optimization path has the format: [iteration number, cost function value, theta values...].
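A hypothetical end-to-end call (fitLeastSquares is an assumed helper name, not part of the module; it presumes x already has a bias column and, for gradient descent, normalized features; the Regularization value is passed through because its constructors are documented elsewhere):

```haskell
import MachineLearning.Regression

-- Hypothetical wrapper: fit a least-squares model with conjugate gradient,
-- epsilon = 0.0001, at most 1000 iterations.
fitLeastSquares :: Regularization -> Matrix -> Vector -> Vector -> (Vector, Matrix)
fitLeastSquares reg x y theta0 =
  minimize (ConjugateGradientFR 0.1 0.1) LeastSquares 0.0001 1000 reg x y theta0
```

The second column of the returned path is the cost at each iteration, which is handy for checking convergence.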
normalEquation :: Matrix -> Vector -> Vector Source #

Normal equation using matrix inverse; does not require feature normalization. It takes X and y, returns theta.

normalEquation_p :: Matrix -> Vector -> Vector Source #

Normal equation using pseudo-inverse; does not require feature normalization. It takes X and y, returns theta.
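Both compute the closed-form solution theta = (X'X)^-1 X'y. A minimal sketch of the two variants, assuming hmatrix types (an explicit inverse can fail or lose accuracy when X'X is near-singular, which is what the pseudo-inverse variant guards against):

```haskell
import Prelude hiding ((<>))
import Numeric.LinearAlgebra

-- theta = (X'X)^-1 X'y via an explicit matrix inverse.
normalEqSketch :: Matrix R -> Vector R -> Vector R
normalEqSketch x y = inv (tr x <> x) #> (tr x #> y)

-- The same least-squares solution via the Moore-Penrose pseudo-inverse.
normalEqPinvSketch :: Matrix R -> Vector R -> Vector R
normalEqPinvSketch x y = pinv x #> y
```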