## Thursday, July 25, 2013

### Scala DSL for Mahout in-core linear algebra (proposal)

I've been working on a Scala DSL for Mahout in-core linear algebra.

The code is here:
https://github.com/dlyubimov/mahout-commits/tree/dev-0.9.x-scala/math-scala

Here's what I've got so far.

### Inline initialization

dense vectors

val denseVec1: Vector = (1.0, 1.1, 1.2)
val denseVec2 = dvec(1, 0, 1.1, 1.2)

sparse vectors

val sparseVec = svec((5 -> 1) :: (10 -> 2.0) :: Nil)
val sparseVec2: Vector = (5 -> 1.0) :: (10 -> 2.0) :: Nil

matrix inline initialization, either dense or sparse, is always row-wise:

dense matrices:

val a = dense((1, 2, 3), (3, 4, 5))

sparse matrices

val a = sparse(
(1, 3) :: Nil,
(0, 2) :: (1, 2.5) :: Nil
)

diagonal matrix with constant diagonal elements

diag(10, 3.5)

diagonal matrix with main diagonal backed by a vector

diagv((1, 2, 3, 4, 5))

Identity matrix

eye(10)

### Element-wise operations

getting a vector element

val d = vec(5)

setting vector element

vec(5) = 3.0

getting matrix element

val d = m(3,5)

setting matrix element (setQuick() behind the scenes)

m(3,5) = 3.0

Getting matrix row or column

val rowVec = m(3, ::)
val colVec = m(::, 3)

Setting matrix row or column

m(3, ::) = (1, 2, 3)
m(::, 3) = (1, 2, 3)

the vector assignment operator := also works

m(3, ::) := (1, 2, 3)
m(::, 3) := (1, 2, 3)

subslices of a row or vector work too

a(0, 0 to 1) = (3, 5)

or with vector assignment

a(0, 0 to 1) := (3, 5)

taking a contiguous matrix block as a matrix, with assignment

// block
val b = a(2 to 3, 3 to 4)

// assignment to a block
a(0 to 1, 1 to 2) = dense((3, 2), (2, 3))

or through the matrix assignment operator

a(0 to 1, 1 to 2) := dense((3, 2), (2, 3))

Assignment by copying between vectors or matrices

vec1 := vec2
m1 := m2

This also works for the matrix subindexing notations above.

Assignment through a function literal (matrix)

m := ((row, col, x) => if (row == col) 1 else 0)

the same for a vector

vec := ((index, x) => sqrt(x))

### BLAS-like ops

plus/minus, either vector or matrix or numeric, with assignment or not

a + b
a - b

a + 5.0
a - 5.0

Hadamard (elementwise) product, either vector or matrix or numeric operands

a * b
a * 5

same things with assignment, matrix, vector or numeric operands

a += b
a -= b

a += 5.0
a -= 5.0

a *= b
a *= 5

One nuance here is Scala's associativity rules. E.g., 1/x in R (where x is a vector or matrix) is the elementwise inverse operation, and in the Scala realm it would be expressed as

val xInv = 1 /: x

and R's 5.0 - x would be

val x1 = 5.0 -: x

Even trickier, and probably not so obvious:

a -=: b

assigns a - b to b (in-place) and returns b. Similarly for a /=: b or 1 /=: v.

(all assignment operations, including :=, return the assignee argument, just like in C++)
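These right-associative semantics are plain Scala mechanics, not anything Mahout-specific: any method whose name ends in ':' is invoked on its right-hand operand. A minimal sketch with a toy vector class (hypothetical, not the actual Mahout classes) to illustrate the desugaring:

```scala
// Toy illustration of right-associative operators: a method whose name
// ends in ':' is invoked on the RIGHT operand, so `5.0 -: v` desugars
// to `v.-:(5.0)` and `1.0 /=: v` to `v./=:(1.0)`.
class Vec(val data: Array[Double]) {
  // elementwise (n - this), returning a new Vec; used as `n -: v`
  def -:(n: Double): Vec = new Vec(data.map(n - _))
  // elementwise (n / this) in place, returning this; used as `n /=: v`
  def /=:(n: Double): Vec = {
    for (i <- data.indices) data(i) = n / data(i)
    this
  }
}

val v = new Vec(Array(1.0, 2.0, 4.0))
val w = 5.0 -: v   // new vector (4.0, 3.0, 1.0); v untouched
val u = 1.0 /=: v  // v mutated in place to (1.0, 0.5, 0.25); u is v itself
```

The same mechanics make `a -=: b` resolve to `b.-=:(a)`, which is why it is b that gets modified and returned.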

dot product (vector operands)

a dot b

matrix/vector equivalence (or non-equivalence). Dangerous: exact equivalence is rarely useful; better to use norm comparisons that admit small errors

a === b
a !== b
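A quick plain-Scala illustration (no Mahout types needed) of why exact comparison is dangerous: floating-point round-off makes == fail even when the values agree to near machine precision, which is the reason to prefer comparing something like (a - b).norm against a small epsilon:

```scala
// Round-off defeats exact equality even for trivially "equal" values:
val x = 0.1 + 0.2        // actually 0.30000000000000004
val y = 0.3
val exactlyEqual = x == y                    // false
val closeEnough  = math.abs(x - y) < 1e-10   // true: compare with a tolerance
```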

Matrix multiplication (matrix operands)

a %*% b

for matrices that explicitly support optimized right and left multiply (currently, diagonal matrices)

right-multiply (for symmetry, in fact same as %*%)

diag(5,5) :%*% b

optimized left multiply with a diagonal matrix:

a %*%: diag(5,5) // i.e. the same as (diag(5,5) :%*% a.t).t

Second norm, vector or matrix argument:

a.norm

Transpose

val mt = m.t

### Decompositions (matrix arguments)

Cholesky decomposition (as an object of the CholeskyDecomposition class with all its operations)

val ch = chol(m)

SVD

val (u, v, s) = svd(m)

EigenDecomposition

val (v, d) = eigen(m)

QR

val (q, r) = qr(m)

Check for rank deficiency (runs rank-revealing QR)

m.isFullRank

### Misc

vector cardinality

a.length

matrix cardinality

m.nrow

m.ncol

a copy-by-value (vector or matrix)

val b = a cloned

### Pitfalls

People accustomed to writing R linear algebra will probably find this one quite easy to fall into. R has a nice property, copy-on-write, so all variables appear to act as side-effect-free, scalar-like values, and all assignments appear to be by value. Since Scala always assigns by reference (except for AnyVal types, which are assigned by value), it is easy to fall prey to the following side-effect modifications

val m1 = m

m1 += 5.0 // modifies m as well

A fix is as follows:

val m1 = m cloned
m1 += 5.0 // now m is intact
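The aliasing here is standard JVM reference semantics, so it can be sketched with a plain Scala Array (the Mahout Matrix behaves analogously):

```scala
// Assignment copies the reference, not the data:
val m  = Array(1.0, 2.0, 3.0)
val m1 = m          // m1 and m share the same underlying storage
m1(0) += 5.0        // m(0) is now 6.0 as well

// The fix: copy by value first (the analog of `m cloned` in the DSL)
val m2 = m.clone()
m2(0) += 5.0        // m2(0) becomes 11.0; m stays at 6.0
```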

### Putting it all together: in-core SSVD with power iterations

This is a port of my R-based SSVD prototype. Here is how it looks in the Scala DSL for Mahout (not quite tuned for performance yet, but enough to give you an idea):

/**
 * In-core SSVD algorithm.
 *
 * @param a input matrix A
 * @param k requested SSVD rank
 * @param p oversampling parameter
 * @param q number of power iterations
 * @return (U, V, s)
 */
def ssvd(a: Matrix, k: Int, p: Int = 15, q: Int = 0) = {
  val m = a.nrow
  val n = a.ncol
  if (k > min(m, n))
    throw new IllegalArgumentException(
      "k cannot be greater than the smaller of m, n")
  val pfxed = min(p, min(m, n) - k)

  // actual decomposition rank
  val r = k + pfxed

  // we actually fill the random matrix here,
  // just like in our R prototype, although technically
  // that would not be necessary if we implemented a specific random
  // matrix view. But ok, this should do for now.
  // It is actually the distributed version we are after -- we
  // certainly would try to be efficient there.

  val rnd = new Random()
  val omega = new DenseMatrix(n, r) := ((r, c, v) => rnd.nextGaussian)

  var y = a %*% omega
  var yty = y.t %*% y
  val at = a.t
  var ch = chol(yty)
  var bt = ch.solveRight(at %*% y)

  // power iterations
  for (i <- 0 until q) {
    y = a %*% bt
    yty = y.t %*% y
    ch = chol(yty)
    bt = ch.solveRight(at %*% y)
  }

  val bbt = bt.t %*% bt
  val (uhat, d) = eigen(bbt)

  val s = d.sqrt
  val u = ch.solveRight(y) %*% uhat
  val v = bt %*% (uhat %*%: diagv(1 /: s))

  (u(::, 0 until k), v(::, 0 until k), s(0 until k))
}
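For reference, a sketch of the linear algebra the code above follows (SSVD builds on the randomized projection scheme of Halko, Martinsson and Tropp, here with a Cholesky trick in place of an explicit QR; Q is never formed explicitly, since solveRight applies the inverse Cholesky factor, and each power iteration recomputes Y and refactors):

```latex
\begin{aligned}
Y &= A\Omega, \qquad Y^\top Y = LL^\top \quad \text{(Cholesky)}\\
Q &= Y(L^\top)^{-1}, \qquad B^\top = A^\top Y (L^\top)^{-1} = A^\top Q\\
BB^\top &= \hat{U} D \hat{U}^\top \quad \text{(eigendecomposition)}\\
\Sigma &= D^{1/2}, \qquad U = Q\hat{U}, \qquad V = B^\top \hat{U}\,\Sigma^{-1}
\end{aligned}
```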

And a very, very naive test for it:

test("SSVD") {

  val a = dense((1, 2, 3), (3, 4, 5))

  val (u, v, s) = ssvd(a, 2, q = 1)

  printf("U:\n%s\n", u)
  printf("V:\n%s\n", v)
  printf("Sigma:\n%s\n", s)

  val aBar = u %*% diagv(s) %*% v.t

  val amab = a - aBar

  printf("A-USV'=\n%s\n", amab)

  assert(amab.norm < 1e-10)
}

### A follow-up

Many people have asked, "Why not Matlab syntax? In particular, why not reserve the single-symbol '*' for matrix-matrix multiply?" Some thoughts on that:

• There's no preference for either syntax. The only preference is to use some existing grammar to make things look familiar. R-like or Matlab-like syntax makes no difference to me in this case, and I think adoption-wise they are about the same as well. So we could actually create support for both, that much is easy. We'd basically have to additionally call "import RLikeOps._" or "import MatlabLikeOps._" to enable one or the other. But, see further.
• Element-wise *,+,-,/ are defined on the same geometry and follow the same pattern. They also all define in-place operators which are very popular since they don't involve creation and cloning of a new matrix as a result ("op"= stuff).  Matrix-matrix multiplication seems to be a special case. In that sense, R approach looks more consistent to me.
• Matlab's elementwise operation syntax ".*" seems to run into problems with Scala. Scala does not support the '.' symbol as part of operators; indeed, that would have created ambiguities in Scala.
• In-place elementwise operations would then look even more complicated, i.e. they would have to be '.*=', '.+=', etc. But there would be no such simple-looking thing as *= or /= (since in-place matrix-matrix multiplication is not possible due to the cardinality mismatch in the result).
• The rabbit hole goes even further when we consider right-associative operations such as the elementwise inversion 1/x (R style), which is pretty popular with me too. That thing would have to look like '1 ./=: x' in this case, which is even more cryptic. '1 /: x' or its in-place version '1 /=: x' looks much more bearable to common sense and Scala styling.
• %*%, like many things in R, looks quirky to me, true. But in many ways, matrix-matrix multiplication is also "quirky" and, most notably, computationally heavy. It is asymptotically significantly more expensive and always requires a new matrix to be created (no in-place opportunities here). So, subjectively, asking people to type this pretty inconvenient %*% thing also reflects the heavy nature of the operation, and IMO puts up a subconscious warning -- "are you sure this is the best route? This is expensive!"