31 Mar 2023 |

ngn | "row" and "column" are relative to the default orientation in which a matrix is written | 00:02:49 |

ngn | so it may be better to describe the order as "lexicographic order of index tuples" | 00:03:11 |

ngn | the convention for matrices happens to be "first axis down, second axis right" | 00:05:12 |

ngn | contrary to the convention in geometry where the point (x,y) lies x units right and y units up from the origin | 00:05:42 |

Moonchild | well there is a question of what lexicographic order is used by ravel—axes are ordered from 'beginning' to 'end' rather than the other way around, which would give column-major raveling of matrices | 00:07:34 |
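
A quick Python sketch of what "lexicographic order of index tuples" means for ravel (illustrative, not from the chat; `itertools.product` happens to emit tuples in exactly this order):

```python
from itertools import product

# A 2x3 matrix as nested lists; "index order" / ravel order in APL/BQN
# is the lexicographic order of the index tuples: (0,0),(0,1),(0,2),(1,0),...
m = [[1, 2, 3],
     [4, 5, 6]]

shape = (2, 3)
index_tuples = list(product(*(range(n) for n in shape)))  # lexicographic
ravel = [m[i][j] for i, j in index_tuples]
print(index_tuples)  # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
print(ravel)         # [1, 2, 3, 4, 5, 6] -- row-major: first axis varies slowest
```

Ordering axes "beginning to end" (first axis outermost) gives this row-major ravel; the reversed ordering would give the column-major ravel Moonchild mentions.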

jatta | Matrix * vector multiplication would be done row-wise `m[1,1] * v[1] + m[1,2] * v[2]` instead of column-wise `(m[1,1] * v[1], m[2,1] * v[1])`? | 00:13:18 |

ngn | (replying to Moonchild) beginning to end (i.e. first, second) of course | 00:16:43 |

Marshall | I tried to untangle some of the various orderings on this wiki page. I use "index order" instead of "ravel order" in BQN, a lot like ngn's suggestion. | 00:17:54 |

ngn | im sorry for accidentally contributing to dyalog's wiki | 00:18:25 |

Moonchild | lol | 00:19:38 |

ngn | (replying to jatta) matrix multiplication is generalized to "inner product" in apl. it works on any-dimensional arrays. better to think of it as merging the last axis of alpha (the left argument) with the first axis of omega (the right argument) than as row- or column-wise. | 00:21:31 |
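
A minimal Python sketch of the matrix-vector case of this "merge alpha's last axis with omega's first axis" view (the function name `inner` is mine, not APL's `+.×` spelling):

```python
def inner(alpha, omega):
    """+.x for matrix alpha and vector omega: pair each element of a row of
    alpha (its last axis) with the matching element of omega (its first and
    only axis), multiply, and sum."""
    return [sum(a * w for a, w in zip(row, omega)) for row in alpha]

m = [[1, 2],
     [3, 4]]
v = [10, 100]
print(inner(m, v))  # [210, 430]
```

The same merge rule extends to higher-rank arguments, which is why the APL definition never needs to mention rows or columns.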

ngn | afaik, the other apl-likes do it the same way, and it makes sense | 00:22:45 |

ngn | * im sorry for accidentally ~~contributing to~~ agreeing with dyalog's wiki | 00:27:32 |

jatta | I guess the reason this order is nice is that it lets you express `a[i,j]` as `a[i][j]` equally efficiently. Under column-major storage `a[i]` is too slow. | 00:43:51 |
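
Illustrating jatta's point in Python (a sketch, not APL): under row-major storage `a[i,j]` lives at flat offset `i*ncols + j`, so the whole row `a[i]` is one contiguous slice:

```python
# Row-major storage of a 3x4 matrix as one flat list.
nrows, ncols = 3, 4
flat = list(range(nrows * ncols))

i, j = 1, 2
elem = flat[i * ncols + j]               # a[i,j] as a single offset
row_i = flat[i * ncols : (i + 1) * ncols]  # a[i]: contiguous, no hopping
print(elem)   # 6
print(row_i)  # [4, 5, 6, 7]
```

Under column-major storage the same `row_i` would require a strided gather, which is why `a[i]` becomes the slow case there.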

jatta | (idk if there's a way to describe column-major storage in apl vocabulary) | 00:46:19 |

Moonchild | no | 00:46:23 |

Moonchild | you can lay out arrays however you want | 00:46:25 |

jatta | but you can't make an efficient apl if `a[i]` requires hopping all over the place | 00:47:43 |

ngn | as long as you store vectors in consecutive memory cells, no | 00:48:30 |

Moonchild | the non-strawman case involves i being a large array of indices, which must hop all over the place anyway | 00:48:50 |

ngn | for matrices one of the two will inevitably have to be hopping: `a[i;]` or `a[;j]` | 00:49:00 |
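
A Python sketch of the inevitable hop ngn describes (illustrative only): with row-major storage, the column `a[;j]` is a strided gather that touches every `ncols`-th cell:

```python
nrows, ncols = 3, 4
flat = list(range(nrows * ncols))  # row-major storage

j = 1
col_j = flat[j::ncols]  # a[;j]: indices j, j+ncols, j+2*ncols, ...
print(col_j)  # [1, 5, 9]
```

Swap to column-major storage and the roles flip: columns become contiguous and rows become the strided case.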

Moonchild | regardless—yeah, that | 00:49:19 |

Moonchild | (and see also stuff like morton order, which makes *both* hop around, but somewhat less than they might otherwise) | 00:49:43 |
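
A hedged sketch of Morton (Z-order) indexing in Python: interleave the bits of the two indices, so neighbours along either axis stay fairly close in memory (the function name and `bits` parameter are my choices):

```python
def morton(i, j, bits=8):
    """Interleave the low `bits` bits of i and j into a Z-order index."""
    z = 0
    for b in range(bits):
        z |= ((i >> b) & 1) << (2 * b + 1)  # bits of i go to odd positions
        z |= ((j >> b) & 1) << (2 * b)      # bits of j go to even positions
    return z

# Stepping along either axis moves a bounded, alternating distance:
print([morton(0, j) for j in range(4)])  # [0, 1, 4, 5]
print([morton(i, 0) for i in range(4)])  # [0, 2, 8, 10]
```

Neither rows nor columns are contiguous, but both stay within small tiles, which is the "both hop, but less" trade-off.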

ColTim#5847 | I thought this was the C/Fortran split about how multi-dimensional arrays were laid out in memory | 01:05:21 |

jatta | It is | 01:07:20 |

jatta | https://en.wikipedia.org/wiki/Row-_and_column-major_order | 01:07:32 |

ColTim#5847 | I wonder if it is as relevant now. I thought the way cells are accessed in e.g. matrix multiplies is in blocks or chunks | 01:13:39 |

ColTim#5847 | so you're not traversing the entire row or column | 01:14:11 |