!EjsgbQQNuTfHXQoiax:matrix.org

BQN
43 Members · 4 Servers
The BQN array programming language, an APL descendant



16 May 2021
dzaima: i guess •Decompose does kind of break the assumptions that make rtperf work 11:53:21
dzaima: right, •Decompose entirely breaks rtperf's methodology (rtperf assumes that the runtime of rtperf-wrapped modifier operands is made up entirely of other rtperf-wrapped things, and that the impl itself consists of 0 rtperf-wrapped calls, allowing for a simple implTime←totalTime-innerRtPerfTime. •Decompose, however, allows rtperf-wrapped modifier operands to be anything, breaking the hierarchy. One solution would be to make •Decompose remove the rtperf wrappers of results, but that's a bit complicated) 12:02:45
dzaima: easier to just count it as part of the time though :) 12:05:36
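[Editor's note] The rtperf accounting dzaima describes (implTime ← totalTime − innerRtPerfTime) can be sketched in Python. This is an illustrative toy, not dzaima's actual instrumentation; the RtPerf class and all its names are invented for the example.

```python
import time

class RtPerf:
    """Toy model of the rtperf methodology: a wrapped call's self (impl) time
    is its total time minus the time spent in directly nested wrapped calls."""

    def __init__(self):
        self.stack = []      # one inner-time accumulator per active wrapped call
        self.self_time = {}  # name -> accumulated self time

    def wrap(self, name, fn):
        def wrapped(*args):
            start = time.perf_counter()
            self.stack.append(0.0)            # accumulator for nested wrapped calls
            result = fn(*args)
            total = time.perf_counter() - start
            inner = self.stack.pop()
            # implTime ← totalTime - innerRtPerfTime
            self.self_time[name] = self.self_time.get(name, 0.0) + (total - inner)
            if self.stack:                    # charge our whole interval to the caller
                self.stack[-1] += total
            return result
        return wrapped
```

This is exactly the assumption •Decompose breaks: an operand reached through •Decompose need not be wrapped at all, so its time is silently folded into the wrapper's self time.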
Marshall: Confirmed that this definition is enough to run the compiler (basically the same as commit 6a256ca, but maintaining the fill): 12:15:56

_under_↩{
  i←↕l←1×´s←≢𝕩
  v←𝕨𝔽○𝔾𝕩 ⋄ gi←𝔾s⥊i
  n←(IsArray gi)⊑{⟨𝕩⟩}‿⥊ ⋄ v↩N v ⋄ gi↩N gi
  g←⍋ gi
  P←(≠g)⊸≤◶⟨(⊑⟜g)⊑gi˜,l⟩
  e←P j←0
  𝕩 ⊣_fillBy_⊢˜ s⥊{e=𝕩}◶⟨⊑⟜(⥊𝕩),{𝕩⋄r←(j⊑g)⊑v⋄e↩P j↩1+j⋄r}⟩⌜i
}
dzaima: woah, that's, like, fast 12:18:11
Marshall: Looks like right operands are always of the form ⊸⊏, ⊸/, ⊸⊑, or similar, so you could even detect and use the special case fairly easily if you don't want the compiler to use a special runtime. 12:18:25
dzaima: at some point i intend to set up dzaima/BQN-style under support, falling back to the runtime on unimplemented stuff 12:19:34
Marshall: dzaima/BQN is definitely fastest when it works. Although I don't think the runtime version is too hard to implement either. It's a real headache when your only mutable data structure is a closure and you're trying to maintain performance, but with mutable arrays all the reinsertion is pretty simple. 12:21:42
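[Editor's note] The reinsertion Marshall describes can be sketched with mutable arrays in a few lines of Python. This is a toy model of structural Under on flat lists, not the BQN runtime; the name structural_under and its handling of non-list results are assumptions for the example. The idea: apply 𝔾 to an array of indices to learn which cells it selects, transform the selected values with 𝔽, then write each result back at its source index.

```python
def structural_under(F, G, x):
    """Toy structural Under on flat Python lists: x is modified under G by F."""
    i = list(range(len(x)))       # index of every cell of x
    gi = G(i)                     # indices of the cells G selects
    if not isinstance(gi, list):  # unit result: treat as a one-element list
        gi = [gi]
    v = F(G(x))                   # transformed values for those cells
    if not isinstance(v, list):
        v = [v]
    out = list(x)
    for idx, val in zip(gi, v):   # reinsert each value at its source index
        out[idx] = val
    return out
```

For example, structural_under(lambda v: [n * 10 for n in v], lambda a: a[:2], [1, 2, 3, 4]) gives [10, 20, 3, 4]. With a mutable output array the scatter step is a plain loop; the closure-only version Marshall alludes to has to rebuild the array instead.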
Marshall: Same with the special version. The last four lines are just to reinsert values v at indices gi. 12:22:54
dzaima: what 12:47:57
Marshall: The runtime doesn't use Catch, so structural Under does a manual error-trapping thing. I'm guessing this is failing on the fairly complicated result of ⋆⁼. 13:17:22
dzaima: another weird under behavior 13:25:50
dzaima: that's extremely broken - ≢that gives ⟨0⟩, whereas ≢⥊that gives ⟨1⟩ 13:27:04
Marshall: Oh, so it called the built-in with an invalid shape (left argument ⟨0⟩ and right argument a 1-element list). 13:29:04
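[Editor's note] The invariant the broken result violates is that an array's shape multiplies out to the length of its ravel (×´≢𝕩 must equal ≠⥊𝕩). A one-line Python check, purely illustrative and not part of either implementation:

```python
from math import prod

def shape_consistent(shape, ravel):
    """A well-formed array satisfies ×´shape = ≠ravel; the result above
    claims shape ⟨0⟩ while its ravel has one element, so this fails."""
    return prod(shape) == len(ravel)
```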
dzaima: yeah 13:29:14
dzaima: CBQN gives this for @⊸+⌾⊑ ↕0 13:31:29
Marshall: Adjusting Recompose to detect if a modifier is StructErr˙ fixes 4 +⌾(⋆⁼) 5. Need to think a little more about whether other cases can have similar problems. 13:38:13
dzaima: in other news, I've implemented native 100⊸+⌾⊑ ↕10 going through dzaima/BQN-style dispatch (because that doesn't need modifiers, which are a bit more complicated) 13:45:58
dzaima: heh, making F⌾(a⊸/) a no-op makes CBQN fail 22 prim tests, but still pass 366 14:25:53
Marshall: Okay, fixed both of those. For modifying fills, I fill index arrays with a character and then check that none of the indices coming out are characters. Doesn't seem to have a significant performance impact in JS. 14:49:23
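[Editor's note] Marshall's fill trick can be approximated in Python with a sentinel object standing in for the character fill; checked_indices and FILL are invented names for this sketch, not the JS runtime's API. The index array handed to the structural function uses the sentinel as its fill, and if any index coming out is the sentinel, the function depended on fill elements and the operation is rejected.

```python
FILL = object()  # stands in for the character fill used in the real runtime

def checked_indices(G, n):
    """Apply structural function G to the indices 0..n-1 and verify that
    no fill sentinel leaked into the selected indices."""
    i = list(range(n))
    gi = G(i)
    gi = gi if isinstance(gi, list) else [gi]
    if any(j is FILL for j in gi):
        raise ValueError("structural Under: function read a fill element")
    return gi
```

A function that overtakes past the end of its argument (pulling in fill cells) trips the check, while ordinary selection passes.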
dzaima: and i think i have a working vector ⌾(a⊸/), saving another 10ms from prim tests 15:13:59
dzaima: with the latest mlochbaum/BQN, it seems like 219→205ms or so. Before latest pull it was <200ms 15:19:43
dzaima: rtperf before, after; monadic itself has increased in time, but has dropped a lot 15:35:03
dzaima: pushed; gonna do some non-BQN things now 15:37:54
dzaima: (that's still for 16×prim; ./test.bqn ~/git/BQN -sq prim > SP; ./build -DRT_PERF && cat SP SP SP SP SP SP SP SP SP SP SP SP SP SP SP SP | ./BQN > rtperf, formatted with •Out∾˘⍉{𝕩↑¨˜⌈´≠¨𝕩}˘⍉>{+´•Eval¨¯3‿¯2↓¨1↓¨2‿5⊏𝕩}¨⊸(⍒⊸⊏){𝕩⊔˜+`𝕩=' '}¨(⊑'|'⊸∊)¨⊸/•FLines"path/to/rtperf") 15:45:11
rowan: btw i switched ebqn environments to using a mutable byte array of indices that point to a value in a heap and it slowed it down by like 4 times, so that's fun. 15:46:54
17 May 2021
rowan: ^ my conclusion is that i think i've spent enough energy wrestling performance out of the Erlang VM and that a Rust FFI is the easier path at this point. 14:41:35
