!CsEtqGNbWxRKqvYHog:matrix.org

rust-ml

4 Members
Bridged to https://gitter.im/rust-ml/Lobby



17 Mar 2017
@gitter_jonysy:matrix.org (edited) ... it compiles (or ... => ... it _compiles_ (or ... 17:00:06
@gitter_jonysy:matrix.org (edited) ... (or translates, for ... => ... (or _translates_, for ... 17:00:16
@gitter_botev:matrix.org here we abstract the architecture to a numerical framework (opencl, cuda, arrayfire, etc.), which is a level above binary code 17:00:35
@gitter_jonysy:matrix.org Right. So a sigmoid function 1.0 / (1.0 + exp(-x)) in native Rust could compile to CL like so:
__kernel void sigmoid(__global float *a, __global float *b) {
    size_t i = get_global_id(0);  // one work-item per element
    b[i] = 1.0f / (1.0f + exp(-a[i]));
}
17:02:40
@gitter_botev:matrix.org yes, elementwise functions would all look like that 17:03:21
@gitter_jonysy:matrix.org How does arrayfire’s JIT work? 18:31:14
@gitter_jonysy:matrix.org How does it compile a function at run time and then skip to the compiled kernel for subsequent calls? 18:32:31
@gitter_jonysy:matrix.org (edited) ... does it compile a function at run time and then skip to the compiled kernel for subsequent calls? => ... does arrayfire’s JIT engine work? 18:32:44
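
(For context: JIT engines of this kind generally compile a fused kernel the first time an expression is evaluated and cache it under a key derived from the expression tree, so subsequent calls skip straight to the compiled binary. A minimal Rust sketch of that compile-once, cache-by-key pattern; the names and types are illustrative, not ArrayFire's actual internals.)

    use std::collections::HashMap;

    // Illustrative stand-in for a compiled GPU kernel handle.
    struct CompiledKernel;

    fn compile(_source: &str) -> CompiledKernel {
        // Expensive step: invoke the OpenCL/CUDA compiler here.
        CompiledKernel
    }

    struct JitCache {
        // Keyed by a canonical form (or hash) of the expression tree.
        kernels: HashMap<String, CompiledKernel>,
    }

    impl JitCache {
        fn get_or_compile(&mut self, expr_key: &str, source: &str) -> &CompiledKernel {
            // First evaluation compiles; later calls with the same expression
            // hit the cache and jump straight to the existing kernel.
            self.kernels
                .entry(expr_key.to_owned())
                .or_insert_with(|| compile(source))
        }
    }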
19 Mar 2017
@gitter_jonysy:matrix.org

Poll:

    // Multilayer perceptron (MLP)
    let net_cfg = {
        SequentialConfig::new()
            .input("data", [batch_size, 28, 28])
            .force_backward(true)
            .layer("reshape", Reshape::new().with_shape([batch_size, 784]))
            .layer("linear-1", Linear::new().output_size(1568))
            .layer("sigmoid", Sigmoid)
            .layer("linear-2", Linear::new().output_size(10))
            // ..
            .layer("log_softmax", LogSoftmax)
    };

or

    // Multilayer perceptron (MLP)
    let net = Sequential::new([
        Input("data", [batch_size, 28, 28]),
        Reshape(Layer::new("reshape").with_shape([batch_size, 784])),
        Linear(Layer::new("linear-1").output_size(1568)),
        Sigmoid(Layer::new("sigmoid")),
        Linear(Layer::new("linear-2").output_size(10)),
        LogSoftmax(Layer::new("log_softmax"))
    ]);
20:30:06
@neverfox:matrix.org If you do the second style, why not Reshape::new("reshape")...? 21:23:19
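
(Spelled out, that suggestion would make each layer type its own constructor; a hypothetical sketch in the same pseudocode style as the poll above, not an existing API:)

    // Multilayer perceptron (MLP), third style
    let net = Sequential::new([
        Input::new("data", [batch_size, 28, 28]),
        Reshape::new("reshape").with_shape([batch_size, 784]),
        Linear::new("linear-1").output_size(1568),
        Sigmoid::new("sigmoid"),
        Linear::new("linear-2").output_size(10),
        LogSoftmax::new("log_softmax"),
    ]);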
21 Mar 2017
@gitter_jonysy:matrix.org Is anyone here interested in being one of the maintainers of the leaf fork? 19:14:54
22 Mar 2017
@gitter_jonysy:matrix.org Why is there a dx argument in the gradient functions here 16:38:16
@gitter_jonysy:matrix.org (edited) ... functions [here](https://github.com/autumnai/collenchyma-nn/blob/master/src/frameworks/native/helper.rs#L39) => ... functions [here](https://github.com/autumnai/collenchyma-nn/blob/master/src/frameworks/native/helper.rs#L39)? 16:38:21
@gitter_jonysy:matrix.org (edited) Why is there a `dx` argument in ... => Why are there a `dx` arguments in ... 16:40:16
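
(Background on the question: a pointwise backward pass generally needs both the forward inputs and the gradient arriving from the layer above, and the extra argument carries the latter. A minimal sketch with illustrative names, not collenchyma-nn's actual signatures:)

    // Backward pass for y = sigmoid(x): by the chain rule,
    // dL/dx = dL/dy * sigmoid'(x) = dy * s * (1 - s), where s = sigmoid(x).
    // `dy` is the upstream gradient (often named `dx` or `x_diff` in helpers).
    fn sigmoid_grad(x: &[f32], dy: &[f32], dx_out: &mut [f32]) {
        for i in 0..x.len() {
            let s = 1.0 / (1.0 + (-x[i]).exp());
            dx_out[i] = dy[i] * s * (1.0 - s);
        }
    }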
25 Mar 2017
@gitter_jonysy:matrix.org I cannot, for the life of me, figure out why the switched the parameters a and b here. Can you help me out? 00:43:26
@gitter_jonysy:matrix.org (edited) ... why the switched ... => ... why they switched ... 00:43:37
@gitter_jonysy:matrix.org (edited) ... the parameters ... => ... the order of parameters ... 00:43:47
@gitter_jonysy:matrix.org Never mind. CUDA expects a column-major memory layout. 01:40:38
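
(The trick behind the swap: a row-major m-by-k buffer reads as the k-by-m transpose in column-major order, and since C^T = B^T * A^T, calling a column-major GEMM with the operands swapped writes exactly the row-major product. A small self-contained check in plain Rust, no GPU required:)

    // Column-major GEMM: c (m x n) = a (m x k) * b (k x n).
    fn gemm_colmajor(m: usize, n: usize, k: usize, a: &[f32], b: &[f32], c: &mut [f32]) {
        for j in 0..n {
            for i in 0..m {
                let mut acc = 0.0;
                for p in 0..k {
                    acc += a[i + p * m] * b[p + j * k];
                }
                c[i + j * m] = acc;
            }
        }
    }

    fn main() {
        // Row-major A (2x3) and B (3x2).
        let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]; // A = [[1,2,3],[4,5,6]]
        let b = [1.0, 0.0, 0.0, 1.0, 1.0, 1.0]; // B = [[1,0],[0,1],[1,1]]
        let mut c = [0.0f32; 4];
        // Row-major A is column-major A^T, so swapping the operands computes
        // C^T = B^T * A^T; the output buffer is then row-major C = A * B.
        gemm_colmajor(2, 2, 3, &b, &a, &mut c);
        assert_eq!(c, [4.0, 5.0, 10.0, 11.0]); // C = [[4,5],[10,11]]
    }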
21 Apr 2017
@cathalgarvey:matrix.org http://futhark-lang.org/ 07:21:39
@cathalgarvey:matrix.org Looks like a much funner way to write kernels :) 07:21:49
29 May 2017
@cathalgarvey:matrix.org A new framework appears! https://github.com/torchrs/torchrs 08:39:26
@neverfox:matrix.org Dammit, that was my idea. 14:12:19
@neverfox:matrix.org Oh well, it exists now, so I think we can consider NN in Rust solved. The PyTorch API is excellent and uses graph-based AD. 14:13:48
29 Jul 2017
@aurabindo:matrix.org changed their display name from Aurabindo to jayaura. 06:35:07
@aurabindo:matrix.org changed their display name from jayaura to Aurabindo. 06:35:19
15 Aug 2017
@gitter_djkooks:matrix.org changed their profile picture. 15:22:03
9 Nov 2017
@gitter_usamec:matrix.org joined the room. 15:11:45
@gitter_usamec:matrix.org Hi. Does GIR support creation of recurrent nets? 15:11:46
@gitter_botev:matrix.org changed their profile picture. 16:20:23
@gitter_botev:matrix.org unfortunately I'm no longer developing that project due to being busy with my PhD 16:20:24


