7 Mar 2017 |
@gitter_jonysy:matrix.org | In general, are symbolic NNs faster than non-symbolic NNs? If so, why does Leaf outperform Tensorflow? | 18:18:03 |
@gitter_botev:matrix.org | it used to outperform it back in the day, as tensorflow was not really symbolic then | 18:18:30 |
@gitter_jonysy:matrix.org | Would creating a graph-like container for Leaf make it symbolic? | 18:19:01 |
@gitter_botev:matrix.org | potentially, but that depends on what the graph does - its main benefit is being able to find and optimize intermediate computations | 18:20:37 |
@gitter_jonysy:matrix.org |
> find and optimize intermediate computations
Which is exactly what GIR does. Point taken. I really want to take Leaf’s philosophy, so to speak, and merge it with a symbolic approach... | 18:23:42 |
@gitter_botev:matrix.org | mmm you might want to look at pytorch then | 18:25:03 |
@gitter_botev:matrix.org | I think it is more like what you describe | 18:25:16 |
@gitter_jonysy:matrix.org | I was actually looking at nngraph (Torch container) | 18:25:20 |
@gitter_jonysy:matrix.org | Given your definition, that doesn't really make it _symbolic_ either? | 18:26:00 |
@gitter_jonysy:matrix.org | "non-sequential" doesn't necessarily mean "symbolic", right? | 18:26:53 |
@gitter_botev:matrix.org | nope | 18:33:35 |
@gitter_botev:matrix.org | symbolic means that you have like a compilation phase | 18:33:46 |
@gitter_botev:matrix.org | where you change the graph | 18:33:52 |
@gitter_botev:matrix.org | and when you construct it, it does not actually do any computation | 18:34:02 |
@gitter_botev:matrix.org | but rather when you run it | 18:34:05 |
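A minimal sketch of the distinction botev describes, using a toy expression type invented for illustration (not GIR's or any framework's actual API): building the graph performs no arithmetic; all computation is deferred until it is "run", which leaves room for a compilation phase to rewrite the graph in between.

```rust
// Toy symbolic expression graph: construction is pure data,
// evaluation is where the actual computation happens.
#[derive(Debug, Clone)]
enum Expr {
    Const(f64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

impl Expr {
    // Only here, at "run" time, does any arithmetic occur.
    fn eval(&self) -> f64 {
        match self {
            Expr::Const(c) => *c,
            Expr::Add(a, b) => a.eval() + b.eval(),
            Expr::Mul(a, b) => a.eval() * b.eval(),
        }
    }
}

fn main() {
    // Graph construction: no computation is performed on this line.
    let graph = Expr::Add(
        Box::new(Expr::Mul(
            Box::new(Expr::Const(2.0)),
            Box::new(Expr::Const(3.0)),
        )),
        Box::new(Expr::Const(4.0)),
    );
    // A compilation phase could inspect and rewrite `graph` here.
    println!("{}", graph.eval()); // 2*3 + 4 = 10
}
```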
@gitter_neverfox:matrix.org | does gir have any automatic optimizations at this stage? | 18:34:58 |
@gitter_neverfox:matrix.org | like if it gets x * 1 will it just drop the multiplication? | 18:35:32 |
@gitter_neverfox:matrix.org | or is it presumed that optimizations are the responsibility of something downstream? | 18:36:18 |
@gitter_botev:matrix.org | so at this stage no | 18:36:51 |
@gitter_botev:matrix.org | in general there should be 5 layers, as in LLVM: | 18:37:13 |
@gitter_botev:matrix.org |
1. Interface - since it's written in Rust, this does not exist in Rust itself, but you can export it to Python, etc., where it will have an API
| 18:37:39 |
@gitter_neverfox:matrix.org | I didn't think it did | 18:37:48 |
@gitter_botev:matrix.org |
2. IR - this is what currently is the `gir_core`
3. Backend-agnostic optimization on the IR
4. Backend-specific optimization - this will be the downstream backend's job
5. Backend code generation/compilation/linking
| 18:38:44 |
@gitter_botev:matrix.org | corrected it | 18:38:52 |
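The layering botev lists can be sketched with invented names (none of these are GIR's actual types): a toy IR, a backend-agnostic pass of exactly the kind neverfox asked about (rewriting `x * 1` to `x`), and a backend trait that owns its own optimization and code generation.

```rust
// Layer 2: a toy IR (stand-in for `gir_core`'s real graph).
#[derive(Debug, Clone, PartialEq)]
enum Ir {
    Input(&'static str),
    Const(f64),
    Mul(Box<Ir>, Box<Ir>),
}

// Layer 3: backend-agnostic optimization on the IR,
// e.g. rewriting `x * 1` and `1 * x` to `x`.
fn optimize(ir: Ir) -> Ir {
    match ir {
        Ir::Mul(a, b) => match (optimize(*a), optimize(*b)) {
            (x, Ir::Const(c)) | (Ir::Const(c), x) if c == 1.0 => x,
            (a, b) => Ir::Mul(Box::new(a), Box::new(b)),
        },
        other => other,
    }
}

// Layers 4-5: each backend applies its own optimizations and then
// generates code. Here: a trivial backend emitting C-like source.
trait Backend {
    fn compile(&self, ir: &Ir) -> String;
}

struct CSource;

impl Backend for CSource {
    fn compile(&self, ir: &Ir) -> String {
        match ir {
            Ir::Input(name) => name.to_string(),
            Ir::Const(c) => c.to_string(),
            Ir::Mul(a, b) => format!("({} * {})", self.compile(a), self.compile(b)),
        }
    }
}

fn main() {
    // Layer 1 would build this IR from a Python-facing API.
    let ir = Ir::Mul(Box::new(Ir::Input("x")), Box::new(Ir::Const(1.0)));
    println!("{}", CSource.compile(&optimize(ir))); // prints "x"
}
```

The point of the split is that the `x * 1` rewrite lives in layer 3 and benefits every backend, while anything tied to a particular target stays behind the `Backend` trait.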