15 Oct 2020 |
@_slack_fatiando_U01CFJ9PHSN:matrix.org | Hello, longtime admirer here, after some clarification on Verde's cross-validated spline function. I'm trying to fit a spline to point data with an average spacing of 1500 m. How do I go about choosing a 'good' set of mindist and damping values for this? Are there any rules of thumb to use? | 00:25:09 |
@_slack_fatiando_UMGLPTLAW:matrix.org | There's an example in the docs of how to automatically determine the best values; I'll try and find it later | 00:36:38 |
@_slack_fatiando_UMGLPTLAW:matrix.org | In reply to @_slack_fatiando_U01CFJ9PHSN:matrix.org: https://www.fatiando.org/verde/latest/tutorials/model_selection.html#cross-validated-gridders Look under the cross-validated gridders part | 04:32:10 |
@_slack_fatiando_UMGLPTLAW:matrix.org | (edited)
```python
dampings = [None, 1e-4, 1e-3, 1e-2]
mindists = [5e3, 10e3, 50e3, 100e3]
spline = vd.SplineCV(dampings=dampings, mindists=mindists)
```
| 04:33:05 |
@_slack_fatiando_UMGLPTLAW:matrix.org | In reply to @_slack_fatiando_UMGLPTLAW:matrix.org: This might not answer your question, though, of what range of values you should choose. I usually try a large range to make sure all bases are covered. If it chooses one of the end members, increase the range. | 04:34:30 |
@_slack_fatiando_U01CFJ9PHSN:matrix.org | In reply to @_slack_fatiando_UMGLPTLAW:matrix.org: cool, thanks Craig. Nested grid search it is 👍 | 04:38:42 |
@_slack_fatiando_UMGLPTLAW:matrix.org | In reply to @_slack_fatiando_U01CFJ9PHSN:matrix.org: mindists should somewhat reflect your data spacing, but I'm not really sure on the dampings. | 04:43:36 |
@_slack_fatiando_UMSRSPEMA:matrix.org | In reply to @_slack_fatiando_UMGLPTLAW:matrix.org: That's a great question, Thomas Ostersen. The rule of thumb for mindist is, as Craig Miller said, roughly the data spacing. For the damping, we run a sklearn StandardScaler to scale the Jacobian to unit variance, which has the effect of controlling the range of dampings. From personal experience, I would try values from 1e1 to 1e-8, probably with exponential spacing instead of linear spacing. For example, start with [10**i for i in range(-8, 2)] and refine as needed. | 08:18:27 |
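The exponentially spaced search suggested above can be sketched as follows. The mindist values are illustrative (picked roughly around the 1500 m data spacing mentioned earlier), and the commented-out `vd.SplineCV` call mirrors the snippet from earlier in the thread:

```python
# Dampings from 1e-8 to 1e1 with exponential spacing, as suggested above.
# None means no damping at all (pure interpolation).
dampings = [None] + [10.0**i for i in range(-8, 2)]

# Illustrative mindists for data with ~1500 m average spacing.
mindists = [1.5e3, 3e3, 7.5e3, 15e3]

# These lists would then be passed to the cross-validated spline, e.g.:
#   spline = vd.SplineCV(dampings=dampings, mindists=mindists)
#   spline.fit(coordinates, data)
print(len(dampings), "damping candidates")
```

If the grid search picks an end member of either list, widen that list and rerun, as suggested above.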
18 Oct 2020 |
@_slack_fatiando_U0156QCM6AH:matrix.org | Here's a new one from Chain.fit: ValueError: Too large work array required -- computation cannot be performed with standard 32-bit LAPACK. | 07:44:25 |
@_slack_fatiando_U0156QCM6AH:matrix.org | Not 64 bit by default? | 07:44:31 |
@_slack_fatiando_UMSRSPEMA:matrix.org | In reply to @_slack_fatiando_U0156QCM6AH:matrix.org: That's strange. It would be. But I think it's complaining about the underlying LAPACK used by numpy. What system is this running on? | 13:45:21 |
19 Oct 2020 |
@_slack_fatiando_U0156QCM6AH:matrix.org | In reply to @_slack_fatiando_UMSRSPEMA:matrix.org: Windows 10. I had an odd numpy warning/error about something possibly related the day before, so it could be an issue with something installed recently. I can see if it works in a different environment. | 03:10:08 |
@_slack_fatiando_U0156QCM6AH:matrix.org | In reply to @_slack_fatiando_U0156QCM6AH:matrix.org: Installed verde in a new environment; it seems to be running. | 03:52:52 |
@_slack_fatiando_U0156QCM6AH:matrix.org | In reply to @_slack_fatiando_U0156QCM6AH:matrix.org: so something in numpy got corrupted, I guess | 03:55:37 |
@_slack_fatiando_UMSRSPEMA:matrix.org | In reply to @_slack_fatiando_U0156QCM6AH:matrix.org: Yeah, I guess so. Could be something in sklearn, since they sometimes access BLAS and LAPACK directly. | 10:15:19 |
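When diagnosing errors like the 32-bit LAPACK one above, one quick check is to inspect which BLAS/LAPACK backend numpy was actually built against, since the error comes from the linked library rather than from verde itself:

```python
import numpy as np

# Print the BLAS/LAPACK libraries numpy was linked against (e.g. OpenBLAS,
# MKL). Useful for confirming which backend raised "computation cannot be
# performed with standard 32-bit LAPACK".
np.show_config()
```

Comparing this output between the broken and the freshly created environment would show whether the linked backend changed.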
19 Nov 2020 |
@_slack_fatiando_U01E7LSJCHH:matrix.org | Hi everyone!! I am trying to understand Verde. I understand that one of its goals is to interpolate unevenly spaced data. What is the difference with GMT's surface method? Is Verde oriented to bathymetric data? | 17:49:46 |
@_slack_fatiando_UMGLPTLAW:matrix.org | In reply to @_slack_fatiando_U01E7LSJCHH:matrix.org: While the examples given are bathymetric, the gridding/interpolation is relevant to any unevenly spaced data. I think somewhere on Slack there is a discussion of the differences between the Verde and GMT approaches. | 20:52:07 |
20 Nov 2020 |
@_slack_fatiando_U0156QCM6AH:matrix.org | https://www.researchgate.net/publication/328228073_Verde_Processing_and_gridding_spatial_data_using_Green's_functions | 04:48:09 |
@_slack_fatiando_UMSRSPEMA:matrix.org | In reply to @_slack_fatiando_UMGLPTLAW:matrix.org: As Craig mentioned, this works for any type of data (though for gravity and magnetics, using the equivalent source methods in Harmonica is better). Verde is more like the greenspline GMT program. The difference is in how we solve the inversion (GMT uses SVDs and we use damping regularisation) and how we handle weights, normalisation, etc. We're also very close to a solution to the large memory requirements (keep an eye out for santisoler's next paper). There are other features in there similar to GMT functions, like BlockReduce and BlockMean, which have a bit more flexibility than the GMT versions (hence the reimplementation instead of using pygmt). | 08:23:20 |
@_slack_fatiando_UMSRSPEMA:matrix.org | In reply to @_slack_fatiando_UMSRSPEMA:matrix.org: There is also this video of a tutorial we did earlier this year: https://youtu.be/-xZdNdvzm3E | 08:31:16 |
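For readers unfamiliar with the block reduction mentioned above: the idea is to split the region into spatial blocks and apply a reduction (e.g. the median) to the data falling in each block. Below is a toy standalone sketch of that idea in plain numpy; `block_median` is a hypothetical helper for illustration, not verde's actual BlockReduce implementation (which also handles regions, weights, and output coordinates):

```python
import numpy as np

def block_median(x, y, data, spacing):
    """Reduce scattered data to one median value per square block.

    Toy illustration of the block-reduction idea only.
    """
    # Assign each point to a block by integer division of its coordinates.
    ix = np.floor(np.asarray(x) / spacing).astype(int)
    iy = np.floor(np.asarray(y) / spacing).astype(int)
    blocks = {}
    for bx, by, value in zip(ix, iy, np.asarray(data, dtype=float)):
        blocks.setdefault((bx, by), []).append(value)
    # One median per occupied block.
    return {block: float(np.median(vals)) for block, vals in blocks.items()}

# Example: four points falling into two blocks of size 10.
result = block_median([1, 2, 11, 12], [1, 2, 1, 2], [1.0, 3.0, 5.0, 7.0], 10)
```

Reducing data this way before fitting a spline decimates dense clusters of points, which is why it pairs naturally with the gridders discussed above.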
@_slack_fatiando_U01E7LSJCHH:matrix.org | In reply to @_slack_fatiando_UMSRSPEMA:matrix.org: So, from the JOSS article, this would be the difference, right? "Some of these interpolation and data processing methods already exist in GMT ... . However, there are no model selection tools in GMT and it can be difficult to separate parts of the processing that are done internally by its modules. Verde is designed to be modular, easily extended, ... . It can be used to implement new interpolation methods." | 13:43:49 |