
Fatiando a Terra - harmonica

23 Members
Processing and modeling gravity and magnetic data | +fatiando:matrix.org | 1 Server



Sender | Message | Time
23 Feb 2021
@_slack_fatiando_U01NSQW42BY:matrix.orgFethi Ali Cheddad joined the room.01:33:22
12 Apr 2021
@_slack_fatiando_U017UPY031N:matrix.orgShibG joined the room.14:02:22
21 Apr 2021
@_slack_fatiando_UT0BDHKPE:matrix.orgLorenzo Hi Thibaut Astic, tomorrow there is a tutorial on processing gravity data with Harmonica (https://www.youtube.com/watch?v=0bxZcCAr6bw), given by santisoler, at Transform 21. Especially for the second point I'm also interested in knowing the best practice (for now, cross-validation allows comparing different hyperparametrizations of the damping factor and relative depth, and evaluating which pair gives the best score according to a chosen metric). However, I'm also curious to know more about that. 06:07:08
@_slack_fatiando_UMSRSPEMA:matrix.orgleouieda Hey Thibaut Astic great to have you here! 1. For this one in particular you don't have to specify it since the forward modelling is just 1/distance. You can use this for any harmonic function (hence the name) but there are some restrictions on what you can do: 1) you can't mix data types (gravity + gradients) and 2) no reduction to the pole (since it's not really a magnetic field). For those, you'd need to have the actual field in the forward modelling. We started with this one because it's generally useful and easy to implement but we have plans to add more specialised sources as well. 2. The damping is usually somewhere between 1e-8 and 1e2. Varying by order of magnitude is fine. This is true for most problems since we normalise the coefficients (see this preprint for details: https://doi.org/10.31223/X58G7C). The depth is trickier and depends on the wavelengths in the data. But starting with the average data point spacing would be sensible. For both of these, using Verde's cross-validation functions is a good idea. An undergrad is doing a project on this very topic with me this year so hopefully we'll have more concrete results to show soon. 3. We don't have the FFT processing yet because we've been waiting for a feature in xrft (FFT for xarray). Until their last release there actually wasn't an inverse transform implemented. Now that they have it, implementing continuation through FFT should be very easy and we'd love a PR with this 😉 4. In general, equivalent source results are much better. They suffer less from instabilities (or not at all) and boundary effects. Plus you can upward continue without gridding. So if you're doing an inversion and don't want to interpolate, you can still upward continue to isolate deeper sources for example. 07:39:16
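[Editor's note] The damping range leouieda quotes can be seen in a small numpy-only sketch of the damped least-squares fit behind equivalent sources (illustrative code under the assumptions stated in the comments, not harmonica's actual API): the data misfit grows as the damping increases, which is why varying it by orders of magnitude is enough to bracket a good value.

```python
import numpy as np

# Numpy-only sketch of damped equivalent-source fitting (illustrative,
# not harmonica's API). One point source sits at a fixed depth below
# each observation point; the Green's function is 1/distance.
rng = np.random.default_rng(0)
n = 50
east = rng.uniform(0, 1000, n)
north = rng.uniform(0, 1000, n)
upward = rng.uniform(50, 80, n)   # observation heights
depth = 100.0                     # relocate sources below the data

def jacobian(obs_up, src_up):
    # 1/distance between every data point and every source
    de = east[:, None] - east[None, :]
    dn = north[:, None] - north[None, :]
    du = obs_up[:, None] - src_up[None, :]
    return 1.0 / np.sqrt(de**2 + dn**2 + du**2)

G = jacobian(upward, upward - depth)
data = G @ rng.normal(size=n)     # synthetic data from known coefficients

# Tikhonov-damped normal equations: (G.T G + damping * I) c = G.T d.
# Sweeping the damping by orders of magnitude (1e-8 .. 1e2) is enough;
# the data misfit grows monotonically with the damping.
residuals = []
for damping in 10.0 ** np.arange(-8, 3, 2):
    coeffs = np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ data)
    residuals.append(np.linalg.norm(G @ coeffs - data))
```

In practice the sweep over `damping` (and over source `depth`) would be scored with cross-validation rather than the training residual, which is what Verde's cross-validation tools are for.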
@_slack_fatiando_UMGLPTLAW:matrix.orgCraig Miller
In reply to@_slack_fatiando_UMSRSPEMA:matrix.org
Hey Thibaut Astic great to have you here! 1. For this one in particular you don't have to specify it since the forward modelling is just 1/distance. You can use this for any harmonic function (hence the name) but there are some restrictions on what you can do: 1) you can't mix data types (gravity + gradients) and 2) no reduction to the pole (since it's not really a magnetic field). For those, you'd need to have the actual field in the forward modelling. We started with this one because it's generally useful and easy to implement but we have plans to add more specialised sources as well. 2. The damping is usually somewhere between 1e-8 and 1e2. Varying by order of magnitude is fine. This is true for most problems since we normalise the coefficients (see this preprint for details: https://doi.org/10.31223/X58G7C). The depth is trickier and depends on the wavelengths in the data. But starting with the average data point spacing would be sensible. For both of these, using Verde's cross-validation functions is a good idea. An undergrad is doing a project on this very topic with me this year so hopefully we'll have more concrete results to show soon. 3. We don't have the FFT processing yet because we've been waiting for a feature in xrft (FFT for xarray). Until their last release there actually wasn't an inverse transform implemented. Now that they have it, implementing continuation through FFT should be very easy and we'd love a PR with this 😉 4. In general, equivalent source results are much better. They suffer less from instabilities (or not at all) and boundary effects. Plus you can upward continue without gridding. So if you're doing an inversion and don't want to interpolate, you can still upward continue to isolate deeper sources for example.
I'm also interested to learn more about this. For example can the equivalent layer/source be used to transform TMI into amplitude?
21:26:35
23 Apr 2021
@_slack_fatiando_U01UHJAAEDD:matrix.orgThibaut Astic joined the room.21:13:31
@_slack_fatiando_U01UHJAAEDD:matrix.orgThibaut Astic changed their display name from _slack_fatiando_U01UHJAAEDD to Thibaut Astic.21:15:24
@_slack_fatiando_U01UHJAAEDD:matrix.orgThibaut Astic set a profile picture.21:15:25
@_slack_fatiando_U01UHJAAEDD:matrix.orgThibaut Astic Hi, after watching the tutorial, I have a question to avoid confusion regarding upward: when passing the upward argument to the equivalent sources, should I consider the height of the new data points to be upward + original_elevation, or are they lying on a flat surface at elevation = upward regardless of their original elevation? 21:15:25
24 Apr 2021
@_slack_fatiando_U0156QCM6AH:matrix.orgRichard Scott Good question! 00:22:27
26 Apr 2021
@_slack_fatiando_UMFSBQVMG:matrix.orgsantisoler
In reply to@_slack_fatiando_U01UHJAAEDD:matrix.org
Hi, after watching the tutorial, I have a question to avoid confusion regarding upward: when passing the upward argument to the equivalent sources, should I consider the height of the new data points to be upward + original_elevation, or are they lying on a flat surface at elevation = upward regardless of their original elevation?
Hi Thibaut Astic! Nice question. The upward parameter of the grid, profile or any set of coordinates passed to the predict method is an absolute height of the coordinates, i.e. the elevation of the points measured from the zero height. That's why in the tutorial I got the maximum elevation of the data points and chose an upward slightly higher to ensure I'm upward continuing. BTW, the EQL classes don't store the data points we used to fit them, only the coefficients of the sources that are used for predicting.
13:17:00
@_slack_fatiando_UMFSBQVMG:matrix.orgsantisoler
In reply to@_slack_fatiando_UMFSBQVMG:matrix.org
Hi Thibaut Astic! Nice question. The upward parameter of the grid, profile or any set of coordinates passed to the predict method is an absolute height of the coordinates, i.e. the elevation of the points measured from the zero height. That's why in the tutorial I got the maximum elevation of the data points and chose an upward slightly higher to ensure I'm upward continuing. BTW, the EQL classes don't store the data points we used to fit them, only the coefficients of the sources that are used for predicting.
To properly answer your question: the grid points actually lie on a flat surface at elevation equal to upward.
13:18:17
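[Editor's note] A tiny illustration of the convention santisoler describes (variable names here are made up for the example, not harmonica's API): the upward you pass is a single absolute elevation shared by every predicted point, so a safe choice is slightly above the highest observation.

```python
import numpy as np

# Observation heights vary from point to point; "upward" is an absolute
# elevation, NOT an offset added to each point's own height.
data_heights = np.array([210.0, 305.0, 280.0, 412.0])

# Pick a target surface slightly above the highest observation so the
# prediction is a true upward continuation everywhere (the 50 m buffer
# is an arbitrary illustrative choice).
target_upward = data_heights.max() + 50.0

# Every predicted point ends up on the same flat surface:
grid_heights = np.full(data_heights.shape, target_upward)
```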
@_slack_fatiando_U01UHJAAEDD:matrix.orgThibaut Astic
In reply to@_slack_fatiando_UMFSBQVMG:matrix.org
To properly answer your question: the grid points actually lie on a flat surface at elevation equal to upward.
Great, thanks for the detailed answer santisoler! Much appreciated.
16:59:59
@_slack_fatiando_U01UHJAAEDD:matrix.orgThibaut Astic Hello again, continuing my experiments with harmonica's equivalent sources, I wanted to share the following (and maybe you can correct me if it seems that I am doing something wrong). I have come to realize that I could not process as large a survey as when I was using fatiando.gravmag.transform.upcontinue. While with upcontinue I succeeded in upward-continuing datasets of ~10^5-10^6 data points (and maybe more), with EQL even a few 10^4 data points overload my laptop's RAM (16 GB) or even crash my Python kernel in Jupyter. I understand EQL is a more costly process than FFT. I just thought this was worth mentioning and a subject to bring up 🙂 17:10:25
@_slack_fatiando_UMFSBQVMG:matrix.orgsantisoler Thanks for sharing that! You're absolutely right, the EQL is much more computationally expensive than any FFT-based method. Because we need to allocate the Jacobian matrix, whose size is the number of coordinates times the number of sources, it's very easy to eat your entire memory even with a not-so-large dataset. If you're interested in the solution that leouieda and I came up with last year, please read the preprint of our paper: https://www.compgeolab.org/publications/eql-gradient-boosted.html In it we show that the new gradient-boosted equivalent sources can grid ~2 million data points using less than 10 GB of RAM. And, if you are registered at EGU21, this Friday I'll give a vPICO talk about gradient-boosted equivalent sources: https://doi.org/10.5194/egusphere-egu21-1276 17:21:53
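[Editor's note] santisoler's point about the Jacobian can be checked with a back-of-the-envelope estimate (pure Python, illustrative): a dense float64 Jacobian needs n_data × n_sources × 8 bytes, which blows past laptop RAM well before a million points.

```python
# Dense Jacobian memory: n_data x n_sources float64 entries, 8 bytes each.
def jacobian_gib(n_data, n_sources):
    return n_data * n_sources * 8 / 2**30

# One source per data point, the usual equivalent-source layout:
small = jacobian_gib(10_000, 10_000)    # a few 10^4 points: ~0.75 GiB per matrix
large = jacobian_gib(100_000, 100_000)  # 10^5 points: ~75 GiB, hopeless on 16 GB
```

The solver also needs workspace beyond the matrix itself, so in practice memory runs out even earlier, which matches the crashes reported above.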
@_slack_fatiando_U01UHJAAEDD:matrix.orgThibaut Astic
In reply to@_slack_fatiando_UMFSBQVMG:matrix.org
Thanks for sharing that! You're absolutely right, the EQL is much more computationally expensive than any FFT-based method. Because we need to allocate the Jacobian matrix, whose size is the number of coordinates times the number of sources, it's very easy to eat your entire memory even with a not-so-large dataset. If you're interested in the solution that leouieda and I came up with last year, please read the preprint of our paper: https://www.compgeolab.org/publications/eql-gradient-boosted.html In it we show that the new gradient-boosted equivalent sources can grid ~2 million data points using less than 10 GB of RAM. And, if you are registered at EGU21, this Friday I'll give a vPICO talk about gradient-boosted equivalent sources: https://doi.org/10.5194/egusphere-egu21-1276
Cool resources! Are you explicitly computing and storing the Jacobian then? Would it be possible to work with matrix-vector products instead so that the Jacobian is never fully formed and stored?
18:34:03
27 Apr 2021
@_slack_fatiando_U020VF75KUY:matrix.orgLuca Guglielmetti joined the room.13:59:09
28 Apr 2021
@_slack_fatiando_UMSRSPEMA:matrix.orgleouieda
In reply to@_slack_fatiando_UMGLPTLAW:matrix.org
I'm also interested to learn more about this. For example can the equivalent layer/source be used to transform TMI into amplitude?
Craig Miller yes! I’ve been wanting to implement this paper for a while now: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019GL084607 If all goes well, I may have a PhD student working on this very thing later this year. 🤞🏽
13:26:12
@_slack_fatiando_UMSRSPEMA:matrix.orgleouieda
In reply to@_slack_fatiando_U01UHJAAEDD:matrix.org
Cool resources! Are you explicitly computing and storing the Jacobian then? Would it be possible to work with matrix-vector products instead so that the Jacobian is never fully formed and stored?
Thibaut Astic do you mean an adjoint operator like in SimPEG? If so, maybe but it would require a reformulation of the forward problem and we lose the simplicity and flexibility of the current approach. For example, we only need minimal changes to go from Cartesian to geodetic coordinates.
13:31:31
@_slack_fatiando_U01NE7SA5AQ:matrix.orgmohamed Sobh joined the room.13:54:17
@_slack_fatiando_U01NE7SA5AQ:matrix.orgmohamed Sobh changed their profile picture.13:54:18
@_slack_fatiando_U01UHJAAEDD:matrix.orgThibaut Astic
In reply to@_slack_fatiando_UMSRSPEMA:matrix.org
Thibaut Astic do you mean an adjoint operator like in SimPEG? If so, maybe but it would require a reformulation of the forward problem and we lose the simplicity and flexibility of the current approach. For example, we only need minimal changes to go from Cartesian to geodetic coordinates.
Thanks leouieda. Yes, I meant an adjoint operator.
15:40:00
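[Editor's note] The matrix-free idea discussed above can be sketched in plain numpy (illustrative only, not how harmonica is implemented): compute G @ v in row blocks so only a thin slice of the 1/distance Jacobian exists in memory at any time.

```python
import numpy as np

# Matrix-free product with the 1/distance Jacobian: G @ v is assembled
# in row blocks, so only a (block x n) slice of G is alive at a time.
rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 1000, (3, n))   # rows: east, north, upward
coords[2] += 100                        # observation heights
sources = coords.copy()
sources[2] -= 200                       # sources 200 m below the data

def matvec(v, block=64):
    out = np.empty(n)
    for start in range(0, n, block):
        sl = slice(start, min(start + block, n))
        # distances from this block of observations to all sources
        d = np.sqrt(((coords[:, sl, None] - sources[:, None, :]) ** 2).sum(axis=0))
        out[sl] = (1.0 / d) @ v
    return out

# Sanity check against the dense Jacobian (feasible only at toy sizes):
dense = 1.0 / np.sqrt(((coords[:, :, None] - sources[:, None, :]) ** 2).sum(axis=0))
v = rng.normal(size=n)
same = np.allclose(matvec(v), dense @ v)
```

Such a matvec (plus its transpose counterpart) could drive an iterative solver on the normal equations, e.g. through scipy.sparse.linalg.LinearOperator; as leouieda notes above, though, that trades away the simplicity and coordinate-system flexibility of the current dense formulation.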
29 Apr 2021
@_slack_fatiando_UMGLPTLAW:matrix.orgCraig Miller
In reply to@_slack_fatiando_UMSRSPEMA:matrix.org
Craig Miller yes! I’ve been wanting to implement this paper for a while now: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019GL084607 If all goes well, I may have a PhD student working on this very thing later this year. 🤞🏽
ah great, that would be very useful!!
08:03:07
30 Apr 2021
@_slack_fatiando_U020SC29PB6:matrix.orgMiguel Alejandro Alzate Gutierrez joined the room.17:50:56


