Non-Conservative Zonal Mean #785
base: main
Conversation
I've set up the initial boilerplate for your implementation. If you'd like to discuss anything about the implementation before getting started, we can chat about it here. One question, though: will this implementation be the conservative or the non-conservative one?
Thank you very much for your great help! Could you also fill in the expected usage for me, so that I have a better idea of how to start? Thanks!
Should be good now. I wanted to ask and see how you think this type of functionality will work on data variables mapped to different elements. For example, we want to support data mapped to:
Does your algorithm work on all of these?
A non-conservative zonal average is an average value along a constant latitude. So I would say it might be mapped to the faces on that latitude. In your example function call, where do users input the latitude to query? And how do users know it's a non-conservative zonal average? Such a query should look like
I may have worded the first part of my question incorrectly. I was asking whether your zonal average algorithm can compute the zonal average of data variables that are either node, edge, or face centered. Typically the data variables are mapped to the faces, but less commonly data can be stored on each node or edge. For this application though, I think it should only apply to edge or face centered data, since computing the zonal average of data on each node doesn't make much spatial sense.

For the second part, I can update it to reflect that (I put a very bare-bones outline). Below is a zonal-average figure, with the right-hand plot being the zonal average across the latitudes.

The example you gave would be computing the zonal average at a specific latitude. This should be a helper function:

```python
# helper function
_zonal_average_at_latitude(data: np.ndarray, lat: float, conservative: bool = False)
```

This would contain the bulk of the arithmetic. Expected usage:

```python
# default, returns the zonal average for latitudes in the range [-90, 90] with a step size of 5
uxds['t2m'].zonal_average()

# specify range and step size
uxds['t2m'].zonal_average(lat=(-45, 45, 5))

# compute the zonal average at a single latitude
uxds['t2m'].zonal_average(lat=45)

# parameter for conservative
uxds['t2m'].zonal_average(conservative=False)
```

Just a thought, we may want some
Here's a reference from NCL's zonal average page (which is where I got the figure above from): https://www.ncl.ucar.edu/Applications/zonal.shtml
Thank you for the information. I was told that this function will average the data for each face, so I only implemented the scenario where data is attached to each face. I expected this current branch to cover a single zonal_average only, i.e. the helper function you mentioned. Once I finish this helper function, feel free to implement the discrete-values function by calling mine.
Got it! Face-centered data is by far the most common scenario anyone will run into, so no worries.
Sounds good, I can handle that. I'll get the boilerplate updated with what we just discussed.
Great! Thank you very much for your help!

Always happy to help!
As part of this PR, I'll be putting together a user-guide section too.
Hi @philipc2, I have a question regarding the expected usage of the
However, I checked
Could you please clarify if
Thank you! --Amber
You can think of a
I'd suggest giving this section in our user guide a read; it should hopefully clarify it! Let me know if you have any other questions!
ASV Benchmarking: Benchmark Comparison Results
Benchmarks that have improved:
Benchmarks that have stayed the same:
Results are looking great. Let's get #878 merged so we can test on more datasets (the plot below was on an MPAS ocean grid after normalizing coordinates).

[Plots: 480 km Ocean Grid; 120 km Ocean Grid]
With fma disabled, only the
This is caused by floating point precision. One possible cause is that different compilers optimize the existing code in different ways. In fact, GCC will use fma by default (
Clang turns FMA off by default, so we are good.
One workaround is using another magic-number error tolerance like
Or, if numba allows you to turn
```python
def _fmms(a, b, c, d):
    """
    Calculate the difference of products using the FMA (fused multiply-add)
    operation: (a * b) - (c * d).
    """
```

So really just need to replace
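To see why the rounding of `(a * b) - (c * d)` matters here, a small standalone demonstration (not project code): with the naive evaluation each product rounds to double before the subtraction, and catastrophic cancellation wipes out the answer entirely, while an exact rational-arithmetic reference preserves it.

```python
from fractions import Fraction

def fmms_naive(a, b, c, d):
    # each product rounds to double precision before the subtraction
    return a * b - c * d

def fmms_exact(a, b, c, d):
    # exact reference: rational arithmetic, rounded once at the very end
    return float(Fraction(a) * Fraction(b) - Fraction(c) * Fraction(d))

eps = 2.0 ** -27
a, b = 1.0 + eps, 1.0 - eps   # a * b = 1 - eps**2 exactly
c, d = 1.0, 1.0

print(fmms_naive(a, b, c, d))  # 0.0 (the -2**-54 term is rounded away)
print(fmms_exact(a, b, c, d))  # -5.551115123125783e-17 (= -2**-54)
```

This is the same class of error that an fma-based `_fmms` (or a compiler silently contracting the expression into an fma) changes, which is why GCC and Clang builds can disagree at the tolerance boundary.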
```python
weight_df = _get_zonal_faces_weight_at_constLat(
    candidate_face_edges_cart,
    np.sin(np.deg2rad(constLat_deg)),
    candidate_face_bounds,
    is_directed=False,
    is_latlonface=is_latlonface,
)

# Compute the zonal mean (weighted average) of the candidate faces
weights = weight_df["weight"].values
zonal_mean = np.sum(candidate_face_data * weights, axis=-1) / np.sum(weights)

return zonal_mean
```
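A toy check of the weighted-average step above, with synthetic weights standing in for the `weight_df` that `_get_zonal_faces_weight_at_constLat` would return (the weight values here are made up for illustration):

```python
import numpy as np
import pandas as pd

# synthetic weights for three candidate faces (stand-in for weight_df)
weight_df = pd.DataFrame({"face_index": [0, 1, 2],
                          "weight": [0.2, 0.3, 0.5]})
candidate_face_data = np.array([1.0, 2.0, 4.0])

weights = weight_df["weight"].values
zonal_mean = np.sum(candidate_face_data * weights, axis=-1) / np.sum(weights)
print(zonal_mean)  # (0.2*1 + 0.3*2 + 0.5*4) / 1.0 = 2.8
```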
Wouldn't taking the weighted average here mean that we have a Conservative Zonal Average?
Non-conservative and conservative both take weighted averages.
The non-conservative one takes the edge length as the weight factor, while the conservative one takes the face area.
So if we take the non-conservative zonal average of a face-centered data variable, the length of the edge that the line intersects would be used as the weight?
Would this weight be applied to both faces that straddle that edge?
"The length of the edge that the line intersects would be used as the weight?" Yes.

"Would this weight be applied to both faces that straddle that edge?" It's already taken care of; they will split the shared section.
Closes #93
Overview
- `_get_zonal_faces_weight_at_constLat` algorithm
- `gca_constlat_intersection` algorithm
- `_pole_point_inside_polygon` algorithm

Expected Usage
PR Checklist

General

Testing

Documentation
- Internal functions have a preceding underscore (`_`) and have been added to `docs/internal_api/index.rst`
- User-facing functions have been added to `docs/user_api/index.rst`

Examples
- Any new notebook examples are added to the `docs/examples/` folder
- New examples are added to the `docs/examples.rst` toctree
- New examples are added to `docs/gallery.yml` with an appropriate thumbnail photo in `docs/_static/thumbnails/`