
Test loading of package on unsupported platforms #509

Open · wants to merge 6 commits into `main`
Conversation

@christiangnrd (Contributor) commented:

Close #508

Also slightly more informative warning

codecov bot commented Dec 24, 2024

Codecov Report

Attention: Patch coverage is 0% with 1 line in your changes missing coverage. Please review.

Project coverage is 75.86%. Comparing base (52d7056) to head (e26f66a).
Report is 391 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/state.jl | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main     #509      +/-   ##
==========================================
+ Coverage   71.04%   75.86%   +4.82%
==========================================
  Files          36       57      +21
  Lines        1143     2772    +1629
==========================================
+ Hits          812     2103    +1291
- Misses        331      669     +338
```


github-actions bot left a comment:

Metal Benchmarks

| Benchmark suite | Current: 6590bcf | Previous: 6a760a6 | Ratio |
|---|---|---|---|
| private array/construct | 25952.428571428572 ns | 27270.785714285714 ns | 0.95 |
| private array/broadcast | 462459 ns | 460209 ns | 1.00 |
| private array/random/randn/Float32 | 894875 ns | 804875 ns | 1.11 |
| private array/random/randn!/Float32 | 600375 ns | 646333 ns | 0.93 |
| private array/random/rand!/Int64 | 555208 ns | 548916 ns | 1.01 |
| private array/random/rand!/Float32 | 554750 ns | 587375 ns | 0.94 |
| private array/random/rand/Int64 | 915562.5 ns | 716083.5 ns | 1.28 |
| private array/random/rand/Float32 | 809375 ns | 615791.5 ns | 1.31 |
| private array/copyto!/gpu_to_gpu | 559125 ns | 677125 ns | 0.83 |
| private array/copyto!/cpu_to_gpu | 681417 ns | 640688 ns | 1.06 |
| private array/copyto!/gpu_to_cpu | 646854.5 ns | 817625 ns | 0.79 |
| private array/accumulate/1d | 1385875 ns | 1329687.5 ns | 1.04 |
| private array/accumulate/2d | 1477396 ns | 1382229 ns | 1.07 |
| private array/iteration/findall/int | 2264416.5 ns | 2073708 ns | 1.09 |
| private array/iteration/findall/bool | 1999208 ns | 1799041 ns | 1.11 |
| private array/iteration/findfirst/int | 1788354 ns | 1688292 ns | 1.06 |
| private array/iteration/findfirst/bool | 1724937.5 ns | 1650020.5 ns | 1.05 |
| private array/iteration/scalar | 2548375.5 ns | 3252542 ns | 0.78 |
| private array/iteration/logical | 3480583.5 ns | 3147375 ns | 1.11 |
| private array/iteration/findmin/1d | 1849938 ns | 1736042 ns | 1.07 |
| private array/iteration/findmin/2d | 1411645.5 ns | 1348917 ns | 1.05 |
| private array/reductions/reduce/1d | 951417 ns | 1029542 ns | 0.92 |
| private array/reductions/reduce/2d | 689958 ns | 650292 ns | 1.06 |
| private array/reductions/mapreduce/1d | 965603.5 ns | 1025917 ns | 0.94 |
| private array/reductions/mapreduce/2d | 699000 ns | 657229.5 ns | 1.06 |
| private array/permutedims/4d | 2661916.5 ns | 2553708 ns | 1.04 |
| private array/permutedims/2d | 1088833 ns | 1027750 ns | 1.06 |
| private array/permutedims/3d | 1816500.5 ns | 1585916 ns | 1.15 |
| private array/copy | 812416.5 ns | 580417 ns | 1.40 |
| latency/precompile | 5936877583 ns | 5847134584 ns | 1.02 |
| latency/ttfp | 6647680292 ns | 6545482667 ns | 1.02 |
| latency/import | 1193229374.5 ns | 1169724375 ns | 1.02 |
| integration/metaldevrt | 750083 ns | 713125 ns | 1.05 |
| integration/byval/slices=1 | 1668437.5 ns | 1580770.5 ns | 1.06 |
| integration/byval/slices=3 | 21250062.5 ns | 9774042 ns | 2.17 |
| integration/byval/reference | 1631708.5 ns | 1598000 ns | 1.02 |
| integration/byval/slices=2 | 2800541 ns | 2571895.5 ns | 1.09 |
| kernel/indexing | 458292 ns | 457542 ns | 1.00 |
| kernel/indexing_checked | 465708 ns | 458645.5 ns | 1.02 |
| kernel/launch | 9208.333333333334 ns | 8125 ns | 1.13 |
| metal/synchronization/stream | 15125 ns | 14209 ns | 1.06 |
| metal/synchronization/context | 15625 ns | 15000 ns | 1.04 |
| shared array/construct | 25854.166666666668 ns | 25166.714285714286 ns | 1.03 |
| shared array/broadcast | 466167 ns | 469917 ns | 0.99 |
| shared array/random/randn/Float32 | 914062.5 ns | 825666 ns | 1.11 |
| shared array/random/randn!/Float32 | 591250 ns | 616625 ns | 0.96 |
| shared array/random/rand!/Int64 | 552541 ns | 547708 ns | 1.01 |
| shared array/random/rand!/Float32 | 560604 ns | 591458 ns | 0.95 |
| shared array/random/rand/Int64 | 841292 ns | 734166.5 ns | 1.15 |
| shared array/random/rand/Float32 | 860146 ns | 610312 ns | 1.41 |
| shared array/copyto!/gpu_to_gpu | 82458 ns | 88000 ns | 0.94 |
| shared array/copyto!/cpu_to_gpu | 81792 ns | 86291 ns | 0.95 |
| shared array/copyto!/gpu_to_cpu | 78958 ns | 77959 ns | 1.01 |
| shared array/accumulate/1d | 1374750 ns | 1336542 ns | 1.03 |
| shared array/accumulate/2d | 1489625 ns | 1384250 ns | 1.08 |
| shared array/iteration/findall/int | 1963229.5 ns | 1765020.5 ns | 1.11 |
| shared array/iteration/findall/bool | 1732125 ns | 1558812 ns | 1.11 |
| shared array/iteration/findfirst/int | 1485312.5 ns | 1396125.5 ns | 1.06 |
| shared array/iteration/findfirst/bool | 1425250 ns | 1360145.5 ns | 1.05 |
| shared array/iteration/scalar | 160750 ns | 153334 ns | 1.05 |
| shared array/iteration/logical | 3194084 ns | 2940000 ns | 1.09 |
| shared array/iteration/findmin/1d | 1557417 ns | 1442291 ns | 1.08 |
| shared array/iteration/findmin/2d | 1421208 ns | 1358562.5 ns | 1.05 |
| shared array/reductions/reduce/1d | 703625 ns | 723750 ns | 0.97 |
| shared array/reductions/reduce/2d | 691916.5 ns | 654792 ns | 1.06 |
| shared array/reductions/mapreduce/1d | 717770.5 ns | 728687 ns | 0.99 |
| shared array/reductions/mapreduce/2d | 699458 ns | 660125 ns | 1.06 |
| shared array/permutedims/4d | 2609792 ns | 2522521 ns | 1.03 |
| shared array/permutedims/2d | 1096375 ns | 1019375 ns | 1.08 |
| shared array/permutedims/3d | 1847042 ns | 1579917 ns | 1.17 |
| shared array/copy | 208375 ns | 233417 ns | 0.89 |

This comment was automatically generated by workflow using github-action-benchmark.

@maleadt (Member) left a comment:

Interesting approach. This should make it possible to remove the PkgEval blacklist, so that package loadability is also tested there. I wonder if we need to do something similar with CUDA.jl and the other back-ends.

Comment on lines +47 to +48:

```yaml
- name: Run tests
  uses: julia-actions/julia-runtest@v1
```
@maleadt (Member) commented Dec 25, 2024:

On macos-13, this doesn't just test loading; it runs all the tests. So maybe remove that platform?

EDIT: ah, I guess this does bail out because of being a virtualized platform. I wonder if the error thrown by the test suite shouldn't be a bit more verbose, though (i.e., mentioning the specific reason).

@christiangnrd (Contributor, Author) replied:

I fixed up the version check and moved it from setup.jl to runtests.jl so that it only runs once. If there was a specific reason for the check being in setup.jl, I can move it back.
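For context, a minimal sketch of what such a one-shot guard at the top of runtests.jl could look like; the helper name, the message, and the use of `Metal.functional()` are assumptions for illustration, not the PR's actual code:

```julia
# Hypothetical sketch (names are assumptions, not the PR's actual code):
# a single guard at the top of runtests.jl, so the support check runs once.

# Pure helper: should the full GPU suite run, given whether Metal is functional?
function should_run_gpu_suite(functional::Bool)
    functional && return true
    @warn "Metal.jl is not functional on this platform; verifying only that the package loads."
    return false
end

# In runtests.jl this would be driven by something like (assumed API):
#     should_run_gpu_suite(Metal.functional()) || exit(0)
```

Keeping the decision in one place means unsupported CI platforms still exercise `using Metal` (the point of this PR) without ever reaching the GPU tests.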

@christiangnrd (Contributor, Author) commented:

@maleadt I made the support check message more verbose for paravirtual devices.

I also added a helper function for safely checking the lower bound on all OSes, as this will become useful as we add more API wrappers for macOS 14 and 15 features (like the MTLArchitecture wrapper I also snuck into this PR). Wanting to avoid duplicating all of MTLDevice just to add the architecture::MTLArchitecture property, which only exists since macOS 14, is what inspired JuliaInterop/ObjectiveC.jl#46
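A helper along these lines might look like the following sketch; the name and signature are hypothetical, not Metal.jl's actual API. The key design point is that it is safe to call on any OS, because it returns `false` whenever no macOS version is available at all:

```julia
# Hypothetical helper (illustrative only, not Metal.jl's actual API):
# check a macOS lower bound safely on every OS by treating "no macOS
# version known" (i.e. not running on macOS) as unsupported.
function supports_macos(lower::VersionNumber,
                        current::Union{Nothing,VersionNumber})
    current === nothing && return false  # not running on macOS
    return current >= lower
end

# Gating a macOS 14+ wrapper such as MTLArchitecture might then look like
# (assumed call sites):
#     if supports_macos(v"14", macos_version())
#         arch = device.architecture
#     end
```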

@christiangnrd (Contributor, Author) commented:

Failure is the RISC-V bug.

@christiangnrd christiangnrd deleted the loadingCI branch December 29, 2024 20:48
@christiangnrd christiangnrd restored the loadingCI branch December 29, 2024 20:49
@christiangnrd (Contributor, Author) commented:

Accidentally deleted branch, reopening.

@christiangnrd christiangnrd reopened this Dec 29, 2024
Labels: none yet · Projects: none yet · 2 participants