
Test loading of package on unsupported platforms #509

Merged: 8 commits, Jan 7, 2025

Conversation

christiangnrd (Contributor) commented:
Closes #508

codecov bot commented Dec 24, 2024

Codecov Report

Attention: Patch coverage is 57.14286% with 3 lines in your changes missing coverage. Please review.

Project coverage is 75.05%. Comparing base (52d7056) to head (a319488).
Report is 395 commits behind head on main.

Files with missing lines   Patch %   Lines
src/version.jl             66.66%    2 Missing ⚠️
src/state.jl                0.00%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #509      +/-   ##
==========================================
+ Coverage   71.04%   75.05%   +4.01%     
==========================================
  Files          36       57      +21     
  Lines        1143     2702    +1559     
==========================================
+ Hits          812     2028    +1216     
- Misses        331      674     +343     

☔ View full report in Codecov by Sentry.

@github-actions bot left a comment:
Metal Benchmarks

Benchmark suite Current: a319488 Previous: 6ac7f3c Ratio
private array/construct 26702.428571428572 ns 26131 ns 1.02
private array/broadcast 454916 ns 463417 ns 0.98
private array/random/randn/Float32 818833 ns 896416 ns 0.91
private array/random/randn!/Float32 641333 ns 591354.5 ns 1.08
private array/random/rand!/Int64 553292 ns 550958 ns 1.00
private array/random/rand!/Float32 592500 ns 550458 ns 1.08
private array/random/rand/Int64 776999.5 ns 892521 ns 0.87
private array/random/rand/Float32 571833.5 ns 776917 ns 0.74
private array/copyto!/gpu_to_gpu 694750 ns 594083.5 ns 1.17
private array/copyto!/cpu_to_gpu 802958 ns 658708 ns 1.22
private array/copyto!/gpu_to_cpu 630458 ns 649209 ns 0.97
private array/accumulate/1d 1318125 ns 1443834 ns 0.91
private array/accumulate/2d 1367354 ns 1500708 ns 0.91
private array/iteration/findall/int 2055833 ns 2321979 ns 0.89
private array/iteration/findall/bool 1813729 ns 2055750 ns 0.88
private array/iteration/findfirst/int 1674459 ns 1811916.5 ns 0.92
private array/iteration/findfirst/bool 1651021 ns 1734959 ns 0.95
private array/iteration/scalar 3525833 ns 2445625 ns 1.44
private array/iteration/logical 3145062.5 ns 3496146.5 ns 0.90
private array/iteration/findmin/1d 1754146 ns 1932458 ns 0.91
private array/iteration/findmin/2d 1334417 ns 1457562.5 ns 0.92
private array/reductions/reduce/1d 1016396.5 ns 970146 ns 1.05
private array/reductions/reduce/2d 649979 ns 729750 ns 0.89
private array/reductions/mapreduce/1d 1004875 ns 985166.5 ns 1.02
private array/reductions/mapreduce/2d 649792 ns 707166.5 ns 0.92
private array/permutedims/4d 2539917 ns 2696500 ns 0.94
private array/permutedims/2d 1007042 ns 1122916.5 ns 0.90
private array/permutedims/3d 1582666 ns 1861125 ns 0.85
private array/copy 593583 ns 809834 ns 0.73
latency/precompile 5744969750.5 ns 5830401479.5 ns 0.99
latency/ttfp 3042026375 ns 3083600896 ns 0.99
latency/import 1140101208 ns 1160891875 ns 0.98
integration/metaldevrt 704042 ns 760875 ns 0.93
integration/byval/slices=1 1530083 ns 1682042 ns 0.91
integration/byval/slices=3 9602916 ns 19850395.5 ns 0.48
integration/byval/reference 1555333.5 ns 1666916 ns 0.93
integration/byval/slices=2 2479250 ns 2824958 ns 0.88
kernel/indexing 470000 ns 460854.5 ns 1.02
kernel/indexing_checked 473250 ns 468479.5 ns 1.01
kernel/launch 36916.75 ns 8250 ns 4.47
metal/synchronization/stream 14333 ns 15042 ns 0.95
metal/synchronization/context 14417 ns 15375 ns 0.94
shared array/construct 26845.14285714286 ns 26416.666666666668 ns 1.02
shared array/broadcast 459520.5 ns 463291 ns 0.99
shared array/random/randn/Float32 817021 ns 933042 ns 0.88
shared array/random/randn!/Float32 665417 ns 589020.5 ns 1.13
shared array/random/rand!/Int64 558937.5 ns 551208 ns 1.01
shared array/random/rand!/Float32 603792 ns 559708 ns 1.08
shared array/random/rand/Int64 793166.5 ns 909896 ns 0.87
shared array/random/rand/Float32 605375 ns 823646 ns 0.73
shared array/copyto!/gpu_to_gpu 86292 ns 81542 ns 1.06
shared array/copyto!/cpu_to_gpu 88625 ns 82042 ns 1.08
shared array/copyto!/gpu_to_cpu 77000 ns 79958 ns 0.96
shared array/accumulate/1d 1323209 ns 1441770.5 ns 0.92
shared array/accumulate/2d 1369417 ns 1534062.5 ns 0.89
shared array/iteration/findall/int 1741250 ns 2050459 ns 0.85
shared array/iteration/findall/bool 1645854 ns 1778792 ns 0.93
shared array/iteration/findfirst/int 1373042 ns 1511104 ns 0.91
shared array/iteration/findfirst/bool 1347708 ns 1457771 ns 0.92
shared array/iteration/scalar 151084 ns 161334 ns 0.94
shared array/iteration/logical 2940292 ns 3295666.5 ns 0.89
shared array/iteration/findmin/1d 1463729.5 ns 1575500 ns 0.93
shared array/iteration/findmin/2d 1348458 ns 1465729 ns 0.92
shared array/reductions/reduce/1d 721354.5 ns 724292 ns 1.00
shared array/reductions/reduce/2d 659916 ns 704958 ns 0.94
shared array/reductions/mapreduce/1d 714250 ns 719478.5 ns 0.99
shared array/reductions/mapreduce/2d 657542 ns 710875 ns 0.92
shared array/permutedims/4d 2458084 ns 2675917 ns 0.92
shared array/permutedims/2d 1008729.5 ns 1127021 ns 0.90
shared array/permutedims/3d 1585250 ns 1878708.5 ns 0.84
shared array/copy 242958 ns 209292 ns 1.16

This comment was automatically generated by workflow using github-action-benchmark.

@maleadt (Member) left a comment:
Interesting approach. This should make it possible to remove the PkgEval blacklist, so that package loadability is also tested there. I wonder if we need to do something similar with CUDA.jl and the other back-ends.

@christiangnrd force-pushed the loadingCI branch 2 times, most recently from 139af7e to ef8ca82, on December 26, 2024, 05:36
@christiangnrd (Contributor, Author) commented:
@maleadt I made the support-check message more verbose for paravirtual devices.

I also added a helper function for safely checking the macOS lower bound on all OSes, which will become useful as we add more API wrappers for macOS 14 and 15 features (like the MTLArchitecture wrapper I also snuck into this PR). Not wanting to duplicate all of MTLDevice just to add the architecture::MTLArchitecture property, which only exists since macOS 14, is what inspired JuliaInterop/ObjectiveC.jl#46.
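The helper itself is not shown in this thread; a minimal sketch of the kind of version gate described above might look like the following, where `macos_version` and `is_macos_at_least` are illustrative names, not the actual API added in src/version.jl:

```julia
# Hypothetical sketch: query the macOS version, returning nothing on other OSes.
function macos_version()
    Sys.isapple() || return nothing   # not macOS: no version to report
    VersionNumber(readchomp(`sw_vers -productVersion`))
end

# Safe on all OSes: returns false instead of erroring off macOS, so the
# package can still load and merely disable the gated feature.
function is_macos_at_least(lower::VersionNumber)
    v = macos_version()
    v !== nothing && v >= lower
end

# Example: gate a macOS 14+ wrapper such as MTLArchitecture
is_macos_at_least(v"14") || @warn "MTLArchitecture requires macOS 14 or newer"
```

The point of the design is that the check degrades gracefully: on Linux or Windows CI runners the gated code path is simply skipped, which is what makes loading the package testable on unsupported platforms.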

@christiangnrd (Contributor, Author) commented:

The CI failure is the RISC-V bug.

@christiangnrd christiangnrd deleted the loadingCI branch December 29, 2024 20:48
@christiangnrd christiangnrd restored the loadingCI branch December 29, 2024 20:49
@christiangnrd (Contributor, Author) commented:

Accidentally deleted the branch; reopening.

Review threads on src/version.jl and test/runtests.jl were marked outdated and resolved.
@maleadt maleadt merged commit aae82e4 into JuliaGPU:main Jan 7, 2025
5 of 6 checks passed
@christiangnrd christiangnrd deleted the loadingCI branch January 8, 2025 15:11