Add Video Super Resolution for Windows (AMD, Intel and NVIDIA) and MacOS #1180
Conversation
Changes made:
1. Creation of a new class VideoEnhancement which checks the system's eligibility for the feature.
2. Add the checkbox "Video AI-Enhancement" in the "Basic Settings" groupbox.
3. Disable VAE when fullscreen is selected.
4. Add a registry record.
5. On the Overlay, add the mention "AI-Enhanced" when activated.
6. Add a command line option for the class VideoEnhancement.
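For illustration, here is a minimal sketch of how such a checkbox state could be persisted with Qt's QSettings. The key name "videoenhancement" and the helper class are assumptions for the example, not necessarily what the actual commit uses:

```cpp
#include <QSettings>

// Hypothetical helper mirroring how StreamingPreferences-style classes
// persist boolean options; the key name "videoenhancement" is an assumption.
class VideoEnhancementSetting
{
public:
    static bool load()
    {
        QSettings settings;
        // Default to false so existing installs are unaffected.
        return settings.value("videoenhancement", false).toBool();
    }

    static void save(bool enabled)
    {
        QSettings settings;
        settings.setValue("videoenhancement", enabled);
    }
};
```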
Adding VideoProcessor to D3D11VA to offload video processing from the CPU to the GPU, and leverage additional GPU capabilities such as AI enhancement for upscaling and some filtering. Changes made:
1. VideoProcessor is used to render the frame only when "Video AI-Enhancement" is enabled; when disabled, the whole process is unchanged.
2. Add methods to enable Video Super Resolution for NVIDIA and Intel. The AMD method is currently empty; the solution still needs to be proven out against the AMF documentation.
3. Add methods to enable SDR-to-HDR. Currently only NVIDIA has such a feature, but the code is structured so Intel and AMD can be added if they offer it too.
4. Some variables that were local to a method (like BackBufferResource) were moved to a broader scope so they can also be consumed by the VideoProcessor methods.
5. In ::initialize(), the application checks if the system is capable of leveraging GPU AI enhancement; if yes, it informs the UI to display the feature.
6. The ColorSpace setups (Source/Stream) for HDR are not optimal; further improvement might be possible. Observed issues are commented in the code at the relevant places.
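For readers unfamiliar with the D3D11 video pipeline, here is a rough, illustrative sketch of the shape of a VideoProcessor blit path. Error handling, view creation, and the real d3d11va.cpp integration details are omitted, and the function and variable names are made up for the example:

```cpp
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Illustrative only: roughly the shape of a VideoProcessor path.
// Error handling and input/output view creation are omitted for brevity.
static void BltThroughVideoProcessor(ID3D11Device* device,
                                     ID3D11DeviceContext* context,
                                     ID3D11VideoProcessorInputView* inputView,
                                     ID3D11VideoProcessorOutputView* outputView,
                                     UINT streamW, UINT streamH,
                                     UINT displayW, UINT displayH)
{
    ComPtr<ID3D11VideoDevice> videoDevice;
    ComPtr<ID3D11VideoContext> videoContext;
    device->QueryInterface(IID_PPV_ARGS(&videoDevice));
    context->QueryInterface(IID_PPV_ARGS(&videoContext));

    // Describe the input (decoded stream) and output (swap chain) sizes;
    // the scaling/filtering happens inside the driver during the blit.
    D3D11_VIDEO_PROCESSOR_CONTENT_DESC desc = {};
    desc.InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE;
    desc.InputWidth = streamW;
    desc.InputHeight = streamH;
    desc.OutputWidth = displayW;
    desc.OutputHeight = displayH;
    desc.Usage = D3D11_VIDEO_USAGE_PLAYBACK_NORMAL;

    ComPtr<ID3D11VideoProcessorEnumerator> enumerator;
    ComPtr<ID3D11VideoProcessor> processor;
    videoDevice->CreateVideoProcessorEnumerator(&desc, &enumerator);
    videoDevice->CreateVideoProcessor(enumerator.Get(), 0, &processor);

    // One input stream: the decoded frame. Vendor "AI" upscalers hook into
    // this same blit once enabled through their driver extensions.
    D3D11_VIDEO_PROCESSOR_STREAM stream = {};
    stream.Enable = TRUE;
    stream.pInputSurface = inputView;
    videoContext->VideoProcessorBlt(processor.Get(), outputView, 0, 1, &stream);
}
```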
I'm going to introduce usage of
Replace the variable m_IsHDRenabled, which was a duplication of the existing condition "m_DecoderParams.videoFormat & VIDEO_FORMAT_MASK_10BIT".
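In other words, the check can be done inline where needed (the snippet below is illustrative; the identifiers come from the existing code quoted above):

```cpp
// Instead of a separate m_IsHDRenabled flag, the existing decoder parameters
// already tell us whether the stream is HDR (10-bit).
bool isHDR = (m_DecoderParams.videoFormat & VIDEO_FORMAT_MASK_10BIT) != 0;
```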
Remove the VideoEnhancement::getVideoDriverInfo() method (which was based on the Windows Registry) and use the existing CheckInterfaceSupport() method from the Adapter instead.
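For reference, a minimal sketch of reading the driver version through DXGI rather than the Windows Registry (the function name is illustrative):

```cpp
#include <dxgi.h>

// Illustrative: query the user-mode driver version directly from the adapter
// instead of walking the Windows Registry.
static bool GetDriverVersion(IDXGIAdapter* adapter, LARGE_INTEGER* version)
{
    // Passing __uuidof(IDXGIDevice) is the usual way to retrieve the
    // user-mode driver (UMD) version through CheckInterfaceSupport().
    return SUCCEEDED(adapter->CheckInterfaceSupport(__uuidof(IDXGIDevice), version));
}
```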
No worries, I'll keep going with the current ID3D11VideoProcessor implementation and will make the change once your code is available.
- MaxCLL and MaxFALL at 0, as the source content is unknown in advance.
- Output ColorSpace matched to the SwapChain.
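A rough sketch of what that commit describes, assuming the renderer holds an ID3D11VideoContext2 and already knows the swap chain's color space (names are illustrative, not the actual code):

```cpp
#include <d3d11_4.h>
#include <dxgi1_5.h>

// Illustrative: since the real MaxCLL/MaxFALL of the incoming stream is not
// known in advance, they are left at 0, and the VideoProcessor output color
// space is aligned with the swap chain's color space.
static void SetHdrOutput(ID3D11VideoContext2* videoContext,
                         ID3D11VideoProcessor* processor,
                         const DXGI_HDR_METADATA_HDR10& displayMetadata,
                         DXGI_COLOR_SPACE_TYPE swapChainColorSpace)
{
    DXGI_HDR_METADATA_HDR10 metadata = displayMetadata;
    metadata.MaxContentLightLevel = 0;      // MaxCLL unknown in advance
    metadata.MaxFrameAverageLightLevel = 0; // MaxFALL unknown in advance

    videoContext->VideoProcessorSetOutputHDRMetaData(
        processor, DXGI_HDR_METADATA_TYPE_HDR10, sizeof(metadata), &metadata);

    // Keep the VideoProcessor output in the same color space as the swap chain
    // so no extra conversion happens at present time.
    videoContext->VideoProcessorSetOutputColorSpace1(processor, swapChainColorSpace);
}
```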
…oProcessor: "reset" was not used in the latest code
- NVIDIA: After updating the NVIDIA driver to 551.61, VSR works in Exclusive Fullscreen (tested on an RTX 4070 Ti).
- Intel: VSR works in Exclusive Fullscreen (tested on an Arc A380).
- AMD: VSR is WIP.
I did many tests on color space with HDR on, and as part of the final result I found something interesting that I wanted to share. I have two graphics cards in the same PC, an RTX 4070 Ti and an Arc A380. Conclusion:
…nt is On
- Simplification of the class VideoEnhancement, as all properties will be set at D3D11VA initialization.
- Since it never changes during a session, scan all GPUs only once at application launch and keep track of the most suitable adapter index with VideoEnhancement->m_AdapterIndex.
- Adapt setHDRoutput, as the adapter might be different (not the one linked to the display).
- In case Video Enhancement is Off, we keep the previous behavior (using the adapter linked to the display).
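A simplified sketch of such a one-time adapter scan at launch. The selection criterion here is reduced to a PCI vendor ID check purely for illustration; the real VideoEnhancement class applies more detailed capability checks:

```cpp
#include <dxgi1_6.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Illustrative: scan all adapters once at launch and remember the index of
// the first one whose vendor is known to offer video super resolution.
static int FindEnhancementCapableAdapter()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return -1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT index = 0; factory->EnumAdapters1(index, &adapter) != DXGI_ERROR_NOT_FOUND; index++) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // PCI vendor IDs: NVIDIA 0x10DE, Intel 0x8086, AMD 0x1002
        if (desc.VendorId == 0x10DE || desc.VendorId == 0x8086 || desc.VendorId == 0x1002)
            return static_cast<int>(index);
    }
    return -1;
}
```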
- Update setHDRoutput in the case of multiple displays, to make sure we get the HDR information of the display where Moonlight is shown.
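A sketch of how the output actually showing the Moonlight window could be queried for its HDR description (illustrative only; the real setHDRoutput may differ):

```cpp
#include <windows.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Illustrative: walk the outputs of an adapter and return the description
// (color space, max luminance, ...) of the one containing the given window.
static bool GetOutputDescForWindow(IDXGIAdapter* adapter, HWND window,
                                   DXGI_OUTPUT_DESC1* outDesc)
{
    HMONITOR monitor = MonitorFromWindow(window, MONITOR_DEFAULTTONEAREST);

    ComPtr<IDXGIOutput> output;
    for (UINT i = 0; adapter->EnumOutputs(i, &output) != DXGI_ERROR_NOT_FOUND; i++) {
        DXGI_OUTPUT_DESC desc;
        output->GetDesc(&desc);
        if (desc.Monitor != monitor)
            continue;

        // DXGI_OUTPUT_DESC1 (via IDXGIOutput6) exposes the HDR capabilities
        // of this specific display, e.g. MaxLuminance and ColorSpace.
        ComPtr<IDXGIOutput6> output6;
        if (SUCCEEDED(output.As(&output6)))
            return SUCCEEDED(output6->GetDesc1(outDesc));
    }
    return false;
}
```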
During the scan, there is no need to enable enhancement capabilities for every GPU, as this is done right afterwards for the selected GPU only.
I tried this out today on my M1 Mac and the upscaling is excellent, but it causes
Thanks for the feedback, I probably know where it comes from; I'll look at it in the coming days.
Removing a "for" loop which was instantiating the same variables twice.
Hi @rygwdn,
It works perfectly! 👍 awesome work
Got this built successfully and it looks to add ~1ms of decode latency on my Mac Studio M2 Max when using it just to clean up a 1440p stream displayed on a 1440p monitor from a 1440p source. I've tried a couple of other source and stream resolution combinations and the decode latency remains reasonable, all within 1ms of each other. I tried a 4K source with a 1440p stream and a 4K source with a 4K stream, all three configurations displayed on the same 1440p monitor. Can I please request one additional feature? Please add corresponding
Do you think you can reduce the rendering time on a MacBook Air M1? If I understand correctly: According to that:
If the rendering time can be reduced, it will be a total game changer!
@peerobo I neglected to look at the render time statistic last night when I posted my previous comment. I do experience an increase in render time, but for me it was another ~1ms increase compared to my non-super-resolution build, not the ~10ms increase you experienced. What are you doing while you test? The fps in your examples do not match but are floating around 30fps; is that what you normally run games at? I also see that you have YUV 4:4:4 enabled. I've never used that setting, so I enabled it wondering if that was the issue. After enabling it I did see a momentary spike close to the ~13ms mark while upscaling 1080p to 1440p, but it cleared after a couple of seconds and I didn't see it again during the few minutes I was testing.
This morning I am no longer confident in my previous report of a ~1ms increase in decode latency; it looks to be equal in either case. It is honestly hard to eyeball, making me wish I could dump some time series stats. I suppose this might make sense: there should not be any additional decode latency, because the stream must be entirely decoded before it can be enhanced. The ~1ms increase in render time is obvious and stable though.
@ody
@peerobo
@ody
As only GPU (hardware) acceleration leverages the video enhancement feature, we disable the enhancement when software decoding is selected.
I think you just missed an additional spot where the option needs to be added. Add under line 369...
Hardware HEVC encoding and decoding. Source is an NVIDIA RTX 4070 SUPER, client is a Mac Studio M2 Max.
@ody, thanks for pointing out the line; I committed the update and rebuilt it. Can you help me test the command line again?
@linckosz Rebuilt and toggling the feature works now. Thank you. |
I have both a Mac Studio M2 Max and a MacBook Pro Intel i5-1038NG7, so I gave the PR a try on the Intel MacBook. It doesn't throw any errors and does do something to the image. It is definitely sharper, though "enhanced" is subjective. The most obvious modification to the image was when upscaling 1280x800 to the MacBook's native 2560x1600. When upscaling that much it puts a heavy load on the computer; the fans really start spinning fast and it gets really warm. The file sizes of the screenshots are interesting: the enhanced one is 8MB while the standard one is 5.4MB. I didn't play long though, no reason to. I primarily stream to the M2 Max or to my Steam Deck. I did notice a lot of warnings during compilation about our target version being macOS 11, while the MetalFX API is only available as of macOS 13. This line here is how I assume that version mismatch issue is being avoided. (Screenshots: Standard vs. Enhanced)
@ody, can you check if it reaches these 2 lines?
The Intel test I did yesterday was on macOS Sequoia 15.1. Our Intel MacBook Pro was the very last Intel hardware refresh before they switched to their own M chips. I think it is working as Apple intended. It is cool that it "works" on late-generation Intel Macs. I do not have virtualization software installed on my Intel MacBook, nor hardware restricted to that version, and I do not see a good reason to test further since you've validated what I assumed was true by reading through the code.
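For context, the deployment-target vs. API-availability mismatch discussed above is usually handled on macOS with a runtime availability check. A generic sketch (not the actual line referenced in the PR) might look like this:

```cpp
// Illustrative: even though the deployment target is macOS 11, MetalFX code
// paths can be guarded by a runtime check so older systems simply skip them.
// __builtin_available is Clang's C/C++ equivalent of Objective-C's @available.
bool canUseMetalFX()
{
    if (__builtin_available(macOS 13.0, *)) {
        return true;   // MetalFX spatial scaling is available on this system
    }
    return false;      // Fall back to the standard (non-enhanced) render path
}
```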
When the GPU doesn't support the feature, we gray out the checkbox.
Hi,
Context:
Video Super Resolution (VSR) is to Video as DLSS is to 3D Rendering.
Why not let Moonlight be one of the first game streaming solutions to leverage such technology?
AI upscaling means significantly less bandwidth usage without compromising the video quality!
NVIDIA, Intel (link in French), and more recently AMD have started to advertise their respective technologies for enhancing video quality using AI.
Solution:
VSR was not a straightforward implementation: I needed to add a Video Processor component to D3D11VA in order to offload frame processing from the CPU to the GPU and leverage the additional GPU capabilities.
I added a UI checkbox in SettingsView.qml, but the main processing logic lives in d3d11va.cpp.
NVIDIA provides VSR and HDR enhancement; I could implement VSR perfectly on SDR content, but not yet on HDR (more detail below).
Intel provides VSR; it has been implemented, but it has yet to be tested on an Arc GPU (I don't have one).
AMD just released AMF Video Upscaling; I prepared the code, but I need an RX 7000 series card (I don't have one), and apparently it might require a quite different implementation approach.
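For the curious, the vendor-specific "switch" that turns VSR on for NVIDIA and Intel generally goes through ID3D11VideoContext::VideoProcessorSetStreamExtension. The sketch below only shows the shape of the call; the GUID and payload struct are placeholders, since the real values are vendor-specific and come from each vendor's documentation or driver headers, and AMD's AMF route is different:

```cpp
#include <initguid.h>
#include <d3d11.h>

// PLACEHOLDER GUID and payload: the actual values are vendor-specific and
// must be taken from NVIDIA's / Intel's documentation or driver headers.
DEFINE_GUID(GUID_VENDOR_VSR_EXTENSION_PLACEHOLDER,
            0x00000000, 0x0000, 0x0000, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00);

struct VendorVsrPayloadPlaceholder {
    UINT version; // vendor-defined
    UINT method;  // vendor-defined
    UINT enable;  // 1 = turn super resolution on
};

// Illustrative: ask the driver to enable super resolution on stream 0.
static HRESULT EnableVendorSuperResolution(ID3D11VideoContext* videoContext,
                                           ID3D11VideoProcessor* processor)
{
    VendorVsrPayloadPlaceholder payload = {};
    payload.enable = 1;
    return videoContext->VideoProcessorSetStreamExtension(
        processor, 0, &GUID_VENDOR_VSR_EXTENSION_PLACEHOLDER,
        sizeof(payload), &payload);
}
```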
Testing:
The solution runs stably on my rig; I tried different configurations (size, bandwidth, V-Sync, HDR, AV1, etc.) over a few days.
AMD Ryzen 5600x
32GB DDR4 3200
RTX 4070 Ti
Moonlight v5.0.1
Sunshine v0.21.0
(Update May 6th, 2024)
A complete report is available in the comment below.
I could test it with a wider range of GPUs:
(Update November 16th, 2024)
Other development (iOS)
For those who are also interested in VSR for iOS, I developed moonlight-ios-MFX too, but it is a WIP.
On an iPhone 13 Pro, the upscaler works quite well, but it is too power hungry due to the use of the Metal renderer (not the upscaler itself), so it isn't worth it yet. Maybe with the newest iPhone models; I didn't try.
I don't have an Apple TV, but it will probably also work modulo a few tests and code fixes. I won't continue to maintain the iOS version, so feel free to improve it.
Improvements:
Results (comparison):
Commits description
USER INTERFACE
Add a new UI feature called "Video AI-Enhancement" (VAE).
Changes made:
1. Creation of a new class VideoEnhancement which checks the system's eligibility for the feature.
2. Add the checkbox "Video AI-Enhancement" in the "Basic Settings" groupbox.
3. Disable VAE when fullscreen is selected.
4. Add a registry record.
5. On the Overlay, add the mention "AI-Enhanced" when activated.
6. Add a command line option for the class VideoEnhancement.
BACKEND PROCESSING
Adding VideoProcessor to D3D11VA to offload video processing from the CPU to the GPU, and leveraging additional GPU capabilities such as AI enhancement for upscaling and some filtering.
Changes made:
1. VideoProcessor is used to render the frame only when "Video AI-Enhancement" is enabled; when disabled, the whole process is unchanged.
2. Add methods to enable Video Super Resolution for NVIDIA and Intel. The AMD method is currently empty; the solution still needs to be proven out against the AMF documentation.
3. Add methods to enable SDR-to-HDR. Currently only NVIDIA has such a feature, but the code is structured so Intel and AMD can be added if they offer it too.
4. Some variables that were local to a method (like BackBufferResource) were moved to a broader scope so they can also be consumed by the VideoProcessor methods.
5. In ::initialize(), the application checks if the system is capable of leveraging GPU AI enhancement; if yes, it informs the UI to display the feature.
6. The ColorSpace setups (Source/Stream) for HDR are not optimal; further improvement might be possible. Observed issues are commented in the code at the relevant places.