Reverb EFX sound API #1367

Closed
damian-666 opened this issue Apr 1, 2024 · 5 comments

@damian-666

What version of MonoGame does the bug occur on:

  • MonoGame 3.8.1

What operating system are you using:

  • Windows

What MonoGame platform are you using:

  • DesktopGL
@damian-666 damian-666 changed the title to "can the structures on reverb params be public so i can try to set them, or do they require the ancient XNA content sound banks" Apr 1, 2024
@nkast

nkast commented Apr 12, 2024

I wouldn't recommend using Xact at this point. #1479

I don't see it being possible to provide a cross-platform solution without doing our own DSP.
OpenAL *[1] doesn't support all the settings that XAudio/XACT had *[2].
And those XACT settings look to be high-level abstractions. What even is RoomSizeFeet or PositionRightMatrix?
To be honest, I've never tested it myself to see if they work.

I suppose end users would like something simpler, like setting an echo delay.
WebAudio has a more basic Convolver filter *[3], but it does require an impulse-response buffer.

And the industry is moving toward real-time 'rendering' of audio from spatial geometry, instead of fine-tuning filters,
but for that we probably need a compute GPU.

Perhaps an alternative would be to add a convolution parameter to the SoundEffectProcessor,
which will not be real-time of course. *[4] *[5]

*[1] https://github.com/kniEngine/kni/blob/e2c630cfd0f7499f79fc67be2a4e52a668714767/MonoGame.Framework/Audio/OpenAL/ConcreteAudioService.cs#L307
*[2] https://github.com/kniEngine/kni/blame/e2c630cfd0f7499f79fc67be2a4e52a668714767/MonoGame.Framework/Audio/XAudio/ConcreteAudioService.cs#L170-L191
*[3] https://developer.mozilla.org/en-US/docs/Web/API/ConvolverNode/ConvolverNode
*[4] https://ffmpeg.org/ffmpeg-filters.html#Examples-4
*[5] https://ffmpeg.org/ffmpeg-filters.html#Examples-9
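To make the offline-convolution idea concrete, here is a minimal sketch of what such a pass could look like in a content processor. The ApplyReverbIR helper, the mono float buffers, and the wet/dry mix are illustrative assumptions, not an existing KNI/MonoGame API:

    // Naive direct convolution of a dry signal with an impulse response (IR).
    // Offline only: the O(N*M) cost is fine for a content-build step, but far
    // too slow for real time. Buffers are mono float PCM at the same sample rate.
    static float[] ApplyReverbIR(float[] dry, float[] ir, float wetMix = 0.3f)
    {
        int outLen = dry.Length + ir.Length - 1;
        var wet = new float[outLen];

        // wet[n + k] accumulates dry[n] scaled by each IR tap k.
        for (int n = 0; n < dry.Length; n++)
            for (int k = 0; k < ir.Length; k++)
                wet[n + k] += dry[n] * ir[k];

        // Blend the convolved tail with the original signal.
        var outBuf = new float[outLen];
        for (int i = 0; i < outLen; i++)
        {
            float drySample = i < dry.Length ? dry[i] : 0f;
            outBuf[i] = drySample * (1f - wetMix) + wet[i] * wetMix;
        }
        return outBuf;
    }

A real implementation would typically use FFT-based (partitioned) convolution instead of the direct loop; that is essentially what ConvolverNode and ffmpeg's afir filter do internally.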

@damian-666
Author

OK, so I went over the code. I do see that some things were made public, and it looks like you almost had it ready to test.

Re "I wouldn't recommend using Xact at this point. #1479": agreed.

Looks good; just need to document the units (feet, meters, ms, etc.), or we make a test rig and put sliders on it to audition the sounds.

[1] Looks fine (the OpenAL ConcreteAudioService); they are adjusting the values to try to match the effects. I mean, if it works at all, it's great.
*[2] (the XAudio ConcreteAudioService) I don't agree that those are problems.
// TODO: document the units on these (dB, min/max values, ms vs seconds, etc.); standardize on the Windows one.
var settings = new SharpDX.XAudio2.Fx.ReverbParameters
{
    ReflectionsGain = reverbSettings.ReflectionsGainDb,
    ReverbGain = reverbSettings.ReverbGainDb,
    DecayTime = reverbSettings.DecayTimeSec,
    ReflectionsDelay = (byte)(reverbSettings.ReflectionsDelayMs * timeScale),
    ReverbDelay = (byte)(reverbSettings.ReverbDelayMs * timeScale),
    RearDelay = (byte)(reverbSettings.RearDelayMs * timeScale),
    RoomSize = reverbSettings.RoomSizeFeet,
    Density = reverbSettings.DensityPct,
    LowEQGain = (byte)reverbSettings.LowEqGain,
    // ... (snippet truncated; the remaining parameters follow the same pattern)
};
This is a bit scary; I hope it doesn't have tentacles.

In [1] I think it changes the OpenAL settings to match the DX ones; that's great. If something is not implemented, put a //TODO; I doubt anyone will use it anyway.

// Dont know what to do with these EFX has no mapping for them. Just ignore for now
// we can enable them as we go.
//efx.SetEffectParam(ReverbEffect, EfxEffectf.PositionLeft, reverbSettings.PositionLeft);
//efx.SetEffectParam(ReverbEffect, EfxEffectf.PositionRight, reverbSettings.PositionRight);
//efx.SetEffectParam(ReverbEffect, EfxEffectf.PositionLeftMatrix, reverbSettings.PositionLeftMatrix);

re: *[1], if it ever worked... I mean, at some point it did work. I doubt it needs to be completely overhauled; it talks directly to the thing. And you can look at the Silk code: it's one platform right there that's implemented, it's using C#, and it's doing a similar thing. It has a preset, and then you can adjust the values of that preset and audition it. So I would look at the Silk example that has reverb, and they might show OpenAL too, so you might have it all from the Silk code base. But yeah, whatever is in there would probably work if you just figured out how to load or instantiate the most basic, minimal-viable XACT data that has reverb support in it. Then I can describe how I would set up a level-designer tool that has these effects in it: a file watcher, plus another list box with a bunch of canned effects. I would expect ChatGPT to make all this for me because it's so trivial; I would give it the structure and tell it to put metadata around it and bind the UI sliders so they can't exceed those values, with undo and change notification, which is pretty much a property page. If it can do that, then Avalonia is ready to use, but I'd rather get away from WPF.

They're definitely good enough. I mean, if they work at all, they're good enough; if they work on one platform, they're fine, right? And the other important thing is that there's some code that makes the OpenAL version sound more like the other one. So I would only expose the one that Silk exposes for Windows, and use that as the canonical one. Then if the implementation is the OpenAL one, just expose the XNA one and use its values to adjust the other. I think that's how it was done in the code, something like that. That way you have one set of parameters for all the platforms.

So the key to that may be setting up the original, bare-minimum structures that the file format gets loaded into, and making sure they are utilized and applied, so that XACT is on and that when you create more than one SoundEffectInstance from a SoundEffect it will make a chain of voices or whatever. Because if that works, then I would document carefully the limits of every parameter and what the meanings are: milliseconds, feet or meters for the room size. I would just start by seeing if room size works, and if enough of them work, I think that's the way to go to finish it. And there are other pieces to look at: one would be FNA, and maybe another would be looking at NWaves, but that really is kind of like starting over, and maybe NWaves doesn't support everything as well, because the MonoGame one is old and it goes all the way down into the drivers when it can, and there are DSPs in some of those drivers. Or the Qualcomm ones might go right to the DSP; I don't know how they're going to implement it. It's meant to be very low power and the whole Hexagon chip is DSP; all their AI is DSP. Everyone moved to CPUs because they thought they could afford 64-bit floating point and all that, but when I worked in that field everybody used fixed point. I don't know why everybody says it's more difficult, but that's what people used, and it definitely uses less power.

[3] And you've added more platforms, including some web platforms, yes, but I don't think that ConvolverNode or the Web Audio API can replace the driver code; it's not going to replace it. But yeah, basically the ConvolverNode is how you would implement those filters, and that all gets done under the hood.

I'm too tired. Call later if you want.
And there's this one guy from Ukraine who can do all this, but I can't get these guys to work for less. I mean, this guy for $20 did something that took me over two weeks, and I paid another guy from Upwork; none of us could do it, and now I don't think I can get them to do anything for $150 an hour. I mean, he's booked; all the AI people are really booked too. So I'm learning it because at some point it's going to save me time, but right now it's wasting a lot of time. It's killing me. I've got to go. I hope this helps and it's not too long. If you want to redo it, I guess you would look at

NWaves. And I mean, if you don't want to do it at all, maybe when I rest I can actually just do this, but I don't need it right now. I still have to do the quantum gravity project; it's more important. I just thought that since you're working on it, and I'd already spent time on it, and I have three years of experience working in an EQ lab in LA with the THX guy and the USC lab, I might know a little bit about it. But we used DSPs; we used fixed point, and that was the industry standard.

At least trying it on Windows would be worth it. The reason I'm telling you this is that for me to try it and set it all up will take about 4 hours; for you it might take like 10 minutes. If you look under reverb in Silk, you're going to find the presets. If you play a sound and apply one, it should change it; that's what I would expect. And then if you traced it, you could make sure the slots are being used or whatever. That would be something to know.

I would just serialize one of those things; they are part of my sound effects. Every object can have an emitter, and that has a little grip; I drag a file name, stick it on an object, and that's its position. If that object spins, it's going to make a lot of different sounds as it spins because of the Doppler effect, and if it hits things it's going to have an impulse-response sound and a frictional one. So all my sounds are very driven by the physics. I have been doing sound tools and I don't like using separate tools. I even used to import graphics with SVG; then if I changed the shape or the morphology of a creature, I would have to re-import all the graphics. So I'm going to make all the tooling in Avalonia, everything, so that I don't have to import and export files. So yeah, that goes completely against the file format, and I'm not doing music composition in that tool. Maybe if I were doing a lot of music stuff, but I don't need that in the game because it's not really that interactive, so I would use some other tool for that. But this is what I would want, and it would pretty much complete what MonoGame needs. I'm not asking for creating sounds from physics; that's a compressible-fluid solver with shock waves, way more complicated than anything I'm simulating, so it's overkill (NASA did that 10-15 years ago). So this is, you know, not minimal, but it's great, it's all you need: just take a sample of a sound and then add a little bit of reverb to make it fit in the context, with data members or whatever, and then reload it. To set up a custom editor I would have a file watcher: every sound file of a certain type that goes into that folder gets shown. I drag and drop that file onto an object like a circle and then I see an emitter, right?

And then I click on that, and I see a property sheet. And then I turn it on, you know, and it starts making a noise by default; then I shut it off. I always have things do something by default, just so you don't have to wonder what's going on; you're not going to go looking for the turn-on button. It's going to make a noise, and then you can hit the master volume if you hate it. Then you click on that emitter, and you see all the parameters that have to do with that sound effect. Can you add reverb? Then expand that out, you know, or I might have another list box that has a bunch of canned reverb files, perhaps. I guess you could use the original tool as a little bit of a model, but it is a mess, you know, with the banks and stuff; I think it's really hard to figure out. But yeah, I would show an expandable reverb property on the sound effect. OK, the sound's going to have a file name, and then I can adjust some parameters on that sound: pan, zoom. What I do is add factors, like what is the X factor of pan relative to where the guy is. This is just my custom tool; it's off-topic, I'm giving you a use case. But everybody who's making an original game has their own; you can't be like big Unity and say it's all going to be 3D, and I don't want to do 3D if I'm doing a 2D game. Anyway, I've already done it; all I have to do is add reverb. So how do I start? I want to start with something that sounds OK, so I don't want to start by setting every property: a bunch of static reverb presets. Those could be in a drop-down combo box that sets them and then lets you tweak the individual parameters, or they could be dragged from another box of effects onto that sound effect. I don't know exactly how the UI would work. But what's important is that I get Avalonia and MonoGame/KNI back into their docking layout, because it's broken, so that we can prototype some stuff quickly, set up rapid prototyping, and build that into a level editor, and I would share it. I don't want to hold on to that anyway, and we kind of need it just to develop these features and unit-test them. So for example, I would say,

hey, ChatGPT, bind a bunch of sliders in a grid to this structure, with undo, using the min/max values in the metadata.
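As a rough sketch of what such a metadata-carrying structure could look like (the class and property names here are hypothetical, not the engine's real reverb settings type), the min/max for each slider can just be Range attributes that the generated UI reads back by reflection:

    using System;
    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;
    using System.Reflection;

    // Hypothetical reverb settings with range metadata; a UI generator would
    // emit one slider per [Range]-annotated property and bind to it.
    public class ReverbSettingsViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        float _decayTimeSec = 1.5f;
        [Range(0.1, 20.0)]                      // seconds
        public float DecayTimeSec
        {
            get => _decayTimeSec;
            set { _decayTimeSec = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(DecayTimeSec))); }
        }

        float _roomSizeFeet = 100f;
        [Range(1.0, 1000.0)]                    // feet, per the XACT-style parameter
        public float RoomSizeFeet
        {
            get => _roomSizeFeet;
            set { _roomSizeFeet = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(RoomSizeFeet))); }
        }
    }

    // Enumerate the [Range] metadata; this is what the slider generation would read.
    static void DumpSliderSpecs(object settings)
    {
        foreach (PropertyInfo p in settings.GetType().GetProperties())
        {
            var range = p.GetCustomAttribute<RangeAttribute>();
            if (range != null)
                Console.WriteLine($"{p.Name}: min={range.Minimum}, max={range.Maximum}");
        }
    }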

I would put the emphasis on making sure it works with any one parameter in real time and doesn't stutter, and that garbage collection doesn't ruin it or whatever. Audio can be pretty intensive; you can hear it in the speakers if you have GC kicking in with too many things going on at once, especially reverb. So it's really not that bad; I mean, if you had thousands of voices, sure, but I'm probably going to have one gun shooting in a room, nothing that crazy, and a few other noises, maybe 10 voices at once. What I don't fully understand is the voices, but that's right there in the code. So I think it used to work, because somebody put code in there that makes the OpenAL one sound like the other one. Good morning, I guess.

@damian-666 damian-666 changed the title to "can the structures on reverb params be public so i can try to set them, or do they require the ancient XNA content sound banks. can we load a basic XACT struct to set it up and" May 7, 2024
@damian-666 damian-666 changed the title to "can the structures on reverb params be public so i can try to set them, or do they require the ancient XNA content sound banks. can we load a basic XACT struct to set it up and see if that will make is_xact work, to add voice chains (also notes on rapid prototyping for tooling, custom IDE level editor with Avalonia)" May 7, 2024
@nkast

nkast commented May 12, 2024

I'll drop this here, for future reference.
https://openal-soft.org/misc-downloads/Effects%20Extension%20Guide.pdf
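For anyone trying it, the sequence that guide describes boils down to: create an effect object, set its type to reverb, set parameters (or copy them from a preset such as the ones in Silk.NET's ReverbProperties), load it into an auxiliary effect slot, and route a source's auxiliary send through that slot. A sketch in C#, where the efx wrapper and enum members only mirror the SetEffectParam style quoted from the OpenAL backend above and are assumptions, with the underlying AL tokens in the comments:

    uint reverb = efx.GenEffect();                                       // alGenEffects
    efx.SetEffectParam(reverb, EfxEffecti.EffectType, EfxEffectType.Reverb);  // AL_EFFECT_TYPE = AL_EFFECT_REVERB
    efx.SetEffectParam(reverb, EfxEffectf.DecayTime, 2.5f);              // AL_REVERB_DECAY_TIME, seconds
    efx.SetEffectParam(reverb, EfxEffectf.Gain, 0.6f);                   // AL_REVERB_GAIN, linear

    uint slot = efx.GenAuxiliaryEffectSlot();                            // alGenAuxiliaryEffectSlots
    efx.BindEffectToSlot(slot, reverb);                                  // AL_EFFECTSLOT_EFFECT

    // Route the source's auxiliary send 0 through the slot, with no send filter:
    // alSource3i(source, AL_AUXILIARY_SEND_FILTER, slot, 0, AL_FILTER_NULL)
    efx.BindSourceToSlot(source, 0, slot);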

@nkast nkast changed the title to "Reverb EFX sound API" May 12, 2024
@damian-666
Author

damian-666 commented May 12, 2024

Thanks. Wow, I implemented the blocking using a rough sparse ray caster.
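Roughly the idea, as a sketch rather than the actual code; countBlockingSegments stands in for whatever ray cast the physics engine (Aether, Bepu, etc.) provides, and the attenuation constants are made up:

    using System;
    using System.Numerics;

    // Rough ray-based occlusion: attenuate an emitter's gain by how many
    // obstacles sit between the listener and the emitter.
    static float OcclusionGain(Vector2 listener, Vector2 emitter,
                               Func<Vector2, Vector2, int> countBlockingSegments)
    {
        int hits = countBlockingSegments(listener, emitter);
        const float lossPerBlocker = 0.35f;        // fraction of gain lost per wall
        float gain = MathF.Pow(1f - lossPerBlocker, hits);
        return MathF.Max(gain, 0.05f);             // keep a faint through-the-wall level
    }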

I found both the XAudio2 and the OpenAL extension bindings in Silk. They don't make the effort to factor out commonality. I think many of the params are explained; we could expose them somehow, or make a superset and document what is specific to which platform in the XML comments.

I tested windowing on Android and Windows a while back; it's not that great.

Anyway, as far as the C# code to wrap the platform implementation goes, it's either been generated and committed or hand-rolled; the comments are there. This is a 3-year-old commit, and no code gen is part of the build. The commit is massive, though.

https://github.com/dotnet/Silk.NET/blob/main/src/OpenAL/Extensions/Silk.NET.OpenAL.Extensions.Creative/EffectExtension.cs#L16
https://github.com/dotnet/Silk.NET/blob/52f15a7993bddc0df6ccd5bf11da45ed75809647/src/OpenAL/Extensions/Silk.NET.OpenAL.Extensions.Creative/Presets/ReverbProperties.cs#L155

The tests are generated; it might work. I looked for a new XACT tool; I didn't see one. In NWaves there is active development.

Also a feature request to allow for rigging and testing and tuning:
ar1st0crat/NWaves#40

Not sure if I'll get time to try it soon; it might take a few hours, though. If I try, I'll mention it first. I would first try with Silk and the other one, and see how hard it would be to put those bindings into this MG branch.

as for the "sound blocking in" the OpenAL extension pdf and the advice.. about using reverb thats how i hoped to use it . i implented it with rays. very rought but good enough. Tooling : sound folder. file watcher, drag drop emitter, setproperties via sheet to model view. bind slider and presets ( all in silk) via chatgp generted Avalonia code, then audition and play around with it.

ChatGPT even wrote me a hybrid FX + CPU cutter and ray caster to measure interiors; the physics rays are a bit expensive for measuring rooms, even in parallel. I couldn't believe how good ChatGPT's code gen is (sometimes). I don't think it can do the internal stuff you are doing, but coding against a mature API it can (BepuPhysics2 and the latest major Avalonia are too new; it would not likely build).

XNA FX and pixel shaders are old, so it can do all that. It encoded a map from the hit points to the CPU ray object handles in the texture margin, as a dictionary. The XACT pipeline in MG is broken; no one has updated that.

I think I'm going to go quick and dirty, or I'll simply never get done. I do hope to get some kind of reverb in this thing; it needs a refresh.

I might have the Avalonia test rig with docking running, but it's not in sync with the very latest Avalonia (and there have been quite a few releases), and scripting in RoslynPad is bad.
There is a web-based soft debugger, a C# preparser, the OmniSharp plugin, Roslyn DOM scripting; lots of stuff that makes development outside of Visual Studio possible. But Avalonia changes too much.

As for tooling, the Avalonia effort in Stride has stalled; it's too hard for non-gurus, but for simple tests or AI widgets, like maybe graph nodes, it's good.
ar1st0crat/NWaves#40: looking around, this might be a better way, if it's not just a fix to the current implementation, which has been logged as broken. There are still tons of unmerged PRs in the other branch.

I can't promise anything; I have way too many projects. I am fried after 16 months on emerging tech.

For example, today I learned Los Alamos has a 1-billion-neuron, soliton-spike-signal, 2D neuromorphic AI; some do physics. It's 100x faster and 2400 watts for a human-level equivalent; each chip is 30 watts.
We have 8 billion neurons... big spikes arrive faster. The memory is memristors; there are no caches to worry about. It's burned on regular silicon.

I just want to finish what I started. I do like programming; just telling the computer what to do in plain natural language is ideal for me. It's really hard to focus, as the stuff I wanted, or thought was decades away, is coming and really seems practical: truly 100x faster, in line with topological physics and using the time dimension for something other than heat.
Sorry for the typos; my in-context typo-fixing AI is broken, and the commercial ones are bloated. I need a break.

I could break this up, but it's all related to rigging basic features in a visual way. Tests and samples would be so great. Picovoice uses a tiny Avalonia box, and that's what I use for an assistant, a ChatGPT-3 "Clippy", which is in a broken state. I've got to send this with typos, sorry; this was my surf day and it's going to be a night surf, I guess.

@damian-666
Author

A couple of things: Silk updated the OpenGL3, OpenAL Soft, and other bindings very recently.
The open problem they have left is OpenAL Soft and the build system; they seem to wrap a ton of platforms (though without refactoring the commonalities).
I don't know if using the Silk bindings helps at all here; it does do the marshalling, and it documents the presets for the reverb.

OpenAL doesn't support Android, it seems. OpenAL Soft might be used more now. Looking at the code, it looks like it goes through the Android NDK via the C# toolkit, and that might be the easiest way to update this framework.

Summary: I think use OpenAL Soft; try on Windows via XAudio2, sorting out voices, querying the device, and failing gracefully; then try Windows via OpenAL, then Android via OpenAL Soft, if it's not a big deal to move from OpenAL Soft to OpenAL. There are some ifdef Android blocks here and there.

I don't believe the 3D Listener has to be used.

Hopefully ARM Windows (Qualcomm has lots of DSPs) will be OK with XAudio2, as there are some newer APIs.
DSPs have been in the chips forever, but not always accessible, as they are used by the phone system.
I think the drivers try to use them.

Someone mentioned they just go directly to the Android API / JDK, but that might not be easy from C#; it's in the NDK. It's probably accessible; it's Silk's building and packaging that seems to be the trouble, and why it's not going to be fixed.

android.media.audiofx.AudioEffect is the Java version; some just went directly to that and skipped Silk / the NDK.

https://github.com/search?q=repo%3Akcat%2Fopenal-soft+reverb&type=code

There's an IAudioClient in the Windows API now. Silk has cared about Windows on ARM, and that has tons of DSP. FIRs or spatial sound with headphones can take a lot of resources if done in software.

I did get Freeverb to work OK (the software version, very basic). I see NWaves and quite a few other software implementations, but with 8 cores and the GC and such, it's still probably a bad idea.
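For anyone curious what that kind of software reverb amounts to: Freeverb is essentially a bank of feedback comb filters in parallel followed by allpass filters, and the comb is a per-sample inner loop on the audio thread, which is where the CPU/GC concern comes from. A minimal sketch of one comb stage:

    // Minimal feedback comb filter: the basic unit Freeverb-style reverbs are
    // built from (several combs in parallel, then allpass filters in series).
    class CombFilter
    {
        readonly float[] _delayLine;
        readonly float _feedback;
        int _pos;

        public CombFilter(int delaySamples, float feedback = 0.84f)
        {
            _delayLine = new float[delaySamples];
            _feedback = feedback;
        }

        public float Process(float input)
        {
            float output = _delayLine[_pos];
            _delayLine[_pos] = input + output * _feedback;   // feed the echo back in
            _pos = (_pos + 1) % _delayLine.Length;
            return output;
        }
    }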

So I might get to it... at least try it.

ChatGPT advice, on Windows:
Use XAudio2 if you need complex audio processing and effects, which are common requirements in games.
Use WASAPI and IAudioClient if you need tight control over audio latency, or if you're integrating with professional audio workflows where you might need to manage shared and exclusive audio sessions.

I'm still looking at using Avalonia and maybe some scripting to make a sort of basic test rig with docking. It hosts the control that copies MG to a WriteableBitmap rather than sharing the buffer, via Avalonia.Inside; that might not be the best way.
It's only for level design:
a basic IDE with scripting, maybe Core2D, something I can put Bepu2, Aether, and MG into, not using OpenTK, and writing to the surface if possible for Windows DirectX.

Avalonia isn't good to deploy, especially on mobile; I use another launcher and a basic hand-coded UI for that.

@kniEngine kniEngine locked and limited conversation to collaborators Jun 28, 2024
@nkast nkast converted this issue into discussion #1690 Jun 28, 2024

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
