Synthesize Sound Field Cases #98

Open
fs446 opened this issue Feb 28, 2019 · 1 comment

@fs446
Member

fs446 commented Feb 28, 2019

Since we'd like to make the handling of synthesize.generic() / soundfield.p_array() more consistent, it is worth thinking about the cases that might appear:

in the frequency domain:
- use many spatial points (i.e. a grid) but only one frequency (at the moment realized in synthesize.generic())
- use a single spatial point but multiple frequencies (to obtain the acoustic transfer function)
- use both multiple spatial points and multiple frequencies (the most generic case)

similarly in the time domain:
- use many spatial points (i.e. a grid) but only one time instant (at the moment realized in soundfield.p_array())
- use a single spatial point but multiple time instants (to obtain the acoustic impulse response)
- use both multiple spatial points and multiple time instants (the most generic case)

Obviously, the existing functions could deliver all the requested cases. The question is whether we could optimize/enhance performance for special cases (e.g. the spatial/temporal interpolation issue), such as rendering binaural impulse responses at a single listening point, or even find the one and only master method that handles all cases (see the sketch below).
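To make the discussion concrete, here is a minimal frequency-domain sketch of what such a master method could look like. The name synthesize_fd, its signature, and the shape convention are all hypothetical (this is not the package's API); it simply superposes free-field Green's functions with NumPy broadcasting, so grids of points and arrays of frequencies are handled uniformly:

```python
import numpy as np

def synthesize_fd(d, x0, grid, f, c=343):
    """Hypothetical sketch of a unified frequency-domain method.

    d    : (N, F) complex driving spectra, one row per secondary source
    x0   : (N, 3) secondary source positions
    grid : (..., 3) evaluation point(s), e.g. (3,) or (nx, ny, 3)
    f    : (F,) frequencies in Hz (F may be 1)

    Returns the sound pressure with shape grid.shape[:-1] + (F,).
    """
    k = 2 * np.pi * np.asarray(f) / c                        # (F,)
    # distance from every evaluation point to every secondary source
    r = np.linalg.norm(np.asarray(grid)[..., np.newaxis, :] - x0,
                       axis=-1)                              # (..., N)
    # free-field Green's function exp(-jkr) / (4*pi*r) for every
    # combination of point, source and frequency
    g = np.exp(-1j * k * r[..., np.newaxis]) / (4 * np.pi * r[..., np.newaxis])
    return np.einsum('...nf,nf->...f', g, d)                 # (..., F)
```

All three frequency-domain cases from the list above would then fall out of the same call: a grid with a length-1 f gives a field plot, a single point with many frequencies gives a transfer function, and both together give the fully generic result.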

@mgeier
Member

mgeier commented Mar 14, 2019

I think this is somewhat realistic for the time-domain case. But we need an additional parameter, an "interpolator" of some sort, which chooses an appropriate interpolation scheme depending on the target: either a sound field plot (probably using PCHIP) or a time signal (probably using sinc interpolation or something like that).
I guess it's not feasible to have one interpolation scheme that fits both cases.
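For illustration, assuming the interpolator is passed as a callable that takes the sampled data and returns a function of continuous time (names and signature hypothetical), the two suggested schemes might look like this:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def pchip_interpolator(signal, fs):
    """Shape-preserving interpolation, e.g. for sound field plots."""
    t = np.arange(len(signal)) / fs
    return PchipInterpolator(t, signal)

def sinc_interpolator(signal, fs):
    """Band-limited interpolation, e.g. for time signals."""
    n = np.arange(len(signal))
    def interpolate(t):
        # Whittaker-Shannon reconstruction: sum of shifted sinc kernels
        return np.sum(signal * np.sinc(fs * np.asarray(t)[..., np.newaxis] - n),
                      axis=-1)
    return interpolate
```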

One important question would be if we allow arbitrary points in time or if we assume regular sampling (I guess the latter is more reasonable).

The resulting sound field would be multi-dimensional. It would have the spatial dimensions of the grid plus the time dimension.

If only one time instant is requested, the sound field would have the same dimensions as the grid (as it is now).
If only one point in space but multiple times are requested, the result would be a one-dimensional time signal.
If a flat list of points in space plus multiple times are requested, the result would be a multi-channel time signal.
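In NumPy terms, assuming the time axis is appended as the last dimension (just one possible convention), the three cases would come out like this:

```python
import numpy as np

# Hypothetical convention: result shape = grid shape + (number of times,)
p = np.zeros((50, 60, 1))             # 2D plotting grid, one time instant
print(np.squeeze(p, axis=-1).shape)   # (50, 60): same shape as the grid

p = np.zeros((1, 4096))               # one point in space, many time instants
print(np.squeeze(p, axis=0).shape)    # (4096,): a single time signal

p = np.zeros((5, 4096))               # flat list of 5 points, many instants
print(p.shape)                        # (5, 4096): a multi-channel time signal
```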

I'm not sure if the same thing would make sense in the frequency domain, but I guess it should work.
It's probably even simpler, because no interpolator would be needed.
I guess the requested frequencies could have arbitrary spacing; there is no need for regular spacing.
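Reusing the hypothetical synthesize_fd() sketch from the first comment, arbitrarily spaced frequencies would indeed need no special treatment:

```python
import numpy as np

x0 = np.zeros((1, 3))                    # one secondary source at the origin
f = np.logspace(np.log10(20), np.log10(20000), 256)   # irregular spacing
d = np.ones((1, len(f)))                 # flat driving spectrum (toy example)
point = np.array([1.0, 0.0, 0.0])        # a single listening point
H = synthesize_fd(d, x0, point, f)       # transfer function, shape (256,)
```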

I have no idea how this would work with BRIRs, but it would be great to have this at some point!
