1. Ok, I tried that in my current session and it did not work. However, I made a new clean session, used asset sync, and it worked! I think this was the setting I was missing all along. I see now the events automatically show up in the content browser and blueprint node. Is Wwise just streaming data into Unreal, or are the assets being copied into Unreal when I generate sound data? I am just trying to better understand the file management here. Thank you so much for the help!

  2. You should be able to see the event in the content browser the moment you create it in Wwise. Which version of Wwise are you currently running? If it's the latest beta, I suggest switching back to version 2021.1.10.7883.

  3. I recently downgraded to 2021.1.10 after I heard there were issues with the newer one. If content is supposed to be viewable in the content browser right away, then what is the point of "Generate Sound Data"?

  4. It generates the soundbanks. You cannot get sound from the events without the soundbanks.

  5. Yes but not for humans... such a middle eastern answer to this question.

  6. https://www.izotope.com/en/learn/digital-audio-basics-sample-rate-and-bit-depth.html

  7. As long as the sample rate is twice the maximum frequency of the analog signal, you won't get aliasing.

  8. Not exactly. Older analog-to-digital conversion systems, both those that weren't based on delta-sigma modulation and the earliest delta-sigma A/D converters, could easily suffer from aliasing artifacts even when sampling at rates the Nyquist-Shannon theorem deems sufficient. That was mainly because the low-pass filter circuitry of those years tended to be deficient, and because early delta-sigma converters didn't have good specs and quality. With modern delta-sigma converters, this is no longer a problem.
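
The folding behavior behind these aliasing artifacts can be sketched numerically. This is a minimal illustration, assuming an ideal sampler with no anti-aliasing filter: any tone above the Nyquist frequency (half the sample rate) shows up at a folded "alias" frequency.

```python
def alias_frequency(f, fs):
    """Frequency an ideal sampler at rate fs reports for a tone at f.

    Folds f into the range [-fs/2, fs/2] (the Nyquist band) and
    returns the magnitude of the folded frequency.
    """
    folded = (f + fs / 2) % fs - fs / 2
    return abs(folded)

# A 30 kHz tone sampled at 48 kHz (Nyquist = 24 kHz) aliases to 18 kHz:
print(alias_frequency(30_000, 48_000))  # 18000.0

# A tone below Nyquist is reported at its true frequency:
print(alias_frequency(10_000, 48_000))  # 10000.0
```

This is why the converters mentioned above need a good low-pass filter before sampling: once the fold happens, the alias is indistinguishable from a real in-band tone.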

  9. Joking aside. I have been working with ambisonics lately and setting channel count for each track was a nightmare. Eventually, I found a lua script which lets you set channel count for all selected tracks. So now with a shortcut, I am able to change the channel count for 200+ tracks.

  10. What do you mean by channel count? I'm confused

  11. You need a specific number of output channels for each track depending on the ambisonic order you are working with. That's what I meant by channel count. Let's say you want to work with 7th-order ambisonics: you need to set the output channels to 64 for each track that will be encoded. Without an automation tool such as the Lua script I mentioned earlier, it takes significant time to set that for all tracks.
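
The channel counts quoted above follow a simple formula: a full-sphere ambisonic signal of order N needs (N + 1)² channels. A quick sketch:

```python
def ambisonic_channels(order):
    """Channel count for a full-sphere ambisonic signal of the
    given order: (order + 1) squared."""
    return (order + 1) ** 2

# 1st order -> 4 channels, 3rd -> 16, 7th -> 64 (as mentioned above)
for order in (1, 3, 7):
    print(order, ambisonic_channels(order))
```

So the 64-channel figure for 7th order isn't arbitrary; it's (7 + 1)² = 64.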

  12. No practical uses for 32 bit at this time. If you were recording low signal audio with very good recording equipment, you could increase the gain inside the DAW without increasing noise to an audible level. Motion picture companies might use 32 bit for that reason. DVD audio was 24 bit if I remember correctly.

  13. Some 32-bit float recorders such as the Zoom F3 literally have no conventional pre-amp gain setting. You aim for a good input level, and if the audio clips, you just turn it down in post without encountering any problems. So for field recording purposes, 32-bit recording is already a huge thing.
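
A toy sketch of why "turn it down in post" works for float but not for fixed point. The 1.8x sample value here is just an assumed example of a transient peaking above full scale:

```python
def to_int16(x):
    """Quantize a sample to 16-bit fixed point, hard-clipping at
    full scale (the range -32768..32767)."""
    scaled = int(round(x * 32767))
    return max(-32768, min(32767, scaled))

hot_sample = 1.8  # a peak 1.8x over full scale, e.g. a loud transient

# Fixed point: the overshoot is clipped; the waveform shape is lost.
fixed = to_int16(hot_sample)    # 32767, a hard clip

# 32-bit float: values above 1.0 are stored intact, so reducing the
# gain in post recovers the original waveform undamaged.
recovered = hot_sample * 0.5    # 0.9, back inside full scale
print(fixed, recovered)
```

Float storage trades a few bits of mantissa precision for an enormous dynamic range, which is exactly what gain-free recorders rely on.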

  14. Looks like a double M/S setup. The mic you asked about is probably a directional mic.

  15. That's the one! But then, what is the backwards-facing mic capturing, and why face it backwards? What kind of situation would call for that mic to add value to the recording?

  16. This is a surround recording technique. You've got two directional mics covering front and back, and a bi-directional mic covering the sides. The recording is then decoded to play properly on surround setups. You can find more info by searching for double M/S recording.

  17. Yeah nothing on the automation or track is bypassed. Basically the automation isn't registering that it's going from 0 to 100 on the correction automation. The track isn't disabled or muted either.

  18. You are probably trying to automate the wrong variable then. Switch automation mode to write or latch and move the correction knob on the plugin. If you see a different variable name on the automation track, then select the correct variable on the trim window.

  19. What router do you have? I would try to leverage it if it's already working fine. There may be a scheduling feature or option available for it already.

  20. We are using an old Quartz CP-6408E (Evertz) control panel to route signals manually depending on the broadcasting schedule.

  21. Hi! I recently encountered the same problem with an ambisonics plugin suite I had. Downloading the latest Microsoft Visual C++ Redistributable solved the problem. Hope this helps!

  22. Hey what does that mean I’m not tech savvy at all

  23. Don't get confused by the technical terms. You can use 44100 Hz audio with video. There is no obligation to use 48000 Hz unless you've been specifically told to deliver at 48000 Hz. And if you need your audio to be at 48000 Hz, you can just render it at 48000 Hz. Your DAW will re-sample and render the audio file.
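
The re-sampling step the DAW performs can be illustrated with a deliberately naive sketch. Real DAWs use high-quality polyphase or sinc filters; this linear-interpolation version only shows the idea of mapping one sample grid onto another:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler. Illustrative only:
    real converters use proper band-limited interpolation."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate       # position on the source grid
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)      # interpolate between neighbors
    return out

# One second at 44.1 kHz becomes one second at 48 kHz:
one_second = [0.0] * 44_100
print(len(resample_linear(one_second, 44_100, 48_000)))  # 48000
```

The duration stays the same; only the number of samples per second changes, which is why delivering 44.1 kHz source material at 48 kHz is a non-issue.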

  24. If you're interested in ambisonics, you might wanna check out the IEM and SPARTA plugin suites as well. Both are free, and what they offer for free is just mind-blowing!

  25. Also, I would like to share this lua script which is a great time saver when working with ambisonics. It lets you set the channel number for all selected tracks so that you don't have to set them one by one which is just painful to do, especially when you have too many tracks.

  26. Yeah, I've already downloaded the free trial, and it seems fine. I was just wondering why people say Cubase is the superior product when it comes to midi sequencing.

  27. Just avoid people who say this DAW or this plugin is the gold standard for this or that. If a DAW meets your needs, then it's golden. Nothing else is important.

  28. You can use a tremolo effect along with some pitch shifting and some eq. You should get a pretty close result. You can also get some text-to-speech vibes in your design by changing the order of the words in the sentence while recording it and putting them in the correct order in your DAW afterwards.
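
At its core, the tremolo effect suggested above is just amplitude modulation: the signal is multiplied by a slow, LFO-shaped gain curve. A minimal sketch, with assumed default rate and depth values:

```python
import math

def tremolo(samples, sample_rate, lfo_hz=8.0, depth=0.6):
    """Apply tremolo: multiply each sample by a sine-shaped gain
    that swings between (1 - depth) and 1.0 at lfo_hz."""
    out = []
    for n, x in enumerate(samples):
        # LFO in 0..1, starting at 0.5 (sine starts at zero crossing)
        lfo = 0.5 * (1.0 + math.sin(2 * math.pi * lfo_hz * n / sample_rate))
        gain = 1.0 - depth * (1.0 - lfo)
        out.append(x * gain)
    return out
```

Running this over a recorded voice, then adding pitch shifting and EQ as described, gets you most of the way to a robotic or text-to-speech character.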

  29. Non-charged shots lack the low end, in my opinion. The original sound design of the plasma pistol also lacks the low end, so you could have brought more punch to it in your redesign to make it more satisfying to the ear. Also, the long reverb tail of the charged shot sounds so artificial. Apart from these two points, you did a great job!

  30. I haven't tried it so I'm not sure if you get the same results, but I hear multiple instances of pitch shifting along with the dry signal, also a low pass filter on probably one of the low pitched sounds and some boost around 50-100 Hz area to give that boomy feel. In addition, the dry signal is mono so the intelligible dialogue is coming from the middle while pitch shifted signals are widened to the sides probably with a stereo widening plugin or just simply using panning. And lastly I hear delay and some reverb. Hope this helps.

  31. Studying Sound by Karen Collins is a good read for starters. It covers most of the basic stuff about sound and sound design.

  32. Which software allows you to render 32 bit fixed point and 64 bit floating point files?

  33. I used Cockos Reaper in this case, but I believe most of the DAWs allow you to render both fixed-point and floating-point files nowadays.

  34. I have both. The DR 40 is fine, although the build quality isn't as good as the Zooms. Also, the XLR ports aren't combo ports like the Zooms'. However, the battery life is better than the H4n and the H5. And I like the fact that the files aren't buried in subdirectories like the Zoom files are. My fave of them all is the Zoom H6. Rugged, battery life is the best of all of them. It's rigged for a neck strap and the display is great.

  35. I don't know about the DR40 but DR40X has combo inputs (XLR+1/4").

  36. I don't own an H4n Pro, but I had the same struggle when I decided to buy a field recorder and ended up buying a Tascam DR-40X. Here are the pros and cons of both devices:

  37. I have the DR40X! So will this save the audio files as separate tracks?

  38. https://tascam.com/downloads/products/tascam/dr-40x/e_dr-40x_rm_vd.pdf

  39. I don't know about DR40 but I've got a DR40X and to do what you want to do I change the recording mode to 4CH and select EXT INDEP. for EXT IN option. If you leave it as EXT IN L/R, it will treat XLR inputs as one stereo input, a mono input for the left channel and another mono input for the right channel.

  40. In my opinion, you should at least have an understanding of how interactive music is composed, and the best way to understand it is to actually try implementing the music yourself. There are two approaches when composing for interactive media: vertical and horizontal. To better conceptualise these approaches, you should get more familiar with audio middleware.

  41. It's really difficult to say something without seeing your I/O settings.

  42. According to the screenshot, there is also a serious clipping problem. Are you sure your faders are at the 0 dBFS point?

  43. Yeah I have not touched anything. I checked

  44. Well, I just downloaded the track, played it in Reaper, and compared it with the YouTube video. The downloaded track is definitely louder. So I think the original file is very loud, but YouTube applies some sort of attenuation to the original track and pulls back the overall levels. When you download the same track, you hear it without the attenuation, and it sounds louder.

  45. He is completely new to REAPER and DAWs in general and I believe he thought it was the stock REAPER plug-in. Thank you so much for your reply!

  46. The reply above is the correct one. I don't know a lot about Macs, but I believe it is possible to run VST plug-ins on macOS. Your friend could probably find VST alternatives to the AU plug-ins and use those instead. I suppose the next best thing would be to have your friend render the stems as WAV files and then use those in your project. Of course, then you can't change any of the effects applied to the track.

  47. Actually, I am teaching him sound design, and he was supposed to use REAPER's stock plug-ins only, but like I said, he probably thought that the AU plugin was a stock one. Since I don't know about Macs either, I thought that REAPER's stock plugins were in AU format on macOS devices and that's why I couldn't run it on my PC. Well, I am so glad that was not the issue!
