Wednesday, January 13, 2010

Cymatics

Cymatics is the science of making sound visible. Pretty cool!!!
I got a kick out of this........Enjoy!

Saturday, January 9, 2010

Why The SM58 is the best microphone in the world.....


Quite often we equate quality with price: the highest-quality products cost the most. But this is not necessarily true. Some of the best audio equipment is the gear that has been around long enough to stand the test of time.


I would like to introduce you to the Shure SM58 microphone.......You probably already know what it is.


Many sound engineers refer to this microphone as the best ever made. It is extremely durable, versatile, and has some of the best acoustic characteristics of any microphone used for live sound.

For the past 40 years the Shure SM58 has been the go-to microphone for audio professionals all over the world. With a street price of roughly $100, no production crew is complete without at least 10 of these guys.

So I set out on the web to find out exactly why this microphone is such a staple in the audio community. I found two YouTube videos that summed it up quite well.



WOW!!! That was quite entertaining!!!

The next video I want to show you really sums up why the SM58 is the toughest microphone in the world.


After all that abuse..............It still works!

At Capsicum Pro Audio & Visual we don’t always buy into the latest and greatest new products. We strive to provide our clients with the highest quality audio visual services possible. That is why we utilize products like the SM58. By doing this, our clients are guaranteed an event that goes off without a hitch.


Noah Waldron

noah@capsicumpro.com

Tuesday, January 5, 2010

Line Array Speakers

Have you ever been to a concert, theater, or conference where the sound near the front of the audience was present and clear, but as you walked farther away to go to the restroom, it became less present and almost muffled? This is a common situation at many events. An inexperienced sound technician may attempt to fix it by simply turning up the sound system. This only irritates the people up front, who are being deafened, while the people in the back hear even more of that unclear, muffled sound.


In recent years this issue has been remedied by Line Arrays.


Capsicum Pro Audio & Visual uses the JBL Vertec Line Array, an industry standard used by many of the nations top concert tour providers and event centers. When JBL decided to start the Vertec Training Program, Capsicum was invited to assist in the programs early development.

When line arrays are improperly deployed, they can sound horrible! They require an understanding of acoustic physics and proper training to set up. When set up properly, they provide even coverage and consistent sound to all parts of the venue. Simply put, someone in the back seats can have the same listening experience as anyone in the front seats.


A line array is a loudspeaker system that is made up of a number of loudspeaker elements coupled together in a line segment to create a near-line source of sound. The distance between adjacent drivers is close enough that they constructively interfere with each other to send sound waves farther than traditional horn-loaded loudspeakers, and with a more evenly distributed sound output pattern.
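As a rough sketch of that spacing requirement, the little Python calculation below uses the common half-wavelength rule of thumb for coherent coupling; that criterion and the 6-inch spacing are my own illustrative assumptions, not figures stated above.

```python
SPEED_OF_SOUND_FT_S = 1125.0  # roughly 343 m/s at room temperature

def max_coupling_freq_hz(driver_spacing_ft):
    """Highest frequency at which adjacent drivers still sit within half a
    wavelength of each other (a common coherent-coupling rule of thumb)."""
    return SPEED_OF_SOUND_FT_S / (2.0 * driver_spacing_ft)

# Hypothetical drivers spaced 6 inches (0.5 ft) apart couple coherently up to
# roughly 1.1 kHz; above that the spacing exceeds half a wavelength.
print(round(max_coupling_freq_hz(0.5)))  # 1125
```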


Line arrays can be oriented in any direction, but their primary use in public address is in vertical arrays, which provide a very narrow vertical output pattern useful for focusing sound at audiences without wasting output energy on ceilings or empty air above the audience. A vertical line array displays a normally wide horizontal pattern useful for supplying sound to the majority of a concert audience. Horizontal line arrays, by contrast, have a very narrow horizontal output pattern and a tall vertical pattern. A row of subwoofers along the front edge of a concert stage can behave as a horizontal line array unless the signal supplied to them is adjusted (delayed, polarized, equalized) to shape the pattern otherwise.


The coupling between adjacent drivers results in a frequency-dependent reduction in dispersion along the axis of the line segment. A vertical line array therefore has narrow vertical dispersion, which results in less loss of sound pressure level over a given distance. Typically sound pressure falls off at 6 dB per doubling of distance, but in a true line source it falls off at only 3 dB per doubling of distance. The horizontal dispersion is unchanged.
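To make those falloff rates concrete, here is a minimal Python sketch. The 6 dB and 3 dB figures are the idealized point-source and line-source cases described above; a real array behaves somewhere in between, and the distances chosen are just examples.

```python
import math

def spl_drop_db(distance_ratio, db_per_doubling):
    """SPL lost when moving from a reference distance to distance_ratio
    times that distance, given a falloff rate in dB per doubling."""
    return db_per_doubling * math.log2(distance_ratio)

# Moving from 10 ft to 80 ft from the source (three doublings of distance):
print(spl_drop_db(8, 6.0))  # 18.0 dB quieter for an idealized point source
print(spl_drop_db(8, 3.0))  # 9.0 dB quieter for an idealized line source
```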


Modern line arrays use separate drivers for high-, mid- and low-frequency passbands. For the line source to work, the drivers in each passband need to be in a line. Therefore, each enclosure must be designed to rig together closely to form columns composed of high-, mid- and low-frequency speaker drivers. Increasing the number of drivers in each enclosure increases the frequency range and maximum sound pressure level, while adding additional boxes to the array lowers the frequency at which the array achieves a directional dispersion pattern.
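As a rough illustration of that last point, here is a small sketch. The rule of thumb it uses, that an array keeps directional control down to about the frequency whose wavelength equals the length of the array, is a common approximation of my own choosing, not a JBL specification, and the array lengths are hypothetical.

```python
SPEED_OF_SOUND_FT_S = 1125.0  # roughly 343 m/s at room temperature

def lowest_directional_freq_hz(array_length_ft):
    """Rule-of-thumb estimate: the array keeps pattern control down to about
    the frequency whose wavelength matches the physical length of the array."""
    return SPEED_OF_SOUND_FT_S / array_length_ft

# A 12 ft hang controls its pattern down to roughly 94 Hz; adding boxes to
# make it 24 ft long extends that control down to roughly 47 Hz.
print(round(lowest_directional_freq_hz(12)))  # 94
print(round(lowest_directional_freq_hz(24)))  # 47
```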


The large format line array has become the standard for large concert venues and outdoor festivals, where such systems can be flown (rigged, suspended) from a structural beam, ground support tower or off a tall A-frame tower. Since the enclosures rig together and hang from a single point, they are more convenient to assemble and cable than other methods of arraying loudspeakers. The lower portion of the line array is generally curved backward to increase dispersion at the bottom of the array and allow sound to reach more audience members. Typically, cabinets used in line arrays are trapezoidal, connected together by specialized rigging hardware.

http://www.capsicumpro.com/

Speech Intelligibility

Have you ever been to a conference, church service, or any event where a speech is made and it is hard to understand what is being said? Not because the guest speaker is Charlie Brown's schoolteacher, but because you're in a tin building, the speakers are aimed at the ceiling, and the microphone is 5’ from the speaker's mouth.


Many sound system designers and day-to-day engineers do not take into consideration the importance of speech intelligibility. For most of us sound engineers, it is not a life-threatening issue. But imagine an event where your potential clients, employees, or congregation did not receive a clear message: misunderstanding a few simple words leads to a false understanding of the whole message. Or imagine a situation where an air traffic controller gives instructions to two different pilots and one of them misunderstands a single word. That situation could be life-threatening!


Speech intelligibility issues can come from many sources: the public address system itself, the interaction of the public address system with the room, the placement of microphones, and more.

In a perfect world, we would hear only the direct sound of the speakers, eliminating the sound of the room and its reflections.

The interaction of a speaker system with a room is very complex to understand, model, or measure. But there are some great tools that engineers use to analyze rooms and their acoustical response to a public address system. One of these tools is a software package called Smaart, which stands for Sound Measurement Acoustical Analysis Real Time Tool. Smaart is used to measure PAs in specific venues or recording studios, mainly to initially tune and adjust the PA to operate at its maximum potential in a specific venue.

The proper use of Smaart can greatly improve intelligibility, increasing the impact of a speech by the CEO who has something really important to say.


Speech intelligibility is measured by %ALcons (Percentage Articulation Loss of Consonants). It is computed from measurements of the direct-to-reverberant ratio and the early decay time, using a set of correlations defined by SynAudCon, and is expressed as a percentage.

Consonants play a more significant role in speech intelligibility than vowels. If the consonants are heard clearly, the speech can be understood more easily.

Since %ALcons expresses loss of consonant definition, lower values are associated with greater intelligibility. It is generally assumed that the maximum allowable value for typical paging applications is 10%, assuming that the environment is relatively free of masking noise. For learning environments and voice warning systems, the desired value is 5% or less.

%ALcons is the measured percentage of articulation loss of consonants experienced by a listener. A %ALcons of 0 indicates perfect clarity and intelligibility with no loss of consonant understanding, while 10% and beyond heads toward poor intelligibility, and 15% is typically the maximum acceptable loss.
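To make those thresholds easy to apply, here is a tiny Python sketch that rates a measured %ALcons value against the figures above; the category wording is just my own shorthand.

```python
def rate_alcons(alcons_percent):
    """Rate a measured %ALcons value against the targets discussed above:
    5% or less for learning environments and voice warning systems,
    10% as the usual limit for paging, 15% as the maximum acceptable loss."""
    if alcons_percent <= 5:
        return "excellent: suitable for learning environments and voice warning"
    if alcons_percent <= 10:
        return "good: acceptable for typical paging applications"
    if alcons_percent <= 15:
        return "marginal: approaching the maximum acceptable loss"
    return "poor intelligibility"

print(rate_alcons(4))   # excellent: suitable for learning environments ...
print(rate_alcons(12))  # marginal: approaching the maximum acceptable loss
```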


Capsicum Pro Audio & Visual uses SMAART to acoustically correct our sound systems to match the venues and address the issues of reverberation, time alignment, and phase problems that occur in most rooms and venues.

Line arrays have allowed us to drastically reduce the amount of acoustical energy directed at reverberant surfaces and focus it at the listeners, greatly enhancing the quality of our services by allowing us to achieve lower %ALcons values.

Feel free to check out our blog post on line array technology.

http://www.capsicumpro.com/Site/Blog/Entries/2009/10/14_Line_Array_Speaker_technologies.html


Noah Waldron

http://www.capsicumpro.com/

noah@capsicumpro.com

Psychoacoustics and why people daydream during speeches

So you are at a church service or speaking engagement, watching the guest speaker (or pastor), but the sound seems to come from someplace other than his or her lips? If you closed your eyes and tried to guess where the sound was coming from, you might open your eyes to find yourself staring at the speakers, or even worse, a wall! This kind of issue can be confusing and somewhat tiring for the brain to process, as your eyes are telling you one thing and your ears another.


Because this kind of acoustical phenomenon is hard for the brain to process, it is also just plain tiring. Often people attending a church service or speaking engagement will wander off into daydreaming or even drowsiness. Not because the person giving the speech is delivering horrible content, but because it is simply too difficult to listen. I recently read a great article on Pro Sound Web that pointed out this issue of yawning and daydreaming during church services.


From here I would like to delve into the realm of Psychoacoustics. Psychoacoustics is the study of how sounds are perceived and processed by the human mind. Many venues are plagued with issues relating to psychoacoustics, such as the issue mentioned earlier.


Our brain has many ways of filtering sounds and processing what comes into our ears. Without these filters, we would not be able to have a conversation in a restaurant, localize sounds, or decipher the intensity of sounds. The less our brain has to work to process sound, the better we are able to absorb important information or enjoy our sonic experiences.


One psychoacoustic effect that I would like to focus on is the “Haas Effect,” also known as the Precedence Effect, or the Law of the First Wave Front. This effect is responsible for the ability of listeners with two ears to accurately localize sounds coming from around them.

When two identical sounds originate from two sound sources at different distances, the sound from the closer location is heard first. The listener perceives that the sound comes from that location alone, and all later-arriving sounds are suppressed, even if the later-arriving sound is louder. This could be called “involuntary sensory inhibition.”

The Haas Effect occurs when the arrival times of the two sound sources are within 30 to 40 milliseconds of each other. Any sounds arriving later than 40 milliseconds are perceived as distinct delays or echoes.

So imagine a person speaking on a stage, standing 10 feet behind the main speakers. Obviously you would hear the sound from the speakers before you heard the person's actual voice; the actual voice would arrive about 8.9 milliseconds later than the sound from the main speakers. According to the Haas Effect, your mind would disregard the person's actual voice and perceive that it is coming from the main speakers.
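For the curious, here is the arithmetic behind that 8.9 millisecond figure, sketched in Python with an approximate speed of sound of 1,125 feet per second (about 343 m/s).

```python
SPEED_OF_SOUND_FT_S = 1125.0  # roughly 343 m/s at room temperature

def extra_delay_ms(extra_distance_ft):
    """Extra arrival time, in milliseconds, for sound that travels
    extra_distance_ft farther than the sound from the nearer source."""
    return extra_distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

# The talker stands 10 feet behind the main speakers, so the natural voice
# travels 10 feet farther than the reinforced sound:
gap = extra_delay_ms(10)
print(round(gap, 1))  # ~8.9 ms
print(gap < 30)       # True: inside the 30-40 ms Haas window, so the brain
                      # fuses both arrivals and localizes to the main speakers
```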

The Haas Effect often comes into play with reflections as well. Imagine you're at the back of a venue and the sound that arrives first is a reflection off the back wall. You would perceive that the person's voice is coming from behind you, which could be quite confusing and would require your brain to work a little harder, causing you to daydream as you look at the guest speaker while his voice seems to come from behind you.


Many of these issues can be dealt with through proper speaker placement, acoustical treatment of reflections, and time-aligning the speakers. But the sad truth is that many local sound contractors do not understand some of these basic acoustic concepts that are critical to creating an accurate listening environment.


If you have any questions, don’t hesitate to send me an email. Capsicum Pro Audio & Visual would love to help in creating a better listening experience for your audience.


Noah Waldron

Capsicum Pro Audio & Visual

http://www.capsicumpro.com/

noah@capsicumpro.com