The Journal
of the
Acoustical Society of America
Vol. 136, No. 4, Pt. 2 of 2, October 2014
www.acousticalsociety.org
168th Meeting
Acoustical Society of America
Indianapolis Marriott Downtown Hotel
Indianapolis, Indiana
27–31 October 2014
Table of Contents on p. A5
Published by the Acoustical Society of America through AIP Publishing LLC
CODEN: JASMAN
ISSN: 0001-4966
INFORMATION REGARDING THE JOURNAL
Publication of the Journal is jointly financed by the dues of members of
the Society, by contributions from Sustaining Members, by nonmember
subscriptions, and by publication charges contributed by the authors'
institutions. A peer-reviewed archival journal, its actual overall value
includes extensive voluntary commitments of time by the Journal's
Associate Editors and reviewers. The Journal has been published continuously
since 1929 and is a principal means by which the Acoustical Society
seeks to fulfill its stated mission: to increase and diffuse the knowledge
of acoustics and to promote its practical applications.
Submission of Manuscripts: Detailed instructions are given in the
latest version of the “Information for Contributors” document, which is
printed in the January and July issues of the Journal; the most current
version can be found online at http://asadl.org/jasa/for_authors_jasa.
This document gives explicit instructions regarding the content of the
transmittal letter and specifies completed forms that must accompany
each submission. All research articles and letters to the editor should
be submitted electronically via an online process at the site <http://
jasa.peerx-press.org/>. The uploaded files should include the complete
manuscript and the figures. The authors should identify, on the cover
page of the article, the principal PACS classification. A listing of PACS
categories is printed with the index in the final issues (June and December)
of each volume of the Journal; the listing can also be found at the
online site of the Acoustical Society. The PACS (physics and astronomy
classification scheme) listing also identifies, by means of initials enclosed
in brackets, which associate editors have the primary responsibility
for the various topics that are listed. The initials correspond to the names
listed on the back cover of each issue of the Journal and on the title page
of each volume. Authors are requested to consult these listings and to
identify which associate editor should handle their manuscript; the decision regarding the acceptability of a manuscript will ordinarily be made
by that associate editor. The Journal also has special associate editors
who deal with applied acoustics, education in acoustics, computational
acoustics, and mathematical acoustics. Authors may suggest one of these
associate editors, if doing so is consistent with the content or emphasis of
their paper. Review and tutorial articles are ordinarily invited; submission
of unsolicited review articles or tutorial articles (other than those which
can be construed as papers on education in acoustics) without prior discussion with the Editor-in-Chief is discouraged. Authors are also encouraged to discuss contemplated submissions with appropriate members of
the Editorial Board before submission. Submission of papers is open to
everyone, and one need not be a member of the Society to submit a
paper.
JASA Express Letters: The Journal includes a special section
with a submission process separate from that for the rest of the
Journal. Details concerning the nature of this section and information for
contributors can be found at the online site http://scitation.aip.org/content/
asa/journal/jasael/info/authors.
Publication Charge: To support the cost of wide dissemination of
acoustical information through publication of journal pages and production of a database of articles, the author’s institution is requested to pay
a page charge of $80 per page (with a one-page minimum). Acceptance
of a paper for publication is based on its technical merit and not on the
acceptance of the page charge. The page charge (if accepted) entitles the
author to 100 free reprints. For Errata the minimum page charge is $10,
with no free reprints. Although regular page charges commonly accepted
by authors’ institutions are not mandatory for articles that are 12 or fewer
pages, payment of the page charges for articles exceeding 12 pages is
mandatory. Payment of the publication fee for JASA Express Letters is
also mandatory.
Selection of Articles for Publication: All submitted articles are peer
reviewed. Responsibility for selection of articles for publication rests with
the Associate Editors and with the Editor-in-Chief. Selection is ordinarily
based on the following factors: adherence to the stylistic requirements of
the Journal, clarity and eloquence of exposition, originality of the contribution, demonstrated understanding of previously published literature
pertaining to the subject matter, appropriate discussion of the relationships of the reported research to other current research or applications,
appropriateness of the subject matter to the Journal, correctness of the
content of the article, completeness of the reporting of results, the reproducibility of the results, and the significance of the contribution. The Journal reserves the right to refuse publication of any submitted article without
giving extensively documented reasons. Associate Editors and reviewers
are volunteers and, while prompt and rapid processing of submitted
manuscripts is of high priority to the Editorial Board and the Society, there
is no a priori guarantee that such will be the case for every submission.
Supplemental Material: Authors may submit material that is
supplemental to a paper. Deposits must be in electronic media, and can
include text, figures, movies, computer programs, etc. Retrieval instructions
are footnoted in the related published paper. Direct requests to the JASA
office at jasa@aip.org; for additional information, see http://publishing.aip.
org/authors.
Role of AIP Publishing: AIP Publishing LLC has been under contract
with the Acoustical Society of America (ASA) continuously since 1933
to provide administrative and editorial services. The providing of these
services is independent of the fact that the ASA is one of the member
societies of AIP Publishing. Services provided in relation to the Journal
include production editing, copyediting, composition of the monthly issues
of the Journal, and the administration of all financial tasks associated with
the Journal. AIP Publishing’s administrative services include the billing and
collection of nonmember subscriptions, the billing and collection of page
charges, and the administration of copyright-related services. In carrying
out these services, AIP Publishing acts in accordance with guidelines
established by the ASA. All further processing of manuscripts, once they
have been selected by the Associate Editors for publication, is handled by
AIP Publishing. In the event that a manuscript, in spite of the prior review
process, still does not adhere to the stylistic requirements of the Journal,
AIP Publishing may notify the authors that processing will be delayed until
a suitably revised manuscript is transmitted via the appropriate Associate
Editor. If it appears that the nature of the manuscript is such that processing
and eventual printing of a manuscript may result in excessive costs, AIP
Publishing is authorized to directly bill the authors. Publication of papers is
ordinarily delayed until all such charges have been paid.
Copyright © 2014, Acoustical Society of America. All rights reserved.
Copying: Single copies of individual articles may be made for private use or research. Authorization is given to copy
articles beyond the free use permitted under Sections 107 and 108 of the U.S. Copyright Law, provided that the copying
fee of $30.00 per copy per article is paid to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923,
USA, www.copyright.com. (Note: The ISSN for this journal is 0001-4966.)
Authorization does not extend to systematic or multiple reproduction, to copying for promotional purposes, to electronic
storage or distribution, or to republication in any form. In all such cases, specific written permission from AIP Publishing
LLC must be obtained.
Note: Copies of individual articles may also be purchased online via AIP's DocumentStore service.
Permission for Other Use: Permission is granted to quote from the Journal with the customary acknowledgment of
the source. Republication of an article or portions thereof (e.g., extensive excerpts, figures, tables, etc.) in original form
or in translation, as well as other types of reuse (e.g., in course packs), requires formal permission from AIP Publishing
and may be subject to fees. As a courtesy, the author of the original journal article should be informed of any request for
republication/reuse.
Obtaining Permission and Payment of Fees: Using Rightslink®: AIP Publishing has partnered with the Copyright
Clearance Center to offer Rightslink, a convenient online service that streamlines the permissions process. Rightslink
allows users to instantly obtain permissions and pay any related fees for reuse of copyrighted material, directly from AIP's
website. Once licensed, the material may be reused legally, according to the terms and conditions set forth in each unique
license agreement.
To use the service, access the article you wish to license on our site and simply click on the Rightslink icon/"Permissions
for Reuse" link in the abstract. If you have questions about Rightslink, click on the link as described, then click the "Help"
button located in the top right-hand corner of the Rightslink page.
Without using Rightslink: Address requests for permission for republication or other reuse of journal articles or portions
thereof to: Office of Rights and Permissions, AIP Publishing LLC, 1305 Walt Whitman Road, Suite 300, Melville, NY
11747-4300, USA; FAX: 516-576-2450; Tel.: 516-576-2268; E-mail: rights@aip.org
The Journal
of the
Acoustical Society of America
Editor-in-Chief: Allan D. Pierce
ASSOCIATE EDITORS OF JASA
General Linear Acoustics: J.B. Lawrie, Brunel Univ.; A.N. Norris, Rutgers
University; O. Umnova, Univ. Salford; R.M. Waxler, Natl. Ctr. for Physical
Acoustics; S.F. Wu, Wayne State Univ.
Nonlinear Acoustics: R.O. Cleveland, Univ. of Oxford; M. Destrade, Natl. Univ.
Ireland, Galway; L. Huang, Univ. of Hong Kong; V.E. Ostashev, Natl. Oceanic
and Atmospheric Admin; O.A. Sapozhnikov, Moscow State Univ.
Atmospheric Acoustics and Aeroacoustics: P. Blanc-Benon, Ecole Centrale
de Lyon; A. Hirschberg, Eindhoven Univ. of Technol.; J.W. Posey, NASA Langley
Res. Ctr. (ret.); D.K. Wilson, Army Cold Regions Res. Lab.
Underwater Sound: J.I. Arvelo, Johns Hopkins Univ.; N.P. Chotiros, Univ. of
Texas; J.A. Colosi, Naval Postgraduate School; S.E. Dosso, Univ. of Victoria; T.F.
Duda, Woods Hole Oceanographic Inst.; K.G. Foote, Woods Hole Oceanographic
Inst.; A.P. Lyons, Pennsylvania State Univ.; Martin Siderius, Portland State
Univ.; H.C. Song, Scripps Inst. of Oceanography; A.M. Thode, Scripps Inst. of
Oceanography
Ultrasonics and Physical Acoustics: T. Biwa, Tohoku Univ.; M.F. Hamilton,
Univ. Texas, Austin; T.G. Leighton, Inst. for Sound and Vibration Res.
Southampton; J.D. Maynard, Pennsylvania State Univ.; R. Raspet, Univ. of
Mississippi; R.K. Snieder, Colorado School of Mines; J.A. Turner, Univ. of
Nebraska—Lincoln; M.D. Verweij, Delft Univ. of Technol.
Transduction, Acoustical Measurements, Instrumentation, Applied Acoustics: M.R. Bai, Natl. Tsinghua Univ.; D.A. Brown, Univ. of Massachusetts
Dartmouth; D.D. Ebenezer, Naval Physical and Oceanographic Lab., India; T.R.
Howarth, NAVSEA, Newport; M. Sheplak, Univ. of Florida
Structural Acoustics and Vibration: L. Cheng, Hong Kong Polytechnic Univ.;
D. Feit, Applied Physical Sciences Corp.; L.P. Franzoni, Duke Univ.; J.H. Ginsberg,
Georgia Inst. of Technol. (emeritus); T. Kundu, Univ. of Arizona; K.M. Li, Purdue
Univ.; J.G. McDaniel, Boston Univ.; E.G. Williams, Naval Research Lab.
Noise: Its Effects and Control: G. Brambilla, Natl. Center for Research
(CNR), Rome; B.S. Cazzolato, Univ. of Adelaide; S. Fidell, Fidell Assoc.; K.V.
Horoshenkov, Univ. of Bradford; R. Kirby, Brunel Univ.; B. Schulte-Fortkamp,
Technical Univ. of Berlin
Architectural Acoustics: F. Sgard, Quebec Occupational Health and Safety
Res. Ctr.; J.E. Summers, Appl. Res. Acoust., Washington; M. Vorlaender, Univ.
Aachen; L.M. Wang, Univ. of Nebraska—Lincoln
Acoustic Signal Processing: S.A. Fulop, California State Univ., Fresno; P.J.
Loughlin, Univ. of Pittsburgh; Z-H. Michalopoulou, New Jersey Inst. Technol.;
K.G. Sabra, Georgia Inst. Tech.
Physiological Acoustics: C. Abdala, House Research Inst.; I.C. Bruce, McMaster
Univ.; K. Grosh, Univ. of Michigan; C.A. Shera, Harvard Medical School
Psychological Acoustics: L.R. Bernstein, Univ. Conn.; V. Best, Natl. Acoust.
Lab., Australia; E. Buss, Univ. of North Carolina, Chapel Hill; J.F. Culling,
Cardiff Univ.; F.J. Gallun, Dept. Veteran Affairs, Portland; Enrique Lopez-Poveda, Univ. of Salamanca; V.M. Richards, Univ. California, Irvine; M.A.
Stone, Univ. of Cambridge; E.A. Strickland, Purdue Univ.
Speech Production: D.A. Berry, UCLA School of Medicine; L.L. Koenig, Long
Island Univ. and Haskins Labs.; C.H. Shadle, Haskins Labs.; B.H. Story, Univ. of
Arizona; Z. Zhang, Univ. of California, Los Angeles
Speech Perception: D. Baskent, Univ. Medical Center, Groningen; C.G. Clopper,
Ohio State Univ.; B.R. Munson, Univ. of Minnesota; P.B. Nelson, Univ. of Minnesota
Speech Processing: C.Y. Espy-Wilson, Univ. of Maryland, College Park; M.A.
Hasegawa-Johnson, Univ. of Illinois; S.S. Narayanan, Univ. of Southern California
Musical Acoustics: D. Deutsch, Univ. of California, San Diego; T.R. Moore,
Rollins College; J. Wolfe, Univ. of New South Wales
Bioacoustics: W.W.L. Au, Hawaii Inst. of Marine Biology; C.C. Church, Univ.
of Mississippi; R.R. Fay, Loyola Univ., Chicago; J.J. Finneran, Navy Marine
Mammal Program; M.C. Hastings, Georgia Inst. of Technol.; G. Haïat, Natl.
Ctr. for Scientific Res. (CNRS); D.K. Mellinger, Oregon State Univ.; D.L.
Miller, Univ. of Michigan; M.J. Owren, Georgia State Univ.; A.N. Popper, Univ.
Maryland; A.M. Simmons, Brown Univ.; K.A. Wear, Food and Drug Admin; Suk
Wang Yoon, Sungkyunkwan Univ.
Computational Acoustics: D.S. Burnett, Naval Surface Warfare Ctr., Panama
City; N.A. Gumerov, Univ. of Maryland; L.L. Thompson, Clemson Univ.
Mathematical Acoustics: R. Martinez, Applied Physical Sciences
Education in Acoustics: B.E. Anderson, Los Alamos National Lab.; V.W.
Sparrow, Pennsylvania State Univ.; P.S. Wilson, Univ. of Texas at Austin
Reviews and Tutorials: W.W.L. Au, Univ. Hawaii
Forum and Technical Notes: N. Xiang, Rensselaer Polytechnic Univ.
Acoustical News: E. Moran, Acoustical Society of America
Standards News, Standards: S. Blaeser, Acoustical Society of America; P.D.
Schomer, Schomer & Assoc., Inc.
Book Reviews: P.L. Marston, Washington State Univ.
Patent Reviews: S.A. Fulop, California State Univ., Fresno; D.L. Rice,
Computalker Consultants (ret.)
ASSOCIATE EDITORS OF JASA EXPRESS LETTERS
Editor: J.F. Lynch, Woods Hole Oceanographic Inst.
General Linear Acoustics: A.J.M. Davis, Univ. California, San Diego; O.A.
Godin, NOAA-Earth System Research Laboratory; S.F. Wu, Wayne State Univ.
Nonlinear Acoustics: M.F. Hamilton, Univ. of Texas at Austin
Aeroacoustics and Atmospheric Sound: V.E. Ostashev, Natl. Oceanic and
Atmospheric Admin.
Underwater Sound: G.B. Deane, Univ. of California, San Diego; D.R. Dowling,
Univ. of Michigan; A.C. Lavery, Woods Hole Oceanographic Inst.; J.F. Lynch,
Woods Hole Oceanographic Inst.; W.L. Siegmann, Rensselaer Polytechnic Institute
Ultrasonics, Quantum Acoustics, and Physical Effects of Sound: P.E.
Barbone, Boston Univ.; T.D. Mast, Univ. of Cincinnati; J.S. Mobley, Univ. of
Mississippi
Transduction: Acoustical Devices for the Generation and Reproduction
of Sound; Acoustical Measurements and Instrumentation: M.D. Sheplak,
Univ. of Florida
Structural Acoustics and Vibration: J.G. McDaniel, Boston Univ.
Noise: S.D. Sommerfeldt, Brigham Young Univ.
Architectural Acoustics: N. Xiang, Rensselaer Polytechnic Inst.
Acoustic Signal Processing: D.H. Chambers, Lawrence Livermore Natl. Lab.;
C.F. Gaumond, Naval Research Lab.
Physiological Acoustics: B.L. Lonsbury-Martin, Loma Linda VA Medical Ctr.
Psychological Acoustics: Q.-J. Fu, House Ear Inst.
Speech Production: A. Lofqvist, Univ. Hospital, Lund, Sweden
Speech Perception: A. Cutler, Univ. of Western Sydney; S. Gordon-Salant, Univ.
of Maryland
Speech Processing and Communication Systems and Speech Perception:
D.D. O’Shaughnessy, INRS-Telecommunications
Music and Musical Instruments: D.M. Campbell, Univ. of Edinburgh;
D. Deutsch, Univ. of California, San Diego; T.R. Moore, Rollins College; T.D.
Rossing, Stanford Univ.
Bioacoustics—Biomedical: C.C. Church, Natl. Ctr. for Physical Acoustics
Bioacoustics—Animal: W.W.L. Au, Univ. Hawaii; C.F. Moss, Univ. of Maryland
Computational Acoustics: D.S. Burnett, Naval Surface Warfare Ctr., Panama City;
L.L. Thompson, Clemson Univ.
CONTENTS
page
Technical Program Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A8
Schedule of Technical Session Starting Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A10
Map of Meeting Rooms at Marriott . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A11
Map of Indianapolis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A12
Calendar—Technical Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A13
Schedule—Other Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A16
Meeting Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A17
Guidelines for Presentations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A23
Dates of Future Meetings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A25
Technical Sessions (1a__), Monday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2073
Technical Sessions (1p__), Monday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2088
Tutorial Session (1eID), Monday Evening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2113
Technical Sessions (2a__), Tuesday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2114
Technical Sessions (2p__), Tuesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2150
Technical Sessions (3a__), Wednesday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2181
Technical Sessions (3p__), Wednesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2218
Plenary Session and Awards Ceremony, Wednesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . 2228
Pioneers of Underwater Acoustics Medal encomium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2229
Silver Medal in Speech Communication encomium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2232
Wallace Clement Sabine Medal encomium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2237
Technical Sessions (4a__), Thursday Morning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2241
Technical Sessions (4p__), Thursday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2270
Technical Sessions (5a__), Friday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2300
Sustaining Members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2344
Application Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2348
Regional Chapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2349
Author Index to Abstracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
Index to Advertisers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2366
A5
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America was founded in 1929 to increase and diffuse the knowledge of acoustics and promote
its practical applications. Any person or corporation interested in acoustics is eligible for membership in this Society. Further
information concerning membership, together with application forms, may be obtained by addressing Elaine Moran, ASA
Office Manager, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300, T: 516-576-2360, F: 631-923-2875; E-mail:
asa@aip.org; Web: http://acousticalsociety.org
Officers 2014-2015

Judy R. Dubno, President
Department of Otolaryngology–Head and Neck Surgery
Medical University of South Carolina
135 Rutledge Avenue, MSC5500
Charleston, SC 29425-5500
(843) 792-7978
dubnojr@musc.edu

Christy K. Holland, President-Elect
University of Cincinnati
ML 0586
231 Albert Sabin Way
Cincinnati, OH 45267-0586
(513) 558-5675
christy.holland@uc.edu

Barbara G. Shinn-Cunningham, Vice President
Cognitive and Neural Systems
Biomedical Engineering
Boston University
677 Beacon Street
Boston, MA 02215
(617) 353-5764
shinn@cns.bu.edu

Lily M. Wang, Vice President-Elect
Durham School of Architectural Engineering and Construction
University of Nebraska-Lincoln
1110 South 67th Street
Omaha, NE 68182-0816
(402) 554-2065
lwang4@unl.edu

David Feit, Treasurer
Acoustical Society of America
1305 Walt Whitman Road, Suite 300
Melville, NY 11747-4300
(516) 576-2360
dfeit@aip.org

Susan E. Fox, Executive Director
Acoustical Society of America
1305 Walt Whitman Road, Suite 300
Melville, NY 11747-4300
(516) 576-2360
sfox@aip.org

Paul D. Schomer, Standards Director
Schomer & Associates Inc.
2117 Robert Drive
Champaign, IL 61821
(217) 359-6602
schomer@schomerandassociates.com

Allan D. Pierce, Editor-in-Chief
Acoustical Society of America
P.O. Box 274
West Barnstable, MA 02668
(508) 362-1200
allanpierce@verizon.net

Members of the Executive Council

Michael R. Bailey
Applied Physics Laboratory
Center for Industrial and Medical Ultrasound
1013 N.E. 40th St.
Seattle, WA 98105
(206) 685-8618
bailey@apl.washington.edu

Ann R. Bradlow
Department of Linguistics
Northwestern University
2016 Sheridan Road
Evanston, IL 60208
(847) 491-8054
abradlow@northwestern.edu

Peter H. Dahl
Applied Physics Laboratory and Department of Mechanical Engineering
University of Washington
1013 N.E. 40th Street
Seattle, WA 98105
(206) 543-2667
dahl@apl.washington.edu

Marcia J. Isakson
Applied Research Laboratories
The University of Texas at Austin
P. O. Box 8029
Austin, TX 78713-8029
(512) 835-3790
misakson@arlut.utexas.edu

Vera A. Khokhlova
Center for Industrial and Medical Ultrasound
Applied Physics Laboratory
University of Washington
1013 N.E. 40th Street
Seattle, WA 98105
(206) 221-6585
vera@apl.washington.edu

James H. Miller
Department of Ocean Engineering
University of Rhode Island
Narragansett Bay Campus
Narragansett, RI 02882
(401) 874-6540
miller@uri.edu

Michael V. Scanlon
U.S. Army Research Laboratory
RDRL-SES-P
2800 Powder Mill Road
Adelphi, MD 20783-1197
(301) 394-3081
michael.v.scanlon2.civ@mail.mil

Christine H. Shadle
Haskins Laboratories
300 George Street, Suite 900
New Haven, CT 06511
(203) 865-6163 x 228
shadle@haskins.yale.edu

Members of the Technical Council

B.G. Shinn-Cunningham, Vice President
L.M. Wang, Vice President-Elect
P.H. Dahl, Past Vice President
A.C. Lavery, Acoustical Oceanography
C.F. Moss, Animal Bioacoustics
K.W. Good, Jr., Architectural Acoustics
N. McDannold, Biomedical Acoustics
R.T. Richards, Engineering Acoustics
A.C.H. Morrison, Musical Acoustics
S.D. Sommerfeldt, Noise
J.R. Gladden, Physical Acoustics
M. Wojtczak, Psychological and Physiological Acoustics
N. Xiang, Signal Processing in Acoustics
C.L. Rogers, Speech Communication
J.E. Phillips, Structural Acoustics and Vibration
D. Tang, Underwater Acoustics

Organizing Committee

K.J. de Jong, P. Davies, General Cochairs
R.F. Port, Technical Program Chair
K.M. Li, T. Lorenzen, Audio/Visual and WiFi
D. Kewley-Port, T. Bent, Food and Beverage
C. Richie, Volunteer Coordination
W.J. Murphy, Technical Tour
U.J. Hansen, Educational Activities
D. Kewley-Port, Special Events
M. Kondaurova, G. Li, M. Hayward, Indianapolis Visitor Information
T. Bent, Student Activities
M.C. Morgan, Meeting Administrator

Subscription Prices, 2014

                                U.S.A.      Air Freight:      Air Freight:   Air Freight:
                                & Poss.     North, Central,   Europe         Mideast, Africa,
                                            & S. America                     Asia & Oceania
ASA Members (on membership)                                   $160.00        $160.00
Institutions (print + online)   $2155.00    $2315.00          $2315.00       $2315.00
Institutions (online only)      $1990.00    $1990.00          $1990.00       $1990.00
The Journal of the Acoustical Society of America (ISSN: 0001-4966) is published monthly by the Acoustical Society of America through the AIP Publishing
LLC. POSTMASTER: Send address changes to The Journal of the Acoustical
Society of America, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Periodicals postage paid at Huntington Station, NY 11746 and additional
mailing offices.
Editions: The Journal of the Acoustical Society of America is published simultaneously in print and online. Journal articles are available online from Volume
1 (1929) to the present. Abstracts of journal articles published by ASA, AIP
Publishing and its Member Societies (and several other publishers) are available
from AIP Publishing’s SPIN database, via AIP Publishing’s Scitation Service
(http://scitation.aip.org).
Back Numbers: All back issues of the Journal are available online. Some,
but not all, print issues are also available. Prices will be supplied upon request
to Elaine Moran, ASA Office Manager, 1305 Walt Whitman Road, Suite 300,
Melville, NY 11747-4300. Telephone: (516) 576-2360; FAX: (631) 923-2875;
E-mail: asa@aip.org.
Subscriptions, renewals, and address changes should be addressed to AIP
Publishing LLC - FMS, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Allow at least six weeks advance notice. For address changes please send
both old and new addresses and, if possible, include a mailing label from a recent
issue.
Claims, Single Copy Replacement and Back Volumes: Missing issue
requests will be honored only if received within six months of publication date
(nine months for Australia and Asia). Single copies of a journal may be ordered
and back volumes are available. Members—contact AIP Publishing Member
Services at (516) 576-2288; (800) 344-6901. Nonmember subscribers—contact
AIP Publishing Subscriber Services at (516) 576-2270; (800) 344-6902; E-mail:
subs@aip.org.
Page Charge and Reprint Billing: Contact: AIP Publishing Publication Page
Charge and Reprints—CFD, 1305 Walt Whitman Road, Suite 300, Melville, NY
11747-4300; (516) 576-2234; (800) 344-6909; E-mail: prc@aip.org.
Document Delivery: Copies of journal articles can be purchased for immediate download at www.asadl.org.
TECHNICAL PROGRAM SUMMARY
*Indicates Special Session
Monday morning
1aAB    Topics in Animal Bioacoustics I
*1aNS   Metamaterials for Noise Control I
*1aPA   Jet Noise Measurements and Analyses I
1aSC    Speech Processing and Technology (Poster Session)
*1aSP   Sampling Methods for Bayesian Signal Processing
*1aUW   Understanding the Target/Waveguide System–Measurement and Modeling I

Monday afternoon
*1pAA   Computer Auralization as an Aid to Acoustically Proper Owner/Architect Design Decisions
*1pAB   Array Localization of Vocalizing Animals
1pBA    Medical Ultrasound
*1pNS   Metamaterials for Noise Control II
*1pPA   Jet Noise Measurements and Analyses II
*1pSCa  Findings and Methods in Ultrasound Speech Articulation Tracking
1pSCb   Issues in Cross Language and Dialect Perception (Poster Session)
*1pUW   Understanding the Target/Waveguide System–Measurement and Modeling II

Monday evening
*1eID   Tutorial Lecture on Musical Acoustics: Science and Performance
Tuesday morning
*2aAA   Architectural Acoustics and Audio I
*2aAB   Mobile Autonomous Platforms for Bioacoustic Sensing
*2aAO   Parameter Estimation in Environments That Include Out-of-Plane Propagation Effects
*2aBA   Quantitative Ultrasound I
*2aED   Undergraduate Research Exposition (Poster Session)
*2aID   Historical Transducers
*2aMU   Piano Acoustics
*2aNSa  New Frontiers in Hearing Protection I
*2aNSb  Launch Vehicle Acoustics I
2aPA    Outdoor Sound Propagation
*2aSAa  Computational Methods in Structural Acoustics and Vibration
*2aSAb  Vehicle Interior Noise
2aSC    Speech Production and Articulation (Poster Session)
2aUW    Signal Processing and Ambient Noise

Tuesday afternoon
*2pAA   Architectural Acoustics and Audio II
2pAB    Topics in Animal Bioacoustics II
2pAO    General Topics in Acoustical Oceanography
*2pBA   Quantitative Ultrasound II
2pEDa   General Topics in Education in Acoustics
*2pEDb  Take's 5
*2pID   Centennial Tribute to Leo Beranek's Contributions in Acoustics
*2pMU   Synchronization Models in Musical Acoustics and Psychology
*2pNSa  New Frontiers in Hearing Protection II
*2pNSb  Launch Vehicle Acoustics II
*2pPA   Demonstrations in Acoustics
*2pSA   Nearfield Acoustical Holography
2pSC    Segments and Suprasegmentals (Poster Session)
2pUW    Propagation and Scattering
Wednesday morning
*3aAA   Design and Performance of Office Workspaces in High Performance Buildings
*3aAB   Predator–Prey Relationships
*3aAO   Education in Acoustical Oceanography and Underwater Acoustics
3aBA    Kidney Stone Lithotripsy
*3aEA   Mechanics of Continuous Media
*3aID   Graduate Studies in Acoustics (Poster Session)
3aMU    Topics in Musical Acoustics
*3aNS   Wind Turbine Noise
*3aPA   Acoustics of Pile Driving: Models, Measurements, and Mitigation
*3aSAa  Vibration Reduction in Air-Handling Systems
3aSAb   General Topics in Structural Acoustics and Vibration
*3aSC   Vowels = Space + Time, and Beyond: A Session in Honor of Diane Kewley-Port
3aSPa   Beamforming and Source Tracking
3aSPb   Spectral Analysis, Source Tracking, and System Identification (Poster Session)
*3aUW   Standardization of Measurement, Modeling, and Terminology of Underwater Sound

Wednesday afternoon
3pAA    Architectural Acoustics Medley
*3pBA   History of High Intensity Focused Ultrasound
*3pED   Acoustics Education Prize Lecture
*3pID   Hot Topics in Acoustics
3pNS    Sonic Boom and Numerical Methods
*3pUW   Shallow Water Reverberation I
Thursday morning
*4aAAa  Room Acoustics Effects on Speech Comprehension and Recall I
*4aAAb  Uses, Measurements, and Advancements in the Use of Diffusion and Scattering Devices
*4aAB   Use of Passive Acoustics for Estimation of Animal Population Density I
*4aBA   Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and Clinical Applications I
4aEA    Acoustic Transduction: Theory and Practice I
*4aPAa  Borehole Acoustic Logging and Micro-Seismics for Hydrocarbon Reservoir Characterization
4aPAb   Topics in Physical Acoustics I
*4aPP   Physiological and Psychological Aspects of Central Auditory Processing Dysfunction I
*4aSCa  Subglottal Resonances in Speech Production and Perception
4aSCb   Learning and Acquisition of Speech (Poster Session)
4aSPa   Imaging and Classification
4aSPb   Beamforming, Spectral Estimation, and Sonar Design
*4aUW   Shallow Water Reverberation II
Thursday afternoon
*4pAAa Acoustic Trick-or-Treat: Eerie Noises, Spooky Speech, and
Creative Masking
*4pAAb Room Acoustics Effects on Speech Comprehension and Recall II
*4pAB
Use of Passive Acoustics for Estimation of Animal Population
Density II
*4pBA
Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue
Effects, and Clinical Applications II
4pEA
Acoustic Transduction: Theory and Practice II
*4pMU Assessing the Quality of Musical Instruments
*4pNS
Virtual Acoustic Simulation
4pPA
Topics in Physical Acoustics II
*4pPP
Physiological and Psychological Aspects of Central Auditory
Processing Dysfunction II
4pSC
Voice (Poster Session)
*4pUW Shallow Water Reverberation III
Friday morning
*5aBA
Cavitation Control and Detection Techniques
*5aED
Hands-On Acoustics: Demonstrations for Indianapolis Area
Students
5aNS
Transportation Noise, Soundscapes, and Related Topics
5aPPa
Psychological and Physiological Acoustics Potpourri (Poster
Session)
5aPPb Perceptual and Physiological Mechanisms, Modeling, and
Assessment
5aSC
Speech Perception and Production in Challenging Conditions
(Poster Session)
*5aUW Acoustics, Ocean Dynamics, and Geology of Canyons
168th Meeting: Acoustical Society of America
SCHEDULE OF STARTING TIMES FOR TECHNICAL SESSIONS AND TECHNICAL COMMITTEE (TC) MEETINGS
[Grid showing, for each meeting room (Indiana A/B, C/D, E, F, G; Lincoln; Marriott 1/2, 3/4, 5, 6, 7/8, 9/10; Santa Fe; Hilbert Theater) and each half-day slot from Monday morning through Friday morning, the starting time of the technical session or TC open meeting held there. Session starting times and rooms are repeated in the Technical Program Calendar below; TC open meeting times appear under Technical Committee Open Meetings.]
[Floor plans of the Indianapolis Marriott Downtown. 1st floor: lobby, front desk, Indiana Ballroom (IN A-G), Lincoln, and the Colorado, Missouri, Florida, Illinois, Texas, Michigan, Utah, and Phoenix rooms. 2nd floor: Marriott Ballroom (MB 1-10), Santa Fe, Denver, Albany, Atlanta, Austin, Boston, Columbus, the Indy Board Room, the business center, and the registration area in the Marriott Foyer.]
TECHNICAL PROGRAM CALENDAR
168th Meeting
Indianapolis, Indiana
27–31 October 2014
MONDAY MORNING

8:25  1aAB  Animal Bioacoustics: Topics in Animal Bioacoustics I. Lincoln
7:55  1aNS  Noise, Physical Acoustics, Structural Acoustics and Vibration, and Engineering Acoustics: Metamaterials for Noise Control I. Marriott 3/4
8:15  1aPA  Physical Acoustics and Noise: Jet Noise Measurements and Analyses I. Indiana C/D
9:30  1aSC  Speech Communication: Speech Processing and Technology (Poster Session). Marriott 5
8:40  1aSP  Signal Processing in Acoustics: Sampling Methods for Bayesian Signal Processing. Indiana G
8:45  1aUW  Underwater Acoustics: Understanding the Target/Waveguide System-Measurement and Modeling I. Indiana F

MONDAY AFTERNOON

1:00  1pAA  Architectural Acoustics: Computer Auralization as an Aid to Acoustically Proper Owner/Architect Design Decisions. Marriott 7/8
1:00  1pAB  Animal Bioacoustics and Signal Processing in Acoustics: Array Localization of Vocalizing Animals. Lincoln
1:15  1pBA  Biomedical Acoustics: Medical Ultrasound. Indiana A/B
12:55  1pNS  Noise and Physical Acoustics: Metamaterials for Noise Control II. Marriott 3/4
1:15  1pPA  Physical Acoustics and Noise: Jet Noise Measurements and Analyses II. Indiana C/D
1:00  1pSCa  Speech Communication and Biomedical Acoustics: Findings and Methods in Ultrasound Speech Articulation Tracking. Marriott 1/2
1:00  1pSCb  Speech Communication: Issues in Cross Language and Dialect Perception (Poster Session). Marriott 5
1:25  1pUW  Underwater Acoustics: Understanding the Target/Waveguide System-Measurement and Modeling II. Indiana F

MONDAY EVENING

7:00  1eID  Interdisciplinary: Tutorial Lecture on Musical Acoustics: Science and Performance. Hilbert Theater

TUESDAY MORNING

7:55  2aAA  Architectural Acoustics and Engineering Acoustics: Architectural Acoustics and Audio I. Marriott 7/8
8:25  2aAB  Animal Bioacoustics, Acoustical Oceanography, and Signal Processing in Acoustics: Mobile Autonomous Platforms for Bioacoustic Sensing. Lincoln
8:25  2aAO  Acoustical Oceanography, Underwater Acoustics, and Signal Processing in Acoustics: Parameter Estimation in Environments that Include Out-of-Plane Propagation Effects. Indiana G
7:55  2aBA  Biomedical Acoustics: Quantitative Ultrasound I. Indiana A/B
9:00  2aED  Education in Acoustics: Undergraduate Research Exposition (Poster Session). Marriott 6
8:00  2aID  Archives and History and Engineering Acoustics: Historical Transducers. Marriott 9/10
9:00  2aMU  Musical Acoustics: Piano Acoustics. Santa Fe
9:25  2aNSa  Noise and Psychological and Physiological Acoustics: New Frontiers in Hearing Protection I. Marriott 3/4
8:15  2aNSb  Noise and Structural Acoustics and Vibration: Launch Vehicle Acoustics I. Indiana E
8:30  2aPA  Physical Acoustics: Outdoor Sound Propagation. Indiana C/D
8:00  2aSAa  Structural Acoustics and Vibration and Noise: Computational Methods in Structural Acoustics and Vibration. Marriott 1/2
10:30  2aSAb  Structural Acoustics and Vibration and Noise: Vehicle Interior Noise. Marriott 1/2
8:00  2aSC  Speech Communication: Speech Production and Articulation (Poster Session). Marriott 5
8:00  2aUW  Underwater Acoustics: Signal Processing and Ambient Noise. Indiana F

TUESDAY AFTERNOON

1:00  2pAA  Architectural Acoustics and Engineering Acoustics: Architectural Acoustics and Audio II. Marriott 7/8
1:25  2pAB  Animal Bioacoustics: Topics in Animal Bioacoustics II. Lincoln
1:45  2pAO  Acoustical Oceanography: General Topics in Acoustical Oceanography. Indiana G
1:30  2pBA  Biomedical Acoustics: Quantitative Ultrasound II. Indiana A/B
2:45  2pEDa  Education in Acoustics: General Topics in Education in Acoustics. Indiana C/D
3:30  2pEDb  Education in Acoustics: Take 5's. Indiana C/D
1:55  2pID  Interdisciplinary: Centennial Tribute to Leo Beranek's Contributions in Acoustics. Indiana E
1:00  2pMU  Musical Acoustics: Synchronization Models in Musical Acoustics and Psychology. Santa Fe
1:25  2pNSa  Noise and Psychological and Physiological Acoustics: New Frontiers in Hearing Protection II. Marriott 3/4
1:00  2pNSb  Noise and Structural Acoustics and Vibration: Launch Vehicle Acoustics II. Marriott 9/10
1:00  2pPA  Physical Acoustics and Education in Acoustics: Demonstrations in Acoustics. Indiana C/D
2:00  2pSA  Structural Acoustics and Vibration, Signal Processing in Acoustics, and Engineering Acoustics: Nearfield Acoustical Holography. Marriott 1/2
1:00  2pSC  Speech Communication: Segments and Suprasegmentals (Poster Session). Marriott 5
1:00  2pUW  Underwater Acoustics: Propagation and Scattering. Indiana F

WEDNESDAY MORNING

8:20  3aAA  Architectural Acoustics and Noise: Design and Performance of Office Workspaces in High Performance Buildings. Marriott 7/8
8:25  3aAB  Animal Bioacoustics: Predator-Prey Relationships. Lincoln
8:00  3aAO  Acoustical Oceanography, Underwater Acoustics, and Education in Acoustics: Education in Acoustical Oceanography and Underwater Acoustics. Indiana E
8:00  3aBA  Biomedical Acoustics: Kidney Stone Lithotripsy. Indiana A/B
8:00  3aEA  Engineering Acoustics and Structural Acoustics and Vibration: Mechanics of Continuous Media. Marriott 9/10
9:00  3aID  Student Council, Education in Acoustics, and Acoustical Oceanography: Graduate Studies in Acoustics (Poster Session). Marriott 6
9:00  3aMU  Musical Acoustics: Topics in Musical Acoustics. Santa Fe
8:45  3aNS  Noise and ASA Committee on Standards: Wind Turbine Noise. Marriott 3/4
8:20  3aPA  Physical Acoustics, Underwater Acoustics, Structural Acoustics and Vibration, and Noise: Acoustics of Pile Driving: Models, Measurements, and Mitigation. Indiana C/D
8:00  3aSAa  Structural Acoustics and Vibration, Architectural Acoustics, and Noise: Vibration Reduction in Air-Handling Systems. Marriott 1/2
10:00  3aSAb  Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration. Marriott 1/2
8:00  3aSC  Speech Communication: Vowels = Space + Time, and Beyond: A Session in Honor of Diane Kewley-Port. Marriott 5
8:30  3aSPa  Signal Processing in Acoustics: Beamforming and Source Tracking. Indiana G
10:15  3aSPb  Signal Processing in Acoustics: Spectral Analysis, Source Tracking, and System Identification (Poster Session). Indiana G
9:00  3aUW  Underwater Acoustics, Acoustical Oceanography, Animal Bioacoustics, and ASA Committee on Standards: Standardization of Measurement, Modeling, and Terminology of Underwater Sound. Indiana F
WEDNESDAY AFTERNOON
1:00  3pAA  Architectural Acoustics: Architectural Acoustics Medley. Marriott 7/8
1:00  3pBA  Biomedical Acoustics: History of High Intensity Focused Ultrasound. Indiana A/B
2:00  3pED  Education in Acoustics: Acoustics Education Prize Lecture. Indiana C/D
1:00  3pID  Interdisciplinary: Hot Topics in Acoustics. Indiana E
1:00  3pNS  Noise: Sonic Boom and Numerical Methods. Marriott 3/4
1:00  3pUW  Underwater Acoustics: Shallow Water Reverberation I. Marriott 9/10
THURSDAY MORNING
8:40  4aAAa  Architectural Acoustics, Speech Communication, and Noise: Room Acoustics Effects on Speech Comprehension and Recall I. Marriott 7/8
10:35  4aAAb  Architectural Acoustics: Uses, Measurements, and Advancements in the Use of Diffusion and Scattering Devices. Santa Fe
8:00  4aAB  Animal Bioacoustics and Acoustical Oceanography: Use of Passive Acoustics for Estimation of Animal Population Density I. Lincoln
7:55  4aBA  Biomedical Acoustics: Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and Clinical Applications I. Indiana A/B
8:30  4aEA  Engineering Acoustics: Acoustic Transduction: Theory and Practice I. Marriott 9/10
8:00  4aPAa  Physical Acoustics, Underwater Acoustics, Signal Processing in Acoustics, Structural Acoustics and Vibration, and Noise: Borehole Acoustic Logging and Micro-Seismics for Hydrocarbon Reservoir Characterization. Indiana C/D
10:30  4aPAb  Physical Acoustics: Topics in Physical Acoustics I. Indiana C/D
8:30  4aPP  Psychological and Physiological Acoustics: Physiological and Psychological Aspects of Central Auditory Processing Dysfunction I. Marriott 1/2
8:00  4aSCa  Speech Communication: Subglottal Resonances in Speech Production and Perception. Santa Fe
8:00  4aSCb  Speech Communication: Learning and Acquisition of Speech (Poster Session). Marriott 5
9:00  4aSPa  Signal Processing in Acoustics: Imaging and Classification. Indiana G
10:15  4aSPb  Signal Processing in Acoustics: Beamforming, Spectral Estimation, and Sonar Design. Indiana G
8:00  4aUW  Underwater Acoustics: Shallow Water Reverberation II. Indiana F

THURSDAY AFTERNOON

1:10  4pAAa  Architectural Acoustics and Speech Communication: Acoustic Trick-or-Treat: Eerie Noises, Spooky Speech, and Creative Masking. Indiana G
1:15  4pAAb  Architectural Acoustics, Speech Communication, and Noise: Room Acoustics Effects on Speech Comprehension and Recall II. Marriott 7/8
1:15  4pAB  Animal Bioacoustics and Acoustical Oceanography: Use of Passive Acoustics for Estimation of Animal Population Density II. Lincoln
1:30  4pBA  Biomedical Acoustics: Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and Clinical Applications II. Indiana A/B
1:30  4pEA  Engineering Acoustics: Acoustic Transduction: Theory and Practice II. Marriott 9/10
1:00  4pMU  Musical Acoustics: Assessing the Quality of Musical Instruments. Santa Fe
1:15  4pNS  Noise: Virtual Acoustic Simulation. Marriott 3/4
1:30  4pPA  Physical Acoustics: Topics in Physical Acoustics II. Indiana C/D
1:30  4pPP  Psychological and Physiological Acoustics: Physiological and Psychological Aspects of Central Auditory Processing Dysfunction II. Marriott 1/2
1:00  4pSC  Speech Communication: Voice (Poster Session). Marriott 5
1:00  4pUW  Underwater Acoustics: Shallow Water Reverberation III. Indiana F

FRIDAY MORNING

8:00  5aBA  Biomedical Acoustics: Cavitation Control and Detection Techniques. Indiana A/B
10:00  5aED  Education in Acoustics: Hands-On Acoustics: Demonstrations for Indianapolis Area Students. Indiana E
9:45  5aNS  Noise: Transportation Noise, Soundscapes, and Related Topics. Marriott 7/8
8:00  5aPPa  Psychological and Physiological Acoustics: Psychological and Physiological Acoustics Potpourri (Poster Session). Marriott 5
10:15  5aPPb  Psychological and Physiological Acoustics: Perceptual and Physiological Mechanisms, Modeling, and Assessment. Marriott 1/2
8:00  5aSC  Speech Communication: Speech Perception and Production in Challenging Conditions (Poster Session). Marriott 5
8:00  5aUW  Underwater Acoustics: Acoustics, Ocean Dynamics, and Geology of Canyons. Indiana F
SCHEDULE OF COMMITTEE MEETINGS AND OTHER EVENTS
COUNCIL AND ADMINISTRATIVE COMMITTEES AND OTHER GROUPS

Mon, 27 Oct, 7:30 a.m.   Executive Council                      Denver
Mon, 27 Oct, 3:30 p.m.   Technical Council                      Denver
Tue, 28 Oct, 7:00 a.m.   ASA Press Editorial Board              Illinois
Tue, 28 Oct, 7:00 a.m.   POMA Editorial Board                   Denver
Tue, 28 Oct, 7:30 a.m.   Panel on Public Policy                 Michigan
Tue, 28 Oct, 7:30 a.m.   Translation of Chinese Journals        Indy Boardroom
Tue, 28 Oct, 11:45 a.m.  Editorial Board                        Circle City Bar & Grille
Tue, 28 Oct, 12:00 noon  Activity Kit                           Illinois
Tue, 28 Oct, 12:00 noon  Prizes & Special Fellowships           Utah
Tue, 28 Oct, 12:00 noon  Student Council                        Atlanta
Tue, 28 Oct, 1:30 p.m.   Meetings                               Denver
Tue, 28 Oct, 4:00 p.m.   Books+                                 Illinois
Tue, 28 Oct, 4:00 p.m.   Education in Acoustics                 Indiana C/D
Tue, 28 Oct, 4:30 p.m.   Newman Fund Advisory                   Utah
Tue, 28 Oct, 5:00 p.m.   Women in Acoustics                     Denver
Wed, 29 Oct, 6:45 a.m.   International Research & Education     Michigan
Wed, 29 Oct, 7:00 a.m.   College of Fellows                     Florida
Wed, 29 Oct, 7:00 a.m.   Publication Policy                     Illinois
Wed, 29 Oct, 7:00 a.m.   Regional Chapters                      Denver
Wed, 29 Oct, 11:00 a.m.  Medals and Awards                      Denver
Wed, 29 Oct, 11:15 a.m.  Public Relations                       Michigan
Wed, 29 Oct, 12:00 noon  Membership                             Florida
Wed, 29 Oct, 1:30 p.m.   AS Foundation Board                    Illinois
Wed, 29 Oct, 5:30 p.m.   Health Care Acoustics                  Utah
Thu, 30 Oct, 7:00 a.m.   Archives & History                     Denver
Thu, 30 Oct, 7:00 a.m.   Tutorials                              Florida
Thu, 30 Oct, 7:30 a.m.   Investment                             Utah
Thu, 30 Oct, 11:00 a.m.  Acoustics Today Advisory               Illinois
Thu, 30 Oct, 2:00 p.m.   Publishing Services                    Florida
Thu, 30 Oct, 4:30 p.m.   External Affairs                       Michigan
Thu, 30 Oct, 4:30 p.m.   Internal Affairs                       Illinois
Fri, 31 Oct, 7:00 a.m.   Technical Council                      Denver
Fri, 31 Oct, 11:00 a.m.  Executive Council                      Denver
TECHNICAL COMMITTEE OPEN MEETINGS

Tue, 28 Oct, 4:30 p.m.   Engineering Acoustics                       Santa Fe
Tue, 28 Oct, 8:00 p.m.   Acoustical Oceanography                     Indiana G
Tue, 28 Oct, 8:00 p.m.   Architectural Acoustics                     Marriott 7/8
Tue, 28 Oct, 8:00 p.m.   Physical Acoustics                          Indiana C/D
Tue, 28 Oct, 8:00 p.m.   Speech Communication                        Marriott 3/4
Tue, 28 Oct, 8:00 p.m.   Structural Acoustics and Vibration          Marriott 1/2
Wed, 29 Oct, 7:30 p.m.   Biomedical Acoustics                        Indiana A/B
Thu, 30 Oct, 7:30 p.m.   Animal Bioacoustics                         Lincoln
Thu, 30 Oct, 7:30 p.m.   Musical Acoustics                           Santa Fe
Thu, 30 Oct, 7:30 p.m.   Noise                                       Marriott 3/4
Thu, 30 Oct, 7:30 p.m.   Psychological and Physiological Acoustics   Marriott 1/2
Thu, 30 Oct, 7:30 p.m.   Signal Processing in Acoustics              Indiana G
Thu, 30 Oct, 7:30 p.m.   Underwater Acoustics                        Indiana F
STANDARDS COMMITTEES AND WORKING GROUPS

Mon, 27 Oct, 1:00 p.m.   S12/WG11-Hearing Protectors     Atlanta
Mon, 27 Oct, 7:00 p.m.   ASACOS Steering                 Atlanta
Tue, 28 Oct, 7:00 a.m.   S1/WG4-Sound Pressure Levels    Atlanta
Tue, 28 Oct, 7:00 a.m.   ASACOS                          Boston/Austin
Tue, 28 Oct, 4:00 p.m.   S1/WG20-Ground Impedance        Atlanta
MEETING SERVICES, SPECIAL EVENTS, SOCIAL EVENTS

Registration                    Mon-Thu, 27-30 Oct, 7:30 a.m.-5:00 p.m.; Fri, 31 Oct, 7:30 a.m.-12:00 noon    Marriott Foyer
E-mail/Internet Café            Mon-Thu, 27-30 Oct, 7:00 a.m.-5:00 p.m.; Fri, 31 Oct, 7:00 a.m.-12:00 noon    Marriott 6
A/V Preview                     Mon-Thu, 27-30 Oct, 7:00 a.m.-5:00 p.m.; Fri, 31 Oct, 7:00 a.m.-12:00 noon    Albany
Accompanying Persons            Mon-Thu, 27-30 Oct, 8:00 a.m.-10:00 a.m.                                      Texas
Coffee Break                    Mon-Fri, 27-31 Oct, 9:40 a.m.-10:40 a.m.                                      Marriott 6
                                Tue-Thu, 28-30 Oct                                                            Indiana Ballroom Foyer
Short Course                    Sun, 26 Oct, 1:00 p.m.-5:00 p.m.; Mon, 27 Oct, 7:30 a.m.-12:30 p.m.           Santa Fe
Gallery of Acoustics            Mon-Thu, 27-30 Oct, 9:00 a.m.-5:00 p.m.                                       Marriott 6
Resume Help Desk                Tue-Thu, 28-30 Oct, 12:00 noon-1:00 p.m.                                      Marriott Foyer
Student Orientation             Mon, 27 Oct, 5:00 p.m.-5:30 p.m.                                              Marriott 9/10
Student Meet and Greet          Mon, 27 Oct, 5:30 p.m.-6:45 p.m.                                              Marriott 6
Pre-Tutorial Tour of
  Hilbert Circle Theater        Mon, 27 Oct, 6:00 p.m.-7:00 p.m.                                              Hilbert Circle Theater
Tutorial Lecture                Mon, 27 Oct, 7:00 p.m.-9:00 p.m.                                              Hilbert Circle Theater
Tour: Center for the
  Performing Arts               Tue, 28 Oct, 10:00 a.m.-12:00 noon                                            Missouri Street Entrance
Social at Eiteljorg Museum      Tue, 28 Oct, 6:00 p.m.-9:00 p.m.                                              Eiteljorg Museum
Women in Acoustics Luncheon     Wed, 29 Oct, 11:30 a.m.-1:30 p.m.                                             Circle City Bar and Grille
Annual Membership Meeting       Wed, 29 Oct, 3:30 p.m.                                                        Marriott 5
Plenary Session and
  Awards Ceremony               Wed, 29 Oct, 3:30 p.m.-4:30 p.m.                                              Marriott 5
Student Reception               Wed, 29 Oct, 6:45 p.m.-8:15 p.m.                                              Indiana E
ASA Jam                         Wed, 29 Oct, 8:00 p.m.-12:00 midnight                                         Marriott 6
Society Luncheon and Lecture    Thu, 30 Oct, 12:00 noon-2:00 p.m.                                             Indiana E
Tour: 3M Acoustics Facilities   Thu, 30 Oct, 3:00 p.m.-6:00 p.m.                                              Missouri Street Entrance
Social                          Thu, 30 Oct, 6:00 p.m.-7:30 p.m.                                              Marriott 5/6
168th Meeting of the Acoustical Society of America
The 168th meeting of the Acoustical Society of America will
be held Monday through Friday, 27–31 October 2014 at the
Indianapolis Marriott Downtown Hotel, Indianapolis, Indiana,
USA.
SECTION HEADINGS
1. HOTEL INFORMATION
2. TRANSPORTATION AND TRAVEL DIRECTIONS
3. STUDENT TRANSPORTATION SUBSIDIES
4. MESSAGES FOR ATTENDEES
5. REGISTRATION
6. ASSISTIVE LISTENING DEVICES
7. TECHNICAL SESSIONS
8. TECHNICAL SESSION DESIGNATIONS
9. HOT TOPICS SESSION
10. ROSSING PRIZE IN ACOUSTICS EDUCATION AND
ACOUSTICS EDUCATION PRIZE LECTURE
11. TUTORIAL LECTURE
12. SHORT COURSE
13. UNDERGRADUATE RESEARCH POSTER
EXPOSITION
14. RESUME DESK
15. TECHNICAL COMMITTEE OPEN MEETINGS
16. TECHNICAL TOURS
17. GALLERY OF ACOUSTICS
18. ANNUAL MEMBERSHIP MEETING
19. PLENARY SESSION AND AWARDS CEREMONY
20. ANSI STANDARDS COMMITTEES
21. COFFEE BREAKS
22. A/V PREVIEW ROOM
23. PROCEEDINGS OF MEETINGS ON ACOUSTICS
24. E-MAIL ACCESS, INTERNET CAFÉ AND BREAK
ROOM
25. SOCIALS
26. SOCIETY LUNCHEON AND LECTURE
27. STUDENTS MEET MEMBERS FOR LUNCH
28. STUDENT EVENTS: NEW STUDENT ORIENTATION, MEET AND GREET, STUDENT RECEPTION
29. WOMEN IN ACOUSTICS LUNCHEON
30. JAM SESSION
31. ACCOMPANYING PERSONS PROGRAM
32. WEATHER
33. TECHNICAL PROGRAM ORGANIZING COMMITTEE
34. MEETING ORGANIZING COMMITTEE
35. PHOTOGRAPHING AND RECORDING
36. ABSTRACT ERRATA
37. GUIDELINES FOR ORAL PRESENTATIONS
38. SUGGESTIONS FOR EFFECTIVE POSTER PRESENTATIONS
39. GUIDELINES FOR USE OF COMPUTER PROJECTION
40. DATES OF FUTURE ASA MEETINGS
1. HOTEL INFORMATION
The Indianapolis Marriott Downtown Hotel is the
headquarters hotel where all meeting events will be held.
Note that there are three Marriott hotels in Indianapolis,
so please specify the Downtown as your destination when
traveling.
The cut-off date for reserving rooms at special rates has
passed. Please contact the Indianapolis Marriott Downtown
Hotel for reservation information: 350 West Maryland Street,
Indianapolis, IN 46225, Tel: 317-822-3500.
2. TRANSPORTATION AND TRAVEL DIRECTIONS
Indianapolis is served by many major airlines through
Indianapolis International Airport (IND). Information is
available at www.indianapolisairport.com. The airport
terminal consists of one centralized check-in area with gates
on two concourses, A and B, which are connected via walkways
to each other as well as to the check-in and reception areas
of the airport. You can easily walk from the terminal via an
elevated walkway to the car rental desks, which are in the
Ground Transportation Center. The limousine and ground
transportation information desks are also located in the same area.
TAXI. Taxis depart from just outside the baggage claim area on
the ground floor of the terminal. There is a minimum charge of
USD $15 for all taxis, whatever the distance travelled. Typical
costs to downtown Indianapolis are USD $15 to USD $20.
Driving time to reach the Downtown Marriott is about 30
minutes.
GO GREEN LINE AIRPORT SHUTTLE. The shuttle leaves the
airport on the hour and the half hour from Zone #7 on the
road just outside the Ground Transportation Center. The cost
is USD $10 (drivers accept debit/credit cards only), and the ride
takes about 36 minutes to reach the Marriott complex, which
includes the downtown Marriott (as well as SpringHill Suites,
the JW Marriott, Courtyard Marriott, and Fairfield Inn). You
can book online at goexpresstravel.com.
BUS SERVICE TO AND FROM THE AIRPORT. IndyGo’s Route 8
(www.indygo.net/maps-schedules/airport-service) provides
non-express, fixed-route service from the airport to downtown via stops along Washington Street. Cost is USD $1.75
per ride. For further information and route maps visit http://
www.indygo.net/maps-schedules/airport-service. The buses
stop close to all the major downtown hotels. The stop at West
and Washington is just northwest of the hotel. Pick-up stops
are slightly different but still nearby. The stop at the airport is
at Zone #6 on the road just outside the Ground Transportation
Center.
SHARED-RIDE AND PERSONAL LUXURY LIMOUSINE SERVICES. These
transportation services are available. Information desks are
located in the Ground Transportation Center. A list of limousine companies can be found at www.indianapolisairport.com.
RENTAL CAR. Renting a car is not recommended unless you are
planning trips out of town. Almost everything you need should
be within walking distance of the hotel, and there are many
nice restaurants, museums, and shops reasonably close to the
hotel. If you do need a rental car, the desks are located in the
Ground Transportation Center on the 1st floor (ground level)
of the parking garage. Alamo, Avis, Budget, Dollar, Enterprise, Hertz, National, and Thrifty all have desks at the airport
and ACE has an off-airport location with a shuttle service to
and from the airport; pick-up is just outside the Ground Transportation Center.
Amtrak and Greyhound both serve Indianapolis and
the train and bus stations are within walking distance of the
conference hotel. However, trains do not run very often:
there is one train a day from Chicago to Indianapolis, versus
seven Greyhound buses a day on the same route. The
Amtrak station is at 350 S. Illinois Street, a 10-minute walk
(0.5 miles) from the Marriott and the Greyhound Station is
next to the Amtrak station at 154 W. South St. See www.
greyhound.com and tickets.amtrak.com for more information.
3. STUDENT TRANSPORTATION SUBSIDIES
To encourage student participation, limited funds are
available to partially defray the travel expenses of
students attending Acoustical Society meetings. Instructions
for applying for travel subsidies are given in the Call for
Papers which can be found online at http://acousticalsociety.
org. The deadline for the present meeting has passed but this
information may be useful in the future.
4. MESSAGES FOR ATTENDEES
Messages for attendees may be left by calling the
Indianapolis Marriott Downtown Hotel, 317-822-3500, and
asking for the ASA Registration Desk during the meeting,
where a message board will be located. This board may also
be used by attendees who wish to contact one another.
5. REGISTRATION
Registration is required for all attendees and accompanying
persons. Registration badges must be worn in order to
participate in technical sessions and other meeting activities.
Registration will open on Monday, 27 October, at 7:30 a.m.
in the Marriott Ballroom Foyer on the second floor (see floor
plan on page A11).
Checks or travelers checks in U.S. funds drawn on U.S.
banks and Visa, MasterCard and American Express credit
cards will be accepted for payment of registration. Meeting
attendees who have pre-registered may pick up their badges
and registration materials at the pre-registration desk.
The registration fees (in USD) are $545 for members of
the Acoustical Society of America, $645 for non-members,
$150 for Emeritus members (Emeritus status pre-approved
by ASA), $275 for ASA Early Career members (for ASA
members within three years of their most recent degrees
– proof of date of degree required), $90 for ASA Student
members, $130 for students who are not members of ASA,
$115 for Undergraduate Students, and $150 for accompanying
persons.
One-day registration is available at $275 for members and
$325 for nonmembers (one-day registration covers attendance
on a single day, whether to present a paper, attend
sessions, or both). A nonmember who pays the $645 nonmember
registration fee and simultaneously applies for Associate
Membership in the Acoustical Society of America will be
given a $50 discount on their 2015 dues payment.
Invited speakers who are members of the Acoustical
Society of America are expected to pay the registration fee, but
nonmember invited speakers who participate in the meeting
only on the day of their presentation may register without
charge. The registration fee for nonmember invited speakers
who wish to participate for more than one day is $110 and
includes a one-year Associate Membership in the ASA upon
completion of an application form.
Special note to students who pre-registered online: You
will also be required to show your student ID card when
picking up your registration materials at the meeting.
6. ASSISTIVE LISTENING DEVICES
The ASA has purchased assistive listening devices (ALDs)
for the benefit of meeting attendees who need them at
technical sessions. Any attendee who will require an assistive
listening device should advise the Society in advance of the
meeting by writing to: Acoustical Society of America, 1305
Walt Whitman Road, Suite 300, Melville, NY 11747-4300;
asa@aip.org
7. TECHNICAL SESSIONS
The technical program includes 92 sessions with 948 papers
scheduled for presentation during the meeting.
A floor plan of the Marriott Hotel appears on page A11.
Session Chairs have been instructed to adhere strictly to the
printed time schedule, both to be fair to all speakers and to
permit attendees to schedule moving from one session to
another to hear specific papers. If an author is not present to
deliver a lecture-style paper, the Session Chairs have been
instructed either to call for additional discussion of papers
already given or to declare a short recess so that subsequent
papers are not given ahead of the designated times.
Several sessions are scheduled in poster format, with the
display times indicated in the program schedule.
8. TECHNICAL SESSION DESIGNATIONS
The first character is a number indicating the day the session
will be held, as follows:
1-Monday, 27 October
2-Tuesday, 28 October
3-Wednesday, 29 October
4-Thursday, 30 October
5-Friday, 31 October
The second character is a lower case “a” for a.m., “p” for
p.m., or “e” for evening corresponding to the time of day the
session will take place. The third and fourth characters are
capital letters indicating the primary Technical Committee
that organized the session using the following abbreviations
or codes:
AA Architectural Acoustics
AB Animal Bioacoustics
AO Acoustical Oceanography
BA Biomedical Acoustics
EA Engineering Acoustics
ED Education in Acoustics
ID Interdisciplinary
MU Musical Acoustics
NS Noise
PA Physical Acoustics
PP Psychological and Physiological Acoustics
SA Structural Acoustics and Vibration
SC Speech Communication
SP Signal Processing in Acoustics
UW Underwater Acoustics
In sessions where the same group is the primary organizer
of more than one session scheduled in the same morning or
afternoon, a fifth character, either lower-case "a" or "b," is
used to distinguish the sessions. Each paper within a session is
identified by a paper number following the session-designating
characters, in conventional manner. As hypothetical examples:
paper 2pEA3 would be the third paper in a session on Tuesday
afternoon organized by the Engineering Acoustics Technical
Committee; 3pSAb5 would be the fifth paper in the second
of two sessions on Wednesday afternoon sponsored by the
Structural Acoustics and Vibration Technical Committee.
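As a worked illustration of the scheme above, the short Python sketch below splits a designation such as 3pSAb5 into its parts. The helper name parse_designation and the dictionaries are our own hypothetical constructions for illustration, not anything published by the Society; the committee abbreviations mirror the list printed above.

```python
import re

# Technical Committee abbreviations from the list above.
TC_NAMES = {
    "AA": "Architectural Acoustics", "AB": "Animal Bioacoustics",
    "AO": "Acoustical Oceanography", "BA": "Biomedical Acoustics",
    "EA": "Engineering Acoustics", "ED": "Education in Acoustics",
    "ID": "Interdisciplinary", "MU": "Musical Acoustics", "NS": "Noise",
    "PA": "Physical Acoustics",
    "PP": "Psychological and Physiological Acoustics",
    "SA": "Structural Acoustics and Vibration",
    "SC": "Speech Communication", "SP": "Signal Processing in Acoustics",
    "UW": "Underwater Acoustics",
}

DAYS = {"1": "Monday", "2": "Tuesday", "3": "Wednesday",
        "4": "Thursday", "5": "Friday"}
PERIODS = {"a": "morning", "p": "afternoon", "e": "evening"}

# day digit, period letter, TC code, optional a/b suffix, optional paper number
PATTERN = re.compile(r"^([1-5])([ape])([A-Z]{2})([ab]?)(\d*)$")

def parse_designation(code):
    """Split a session or paper designation such as '3pSAb5' into its parts."""
    m = PATTERN.match(code)
    if m is None:
        raise ValueError("not a valid session designation: " + code)
    day, period, tc, suffix, paper = m.groups()
    return {
        "day": DAYS[day],
        "period": PERIODS[period],
        "committee": TC_NAMES[tc],
        "session": suffix or None,
        "paper": int(paper) if paper else None,
    }
```

For example, parse_designation("3pSAb5") identifies the fifth paper in the second Wednesday-afternoon session of the Structural Acoustics and Vibration Technical Committee, matching the hypothetical example in the text.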
Note that technical sessions are listed both in the calendar
and the body of the program in the numerical and alphabetical
order of the session designations rather than the order of their
starting times. For example, session 3aAA would be listed
ahead of session 3aAO even if the latter session began earlier
in the same morning.
9. HOT TOPICS SESSION
Hot Topics session 3pID will be held on Wednesday, 29
October, at 1:00 p.m. in Indiana E. Papers will be presented on
current topics in the fields of Education in Acoustics, Signal
Processing in Acoustics, and Acoustical Oceanography.
10. ROSSING PRIZE IN ACOUSTICS EDUCATION
AND ACOUSTICS EDUCATION PRIZE LECTURE
The 2014 Rossing Prize in Acoustics Education will be
awarded to Colin Hansen, University of Adelaide, at the
Plenary Session on Wednesday, 29 October. Colin Hansen
will present the Acoustics Education Prize Lecture titled
“Educating mechanical engineers in the art of noise control”
on Wednesday, 29 October, at 2:00 p.m. in Session 3pED in
Indiana C/D.
11. TUTORIAL LECTURE: MUSICAL ACOUSTICS:
SCIENCE AND PERFORMANCE
A tutorial presentation on “Musical Acoustics: Science and
Performance” will be given by Professor Uwe J. Hansen of
Indiana State University, and the New World Youth Symphony,
directed by Susan Kitterman, on Monday, 27 October at 7:00
p.m. in the Hilbert Circle Theater.
The Tutorial Concert will be preceded by a tour of Hilbert
Circle Theater, home of the Indianapolis Symphony. Originally
a movie house, Hilbert Circle Theater underwent major
renovations to make it suitable as a concert hall. Since the last
ASA meeting in Indianapolis in 1996, the concert hall has
undergone additional major remodeling, mainly in the stage
area, but also in the hall itself. The tour will begin at 6:00 p.m.
Hilbert Circle Theater is well within easy walking distance
of the hotel (allow 15 minutes to get there); however, in the
event of inclement weather, and for those with additional
needs, limited bus transportation will be available (from
5:30 p.m. onwards).
Lecture notes will be available at the meeting in limited
supply; only preregistrants will be guaranteed receipt of a set
of notes.
All students (K–graduate school) will be admitted free of
charge. General admission, for both the general public and
ASA members, is USD $20.00. ASA members who include
attendance at this tutorial concert in their pre-registration by
22 September pay the reduced fee of USD $15.00.
12. SHORT COURSE ON ELECTROACOUSTIC
TRANSDUCERS
A short course on Electroacoustic Transducers:
Fundamentals and Applications will be given in two parts:
Sunday, 26 October, from 1:00 p.m. to 5:00 p.m. and Monday,
27 October, from 7:30 a.m. to 12:30 p.m. in the Santa Fe Room.
The objectives are (1) to introduce the physical principles,
basic performance, and system design aspects required for
effective application of receiving and transmitting transducers
and (2) to present common problems and potential solutions.
The instructor is Thomas Gabrielson, a Senior Scientist and
Professor of Acoustics at Penn State University, who previously
worked in underwater-acoustic transducer design, modeling,
and measurement for 22 years at the Naval Air Warfare Center
in Warminster, PA.
The registration fee is USD$300.00 (USD$125 for
students) and covers attendance, instructional materials and
coffee breaks. Onsite registration at the meeting will be on a
space-available basis.
13. UNDERGRADUATE RESEARCH POSTER
EXPOSITION
The Undergraduate Research Exposition will be held
Tuesday morning, 28 October, 9:00 a.m. to 11:00 a.m. in
session 2aED in Marriott 6. The 2014 Undergraduate Research
Exposition is a forum for undergraduate students to present
their research pertaining to any area of acoustics and can also
include overview papers on undergraduate research programs,
designed to inspire and foster growth of undergraduate
research throughout the Society. It is intended to encourage
undergraduates to express their knowledge and interest in
acoustics and foster their participation in the Society. Four
awards, up to $500 each, will be made to help undergraduates
with travel costs associated with attending the meeting and
presenting a poster.
14. RESUME HELP DESK
Are you interested in applying for graduate school, a
postdoctoral opportunity, a research scientist position, a
faculty opening, or other position involving acoustics? If
you are, please stop by the ASA Resume Help Desk in the
Marriott Ballroom Foyer near the registration desk. Members
of the ASA experienced in hiring will be available to look
at your CV, cover letter, and research & teaching statements
to provide tips and suggestions to help you most effectively
present yourself in today’s competitive job market. The ASA
Resume Help Desk will be staffed on Tuesday, Wednesday,
and Thursday during the lunch hour for walk-up meetings.
Appointments during these three lunch hours will be available
via a sign-up sheet, too.
15. TECHNICAL COMMITTEE OPEN MEETINGS
Technical Committees will hold open meetings on Tuesday,
Wednesday, and Thursday at the Indianapolis Marriott
Downtown. The meetings on Tuesday and Thursday will be
held in the evenings after the socials, except Engineering
Acoustics which will meet at 4:30 p.m. on Tuesday. The
schedule and rooms for each Committee meeting are given
on page A16.
These are working, collegial meetings. Much of the work
of the Society is accomplished by actions that originate and
are taken in these meetings, including proposals for special
sessions, workshops, and technical initiatives. All meeting
participants are cordially invited to attend these meetings and
to participate actively in the discussions.
16. TECHNICAL TOURS
Note: Tour buses leave from the Marriott’s Missouri Street
Exit.
Monday, 27 October, 6:00 p.m.-8:30 p.m. Tour and
Tutorial Lecture at the Hilbert Circle Theater, 45 Monument
Circle, Indianapolis. Tour fees: USD $15 preregistration
and USD $20 on-site for non-students; no fee for students.
Prior to becoming the home of the Indianapolis Symphony,
Hilbert Circle Theater was a movie house. It underwent major
revisions to make it suitable as a concert hall. Tour starts at
6:00 p.m. at the theater and tutorial presentation starts at 7:00
p.m. The Theater is half a mile from the hotel (about a
15-minute walk), so leave at 5:45 p.m. at the latest. See
the Tutorial Lecture section above for full details.
Tuesday, 28 October: 10:00 a.m.-12:00 noon. Tour of the
Center for the Performing Arts, 355 City Center Drive,
Carmel. Tour limited to 30 participants. Tour fee: USD $25.
The Center for the Performing Arts houses the Palladium (a
1,600-seat concert hall), the Tarkington Theater (a 500-seat
proscenium-stage theater), and the Studio Theater (a small,
flexible black-box space).
This is a recently completed facility north of Indianapolis.
The Palladium is a space that rivals the world’s great concert
halls. David M. Schwarz Architects, a Washington, DC based
architectural firm, drew inspiration for the Palladium from the
famous Villa Capra “La Rotonda” (or Villa Rotonda) built in
1566 in Italy and designed by Italian Renaissance architect
Andrea Palladio (1508–1580). For more information about the
Center visit www.thecenterfortheperformingarts.org.
Tuesday, 28 October: 3:00 p.m.-6:00 p.m. Indiana
University School of Medicine, 699 Riley Hospital Drive,
Indianapolis. Tour limited to 30 participants. Tour fee:
USD $25. The Department of Otolaryngology-Head and
Neck Surgery was organized as an independent department
within the Indiana School of Medicine in 1909 by John
Barnhill, M.D., an internationally recognized head and neck
surgeon and anatomist. Since then the specialty has undergone
tremendous expansion in managing disorders of the ear,
nose, throat, head and neck. The DeVault Otologic Research
Laboratory is the primary behavioral research venue for the
Department. Occupying approximately 3000 square feet on
two floors of the research wing of the James Whitcomb Riley
Hospital for Children, the laboratory is named for its principal
early benefactor, Dr. Virgil T. DeVault (1901-2000), a native
Hoosier and alumnus of Indiana University. In the laboratory
researchers examine the short-term and long-term effects of
cochlear implantation and/or therapeutic amplification in deaf
and hard-of-hearing infants, children, and adults, as well as
the factors underlying variability in behavioral outcomes of
cochlear implantation and/or therapeutic amplification.
Thursday, 30 October: 3:00 p.m.-6:00 p.m. Tour of 3M
Acoustics Facilities, 7911 Zionsville Road, Indianapolis.
Tour limited to 30 participants. Tour fee: USD $25
Elliott Berger and Steve Sorenson will give tours of 3M’s
E•A•RCAL hearing protection laboratory and Acoustic
Technology Center (ATC) laboratory for noise control research
and application. The E•A•RCAL facility consists of a NVLAP
accredited 113-m3 reverberation chamber instrumented for
real-ear attenuation testing, an 18-m3 electroacoustic sound
lab supporting high-level tests up to 120 dB SPL, and a
300-m3 hemi-anechoic facility used for impulse testing via a shock
tube that generates blasts up to 168 dB SPL for measuring
the level-dependent performance of hearing protectors. The
ATC includes a 900-m3 hemi-anechoic chamber with an
inbuilt chassis dynamometer ideal for testing heavy trucks
under real-world load conditions, a smaller hemi-anechoic
chamber for product sound power testing, and two reverberation
chambers for a wide variety of sound transmission loss and
sound absorption testing and development. Note that no photographs
are allowed to be taken on this tour. Conference attendees who
work for 3M/Aearo/E-A-R competitors may not be allowed to
participate in the tour – the registration fee will be refunded in
full should the request to participate (through pre-registration)
not be approved.
Start times are when the bus leaves the hotel, so plan on
being there ahead of time.
On-site registration will be on a space-available basis.
17. GALLERY OF ACOUSTICS
The Technical Committee on Signal Processing in
Acoustics will sponsor the 15th Gallery of Acoustics at the
Acoustical Society of America meeting in Indianapolis. Its
purpose is to enhance ASA meetings by providing a setting
for researchers to display their work to all meeting attendees
in a forum emphasizing the diverse, interdisciplinary, and
artistic nature of acoustics. The Gallery of Acoustics provides
a means by which we can all share and appreciate the natural
beauty, aesthetic, and artistic appeal of acoustic phenomena:
This is a forum where science meets art.
The Gallery will be held in the Marriott Ballroom 6,
Monday through Thursday, 27-30 October, from 9:00 a.m. to
5:00 p.m.
18. ANNUAL MEMBERSHIP MEETING
The Annual Membership Meeting of the Acoustical Society
of America will be held at 3:30 p.m. on Wednesday, 29 October
2014, in Marriott 5 at the Indianapolis Downtown
Marriott Hotel, 350 West Maryland Street, Indianapolis, IN
46225.
19. PLENARY SESSION AND AWARDS CEREMONY
A plenary session will be held Wednesday, 29 October, at
3:30 p.m. in Marriott 5.
The Rossing Prize in Acoustics Education will be presented
to Colin Hansen. The Pioneers of Underwater Acoustics
Medal will be presented to Michael B. Porter, the Silver
Medal in Speech Communication will be presented to Sheila
E. Blumstein and the Wallace Clement Sabine Medal will
be presented to Ning Xiang. Certificates will be presented to
Fellows elected at the Providence meeting of the Society. See
page 2228 for a list of fellows.
20. ANSI STANDARDS COMMITTEES
Meetings of ANSI Accredited Standards Committees will
not be held at the Indianapolis meeting.
Meetings of selected advisory working groups are often
held in conjunction with Society meetings and are listed in the
Schedule of Committee Meetings and Other Events on page
A16 or on the standards bulletin board in the registration area,
e.g., S12/WG18-Room Criteria.
People interested in attending and in becoming involved in
working group activities should contact the ASA Standards
Manager for further information about these groups, or about
the ASA Standards Program in general, at the following
address: Susan Blaeser, ASA Standards Manager, Standards
Secretariat, Acoustical Society of America, 1305 Walt
Whitman Road, Suite 300, Melville, NY 11747-4300; T: 631-390-0215; F: 631-923-2875; E: asastds@aip.org
21. COFFEE BREAKS
Morning coffee breaks will be held each day from 9:40 a.m.
to 10:40 a.m. in Marriott 6.
22. A/V PREVIEW ROOM
The Albany Room on the second floor will be set up as
an A/V preview room for authors’ convenience, and will be
available on Monday through Thursday from 7:00 a.m. to 5:00
p.m. and Friday from 7:00 a.m. to 12:00 noon.
23. PROCEEDINGS OF MEETINGS ON ACOUSTICS
(POMA)
The Indianapolis meeting will have a published proceedings,
and submission is optional. The proceedings will be a separate
volume of the online journal, “Proceedings of Meetings on
Acoustics” (POMA). This is an open access journal, so that its
articles are available in pdf format without charge to anyone
in the world for downloading. Authors who are scheduled
to present papers at the meeting are encouraged to prepare a
suitable version in pdf format that will appear in POMA. The
format requirements for POMA are somewhat more stringent
than for posting on the ASA Online Meetings Papers Site, but
the two versions could be the same. The posting at the Online
Meetings Papers site, however, is not archival, and posted
papers will be taken down six months after the meeting. The
POMA online site for submission of papers from the meeting
will be opened about one month after authors are notified
that their papers have been accepted for presentation. It is not
necessary to wait until after the meeting to submit one’s paper
to POMA. Further information regarding POMA can be found
at the site http://asadl.org/poma/for_authors_poma. Published
papers from previous meetings can be seen at the site http://
asadl.org/poma.
24. E-MAIL ACCESS, INTERNET CAFÉ, AND BREAK
ROOM
Computers providing e-mail access will be available 7:00
a.m. to 5:00 p.m., Monday to Thursday and 7:00 a.m. to 12:00
noon on Friday in Marriott 6.
A unique feature of this ASA meeting is that a ballroom
located directly opposite the registration area will be dedicated
as a central gathering area for discussion, wi-fi, coffee breaks,
the Gallery of Acoustics, and more. Join your colleagues in the
Break Room every day to discuss the latest ASA topics and
news.
Wi-Fi will be available in all ASA meeting rooms and spaces.
25. SOCIALS
The Eiteljorg Museum of American Indians and Western
Art will be the site for the social on Tuesday, October 28, from
6:00 p.m. to 9:00 p.m. Galleries will be open for viewing art
that promotes understanding of the history and cultures of
North American people, including their contemporary Native
art collection that has been ranked among the world’s best.
The collections are housed in a striking building, located
along the White River Canal within easy walking distance
(about 6 minutes) of the Indianapolis Marriott Downtown. For
those who prefer not to walk, shuttle service will be available
throughout the evening to and from the Missouri Street exit
of the hotel. In keeping with the Museum, the reception will
feature a delectable array of food selections having a slightly
Southwestern flair.
A Halloween Social for all, even noisy spirits or eerie
creatures, will take place on Thursday, October 30 in the
Marriott Ballroom from 6:00 p.m. to 7:30 p.m. Costumes
are positively encouraged, so don’t forget to pack one. Get
ready for a few fun surprises organized by a team of young
acousticians that are sure to provide some great photo ops. To
set the stage for Thursday night’s activities, Halloween fun is
included in a Thursday afternoon technical session sponsored
by Architectural Acoustics and Speech Communication. In
this session, otherworldly minds offer 13 talks from 1:00 p.m.
to 5:00 p.m. in Marriott Ballroom 5/6. Come to learn about
the acoustics of supernatural spirits, bumps in the night, eerie
voices and other sorts of spooky audition.
The ASA hosts these social hours to provide a relaxing
setting for meeting attendees to meet and mingle with their
friends and colleagues as well as an opportunity for new
members and first-time attendees to meet and introduce
themselves to others in the field. A second goal of the socials
is to provide a sufficient meal so that meeting attendees
can attend the Technical Committees meetings that begin
immediately after the socials.
26. SOCIETY LUNCHEON AND LECTURE
The Society Luncheon and Lecture will be held on
Thursday, 30 October, at 12:00 noon in Indiana E. The
luncheon is open to all attendees and their guests. The speaker
is Larry E. Humes, Distinguished Professor and Department
Chair, Department of Speech and Hearing Sciences, Indiana
University. Purchase your tickets at the Registration Desk
before 10:00 a.m. on Wednesday, 29 October. The cost is
$30.00 per ticket.
27. STUDENTS MEET MEMBERS FOR LUNCH
The ASA Education Committee arranges for a student to
meet one-on-one with a member of the Acoustical Society
over lunch. The purpose is to make it easier for students to
meet and interact with members at ASA Meetings. Each lunch
pairing is arranged separately. Students who are interested
should contact Dr. David Blackstock, University of Texas
at Austin, by email (dtb@mail.utexas.edu). Please provide
your name, university, department, degree you are seeking
(BS, MS, or PhD), research field, acoustical interests, your
supervisor’s name, days you are free for lunch, and abstract
number (or title) of any paper(s) you are presenting. The
sign-up deadline is 12 days before the start of the Meeting, but an
earlier sign-up is strongly encouraged. Each participant pays
for his/her own meal.
28. STUDENT EVENTS: NEW STUDENTS
ORIENTATION, MEET AND GREET, STUDENT
RECEPTION
Follow the student Twitter feed @ASAStudents throughout
the meeting.
A New Students Orientation will be held from 5:00 p.m.
to 5:30 p.m. on Monday, 27 October, in Marriott 9/10 for
all students to learn about the activities and opportunities
available for students at the Indianapolis ASA meeting. This
will be followed by the Student Meet and Greet from 5:30
p.m. to 6:45 p.m. in Marriott 6. Refreshments and a cash
bar will be available. Students are encouraged to attend the
tutorial lecture, which begins at 7:00 p.m. in the Hilbert Circle
Theater. Student registration for this event is free.
The Students’ Reception will be held on Wednesday,
29 October, from 6:45 p.m. to 8:15 p.m. in Indiana E. This
reception, sponsored by the Acoustical Society of America and
supported by the National Council of Acoustical Consultants,
will provide an opportunity for students to meet informally
with fellow students and other members of the Acoustical
Society. All students are encouraged to attend, especially
students who are first time attendees or those from smaller
universities.
In their registration envelopes, students will find a sticker
to place on their name tags identifying them as students.
Although wearing the sticker is not mandatory, it will allow
for easier networking between students and other meeting
attendees.
Students are encouraged to refer to the student guide, also
found in their envelopes, for important program and meeting
information pertaining only to students attending the ASA
meeting.
They are also encouraged to visit the official ASA Student
Home Page at www.acosoc.org/student/ to learn more about
student involvement in ASA.
29. WOMEN IN ACOUSTICS LUNCHEON
The Women in Acoustics luncheon will be held at 11:30
a.m. on Wednesday, 29 October, in the Circle City Bar and
Grille on the first floor of the Marriott. Those who wish to
attend must purchase their tickets in advance by 10:00 a.m. on
Tuesday, 28 October. The fee is USD$30 for non-students and
USD$15 for students.
30. JAM SESSION
You are invited to the JAM SESSION in Marriott 6 on
Wednesday night, 29 October, from 8:00 p.m. to midnight.
Bring your axe, horn, sticks, voice, or anything
else that makes music. Musicians and non-musicians are all
welcome to attend. A full PA system, backline equipment,
guitars, bass, keyboard, and drum set will be provided. All
attendees will enjoy live music, a cash bar with snacks, and
all-around good times. Don’t miss out.
31. ACCOMPANYING PERSONS PROGRAM
Spouses and other visitors are welcome at the Indianapolis
meeting. The on-site registration fee for accompanying persons
is USD$150. A hospitality room for accompanying persons
will be open in the Texas Room at the Indianapolis Marriott
Downtown Hotel from 8:00 a.m. to 10:00 a.m. Monday through
Thursday. For updates about the accompanying persons
program please check the ASA website at AcousticalSociety.
org/meetings.html.
Visit: http://visitindy.com to learn about what is going on
in Indianapolis. Good places to visit within walking distance
include the Eiteljorg Museum (Tuesday night social venue),
the Indiana State Museum (with IMAX theater), the NCAA
Hall of Champions, the Indianapolis Zoo, and White River
State Park (you can hire bikes there). Farther away, requiring
transportation (taxi or bus; see http://www.indygo.net/pages/
system-map), are the Indianapolis Motor Speedway Museum
at the Indy 500 track, the Children’s Museum, and the
Indianapolis Museum of Art. Close to the hotel is the Circle Center
Mall which is a great place for shopping.
32. WEATHER
Weather in Indianapolis in the last week in October can
vary a lot from year to year. Make sure you are prepared
for rain so you can take full advantage of nearby restaurants
and attractions. See http://visitindy.com and the hotel
website http://www.marriott.com/hotels/travel-guide/indcc-indianapolis-marriott for more information about Indianapolis.
There is about a 35% chance of precipitation (rain), and
snow is very rare at that time of year. Average low and high
temperatures at that time of year are 41 and 60 degrees F,
respectively.
33. TECHNICAL PROGRAM ORGANIZING
COMMITTEE
Robert F. Port, Chair; David R. Dowling, Acoustical
Oceanography; Roderick J. Suthers, Animal Bioacoustics;
Norman H. Philipp, Architectural Acoustics; Robert J.
McGough, Biomedical Acoustics; Uwe J. Hansen, Education
in Acoustics; Roger T. Richards, Engineering Acoustics;
Andrew C.H. Morrison, Musical Acoustics; William J.
Murphy, Noise; Kai Ming Li, Physical Acoustics; Jennifer
Lentz, Psychological and Physiological Acoustics; R. Lee
Culver, Cameron Fackler, Signal Processing in Acoustics;
Diane Kewley-Port, Alexander L. Francis, Speech
Communication; Benjamin M. Shafer, Structural Acoustics
and Vibration; Kevin L. Williams, Underwater Acoustics.
34. MEETING ORGANIZING COMMITTEE
Kenneth de Jong and Patricia Davies, Cochairs; Robert F.
Port, Technical Program Chair; Diane Kewley-Port, Tessa
Bent, Mary C. Morgan, Food and Beverage; Mary C. Morgan,
Kai Ming Li, Tom Lorenzen, Audio-Visual and WiFi; Caroline
Richie, Volunteer Coordination; William J. Murphy, Technical
Tours; Uwe Hansen, Educational Activities, Tutorials; Diane
Kewley-Port, Special Events; Maria Kondaurova, Guanguan
Li, Michael Hayward, Indianapolis Visitor Information;
Tessa Bent, Student Activities; Mary C. Morgan, Meeting
Administrator.
35. PHOTOGRAPHING AND RECORDING
Photographing and recording during regular sessions are
not permitted without prior permission from the Acoustical
Society.
36. ABSTRACT ERRATA
This meeting program is Part 2 of the October 2014 issue of
The Journal of the Acoustical Society of America. Corrections,
for printer’s errors only, may be submitted for publication in
the Errata section of the Journal.
37. GUIDELINES FOR ORAL PRESENTATIONS
Preparation of Visual Aids
See the enclosed guidelines for computer projection.
• Allow at least one minute of your talk for each slide (e.g.,
PowerPoint). Use no more than 12 slides for a 15-minute talk
(with 3 minutes for questions and answers).
• Minimize the number of lines of text on one visual aid. 12
lines of text should be a maximum. Include no more than 2
graphs/plots/figures on a single slide. Generally, too little
information is better than too much.
• Presentations should contain simple, legible text that is
readable from the back of the room.
• Characters should be at least 0.25 inches (6.5 mm) in
height to be legible when projected. A good rule of thumb
is that text should be 20 point or larger (including labels
in inserted graphics). Anything smaller is difficult to read.
• Make symbols at least 1/3 the height of a capital letter.
• For computer presentations, use all of the available screen
area using landscape orientation with very thin margins. If
your institution’s logo must be included, place it at the bottom of the slide.
• Sans serif fonts (e.g., Arial, Calibri, and Helvetica) are
much easier to read than serif fonts (e.g., Times New Roman), especially from afar. Avoid thin fonts (e.g., the horizontal bar of an e may be lost at low resolution, thereby
registering as a c).
• Do not use underlining to emphasize text. It makes the text
harder to read.
• All axes on figures should be labeled and the text size for
labels and axis numbers or letters should be large enough
to read.
• No more than 3–5 major points per slide.
• Consistency across slides is desirable. Use the same background, font, font size, etc. across all slides.
• Use appropriate colors. Avoid complicated backgrounds
and do not exceed four colors per slide. Backgrounds that
change from dark to light and back again are difficult to
read. Keep it simple.
• If using a dark background (dark blue works best), use
white or yellow lettering. If you are preparing slides that
may be printed to paper, a dark background is not appropriate.
• If using light backgrounds (white, off-white), use dark
blue, dark brown or black lettering.
• DVDs should be in standard format.
Presentation
• Organize your talk with introduction, body, and summary
or conclusion. Include only ideas, results, and concepts that
can be explained adequately in the allotted time. Four elements to include are:
(1) Statement of research problem
(2) Research methodology
(3) Review of results
(4) Conclusions
• Generally, no more than 3–5 key points can be covered adequately in a 15-minute talk so keep it concise.
• Rehearse your talk so you can confidently deliver it in the
allotted time. Session Chairs have been instructed to adhere
to the time schedule and to stop your presentation if you
run over.
• An A/V preview room will be available for viewing computer presentations before your session starts. It is advisable to preview your presentation because in most cases
you will be asked to load your presentation onto a computer, which may have different software or a different
configuration from your own computer.
• Arrive early enough so that you can meet the session chair,
load your presentation on the computer provided, and familiarize yourself with the microphone, computer slide
controls, laser pointer, and other equipment that you will
use during your presentation. There will be many presenters loading their materials just prior to the session so it is
very important that you check that all multi-media elements (e.g., sounds or videos) play accurately prior to the
day of your session.
• Each time you display a visual aid the audience needs time
to interpret it. Describe the abscissa, ordinate, units, and the
legend for each figure. If the shape of a curve or some other
feature is important, tell the audience what they should observe to grasp the point. They won’t have time to figure it
out for themselves. A popular myth is that a technical audience requires a lot of technical details. Less can be more.
• Turn off your cell phone prior to your talk and put it away
from your body. Cell phones can interfere with the speakers and the wireless microphone.
38. SUGGESTIONS FOR EFFECTIVE POSTER
PRESENTATIONS
Content
• The poster should be centered around two or three key
points supported by the title, figures, and text.
• The poster should be able to “stand alone.” That is, it
should be understandable even when you are not present
to explain, discuss, and answer questions. This quality is
highly desirable since you may not be present the entire
time posters are on display, and when you are engaged in
discussion with one person, others may want to study the
poster without interrupting an ongoing dialogue.
• To meet the “stand alone” criterion, it is suggested that the
poster include the following elements, as appropriate:
○ Background
○ Objective, purpose, or goal
○ Hypotheses
○ Methodology
○ Results (including data, figures, or tables)
○ Discussion
○ Implications and future research
○ References and Acknowledgments
Design and layout
• A board approximately 8 ft. wide × 4 ft. high will be provided for the display of each poster. Supplies will be available for attaching the poster to the display board. Each
board will be marked with an abstract number.
• Typically posters are arranged from left to right and top
to bottom. Numbering sections or placing arrows between
sections can help guide the viewer through the poster.
• Centered at the top of the poster, include a section with
the abstract number, paper title, and author names and affiliations. An institutional logo may be added. Keep the design relatively simple and uncluttered. Avoid glossy paper.
Lettering and text
• Font size for the title should be large (e.g., 70-point font).
• Font size for the main elements should be large enough
to facilitate readability from 2 yards away (e.g., 32 point
font). The font size for other elements, such as references,
may be smaller (e.g., 20–24 point font).
• Sans serif fonts (e.g., Arial, Calibri, Helvetica) are much
easier to read than serif fonts (e.g., Times New Roman).
• Text should be brief and presented in a bullet-point list as
much as possible. Long paragraphs are difficult to read in a
poster presentation setting.
Visuals
• Graphs, photographs, and schematics should be large
enough to see from 2 yards (e.g., 8 × 10 inches).
• Figure captions or bulleted annotation of major findings
next to figures are essential. To ensure that all visual elements are “stand alone,” axes should be labeled and all
symbols should be explained.
• Tables should be used sparingly and presented in a simplified format.
Presentation
• Prepare a brief oral summary of your poster and short answers to likely questions in advance.
• The presentation should cover the key points of the poster
so that the audience can understand the main findings. Further details of the work should be left for discussion after
the initial poster presentation.
• It is recommended that authors practice their poster presentation in front of colleagues before the meeting. Authors
should request feedback about the oral presentation as well
as poster content and layout.
Other suggestions
• You may wish to prepare reduced-size copies of the poster
(e.g., 8 1/2 × 11 sheets) to distribute to interested audience
members.
39. GUIDELINES FOR USE OF COMPUTER
PROJECTION
• A PC computer with audio playback capability and a projector will be provided in each meeting room on which all
authors who plan to use computer projection should load
their presentations.
• Authors should bring computer presentations on a USB
drive to load onto the provided computer and should arrive
at the meeting rooms at least 30 minutes before the start of
their sessions.
• Assistance in loading presentations onto the computers
will be provided.
• Note that only the PC format will be supported, so authors who use Macs to prepare their presentations must save them so that the projection works when the presentation is run from the PC in the session room. Authors who plan to play audio or video clips during their presentations should ensure that their sound (or other) files are also saved on the USB drive and uploaded to the PC in the session room. Presenters should also check that the links to these files still work after everything has been loaded onto the session room computer.
Using your own computer (only if you really need to!)
It is essential that each speaker who plans to use his/her
own laptop connect to the computer projection system in the
A/V preview room prior to session start time to verify that
the presentation will work properly. Technical assistance is
available in the A/V preview room at the meeting, but not in
session rooms. Presenters whose computers fail to project for
any reason will not be granted extra time.
General Guidelines
• Set your computer’s screen resolution to 1024x768 pixels or to the resolution indicated by the AV technical support. If the presentation looks OK on your screen at that resolution, it will probably look OK to your audience
• Remember that graphics can be animated or quickly toggled among several options: Comparisons between figures
may be made temporally rather than spatially.
• Animations often run more slowly on laptops connected
to computer video projectors than when not so connected.
Test the effectiveness of your animations before your assigned presentation time on a similar projection system
(e.g., in the A/V preview room). Avoid real-time calculations in favor of pre-calculation and saving of images.
• If you will use your own laptop instead of the computer provided, connect your laptop to the projector during the question/answer period of the previous speaker. It is good protocol
to initiate your slide show (e.g., run PowerPoint) immediately
once connected, so the audience doesn’t have to wait. If there
are any problems, the session chair will endeavor to assist
you, but it is your responsibility to ensure that the technical
details have been worked out ahead of time.
168th Meeting: Acoustical Society of America
• During the presentation, run your laptop from mains power instead of battery power to ensure that it runs at full CPU speed. This will also guarantee that your laptop does not run out of power during
your presentation.
SPECIFIC HARDWARE CONFIGURATIONS
Macintosh
• Older Macs require a special adapter to connect the video output port to the standard 15-pin male D-sub (VGA) connector.
Make sure you have one with you.
• Hook everything up before powering anything on. (Connect the computer to the RGB input on the projector).
• Turn the projector on and boot up the Macintosh. If this doesn’t work immediately, make sure that your monitor resolution is set to 1024x768 for an XGA projector or at least 640x480 for an older VGA projector (1024x768 will almost always work). You should also make sure that your monitor controls are set to mirroring. An older PowerBook may not have video mirroring but instead something called simulscan, which is essentially the same.
• Depending upon the vintage of your Mac, you may have
to reboot once it is connected to the computer projector
or switcher. Hint: you can reboot while connected to the
computer projector in the A/V preview room in advance of
your presentation, then put your computer to sleep. Macs
thus booted will retain the memory of this connection when
awakened from sleep.
• Depending upon the vintage of your system software, you
may find that the default video mode is a side-by-side configuration of monitor windows (the test for this will be that
you see no menus or cursor on your desktop; the cursor will
slide from the projected image onto your laptop’s screen as
it is moved). Go to Control Panels, Monitors, configuration, and drag the larger window onto the smaller one. This
produces a mirror-image of the projected image on your
laptop’s screen.
• Also depending upon your system software, either the Control Panels will automatically detect the video projector’s
resolution and frame rate, or you will have to set it manually. If it is not set at a commensurable resolution, the projector may not show an image. Experiment ahead of time
with resolution and color depth settings in the A/V preview
room (please don’t waste valuable time adjusting the Control Panel settings during your allotted session time).
PC
• Make sure your computer has the standard female 15-pin
DE-15 video output connector. Some computers require an
adaptor.
• Once your computer is physically connected, you will need to toggle the video display on. Most PCs use either ALT-F5 or F6, as indicated by a small video monitor icon on the appropriate key. Some systems require more elaborate keystroke combinations to activate this feature. Verify your laptop’s compatibility with the projector in the A/V preview room. Likewise, you may have to set your laptop’s resolution and color depth via the monitor’s Control Panel to match those of the projector; verify these settings prior to your session.
Linux
• Most Linux laptops have a function key marked CRT/LCD
or two symbols representing computer versus projector.
Often that key toggles on and off the VGA output of the
computer, but in some cases, doing so will cause the computer to crash. One fix for this is to enter the BIOS setup and
look for a field marked CRT/LCD (or similar). This field
can be set to Both, in which case the signal to the laptop
is always presented to the VGA output jack on the back
of the computer. Once connected to a computer projector,
the signal will appear automatically, without toggling the
function key. Once you get it working, don’t touch it and it
should continue to work, even after reboot.
40. DATES OF FUTURE ASA MEETINGS
For further information on any ASA meeting, or to obtain
instructions for the preparation and submission of meeting
abstracts, contact the Acoustical Society of America, 1305
Walt Whitman Road, Suite 300, Melville, NY 11747-4300;
Telephone: 516-576-2360; Fax: 631-923-2875; E-mail: asa@
aip.org
169th Meeting, Pittsburgh, Pennsylvania, 18–22 May 2015
170th Meeting, Jacksonville, Florida, 2–6 November 2015
171st Meeting, Salt Lake City, Utah, 23–27 May 2016
172nd Meeting, Honolulu, Hawaii, 28 November–2 December
2016.
MONDAY MORNING, 27 OCTOBER 2014
LINCOLN, 8:25 A.M. TO 11:45 A.M.
Session 1aAB
Animal Bioacoustics: Topics in Animal Bioacoustics I
James A. Simmons, Chair
Neuroscience, Brown University, 185 Meeting St., Box GL-N, Providence, RI 02912
Chair’s Introduction—8:25
Contributed Papers
8:30

1aAB1. Spinner dolphins (Stenella longirostris GRAY, 1828) acoustic parameters recorded in the Western South Atlantic Ocean. Juliana R. Moron, Artur Andriolo (Instituto de Ciências Biológicas, Universidade Federal de Juiz de Fora, Rua Batista de Oliveira 1110 apto 404 B, Juiz de Fora 36010520, Brazil, julianamoron@hotmail.com), and Marcos Rossi-Santos (Centro de Ciências Agrárias, Ambientais e Biológicas, Universidade Federal do Recôncavo da Bahia, Cruz das Almas, Brazil)

Spinner dolphin bioacoustics had previously been studied only in the Fernando de Noronha Archipelago region of the Western South Atlantic Ocean. Our study aimed to describe the acoustic parameters of this species recorded approximately 3500 km south of the Fernando de Noronha Archipelago. A one-element hydrophone was towed 250 m behind the vessel R/V Atlântico Sul over the continental shelf break. Continuous mono recording was performed, with the hydrophone passing signals to a digital Fostex FR-2 LE recorder at 96 kHz/24 bits. A group of approximately 400 dolphins was recorded on June 3, 2013, at a distance of 168.9 km from shore (27°24′29″S, 46°50′05″W). The wav files were analyzed with spectrograms generated by the software Raven Pro 1.4, configured with a 512-sample DFT, 50% overlap, and a 1024-point Hamming window. Preliminary results from 10 min of recordings allowed the extraction of 693 whistles, whose contour shapes were classified as upsweep (42%), chirp (17.3%), downsweep (14%), sinusoidal (10.5%), convex (5.9%), constant (5.4%), and concave (4.9%). Minimum frequencies ranged from 3.32 kHz to 23.30 kHz (mean = 10.88 kHz); maximum frequencies ranged from 6.61 kHz to 35.34 kHz (mean = 15.77 kHz); whistle duration ranged from 0.03 s to 2.58 s (mean = 0.68 s). These results are important for understanding populations and/or species distributed in different ocean basins.

8:45

1aAB2. A new method for detection of North Atlantic right whale upcalls. Mahdi Esfahanian, Hanqi Zhuang, and Nurgun Erdol (Comput. and Elec. Eng. and Comput. Sci., Florida Atlantic Univ., 777 Glades Rd., Bldg. EE 96, Rm. 409, Boca Raton, FL 33431, mesfahan@fau.edu)

A study of detecting North Atlantic right whale (NARW) up-calls has been conducted with measurements from passive acoustic monitoring devices. Denoising and normalization algorithms are applied to remove local variance and narrowband noise in order to isolate the NARW up-calls in spectrograms. The resulting spectrograms, after binarization, are treated with a region-detection procedure, the Moore-Neighbor algorithm, to find continuous objects that are candidate up-call contours. After selected properties of each detected object are computed, they are compared with a pair of low and high empirical thresholds to estimate the probability that the detected object is an up-call; objects that are determined with certainty to be non-up-calls are discarded. The final stage of the proposed call detection method is to separate true up-calls from the remaining candidates with classifiers such as linear discriminant analysis (LDA), Naïve Bayes, and decision trees. Experimental results using the data set obtained by Cornell University show that the proposed method can achieve an accuracy of 96%.

9:00

1aAB3. Spatio-temporal distribution of beaked whales in southern California waters. Simone Baumann-Pickering, Jennifer S. Trickey (Scripps Inst. of Oceanogr., Univ. of California San Diego, 9500 Gilman Dr., La Jolla, CA 92093, sbaumann@ucsd.edu), Marie A. Roch (Dept. of Comput. Sci., San Diego State Univ., San Diego, CA), and Sean M. Wiggins (Scripps Inst. of Oceanogr., Univ. of California San Diego, La Jolla, CA)

Cuvier’s beaked whales are the dominant beaked whales offshore of southern California. Their abundance, distribution, and seasonality are poorly understood. Insights into the spatio-temporal distribution of both Cuvier’s beaked whales and a rarer beaked whale with signal type BW43, likely Perrin’s beaked whale, have been derived from long-term autonomous recordings of beaked whale echolocation clicks. Acoustic recordings have been collected at 18 sites offshore of southern California since 2006, resulting in a total of ~26 years of recordings. About 23,000 acoustic encounters with Cuvier’s beaked whales were detected. In contrast, there were ~100 acoustic encounters with the BW43 signal type. Cuvier’s beaked whales were predominantly detected at deeper, more southern, and farther offshore sites, and there appears to be a seasonal pattern to their presence, with a lower probability of detection during summer and early fall. The BW43 signal type had higher detection rates in the central basins, indicating a possible difference in habitat preference and niche separation between the two species. Further investigation is needed to reveal whether these distribution patterns are purely based on bathymetric preference, driven by water masses that determine prey species composition and distribution, or possibly shaped by anthropogenic activity.
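The detection pipeline summarized in abstract 1aAB2 — normalize and denoise a spectrogram, binarize it, find connected regions, and keep only regions whose properties fall between empirical thresholds — can be sketched as follows. This is a minimal illustration, not the authors' code: scipy's connected-component labeling stands in for the Moore-Neighbor boundary-tracing step, and the threshold values are placeholders.

```python
import numpy as np
from scipy import ndimage

def detect_upcall_candidates(spec, bin_thresh=0.5, min_area=6, max_area=400):
    """Toy up-call candidate detector over a spectrogram (freq x time)."""
    # Normalize each frequency bin across time to suppress narrowband noise.
    norm = (spec - spec.mean(axis=1, keepdims=True)) / (
        spec.std(axis=1, keepdims=True) + 1e-9)
    # Binarize relative to the strongest normalized value.
    binary = norm > bin_thresh * norm.max()
    # Connected-component labeling (stand-in for Moore-Neighbor tracing).
    labels, n = ndimage.label(binary)
    # Keep regions whose pixel area lies between the empirical thresholds;
    # a classifier (LDA, Naive Bayes, decision tree) would follow this stage.
    candidates = [r for r in range(1, n + 1)
                  if min_area <= (labels == r).sum() <= max_area]
    return labels, candidates
```

In the full method, many more region properties (e.g., duration and bandwidth of each contour) would be computed before the classifier stage.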
9:15
1aAB4. The acoustic characteristics of greater prairie-chicken vocalizations. Cara Whalen, Mary Bomberger Brown (School of Natural Resources,
Univ. of Nebraska - Lincoln, 3310 Holdrege St., Lincoln, NE 68583, carawhalen@gmail.com), JoAnn McGee (Developmental Auditory Physiol.
Lab., Boys Town National Res. Hospital, Omaha, NE), Larkin A. Powell,
Jennifer A. Smith (School of Natural Resources, Univ. of Nebraska Lincoln, Lincoln, NE), and Edward J. Walsh (Developmental Auditory
Physiol. Lab., Boys Town National Res. Hospital, Omaha, NE)
Male Greater Prairie-Chickens (Tympanuchus cupido pinnatus) congregate in groups known as “leks” each spring to perform vocal and visual displays to attract females. Four widely recognized vocalization types
produced by males occupying leks are referred to as “booms,” “cackles,”
“whines,” and “whoops.” As part of a larger effort to determine the influence of wind turbine farm noise on lek vocal behavior, we studied the
acoustic properties of vocalizations recorded between March and June in
2013 and 2014 at leks near Ainsworth, Nebraska. Although all four calls are
produced by males occupying leks, the boom is generally regarded as the
dominant call type associated with courtship behavior. Our findings suggest
that the bulk of acoustic power carried by boom vocalizations is in a relatively narrow, low frequency band, approximately 100-Hz wide at 20 dB
below the peak frequency centered on approximately 0.3 kHz. The boom
vocalization is harmonic in character, has a fundamental frequency of
approximately 0.30 ± 0.01 kHz, and lasts approximately 1.81 ± 0.18 s.
Understanding Greater Prairie-Chicken vocal attributes is an essential element in the effort to understand the influence of environmental sound, prominently including anthropogenic sources like wind turbine farms, on vocal
communication success.
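The bandwidth measurement described for the boom vocalization (a band roughly 100 Hz wide at 20 dB below the peak, centered near 0.3 kHz) can be illustrated with a short helper. This is a hypothetical sketch, not the authors' analysis code; `freqs` and `psd_db` are assumed to be a frequency axis in Hz and a power spectrum in dB.

```python
import numpy as np

def peak_and_bandwidth(freqs, psd_db, drop_db=20.0):
    """Return the peak frequency and the width of the band that stays
    within drop_db dB of the spectral peak."""
    i_peak = int(np.argmax(psd_db))
    # Indices of all bins within drop_db dB of the peak level.
    above = np.nonzero(psd_db >= psd_db[i_peak] - drop_db)[0]
    return freqs[i_peak], freqs[above[-1]] - freqs[above[0]]
```

Applied to a boom spectrum, this would return a peak near 0.3 kHz and a -20 dB bandwidth on the order of 100 Hz.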
9:30
1aAB5. Bioacoustics of Trachymyrmex fuscus, Trachymyrmex tucumanus, and Atta sexdens rubropilosa (Hymenoptera: Formicidae). Amanda
A. Carlos, Francesca Barbero, Luca P. Casacci, Simona Bonelli (Life Sci.
and System Biology, Univ. of Turin, Dipartimento di Biologia Animale e
dell’Uomo Via Accademia Albertina 13, Turin 10123, Italy, amandacarlos@yahoo.com.br), and Odair C. Bueno (Centro de Estudos de Insetos
Sociais (CEIS), Universidade Estadual Paulista Júlio de Mesquita Filho
(UNESP), Rio Claro, Brazil)
The capability to produce species-specific sounds is common among
ants. Ants of the genus Trachymyrmex occur in an intermediate phylogenetic position within the Attini tribe, between the leafcutters, such as Atta
sexdens rubropilosa, and more basal species. The study of stridulations
would provide important cues on the evolution of the tribe’s diverse biological aspects. Therefore, in the present study, we described the stridulation
signals produced by Trachymyrmex fuscus, Trachymyrmex tucumanus, and
A. sexdens rubropilosa workers. Ant workers were recorded, and their stridulatory organs were measured. The following parameters were analyzed:
chirp length [ms], inter-chirp (pause) [ms], cycle (chirp + inter-chirp) [ms],
cycle repetition rate [Hz], and the peak frequency [Hz], as well as the number of ridges on the pars stridens. During the inter-chirp, there is no measurable signal for A. sexdens rubropilosa, whereas for Trachymyrmex fuscus
and Trachymyrmex tucumanus, a low intensity signal was detected. In other
words, the plectrum and the pars stridens of A. sexdens rubropilosa have no
contact during the lowering of the gaster. Principal component analysis, to
which mainly the duration of chirps contributed, showed that stridulation is
an efficient tool to differentiate ant species at least in the case of the Attini
tribe.
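The principal component analysis used in 1aAB5 to separate species from stridulation parameters can be sketched with a minimal SVD-based PCA. The feature matrix below is hypothetical; each row would hold one worker's measurements (chirp length, inter-chirp pause, cycle rate, peak frequency, ...).

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)                     # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # scores for each sample
```

Species whose chirp durations differ, as reported, would then separate along the component dominated by that feature.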
9:45
1aAB6. Robustness of perceptual features used for passive acoustic classification of cetaceans to the ocean environment. Carolyn Binder (Oceanogr. Dept., Dalhousie Univ., LSC Ocean Wing, 1355 Oxford St., PO Box
15000, Halifax, NS B3H 4R2, Canada, carolyn.binder@dal.ca) and Paul C.
Hines (Dept. of Elec. and Comput. Eng., Dalhousie Univ., Halifax, NS,
Canada)
Passive acoustic monitoring (PAM) is used to study cetaceans in their
habitats, which cover diverse underwater environments. It is well known
that properties of the ocean environment can be markedly different between
regions, which can result in distinct propagation characteristics. These can
in turn lead to differences in the time-frequency characteristics of a recorded
signal and may impact the accuracy of PAM systems. To develop an automatic PAM system capable of operating under numerous environmental
conditions, one must account for the impact of propagation conditions. A
prototype aural classifier developed at Defence R&D Canada has successfully been used for inter-species discrimination of cetaceans. The aural classifier achieves accurate results by using perceptual signal features that
model the features employed by the human auditory system. The current
work uses a combination of at-sea experiments and pulse propagation modeling to examine the robustness of the perceptual features with respect to
propagation effects. Preliminary results will be presented from bowhead and
humpback vocalizations that were transmitted over 1–20 km ranges during a
two-day sea trial in the Gulf of Mexico. Insight gained from experimental
results will be augmented with model results. [Work supported by the U.S.
Office of Naval Research.]
10:00–10:15 Break
10:15
1aAB7. Passive acoustic monitoring on the seasonal species composition
of cetaceans from a marine observatory. Tzu-Hao Lin, Hsin-Yi Yu (Inst.
of Ecology and Evolutionary Biology, National Taiwan Univ., No. 1, Sec.
4, Roosevelt Rd., Taipei 10617, Taiwan, schonkopf@gmail.com), Chi-Fang
Chen (Dept. of Eng. Sci. and Ocean Eng., National Taiwan Univ., Taipei,
Taiwan), and Lien-Siang Chou (Inst. of Ecology and Evolutionary Biology,
National Taiwan Univ., Taipei, Taiwan)
Information on the species diversity of cetaceans can help us understand the community ecology of marine top predators. Passive acoustic monitoring has been widely applied in cetacean research; however, species identification based on tonal sounds remains challenging. To examine the seasonal pattern of species diversity, we applied an automatic detection and classification algorithm to acoustic recordings collected from the marine cable-hosted observatory off northeastern Taiwan. Representative frequencies of cetacean tonal sounds were detected. Statistical features were extracted from the distribution of representative frequencies and were used to classify four cetacean groups. The correct classification rate was 72.2% on field recordings collected during onboard surveys. Analysis of one year of recordings revealed that species diversity was highest in winter and spring. Short-finned pilot whales and Risso’s dolphins were the most common species; they occurred mainly in winter and summer. False killer whales were mostly detected in winter and spring. Spinner dolphins, spotted dolphins, and Fraser’s dolphins were mainly detected in summer. Bottlenose dolphins were the least common species. In the future, the biodiversity, species-specific habitat use, and inter-specific interactions of cetaceans can be investigated through an underwater acoustic monitoring network.
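The feature-extraction step described in 1aAB7 — detect a representative frequency per time frame and summarize its distribution for classification — can be sketched as below. The specific statistics are assumptions for illustration, not the authors' exact feature set.

```python
import numpy as np

def tonal_frequency_features(spec, freqs, energy_thresh=0.5):
    """Summary statistics of per-frame peak (representative) frequencies
    from a spectrogram (freq x time); a classifier over several cetacean
    groups could be trained on vectors like this."""
    frame_peaks = freqs[np.argmax(spec, axis=0)]             # peak per frame
    strong = spec.max(axis=0) >= energy_thresh * spec.max()  # keep loud frames
    rep = frame_peaks[strong]
    return {"median": float(np.median(rep)),
            "iqr": float(np.percentile(rep, 75) - np.percentile(rep, 25)),
            "min": float(rep.min()),
            "max": float(rep.max())}
```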
10:30
1aAB8. The effects of road noise on the calling behavior of Pacific chorus frogs. Danielle V. Nelson (Dept. of Forest Ecosystems and Society,
Oregon State Univ., Oregon State University, 321 Richardson Hall, Corvallis, OR 97331, danielle.nelson@oregonstate.edu), Holger Klinck (Fisheries
and Wildlife, Oregon State Univ., Newport, OR), and Tiffany S. Garcia
(Fisheries and Wildlife, Oregon State Univ., Corvallis, OR)
Fitness consequences of anthropogenic noise on organisms that have
chorus-dependent breeding requirements, such as frogs, are not well understood. While frogs were thought to have innate and fixed call structure, species-specific vocal plasticity has been observed in populations experiencing
high noise conditions. Adjustment to call structure, however, can have negative fitness implications in terms of energy expenditure and female choice.
The Pacific chorus frog (Pseudacris regilla), a common vocal species
broadly distributed throughout the Pacific Northwest, often breeds in waters
impacted by road noise. We compared Pacific chorus frog call structure
from breeding populations at 11 high- and low-traffic sites in the Willamette
Valley, Oregon. We used passive acoustic monitoring and directional recordings to determine the mean dominant frequency, amplitude, and call rate of breeding populations and of individual frogs, and to quantify ambient road noise levels. Preliminary results indicate that while individuals do not differ
in call rate or structure across noisy and quiet sites, high road noise levels
decrease the effective communication distance of both the chorus and the
individual. This research enhances our understanding of acoustic habitat in
the Willamette Valley and the impacts of anthropogenic noise on a native
amphibian species.
10:45
1aAB9. Inter-individual difference of one type of pulsed sounds produced by beluga whales (Delphinapterus leucas). Yuka Mishima (Tokyo
Univ. of Marine Sci. and Technol., Konan 4-5-7, Minato-ku, Tokyo 1088477, Japan, thank_you_for_email_5yuka@yahoo.co.jp), Tadamichi Morisaka (Tokai Univ. Inst. of Innovative Sci. and Technol., Shizuoka-shi,
Japan), Miho Itoh (The Port of Nagoya Public Aquarium, Nagoya-shi,
Japan), Ryota Suzuki, Kenji Okutsu (Yokohama Hakkeijima Sea Paradise,
Yokohama-shi, Japan), Aiko Sakaguchi, and Yoshinori Miyamoto (Tokyo
Univ. of Marine Sci. and Technol., Minato-ku, Japan)
Belugas often exchange one type of broadband pulsed sound (termed the PS1 call), which possibly functions as a contact call (Morisaka et al., 2013). Here we investigate how belugas embed their signature information into PS1 calls. PS1 calls were recorded from each of five belugas, including both sexes and various ages, at the Port of Nagoya Public Aquarium using a broadband recording system while the animals were in isolation. Temporal and spectral acoustic parameters of PS1 calls were measured and compared among individuals. A Kruskal-Wallis test revealed that the inter-pulse intervals (IPIs), the number of pulses, and the pulse rates of PS1 calls differed significantly among individuals, but duration did not (χ² = 76.7, p < 0.0001; χ² = 26.2, p < 0.0001; χ² = 45.3, p < 0.0001; and χ² = 4.7, p = 0.316, respectively). The contours depicted by the IPIs as a function of pulse order were also individually different, and only the contours of a calf fluctuated over time. The four belugas other than a juvenile had individually distinctive power spectra. These results suggest that several acoustic parameters of PS1 calls may hold individual information. We also found PS1-like calls from belugas in another captive population (Yokohama Hakkeijima Sea Paradise), suggesting that the PS1 call is not specific to one captive population but is a basic call type for belugas.

11:00

1aAB10. Numerical study of biosonar beam forming in finless porpoise (Neophocaena asiaeorientalis). Chong Wei (College of Ocean & Earth Sci., Xiamen Univ., 1502 Spreckels St. Apt 302A, Honolulu, Hawaii 96822, weichong3310@foxmail.com), Zhitao Wang (Key Lab. of Aquatic Biodiversity and Conservation of the Chinese Acad. of Sci., Inst. of Hydrobiology of the Chinese Acad. of Sci., Wuhan, China), Zhongchang Song (College of Ocean & Earth Sci., Xiamen Univ., Xiamen, China), Whitlow Au (Hawaii Inst. of Marine Biology, Univ. of Hawaii at Manoa, Kaneohe, HI), Ding Wang (Key Lab. of Aquatic Biodiversity and Conservation of the Chinese Acad. of Sci., Inst. of Hydrobiology of the Chinese Acad. of Sci., Wuhan, China), and Yu Zhang (Key Lab. of Underwater Acoust. Commun. and Marine Information Technol. of the Ministry of Education, Xiamen Univ., Xiamen, China)

The finless porpoise (Neophocaena asiaeorientalis), which lives in the Yangtze River and the adjoining Poyang and Dongting Lakes in China, is known to use narrowband signals for echolocation. In this study, the sound velocity and density of different tissues in the porpoise’s head (including melon, muscle, bony structure, connective tissue, blubber, and mandibular fat) were obtained by measurement. The sound velocity and density were found to have a linear relationship with the Hounsfield units (HU) obtained from CT scans. The acoustic properties of the porpoise’s head were reconstructed from the HU distribution. Numerical simulations of acoustic propagation through the finless porpoise’s head were performed with a finite element approach. The beam formation was compared with those of the baiji, Indo-Pacific humpback dolphin, and bottlenose dolphin. The role of the different structures in the head, such as air sacs, melon, muscle, bony structure, connective tissue, blubber, and mandibular fat, in shaping the biosonar beam was investigated. The results might provide useful information for a better understanding of sound propagation in the finless porpoise’s head.

11:15

1aAB11. Evidence for a possible functional significance of horseshoe bat biosonar dynamics. Rolf Müller, Anupam K. Gupta (Mech. Eng., Virginia Tech, 1075 Life Sci. Cir, Blacksburg, VA 24061, rolf.mueller@vt.edu), Uzair Gillani (Elec. and Comput. Eng., Virginia Tech, Blacksburg, VA), Yanqing Fu (Eng. Sci. and Mech., Virginia Tech, Blacksburg, VA), and Hongxiao Zhu (Dept. of Statistics, Virginia Tech, Blacksburg, VA)

The periphery of the biosonar system of horseshoe bats is characterized by conspicuous dynamics in which the shapes of the noseleaves (structures that surround the nostrils) and the outer ears (pinnae) undergo fast changes that can coincide with pulse emission and echo reception. These changes in the geometries of the sound-reflecting surfaces affect the device characteristics, e.g., as represented by beampatterns. Hence, these dynamics could give horseshoe bats an opportunity to view their environments through a set of different device characteristics. It is not clear at present whether horseshoe bats make use of this opportunity, but there is evidence from various sources, namely, anatomy, behavior, evolution, and information theory. Anatomical studies have shown the existence of specialized muscular actuation systems that are clearly directed toward geometrical changes. Behavioral observations indicate that these changes are linked to contexts where the bat is confronted with a novel or otherwise demanding situation. Evolutionary evidence comes from the occurrence of qualitatively similar ear deformation patterns in mustached bats (Pteronotus), which have independently evolved a biosonar for Doppler-shift detection. Finally, an information-theoretic analysis demonstrates that the capacity of the biosonar system for encoding sensory information is enhanced by these dynamic processes.

11:30

1aAB12. Analysis of some special buzz clicks. Odile Gerard (DGA, Ave. de la Tour Royale, Toulon 83000, France, odigea@gmail.com), Craig Carthel, and Stefano Coraluppi (Systems & Technol. Res., Woburn, MA)

Toothed whales are known to click regularly to find prey. Once a prey has been detected, the repetition rate of the clicks increases; these sequences are called buzzes. Previous work shows that the buzz click spectrum varies slowly from click to click for various species. This spectral similarity allows buzz clicks to be associated into a sequence using multi-hypothesis tracking (MHT) algorithms; buzz classification thus follows automatic click tracking. The use of MHT reveals that in some rare cases a variant of this property occurs, whereby sub-sequences of clicks exhibit slowly varying characteristics. In 2010 and 2011, the NATO Undersea Research Centre (NURC, now the Centre for Maritime Research and Experimentation, CMRE) conducted sea trials with the CPAM (Compact Passive Acoustic Monitoring), a volumetric towed array composed of four or six hydrophones. This configuration allows a rough estimate of the clicking animal’s localization. Some buzzes with sub-sequences of slowly varying characteristics were recorded with the CPAM. Localization may help in understanding this new finding from a physiological point of view. The results of this analysis will be presented.
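The linear mapping from CT Hounsfield units to tissue properties described in 1aAB10 amounts to a least-squares line fit. The calibration points below are purely illustrative, not the paper's data.

```python
import numpy as np

def fit_hu_to_property(hu, prop):
    """Fit prop ~ a*HU + b by least squares (e.g., HU -> sound velocity)."""
    a, b = np.polyfit(hu, prop, 1)
    return a, b

# Hypothetical calibration points: (HU, sound velocity in m/s).
hu = np.array([0.0, 50.0, 100.0, 200.0])
v = np.array([1480.0, 1500.0, 1520.0, 1560.0])   # lies on v = 0.4*HU + 1480
a, b = fit_hu_to_property(hu, v)
```

With such a fit, a CT voxel grid can be converted voxel-by-voxel into the sound-speed and density maps a finite element model needs.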
MONDAY MORNING, 27 OCTOBER 2014
MARRIOTT 3/4, 7:55 A.M. TO 12:00 NOON
Session 1aNS
Noise, Physical Acoustics, Structural Acoustics and Vibration, and Engineering Acoustics: Metamaterials
for Noise Control I
Keith Attenborough, Cochair
DDEM, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom
Olga Umnova, Cochair
University of Salford, The Crescent, Salford M5 4WT, United Kingdom
Chair’s Introduction—7:55
Invited Papers
8:00
1aNS1. Recent results on sonic crystals for sound guiding and acoustic absorption. José Sánchez-Dehesa, Victor M. García-Chocano, and Matthew D. Guild (Dept. of Electron. Eng., Universitat Politecnica de Valencia, Camino de vera s.n., Edificio 7F,
Valencia, Valencia ES-46022, Spain, jsdehesa@upv.es)
We report on different aspects of the behavior of sonic crystals with finite size. First, at wavelengths on the order of the lattice period
we have observed the excitation of deaf modes, i.e., modes with symmetry orthogonal to that of the exciting beam. Numerical simulations and experiments performed with samples made of three rows of cylindrical scatterers demonstrate the excitation of sound waves
guided along a direction perpendicular to the incident beam. Moreover, the wave propagation inside the sonic crystal is strongly dependent on the porosity of the building units. This finding can be used to enhance the absorbing properties of the crystal. Also, we will discuss
the properties of finite sonic crystals at low frequencies, where we have observed small-period oscillations superimposed on the well-known Fabry-Perot resonances appearing in the reflectance and transmittance spectra. It will be shown that the additional oscillations
are due to diffraction in combination with the excitation of the transverse modes associated with the finite size of the samples. [Work
supported by ONR.]
8:20
1aNS2. Acoustic metamaterial absorbers based on multi-scale sonic crystals. Matthew D. Guild, Victor M. García-Chocano (Dept.
of Electronics Eng., Universitat Politecnica de Valencia, Camino de vera s/n (Edificio 7F), Valencia 46022, Spain, mdguild@utexas.
edu), Weiwei Kan (Dept. of Phys., Nanjing Univ., Nanjing, China), and José Sánchez-Dehesa (Dept. of Electronics Eng., Universitat
Politecnica de Valencia, Valencia, Spain)
In this work, thermoviscous losses in single- and multi-scale sonic crystal arrangements are examined, enabling the fabrication and
characterization of acoustic metamaterial absorbers. It will be shown that higher filling fraction arrangements can be used to provide a
large enhancement in the complex mass density and loss factor, and can be combined with other sonic crystals of different sizes to create
multi-scale structures that further enhance these effects. To realize these enhanced properties, different sonic crystal lattices are examined and arranged as a layered structure or a slab with large embedded inclusions. The inclusions are made from either a single solid cylinder or symmetrically arranged clusters of cylinders, known as magic clusters, which behave as an effective fluid. Theoretical results
are obtained using a two-step homogenization process, by first homogenizing each sonic crystal to obtain the complex effective properties of each length scale, and then homogenizing the effective fluid structures to determine the properties of the ensemble structure. Experimental data from acoustic impedance tube measurements will be presented and shown to be in excellent agreement with the
expected results. [Work supported by the US ONR and Spanish MINECO.]
8:40
1aNS3. Quasi-flat acoustic absorber enhanced by metamaterials. Abdelhalim Azbaid El Ouahabi, Victor V. Krylov, and Daniel J. O’Boy (Dept. of Aeronautical and Automotive Eng., Loughborough Univ., Loughborough, Leicestershire LE11 3TU, United Kingdom, A.Azbaid-El-Ouahabi@lboro.ac.uk)
In this paper, the design of a new quasi-flat acoustic absorber (QFAA) enhanced by the presence of a graded metamaterial layer is
described, and the results of the experimental investigation into the reflection of sound from such an absorber are reported. The matching
metamaterial layer is formed by a quasi-periodic array of brass cylindrical tubes with the diameters gradually increasing from the external row of tubes facing the open air towards the internal row facing the absorbing layer made of a porous material. The QFAA is placed
in a wooden box with dimensions of 569 × 250 × 305 mm. All brass tubes are of the same length (305 mm) and fixed between
opposite sides of the wooden box. Measurements of the sound reflection coefficients from the empty wooden box, from the box with an
inserted porous absorbing layer, and from the full QFAA containing both the porous absorbing layer and the array of brass tubes have
2076
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
been carried out in an anechoic chamber in the frequency range of 500–3000 Hz. The results show that the presence of the metamaterial
layer brings a noticeable reduction in the sound reflection coefficients in comparison with the reflection from the porous layer alone.
9:00
1aNS4. The influence of thermal and viscous effects on the effective properties of an array of slits. John D. Smith (Physical Sci.,
DSTL, Porton Down, Salisbury SP4 0JQ, United Kingdom, jdsmith@dstl.gov.uk), Roy Sambles, Gareth P. Ward, and Alastair R. Murray (Dept. of Phys. and Astronomy, Univ. of Exeter, Exeter, United Kingdom)
A system consisting of an array of thin plates separated by air gaps is examined using the method of asymptotic homogenization.
The effective properties are compared with a finite element model and experimental results for the resonant transmission of both a single
slit and an array of slits. These results show a dramatic reduction in the frequency of resonant transmission when the slit is narrowed to
below around one percent of the wavelength due to viscous and thermal effects reducing the effective sound velocity through the slits.
These effects are still significant for slit widths substantially greater than the thickness of the boundary layer.
9:20
1aNS5. Atypical dynamic behavior of periodic frame structures with local resonance. Stéphane Hans, Claude Boutin (LGCB / LTDS, ENTPE / Université de Lyon, rue Maurice Audin, Vaulx-en-Velin 69120, France, stephane.hans@entpe.fr), and Céline Chesnais (IFSTTAR GER, Université Paris-Est, Paris, France)
This work investigates the dynamic behavior of periodic unbraced frame structures made up of interconnected beams or plates. Such structures can represent an idealization of numerous reticulated systems, such as the microstructure of foams, plants, bones, and sandwich panels. As beams are much stiffer in tension-compression than in bending, the propagation of waves with wavelengths much greater than
the cell size and the bending modes of the elements can occur in the same frequency range. Thus, frame structures can behave as metamaterials. Since the condition of scale separation is respected, the homogenization method of periodic discrete media is used to derive
the macroscopic behavior. The main advantages of the method are the analytical formulation and the possibility to study the behavior of
the elements at the local scale. This provides a clear understanding of the mechanisms governing the dynamics of the material. In the presence of local resonance, the form of the equations is unchanged, but some macroscopic parameters become frequency dependent. In particular, this applies to the mass, leading to a generalization of Newtonian mechanics. As a result, there are frequency bandgaps. In
that case, the same macroscopic modal shape is also associated with several resonant frequencies.
9:40
1aNS6. Design of sound absorbing metamaterials by periodically embedding three-dimensional resonant or non-resonant inclusions in a rigidly backed porous plate. Jean-Philippe Groby (LAUM, UMR 6613 CNRS, Av. Olivier Messiaen, Le Mans F-72085, France, Jean-Philippe.Groby@univ-lemans.fr), Benoît Nennig (LISMMA, Supmeca, Saint Ouen, France), Clément Lagarrigue, Bruno Brouard, Olivier Dazel (LAUM, UMR 6613 CNRS, Le Mans, France), Olga Umnova (Acoust. Res. Ctr., Univ. of Salford, Salford, United Kingdom), and Vincent Tournat (LAUM, UMR 6613 CNRS, Le Mans, France)
Air saturated porous materials, namely, foams and wools, are often used as sound absorbing materials. Nevertheless, they suffer
from a lack of absorption efficiency at low frequencies, which is inherent to their absorption mechanisms (viscous and thermal losses),
even when used as optimized multilayer or graded porous materials. Over recent decades, several solutions have been proposed to address this problem. Among them, metaporous materials work by exciting modes that trap energy between the periodic rigid inclusions
embedded in the porous plate and the rigid backing or in the inclusions themselves. The absorption coefficient of different foams is
enhanced both in the viscous and inertial regimes by periodically embedding 3D inclusions, possibly resonant, e.g., air-filled Helmholtz resonators. This enhancement is due to different mode excitations: a Helmholtz resonance in the viscous regime and a trapped mode in the inertial regime. In particular, a large absorption coefficient is reached for wavelengths in air 27 times larger than the sample thickness.
The absorption amplitude and bandwidth are then enlarged by removing porous material in front of the neck, enabling a lower radiation impedance, and by adjusting the resonance frequencies of the Helmholtz resonator.
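The tuning step mentioned above rests on the classic lumped-element resonance formula for a Helmholtz resonator, f0 = (c/2π)·sqrt(S/(V·L_eff)). A minimal sketch, with purely illustrative dimensions and end correction (not values from the talk):

```python
import math

def helmholtz_frequency(c, neck_area, cavity_volume, neck_length, neck_radius):
    """Classic lumped-element Helmholtz resonance frequency (Hz).

    An end correction proportional to the neck radius (here 1.7*r, an
    assumed value; published corrections vary with flange geometry) is
    added to the geometric neck length.
    """
    l_eff = neck_length + 1.7 * neck_radius
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

# Illustrative resonator: 1 cm^3 cavity, 2-mm-long neck of 1 mm radius.
c_air = 343.0                      # speed of sound in air, m/s
r = 1e-3                           # neck radius, m
f0 = helmholtz_frequency(c_air, math.pi * r**2, 1e-6, 2e-3, r)
```

Shrinking the cavity or shortening the neck raises f0, which is the lever used to place the resonance where absorption is needed.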
10:00–10:20 Break
10:20
1aNS7. Seismic metamaterials: Shielding and focusing surface elastic waves in structured soils. Sébastien R. Guenneau, Stefan Enoch (Phys., Institut Fresnel, Ave. Escadrille Normandie Niemen, Marseille 13013, France, sebastien.guenneau@fresnel.fr), and Stéphane Brûlé (Menard Co., Nozay, France)
Phononic crystals and metamaterials are man-made structures (with periodic heterogeneities typically a few micrometers to centimeters) that can control sound in ways not found in nature. Whereas the properties of phononic crystals derive from the periodicity of
their structure, those of metamaterials arise from the collective effect of a large array of small resonators. These effects can be used to
manipulate acoustic waves in unconventional ways, realizing functions such as invisibility cloaking, subwavelength focusing, and
unconventional refraction phenomena (such as negative refractive index and phase velocity). Recent work has started to explore another
intriguing domain of application: using similar concepts to control the propagation of seismic waves within the surface of the Earth. Our
research group at the Aix-Marseille University and French National Center for Scientific Research (CNRS) has teamed up with civil
engineers at an industrial company, Menard, in Nozay, also in France, and carried out the largest-scale tests to date of phononic crystals.
Results with arrays of boreholes in soil, a few centimeters to a few meters in diameter (hereafter called seismic metamaterials), are encouraging: such arrays can be used to deflect incoming acoustic waves at frequencies relevant to earthquake protection, or to bring them to a focus. These
preliminary successes could one day translate into a way of mitigating the destructive effects of earthquakes.
10:40
1aNS8. Tunable resonator arrays—Transmission, near-field interactions, and effective property extraction. Dmitry Smirnov and Olga Umnova (Acoust. Res. Ctr., Univ. of Salford, The Crescent, Salford, Greater Manchester M5 4WT, United Kingdom, d.smirnov@edu.salford.ac.uk)
Periodic arrays of slotted cylinders have been studied with a focus on analytical and semi-analytical techniques, observing near-field
interactions and their influence on reflection and transmission of acoustic waves by the array. Relative orientation of the cylinders within
a unit cell has been shown to strongly affect the array behavior, facilitating tunable transmission gaps. An improved homogenization
method is proposed and used to determine effective properties of the array, allowing accurate and computationally efficient prediction of
reflection and transmission characteristics of any number of rows at arbitrary incidence.
11:00
1aNS9. Tunable cylinders for sound control in water. Andrew Norris and Alexey Titovich (Mech. and Aerosp. Eng., Rutgers Univ.,
98 Brett Rd., Piscataway, NJ 08854, norris@rutgers.edu)
Long wavelength effective medium properties are achieved using arrays of closely spaced tunable cylinders. Thin metal shells provide the starting point: for a given shell thickness h and radius a, the effective bulk modulus and density are both proportional to h/a.
Since the metal has large impedance relative to water it follows that there is a unique value of h/a at which the shell is effectively impedance matched to water. The effective sound speed cannot be matched by the thin shell alone (except for impractical metals like silver).
However, simultaneous impedance and speed matching can be obtained by adding an internal mass, e.g., an acrylic core in aluminum cylindrical tubes. By varying the shell thickness and the internal mass, a range of effective properties is achievable. Practical considerations such as shell thickness, internal mass material, and fabrication will be discussed. Arrays made of a small number of different tuned
shells will be described using numerical simulations: example applications include focusing, lensing, and wave steering. [Work supported by ONR.]
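The impedance-matching argument in the abstract can be made concrete: if both the effective bulk modulus and density of the shell array scale as h/a, then the effective impedance Z = sqrt(K·ρ) scales linearly with h/a while the effective speed c = sqrt(K/ρ) does not, so a unique h/a matches water's impedance but the speed must be tuned separately (e.g., with an internal mass). A sketch under assumed proportionality constants K0 and rho0, which are hypothetical placeholders, not values from the talk:

```python
import math

# Water properties.
rho_w, c_w = 1000.0, 1480.0        # kg/m^3, m/s
Z_w = rho_w * c_w                  # characteristic impedance of water

# Assumed thin-shell scaling: K_eff = K0*(h/a), rho_eff = rho0*(h/a),
# with K0 and rho0 lumped constants for the shell material (illustrative).
K0, rho0 = 2.0e11, 8.0e3

def shell_properties(h_over_a):
    K = K0 * h_over_a
    rho = rho0 * h_over_a
    Z = math.sqrt(K * rho)         # impedance scales linearly with h/a
    c = math.sqrt(K / rho)         # speed is independent of h/a
    return Z, c

# Unique matching ratio: sqrt(K0*rho0) * (h/a) = Z_w.
h_over_a_match = Z_w / math.sqrt(K0 * rho0)
Z_match, c_eff = shell_properties(h_over_a_match)
```

Because c_eff drops out of the h/a dependence, thickness alone cannot match both impedance and speed, which is exactly why the abstract adds an internal mass.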
11:20
1aNS10. Sound waves over periodic and aperiodic arrays of cylinders on ground surfaces. Shahram Taherzadeh, Ho-Chul Shin,
and Keith Attenborough (Eng. & Innovation, The Open Univ., Walton Hall, Milton Keynes MK7 6AA, United Kingdom, shahram.taherzadeh@open.ac.uk)
Propagation of audio frequency sound waves over periodic arrays of cylinders placed on acoustically hard and soft surfaces has been
studied through laboratory measurements and predictions using a point source. It is found that perturbation of the position of the cylinders from a regular array results in a higher insertion loss than completely periodic or random cylinder arrangements.
11:40
1aNS11. Ground effect due to rough and resonant surfaces. Keith Attenborough (Eng. and Innovation, Open Univ., 18 Milebush,
Linslade, Leighton Buzzard, Bedfordshire LU7 2UB, United Kingdom, Keith.Attenborough@open.ac.uk), Ho-Chul Shin, and Shahram
Taherzadeh (Eng. and Innovation, Open Univ., Milton Keynes, United Kingdom)
Particularly if the ground surface between noise source and receiver would otherwise be smooth and acoustically hard, structured
low-rise ground roughness can be used as an alternative to conventional noise barriers. The techniques of periodic-spacing, absorptive
covering, and local resonance can be used, as when broadening metamaterial stop bands, to achieve a broadband ground effect. This has
been demonstrated both numerically and through laboratory experiments. Computations have employed multiple scattering theory, the
Finite Element Method and the Boundary Element Method. The experiments have involved measurements over cylindrical and rectangular roughness elements and over their resonant counterparts created by incorporating slit-like openings. Resonant elements with slit
openings have been found numerically and experimentally to add a destructive interference below the first roughness-induced destructive interference and thereby mitigate the adverse effects of the low-frequency surface waves generated by the presence of roughness
elements. A nested configuration of slotted hollow roughness elements is predicted to produce multiple resonances and this idea has
been validated through laboratory experiments.
MONDAY MORNING, 27 OCTOBER 2014
INDIANA C/D, 8:15 A.M. TO 11:45 A.M.
Session 1aPA
Physical Acoustics and Noise: Jet Noise Measurements and Analyses I
Richard L. McKinley, Cochair
Battlespace Acoustics, Air Force Research Laboratory, 2610 Seventh Street, Wright-Patterson AFB, OH 45433-7901
Kent L. Gee, Cochair
Brigham Young University, N243 ESC, Provo, UT 84602
Alan T. Wall, Cochair
Battlespace Acoustics Branch, Air Force Research Laboratory, Bldg. 441, Wright-Patterson AFB, OH 45433
Chair’s Introduction—8:15
Invited Papers
8:20
1aPA1. F-35A and F-35B aircraft ground run-up acoustic emissions. Michael M. James, Micah Downing, Alexandria R. Salton
(Blue Ridge Res. and Consulting, 29 N Market St., Ste. 700, Asheville, NC 28801, michael.james@blueridgeresearch.com), Kent L.
Gee, Tracianne B. Neilsen (Phys. and Astronomy, Brigham Young Univ., Provo, UT), Richard L. McKinley, Alan T. Wall, and Hilary
L. Gallagher (Air Force Res. Lab., Dayton, OH)
A multi-organizational effort led by the Air Force Research Laboratory conducted acoustic emissions measurements on the F-35A
and F-35B aircraft at Edwards Air Force Base, California, in September 2013. These measurements followed American National Standards Institute/Acoustical Society of America S12.75-2012 to collect noise data for community noise models and for noise exposure of aircraft personnel. In total, over 200 unique locations were measured with over 300 high-fidelity microphones. Multiple microphone arrays
were deployed in three orientations: circular arcs, linear offsets from the jet-axis centerline, and linear offsets from the jet shear layer.
The microphone arrays ranged from distances 10 ft outside the shear layer to 4000 ft from the aircraft, with angular positions ranging from 0° (aircraft nose) to 160° (edge of the exhaust flow field). A description of the ground run-up acoustic measurements, data processing, and the resultant data set is provided.
8:40
1aPA2. Measurement of acoustic emissions from F-35B vertical landing operations. Micah Downing, Michael James (Blue Ridge
Res. and Consulting, 29 N. Market St., Ste. 700, Asheville, NC 28801, micah.downing@blueridgeresearch.com), Kent Gee, Brent
Reichman (Brigham Young Univ., Provo, UT), Richard McKinley (Air Force Res. Lab., Wright-Patterson AFB, OH), and Allan Aubert
(Naval Air Warfare Ctr., Patuxent River, MD)
A multi-organizational effort led by the Air Force Research Laboratory conducted acoustic emissions measurements from vertical
landing operations of the F-35B aircraft at Marine Corps Air Station Yuma, Arizona, in September 2013. These measurements followed
American National Standards Institute/Acoustical Society of America S12.75-2012 to collect noise data from vertical landing operations for community noise models and noise exposures to aircraft personnel. Three circular arcs and two vertical microphone arrays were deployed for these measurements. The circular microphone arrays ranged from 250 ft to 1000 ft from the touchdown point. A description of the vertical landing acoustic measurements, data processing, preliminary data analysis, the resultant dataset, and a summary of results will be provided.
a summary of results will be provided.
9:00
1aPA3. Acoustic emissions from flyover measurements of F-35A and F-35B aircraft. Richard L. McKinley, Alan T. Wall, Hilary L.
Gallagher (Battlespace Acoust. Branch, Air Force Res. Lab., 711 HPW/RHCB, 2610 Seventh St., Bldg 441, Wright-Patterson AFB, OH,
richard.mckinley.1@us.af.mil), Christopher M. Hobbs, Juliet A. Page, and Joseph J. Czech (Wyle Labs., Inc., Arlington, VA)
Acoustic emissions of F-35A and F-35B aircraft flyovers were measured in September 2013, in a multi-organizational effort led by
the Air Force Research Laboratory. These measurements followed American National Standards Institute/Acoustical Society of America
S12.75-2012 guidance on aircraft flyover noise measurements. Measurements were made from locations directly under the flight path to
12,000 ft away with microphones on the ground, 5 ft, and 30 ft high. Vertical microphone arrays suspended from cranes measured noise
from on the ground up to 300 ft above the ground. A linear ground-based microphone array measured noise directly along the flight
path. In total, data were collected at more than 100 unique locations. Measurements were repeated six times for each flyover condition.
Preliminary results are presented to demonstrate the repeatability of noise data over measurement repetitions, assess data quality, and
quantify community noise exposure models.
9:20
1aPA4. Three-stream jet noise measurements and predictions. Brenda S. Henderson (Acoust., NASA, MS 54-3, 21000 Brookpark
Rd., Cleveland, OH 44135, brenda.s.henderson@nasa.gov) and Stewart J. Leib (Ohio Aerosp. Inst., Cleveland, OH)
An experimental and numerical investigation of the noise produced by high-subsonic three-stream jets was conducted. The exhaust
system consisted of externally mixed-convergent nozzles and an external plug. Bypass- and tertiary-to-core area ratios between 1 and
1.75, and 0.4 and 1.0, respectively, were studied. Axisymmetric and offset tertiary nozzles were investigated for heated and unheated
conditions. For axisymmetric configurations, the addition of the third stream was found to reduce mid- and high-frequency acoustic levels in the peak-jet-noise direction, with greater reductions at the lower bypass-to-core area ratios. The addition of the third stream also
decreased peak acoustic levels in the peak-jet-noise direction for intermediate bypass-to-core area ratios. For the offset configurations,
an s-duct was found to increase acoustic levels relative to those of the equivalent axisymmetric-three-stream jet while half-duct configurations produced acoustic levels similar to those for the axisymmetric jet for azimuthal observation locations of interest. Comparisons of
noise predictions with acoustic data are presented for selected unheated configurations. The predictions are based on an acoustic analogy
approach with mean flow interaction effects accounted for using a Green’s function, computed in terms of its coupled azimuthal modes,
and a source model previously used for round and rectangular jets.
9:40
1aPA5. Acoustic interaction of turbofan exhaust with deflected control surface for blended wing body airplane. Dimitri Papamoschou (Mech. and Aerosp. Eng., Univ. of California, Irvine, 4200 Eng. Gateway, Irvine, CA 92697-3975, dpapamos@uci.edu) and Salvador Mayoral (Mech. and Aerosp. Eng., Univ. of California, Irvine, Irvine, CA)
Small-scale experiments simulated the elevon-induced jet scrubbing noise of the Blended-Wing-Body platform with a bypass ratio
ten turbofan nozzle installed above the wing. The elevon chord length at the interaction zone was similar to the exit fan diameter of the
nozzle. The study encompassed variable nozzle position, variable elevon deflection, removable inboard fins, and two types of nozzles—
plain and chevron. Far-field microphone surveys were conducted underneath the wing. The interaction between the jet and the elevon
produces excess noise that intensifies with increasing elevon deflection. When the elevon trailing edge is near the edge of the jet, excess
noise is manifested as a low-frequency bump on the sound pressure level spectrum. An empirical model for this excess noise is presented. The interaction noise becomes severe, and elevates the entire spectrum, when the elevon intrudes significantly into the jet flow.
The increase in effective perceived noise level (EPNL) falls on well-defined trends when correlated versus the penetration of the elevon
trailing edge into the flow field of the isolated jet. The cumulative takeoff EPNL can increase by as much as 19 dB, underscoring the
potentially detrimental effects of jet-elevon interaction on noise compliance.
10:00–10:20 Break
10:20
1aPA6. Comparison of upside-down microphone with flush mounted microphone configuration. Per Rasmussen (G.R.A.S. Sound
& Vib. A/S, Skovlytoften 33, Holte 2840, Denmark, pr@gras.dk)
Measurement of fly-over aircraft noise is often performed using microphones mounted in an upside-down configuration, with the microphone placed 7 mm above a hard reflecting surface. This method assumes that most of the sound arrives at the back of the microphone within an angle of ±60 degrees. The same microphone configuration is proposed for installed and uninstalled jet-engine tests, in which case, however, the incidence angle for the microphone may be in the range of 60–85 degrees. The response of the upside-down microphone configuration is compared with flush-mounted microphones as a reference. The influence of microphone diameter (ranging from 1/8 in. to 1/2 in.) is compared in the different configurations, and the effect of windscreens is investigated.
10:40
1aPA7. Active control of noise from hot, supersonic turbulent jets. Tim Colonius, Aaron Towne (Mech. Eng., Caltech, 1200 E. California Blvd., Pasadena, CA 91125, colonius@caltech.edu), Robert H. Schlinker, Ramons A. Reba, and Dan Shannon (Thermal and Fluid
Sci. Dept., United Technologies Res. Ctr., East Hartford, CT)
We report on an experimental and reduced-order modeling study aimed at reducing mixing noise in hot supersonic jets relevant to
military aircraft. A spinning valve is used to modulate four injection nozzles near the main jet nozzle lip over a range of frequencies and
mass flow rates. Diagnostics include near-, mid-, and far-field microphone arrays aimed at measuring the effect of actuation on the near-field turbulent wavepacket structures and their correlation with mixing noise. The actuators provide more than 4 dB noise reduction at
peak frequencies in the aft arc, and up to 2 dB reduction in OASPL. Experiments are performed to contrast the performance of steady
and unsteady blowing with different amplitudes. The results to date suggest that the noise reduction is primarily associated with attenuated wave packet activity associated with the rapidly thickened shear layers that occur with both steady and unsteady blowing. Mean
flow surveys are also performed and serve as inputs to reduced-order models for the wave packets based on parabolized stability equations. These models are in turn used to corroborate the experimental evidence suggesting mechanisms of noise suppression in the actuated flow.
11:00
1aPA8. Efficient jet noise models using the one-way Euler equations.
Aaron Towne and Tim Colonius (Dept. of Mech. and Civil Eng., California
Inst. of Technol., 1200 E California Blvd., MC 107-81, Pasadena, CA
91125, atowne@caltech.edu)
Experimental and numerical investigations have correlated large-scale
coherent structures in turbulent jets with acoustic radiation to downstream
angles, where sound is most intense. These structures can be modeled as linear instability modes of the turbulent mean flow. The parabolized stability
equations have been successfully used to estimate the near-field evolution of
these modes, but are unable to properly capture the acoustic field. We have
recently developed an efficient method for calculating these linear modes
that properly captures the acoustic field. The linearized Euler equations are
modified such that all upstream propagating acoustic modes are removed
from the operator. The resulting equations, called one-way Euler equations,
can be stably and efficiently solved in the frequency domain as a spatial initial value problem in which initial perturbations are specified at the flow
inlet and propagated downstream by integrating the equations. We demonstrate the accuracy and efficiency of the method by using it to model sound
generation and propagation in jets. The results are compared to accurate
large-eddy-simulation data for both subsonic and supersonic jets.
11:15
1aPA9. A new method of estimating acoustic intensity applied to the
sound field near a military jet aircraft. Trevor A. Stout, Kent L. Gee, Tracianne B. Neilsen, Derek C. Thomas, Benjamin Y. Christensen (Phys. and
Astronomy, Brigham Young Univ., 688 North 500 East, Provo, UT 84606,
tstout@byu.edu), and Michael M. James (Blue Ridge Res. and Consulting
LLC, Asheville, NC)
Intensity probes are traditionally made up of closely spaced microphones, with the finite-difference method used to approximate acoustic
intensity. This approximation is not reliable approaching the Nyquist frequency limit determined by microphone spacing. However, the new phase
and amplitude estimation (PAGE) method allows for accurate intensity
approximation far above this limit. The PAGE method is applied to measurements from a three-dimensional intensity probe, which took data to the
sideline and aft of a tethered F-22A Raptor. It is shown that the PAGE
method produces physically meaningful intensity approximations for frequencies up to about 6 kHz, while the finite-difference method is only reliable up to about 2 kHz. [Work supported by ONR.]
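The bandwidth gap between the two estimators can be illustrated with a single plane wave. A two-microphone finite-difference estimate of the phase gradient effectively measures sin(kΔx) rather than kΔx, so it is increasingly biased as kΔx approaches π, whereas a PAGE-style estimate built from the unwrapped cross-spectral phase recovers kΔx directly. A toy sketch of this bias factor, not the authors' actual processing chain:

```python
import math

def fd_bias(f, dx, c=343.0):
    """Bias factor of a finite-difference phase-gradient estimate for a
    plane wave: estimated k / true k = sin(k*dx) / (k*dx)."""
    k = 2 * math.pi * f / c
    return math.sin(k * dx) / (k * dx)

def page_bias(f, dx, c=343.0):
    """With the unwrapped cross-spectral phase, a PAGE-style estimate
    recovers k*dx exactly for a single plane wave (bias factor 1)."""
    return 1.0

dx = 0.025                       # 25 mm microphone spacing (illustrative)
f_nyq = 343.0 / (2 * dx)         # spatial Nyquist frequency, ~6.9 kHz
# At half the spatial Nyquist frequency the FD estimate is already ~36% low,
# while the PAGE-style estimate is unbiased for this ideal field.
bias_fd = fd_bias(f_nyq / 2, dx)
```

This mirrors the abstract's observation that finite differences degrade well below the spacing-determined limit, while phase unwrapping extends the usable band.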
11:30
1aPA10. Three transformations of a crackling jet noise waveform and
their potential implications for quantifying the “crackle” percept. S.
Hales Swift (School of Aeronautics and Astronautics, Purdue Univ., 2286
Yeager Rd., West Lafayette, IN 47906, hales.swift@gmail.com), Kent L.
Gee, and Tracianne B. Neilsen (Dept. of Phys. and Astronomy, Brigham
Young Univ., Provo, UT)
In the 1975 paper by Ffowcs Williams et al. on jet “crackle,” there are
several potentially competing descriptors—including a qualitative description of the sound quality or percept, a statistical measure, and commentary
on the relation of the presence of shocks to the sound’s quality. These
descriptors have led to disparate conclusions about what constitutes a
crackling jet, waveform, or sound quality. This presentation considers three
modifications of a jet noise waveform that exhibits a crackling sound quality
and initially satisfies all three definitions. These modifications primarily alter the statistical distributions of the pressure waveform or of its first time difference, in order to demonstrate how they do or do not
correspond to changes in the sound quality of the waveform. The result,
although preliminary, demonstrates that the crackle percept is tied to the statistics of the pressure difference waveform instead of the pressure waveform
itself.
MONDAY MORNING, 27 OCTOBER 2014
MARRIOTT 5, 9:30 A.M. TO 12:00 NOON
Session 1aSC
Speech Communication: Speech Processing and Technology (Poster Session)
Michael Kiefte, Chair
Human Communication Disorders, Dalhousie University, 1256 Barrington St., Halifax, NS B3J 1Y6, Canada
All posters will be on display from 9:30 a.m. to 12:00 noon. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 9:30 a.m. to 10:45 a.m. and contributors of even-numbered papers will be at their
posters from 10:45 a.m. to 12:00 noon.
Contributed Papers
1aSC1. Locus equations estimated from a corpus of running speech. Michael Kiefte (Human Commun. Disord., Dalhousie Univ., 1256 Barrington St., Halifax, NS B3J 1Y6, Canada, mkiefte@dal.ca) and Terrance M. Nearey (Linguist, Univ. of Alberta, Edmonton, AB, Canada)
Locus equations, or the linear relationship between onset and vowel second-formant frequency F2 in terms of slope and y-intercept, have been presented as possible invariant correlates to consonant place of articulation
[e.g., Sussman et al. (1998). Behav. Brain Sci. 21, 241–299]. In the current
study, formant measurements were extracted from both stressed and
unstressed vowels taken from a database of spontaneous and read speech.
Locus equations were estimated for several places of articulation of the preceding consonant. In addition, optimal time frames for estimating locus
equations are determined with reference to automatic classification of consonant place of articulation as well as vowel identification. Formant frequencies are first measured at multiple time frames—both before and after
voicing onset in the case of voiceless plosives—to find the pair of time
frames that best estimates place of articulation via discriminant analysis and
other classification methods. In addition, locus-equation slopes are compared between stressed and unstressed vowels, as well as between spontaneous and read speech samples. The role of total vowel duration across these contexts is also described. The evaluation of several strategies for optimizing the automatic extraction of formant frequencies from running speech is also reported. [Work supported by SSHRC.]
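A locus equation is simply an ordinary least-squares fit of F2 at consonant-vowel onset against F2 at the vowel midpoint, pooled across vowels for one place of articulation; the fitted slope and y-intercept are the place-dependent parameters. A minimal sketch on synthetic values (the numbers are illustrative, not the corpus data):

```python
import numpy as np

# Hypothetical F2 measurements (Hz) for one consonant place, across vowels.
f2_mid = np.array([2300.0, 2000.0, 1700.0, 1400.0, 1100.0, 900.0])

# Generate onset F2 from assumed locus parameters plus measurement noise.
rng = np.random.default_rng(0)
true_slope, true_intercept = 0.6, 700.0       # illustrative locus parameters
f2_onset = true_intercept + true_slope * f2_mid + rng.normal(0.0, 10.0, f2_mid.size)

# The locus equation: regress onset F2 on midpoint F2.
slope, intercept = np.polyfit(f2_mid, f2_onset, 1)
```

A slope near 1 indicates heavy coarticulation (onset tracks the vowel), while a slope near 0 indicates a fixed locus for that place of articulation.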
1aSC2. Formant trajectory analysis using dynamic time warping: Preliminary results. Kirsten T. Regier (Linguist, Indiana Univ., 3201 W
Woodbridge Dr., Muncie, IN 47304, krtodt@indiana.edu)
In English, there are at least two mechanisms that affect vowel duration—vowel identity and postvocalic consonant voicing. Previous studies
have shown that these two mechanisms have independent effects on vowel
duration (Port 1981, Todt 2010). This study presents preliminary results on
the use of dynamic time warping to distinguish between the effects of vowel
identity and postvocalic consonant voicing on the formant trajectories of
English front vowels. Using PraatR (Albin 2014), formant trajectories are
extracted from sound files in Praat and imported into R, where the dynamic
time warping analysis is conducted using the dtw package (Giorgino 2009).
Albin, A. L. (2014). PraatR: An architecture for controlling the phonetics software “Praat” with the R programming language. JASA 135, 2198. Giorgino, T. (2009). Computing and visualizing dynamic time warping alignments in R: The dtw package. J. Stat. Software 31(7), 1–24. Port, R. F. (1981). Linguistic timing factors in combination. JASA 69(1), 262–274. R Core Team (2014). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Todt, K. R. (2010). The production of English front vowels by Spanish speakers: A study of vowel duration based on vowel tenseness and consonant voicing. JASA 128, 2489.
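The dtw package used in the study implements the standard dynamic-programming recurrence, whose core fits in a few lines. A self-contained sketch with an absolute-difference local cost and the basic symmetric step pattern (not the package's full feature set):

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Basic step pattern: diagonal match, insertion, or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A formant trajectory and a time-warped copy align at zero cost, which is
# what lets DTW separate trajectory shape from duration differences.
a = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
b = np.array([1.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0])   # same shape, stretched
```

This duration-invariance is the property exploited above to disentangle vowel identity from voicing-conditioned lengthening.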
1aSC3. A “pivot” model for extracting formant measurements based on
vowel trajectory dynamics. Aaron L. Albin and Wil A. Rankinen (Dept. of
Linguist, Indiana Univ., Memorial Hall 322, 1021 E 3rd St., Bloomington,
IN 47405-7005, aaalbin@indiana.edu)
Formant measurements are commonly extracted at fixed fractions across
a vowel’s duration (e.g., the 1/2 point for a monophthong and the 1/3 and 2/3 points for a diphthong). This approach tacitly relies on the convenience
assumption that a speaker always maximally approximates the intended
acoustic target at roughly the same point across a vowel’s duration. The
present paper proposes an alternate method whereby every formant point
sampled within a vowel is considered as a possible "pivot" (i.e., turning
point), with monophthongs modeled as having one pivot and diphthongs
modeled as having two pivots. The optimal pivot for the vowel is then determined by fitting regression lines to the formant trajectory and comparing the
goodness-of-fit of these lines to the raw formant data. When applied to a
corpus of an American English dialect, the resulting measurements were
found to be significantly correlated with previous methods. This suggests
that the aforementioned convenience assumption is unnecessary and that the
proposed model, which is more faithful to our understanding of articulatory
dynamics, is a viable alternative. Moreover, rather than being assumed a priori, the location of the measurement can be treated as an empirical question
in its own right.
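The pivot search described above can be made concrete with a short sketch. This is an illustrative reconstruction under stated assumptions (a single pivot, least-squares lines on each side, summed squared error as the goodness-of-fit), not the authors' implementation:

```python
import numpy as np

def best_pivot(times, formant):
    """Try every interior sample as the turning point of a one-pivot
    (monophthong) trajectory: fit a least-squares line to each side of
    the candidate pivot and return the pivot index with the lowest
    summed squared error. The function name and the equal weighting of
    the two fits are illustrative choices."""
    times = np.asarray(times, float)
    formant = np.asarray(formant, float)

    def sse(t, f):
        # residual sum of squares of a straight-line fit
        slope, intercept = np.polyfit(t, f, 1)
        return float(np.sum((f - (slope * t + intercept)) ** 2))

    errors = {k: sse(times[:k + 1], formant[:k + 1])
                 + sse(times[k:], formant[k:])
              for k in range(1, len(times) - 1)}
    return min(errors, key=errors.get)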
1aSC4. Exploiting second-order statistics improves statistical learning
of vowels. Fernando Llanos (School of Lang. and Cultures, Purdue Univ.,
220 FERRY ST APT 6, Lafayette, IN 45901, fllanos@purdue.edu), Yue
Jiang, and Keith R. Kluender (Dept. of Speech, Lang. and Hearing Sci., Purdue Univ., West Lafayette, IN)
Unsupervised clustering algorithms were used to evaluate three models
of statistical learning of minimal contrasts between English vowel pairs.
The first two models employed only first-order statistics with assumptions
of uniform [M1] or Gaussian [M2] distributions of vowels in an F1-F2
space. The third model [M3] employed second-order statistics by encoding
covariance between F1 and F2. Acoustic measures of F1/F2 frequencies for
12 vowels spoken by 139 men, women, and children (Hillenbrand et al.
1995) were used as input to the models. Effectiveness of each model was
tested for each minimal-pair contrast across 100 simulations. Each
2082
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
simulation consisted of two centroids that adjusted on a trial-by-trial basis
as 1000 F1/F2 pairs were input to the models. With addition of each pair,
centroids were reallocated by a k-means algorithm, an unsupervised clustering algorithm that provides an optimal partition of the space into uniformly sized convex cells. The first-order Gaussian model [M2] performed better than the uniform model [M1] for six of seven minimal pairs. The second-order model [M3] was significantly superior to both first-order models
for every pair. Results have implications for optimal perceptual learning of
phonetic differences in ways that respect lawful covariance across vocal
tract lengths that vary across talkers.
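The gain from second-order statistics can be illustrated by the distance measure a covariance-aware model implies. A hedged sketch (the function and its use are illustrative, not the paper's code): with a full F1-F2 covariance matrix, the distance from a token to a category centre becomes a Mahalanobis distance, which reduces to the first-order Euclidean case when the covariance is the identity.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Distance from an F1/F2 token to a vowel-category centre that
    accounts for the category's F1-F2 covariance (second-order
    statistics); with cov equal to the identity matrix it reduces to
    ordinary Euclidean distance (first-order)."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

A token far from a centre along a high-variance direction of the F1-F2 cloud is treated as no more deviant than a nearby token along a low-variance direction, which is what lets covariance-aware clustering separate overlapping vowel categories.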
1aSC5. Analysis of acoustic to articulatory speech inversion for natural
speech. Ganesh Sivaraman (Elec. & Comput. Eng., Univ. of Maryland College Park, 7704 Adelphi Rd., Apt 11, Hyattsville, MD 20783, ganesa90@
umd.edu), Carol Espy-Wilson (Elec. & Comput. Eng., Univ. of Maryland
College Park, College Park, MD), Vikramjit Mitra (SRI Int., Menlo Park,
CA), Hosung Nam (Korea Univ., Seoul, South Korea), and Elliot Saltzman
(Physical Therapy & Athletic Training, Boston Univ., New Haven,
Connecticut)
Speech inversion is a technique to estimate vocal tract configurations
from speech acoustics. We constructed two such systems using feedforward
neural networks. One was trained using natural speech data from the XRMB
database and the second using synthetic data generated by the Haskins Laboratories TADA model that approximated the XRMB data. XRMB pellet
trajectories were first converted into vocal tract constriction variables (TVs),
providing a relative measure of constriction kinematics (location and
degree) and synthetic TV data was obtained directly using TADA. The natural and synthetic speech inversion systems were trained as TV estimators
using these respective sets of acoustic and TV data. TV-estimators were first
tested using previously collected acoustic data on the utterance “perfect
memory” spoken at slow, normal, and fast rates. The TV estimator trained
on XRMB data (but not on TADA data) was able to recover the tongue tip
gesture for /t/ in the fast utterance despite the gesture occurring partly during
the acoustic silence of the closure. Further, the XRMB system (but not the
TADA system) could distinguish between bunched and retroflexed /r/.
Finally, we compared the performance of the XRMB system with a set of
independently trained speaker-dependent systems (using the XRMB database) to understand the role of speaker-specific differences in the partitioning of variability across acoustic and articulatory spaces.
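The acoustic-to-TV mapping can be sketched as a plain feedforward pass. Everything here (layer count, tanh hidden units, linear outputs, the function name) is an illustrative assumption, not the architecture the authors trained:

```python
import numpy as np

def mlp_forward(acoustic_frame, weights, biases):
    """Feedforward pass of a TV estimator: acoustic features in,
    tract-variable (TV) estimates out. Hidden layers use tanh; the
    output layer is linear so TV values are unbounded."""
    h = np.asarray(acoustic_frame, float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(W @ h + b)               # hidden layers
    return weights[-1] @ h + biases[-1]      # linear TV outputs
```

Training (backpropagation against measured or TADA-generated TVs) is omitted; the sketch only shows the inference step that turns one acoustic frame into constriction estimates.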
1aSC6. Testing AutoTrace: A machine-learning approach to automated
tongue contour data extraction. Gustave V. Hahn-Powell (Linguist, Univ.
of Arizona, 2850 N Alvernon Way, Apt 17, Tucson, AZ 85712, hahnpowell@email.arizona.edu) and Diana Archangeli (Linguist, Univ. of Hong
Kong, Tucson, Arizona)
While ultrasound provides a remarkable tool for tracking the tongue’s
movements during speech, it has yet to emerge as the powerful research tool
it could be. A major roadblock is that the means of appropriately labeling
images is a laborious, time-intensive undertaking. In earlier work, Fasel and
Berry (2010) introduced a "translational" deep belief network (tDBN)
approach to automated labeling of ultrasound images of the tongue, and
tested it against a single-speaker set of 3209 images. This study tests the
same methodology against a much larger data set (about 40,000 images),
using data collected for different studies with multiple speakers and multiple
languages. Retraining a “generic” network with a small set of the most erroneously labeled images from language-specific development sets resulted in
an almost three-fold increase in precision in the three test cases examined.
1aSC7. Usability of SpeechMark® landmark analysis system for teaching speech acoustics. Marisha Speights and Suzanne E. Boyce (Dept. of Commun. Sci. and Disord., Univ. of Cincinnati, PO Box 670379, Cincinnati, OH 45267-0379, speighma@mail.uc.edu)

Learning about the intersection of articulation and acoustics, and particularly acoustic measurement techniques, is challenging for students in Linguistics, Psychology, and Communication Sciences and Disorders curricula. There is a steep learning curve before students can apply the material to an interesting research question; for those in more applied programs such as Communication Disorders or ESL, there is an additional challenge in envisioning how the knowledge can be applied in changing behavior. The availability of software tools such as WaveSurfer, Praat, Audacity, TF32, and the University College London software suite, among others, has made it possible for instructors to design laboratory experiences in visualization, manipulation, and measurement of speech acoustics. Many students, however, have found these tools complex for their first exposure to taking scientific measurements. The SpeechMark® acoustic landmark analysis system has been developed to automate the detection of specific acoustic events important for speech, such as voicing offset and onset, stop bursts, fricative noise, and vowel midpoints, and to provide automated formant frequency measurement used for vowel space analysis. This paper describes a qualitative multiple case study in which seven teachers of speech acoustics were interviewed to explore whether such pre-analysis of the acoustic signal could be useful for teaching.

1aSC8. Surveying the nasal peak: A1 and P0 in nasal and nasalized vowels. Will Styler and Rebecca Scarborough (Linguist, Univ. of Colorado, 295 UCB, Boulder, CO 80309, william.styler@colorado.edu)

Nasality can be measured in the acoustical signal using A1-P0, where A1 is the amplitude of the harmonic under F1, and P0 is the amplitude of a low-frequency nasal peak (~250 Hz) (Chen 1997). In principle, as nasality increases, P0 goes up and A1 is damped, yielding lower A1-P0. However, the details of the relationship between A1 and P0 in natural speech have not been well described. We examined 4778 vowels in French and English elicited words, measuring A1, P0, and the surrounding harmonic amplitudes, and comparing oral and nasal tokens (phonemic nasal vowels in French, and coarticulatorily nasalized vowels in English). Linear mixed-effects regressions confirmed that A1-P0 is predictive of nasality: 4.16 dB lower in English nasal contexts relative to oral and 5.73 dB lower in French (both p<0.001). In English, as expected, P0 increased 1.42 dB and A1 decreased 3.93 dB (p<0.001). In French, however, both A1 and P0 lowered with nasality (5.73 and 0.93 dB, respectively, p<0.001). Even so, in both languages, P0 became more prominent relative to adjacent harmonics in nasal vowels. These data reveal cross-linguistic differences in the acoustic realization of nasal vowels and suggest P0 prominence as a potential perceptual cue to be investigated.

1aSC9. Impact of mismatch conditions between mobile phone recordings on forensic voice comparison. Balamurali B. T. Nair, Esam A. Alzqhoul, and Bernard J. Guillemin (Dept. of Elec. and Comput. Eng., The Univ. of Auckland, Bldg. 303, Rm. 240, Level 2, Sci. Ctr., 38 Princes St., Auckland 1142, New Zealand, bbah005@aucklanduni.ac.nz)

Mismatched conditions between the recordings of suspect, offender, and relevant background population represent a typical scenario in real forensic casework. In this paper, we investigate the impact of mismatched conditions associated with mobile phone speech recordings on forensic voice comparison (FVC). The two major mobile phone technologies currently in use are the Global System for Mobile Communications (GSM) and Code Division Multiple Access (CDMA). These are fundamentally different in the way in which they handle the speech signal, which in turn will lead to significant mismatch between speech recordings. Our results suggest that the resulting degradation in the accuracy of a FVC analysis can be very significant (as high as 150%). Surprisingly, though, our results also suggest that the reliability of a FVC analysis may actually improve. We propose a strategy for lessening this impact by passing the suspect speech data through the GSM or CDMA codecs, depending on the network origin of the offender data, prior to the FVC analysis. Though this goes a long way toward mitigating the impact (a reduction in loss of accuracy from 150% to 80%), it is still not as good as analysis under matched conditions.

1aSC10. 99.8 percent accuracy achieved on Peterson and Barney (1952) acoustic measurements. Michael A. Stokes (R & D, Waveform Commun., 3929 Graceland Ave., Indianapolis, IN 46208, waveform.model@yahoo.com)

In 2012, a paper was presented (Reetz, 2012) discussing the lack of working phonemic models, which was an acknowledgment of an earlier presentation (Ladefoged, 2004) discussing 50+ years of phonetics and phonology. These presentations highlighted the successes in phonological research over the last 60 and 50 years, respectively, but both concluded that there is still no recognized working model of phoneme identification. This presentation will discuss the Waveform Model of Vowel Perception (Stokes, 2009) achieving 99.8% accuracy on the Peterson and Barney (1952) dataset using 30 conditional statements across all ten vowels produced by the 33 males (509/510 for the vowels identified by humans at 100%). These results replicate and improve on the 99.2% achieved across the vowels produced by the males in the Hillenbrand (1995) dataset (Stokes, 2011). As a logical progression, ELBOW was developed in 2013 using the algorithm developed for static data to identify streaming vowel productions, achieving over 91% before introducing improvements. Beyond ELBOW, it was essential to replicate earlier results on the most cited dataset in the literature. The Waveform Model has now replicated human performance across multiple datasets and is being successfully introduced into automatic speech recognition applications.

1aSC11. Lombard effect based speech analysis across noisy environments for voice communications with cochlear implant subjects. Jaewook Lee, Hussnain Ali, Ali Ziaei, and John H. Hansen (Elec. Eng., Univ. of Texas at Dallas, 800 West Campbell Rd., EC33, Office ECSN 4.414, Richardson, TX 75080, jaewook@utdallas.edu)

Changes in speech production, including vocal effort based on auditory feedback, are an important research domain for improved human communication. For example, in the presence of environmental noise, a speaker experiences the well-known phenomenon known as the Lombard effect. The Lombard effect has been studied for normal-hearing listeners as well as for automatic speech/speaker recognition systems, but not for cochlear implant (CI) recipients. The objective of this study is to analyze the speech production of CI users with respect to environmental change. We observe and study this effect using mobile personal audio recordings from continuous single-session audio streams collected over an individual's daily life. Prior advancements in this domain include the "Prof-Life-Log" longitudinal study at UT Dallas. Four CI speakers participated by producing read and spontaneous speech in six naturalistic noisy environments (e.g., office, car, outdoor, cafeteria). A number of speech production parameters (e.g., short-time log-energy, fundamental frequency) known to be sensitive to Lombard speech were measured for both communicative and non-communicative speech as a function of environment. Results indicate that variability in the speech production parameters shifted upward with an increase in background noise level. Overall, higher values of the acoustic variables were observed in inter-personal conversations relative to non-conversational speech.

168th Meeting: Acoustical Society of America
MONDAY MORNING, 27 OCTOBER 2014
INDIANA G, 8:40 A.M. TO 11:15 A.M.
Session 1aSP
Signal Processing in Acoustics: Sampling Methods for Bayesian Signal Processing
Cameron J. Fackler, Cochair
Graduate Program in Architectural Acoustics, Rensselaer Polytechnic Institute, 110 8th St, Troy, NY 12180
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
Invited Papers
8:40
1aSP1. Statistical sampling and Bayesian illumination waveform design for multiple-hypothesis target classification in cognitive
signal processing. Grace A. Clark (Grace Clark Signal Sci., 532 Alden Ln., Livermore, CA 94550, clarkga1@comcast.net)
Statistical sampling algorithms are widely used in Bayesian signal processing for drawing real-valued independent, identically distributed (i.i.d.) samples from a desired distribution. This paper focuses on the more difficult problem of how to draw complex correlated
samples from a distribution specified by both an arbitrary desired probability density function and a desired power spectral density. This
problem arises in cognitive signal processing. A cognitive signal processing system (for example, in radar or sonar) is one that observes
and learns from the environment; then uses a dynamic closed-loop feedback mechanism to adapt the illumination waveform so as to provide system performance improvements over traditional systems. Current cognitive radar algorithms focus only on target impulse
responses that are Gaussian distributed to achieve mathematical tractability. This research generalizes the cognitive radar target classifier
to deal effectively with arbitrary non-Gaussian distributed target responses. The key contribution lies in the use of a kernel density estimator and an extension of a new algorithm by Nichols et al. for drawing complex correlated samples from target distributions specified
by both an arbitrary desired probability density function and a desired power spectral density. Simulations using non-Gaussian target
impulse response waveforms demonstrate very effective classification performance.
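One common way to draw samples with both a prescribed marginal distribution and a prescribed power spectrum is rank remapping of spectrally shaped Gaussian noise. The sketch below illustrates that general idea only; it is not the kernel-density algorithm of Nichols et al. that the abstract extends, and a single remapping pass perturbs the spectrum slightly (iterating the shape-then-remap steps reduces that error):

```python
import numpy as np

def colored_nongaussian(n, psd_shape, marginal_sampler, rng):
    """Draw n correlated samples with a target marginal and spectrum.

    White Gaussian noise is shaped to the target power spectrum in the
    frequency domain, then the desired marginal is imposed by rank
    remapping: sorted draws from the target distribution replace the
    shaped series in rank order, preserving its ordering structure.
    """
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white) * np.sqrt(np.asarray(psd_shape, float))
    shaped = np.fft.irfft(spec, n)            # correlated Gaussian series
    target = np.sort(marginal_sampler(n))     # draws from desired marginal
    out = np.empty(n)
    out[np.argsort(shaped)] = target          # impose marginal, keep ranks
    return out
```

The output is an exact permutation of the marginal draws (so the desired probability density is matched exactly), while its ordering inherits the correlation structure of the spectrally shaped series.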
9:00
1aSP2. Bayesian inversion and sequential Monte Carlo sampling techniques applied to nearfield acoustic sensor arrays. Mingsian
R. Bai (Power Mech. Eng., Tsing Hua Univ., 101 sec.2, Kuang_Fu Rd., Hsinchu 30013, Taiwan, msbai63@gmail.com), Amal Agarwal
(Power Mech. Eng., Tsing Hua Univ., Mumbai, India), Ching-Cheng Chen, and Yen-Chih Wang (Power Mech. Eng., Tsing Hua Univ.,
Taipei, Taiwan)
This paper demonstrates that inverse source reconstruction can be performed using a methodology of particle filters that relies primarily on the Bayesian approach of parameter estimation. The proposed approach is applied in the context of nearfield acoustic holography based on the equivalent source method (ESM). A state-space model is formulated in light of the ESM. The parameters to estimate
are amplitudes and locations of the equivalent sources. The parameters constitute the state vector which follows a first-order Markov
process with the transition matrix being the identity for every frequency-domain data frame. The implementation of recursive Bayesian
filters involves a sequential Monte Carlo sampling procedure that treats the estimates as point masses with a discrete probability mass
function (PMF) which evolves with iteration. It is evident from the results that the inclusion of the appropriate prior distribution is crucial in the parameter estimation.
9:20
1aSP3. Bayesian sampling for practical design of multilayer microperforated panel absorbers. Cameron J. Fackler and Ning Xiang
(Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., 110 8th St, Greene Bldg., Troy, NY 12180, facklc@rpi.edu)
Bayesian sampling is applied to produce practical designs for microperforated panel acoustic absorbers. Microperforated panels
have the capability to produce acoustic absorbers with very high absorption coefficients, without the use of porous materials. However,
the absorption produced by a single panel is limited to a narrow frequency range, particularly at high absorption coefficient values. To
provide broadband absorption, multiple microperforated panel layers may be combined into a multilayer absorber. To design such an
absorber, the necessary number of layers must be determined and four design parameters must be specified for each layer. Using Bayesian model selection and parameter estimation, this work presents a practical method for designing multilayer microperforated panel
absorbers. Particular attention is paid to aspects of the underlying sampling method that enable automatic handling of design constraints
such as limitations of the manufacturing process and availability of raw materials.
9:40
1aSP4. Particle filtering for robust modal identification and sediment sound speed estimation. Nattapol Aunsri and Zoi-Heleni
Michalopoulou (Mathematical Sci., New Jersey Inst. of Technol., 323 ML King Blvd., Newark, NJ 07102, michalop@njit.edu)
Bayesian methods provide a wealth of information on acoustic features of a propagation medium and the uncertainty surrounding
their estimation. In previous work, we showed how sequential Bayesian (particle) filtering can be used to extract dispersion characteristics of a waveguide. Here, we utilize these characteristics for the estimation of geoacoustic properties of sediments. As expected, the
method relies on accurate identification of modes. The effect of correct/erroneous mode identification on geoacoustic estimates is quantified and approaches are developed for robust modal recognition in conjunction with the particle filter. Additionally, the statistical behavior of the noise present in the data measurements is further investigated with more complex noise modeling leading to improved results.
The approaches are validated with both synthetic and real data collected during the Gulf of Mexico Experiment. [Work supported by
ONR.]
10:00–10:20 Break
10:20
1aSP5. Efficient trans-dimensional Bayesian inversion for geoacoustic profile estimation. Stan E. Dosso, Jan Dettmer, Gavin Steininger (School of Earth & Ocean Sci, Univ. of Victoria, PO Box 1700, Victoria, BC V8W 3P6, Canada, sdosso@uvic.ca), and Charles
W. Holland (Appl. Res. Lab., The Penn State Univ., State College, PA)
This paper considers sampling efficiency of trans-dimensional (trans-D) Bayesian inversion based on the reversible-jump Markov-chain Monte Carlo (rjMCMC) algorithm, with application to seabed acoustic reflectivity inversion. Trans-D inversion is applied to sample the posterior probability density over geoacoustic parameters for an unknown number of seabed layers, providing profile estimates
with uncertainties that include the uncertainty in the model parameterization. However, the approach is computationally intensive. The
efficiency of rjMCMC sampling is largely determined by the proposal schemes applied to perturb existing parameters and to assign values for parameters added to the model. Several proposal schemes are examined, some of which appear new for trans-D geoacoustic
inversion. Perturbations of existing parameters are considered in a principal-component space based on an eigen-decomposition of the
unit-lag parameter covariance matrix (computed from successive models along the Markov chain, a diminishing adaptation). The relative efficiency of proposing new parameters from the prior versus a Gaussian distribution focused near existing values is considered. Parallel tempering, which employs a sequence of interacting Markov chains with successively relaxed likelihoods, is also considered to
increase the acceptance rate of new layers. The relative efficiency of various proposal schemes is compared through repeated inversions
with a pragmatic convergence criterion.
10:40
1aSP6. Bayesian tsunami-waveform inversion with trans-dimensional tsunami-source models. Jan Dettmer (Res. School of Earth
Sci., Australian National Univ., 3800 Finnerty Rd., Victoria, Br. Columbia V8W 3P6, Canada, jand@uvic.ca), Jakir Hossen, Phil R.
Cummins (Res. School of Earth Sci., Australian National Univ., Canberra, ACT, Australia), and Stan E. Dosso (School of Earth and
Ocean Sci., Univ. of Victoria, Victoria, BC, Canada)
This paper develops a self-parametrized Bayesian inversion to infer the spatio-temporal evolution of tsunami sources (initial sea
state) due to megathrust earthquakes. To date, tsunami-source uncertainties are poorly understood, and the effect of choices such as discretization have not been studied. The approach developed here is based on a trans-dimensional self-parametrization of the sea surface,
avoids regularization constraints and provides rigorous uncertainty estimation that accounts for model-selection ambiguity associated
with the source discretization. The sea surface is parametrized using self-adapting irregular grids, which match the local resolving power
of the data and provide parsimonious solutions for complex source characteristics. Source causality is ensured by including rupture-velocity and obtaining delay times from the Eikonal equation. The data are recorded on ocean-bottom pressure and coastal wave gauges
and predictions are based on Green-function libraries computed from ocean-basin scale tsunami models for cases that include/exclude
dispersion effects. The inversion is applied to tsunami waveforms from the great 2011 Tohoku-Oki (Japan) earthquake. The tsunami
source is strongest near the Japan trench with posterior mean amplitudes of ~5 m. In addition, the data appear sensitive to rupture velocity, which is part of our kinematic source model.
Contributed Paper
11:00
1aSP7. Model selection using Bayesian samples: An introduction to the
deviance information criterion. Gavin Steininger, Stan E. Dosso, Jan
Dettmer (SEOS, U Vic, 201 1026 Johnson St., Victoria, BC V7V 3N7, Canada, gavin.amw.steininger@gmail.com), and Charles W. Holland (SEOS, U
Vic, State College, Pennsylvania)
This paper presents the deviance information criterion (DIC) as a metric
for model selection based on Bayesian sampling approaches, with examples
from seabed geoacoustic and/or scattering inversion. The DIC uses all samples of a distribution to approximate Bayesian evidence, unlike more common measures such as the Bayesian information criterion, which uses only point estimates. Hence the DIC is more appropriate for non-linear
Bayesian inversions utilizing posterior sampling. Two examples are considered: determining the dominant seabed scattering mechanism (interface and/or
volume scattering), and choosing between seabed profile parameterizations
based on smooth gradients (polynomial splines) or discontinuous homogeneous layers. In both cases, the DIC is applied to trans-dimensional inversions of
simulated and measured data, utilizing reversible jump Markov chain Monte
Carlo sampling. For the first case, the DIC is found to correctly select the true
scattering mechanism for simulations, and its choice for the measured data
inversion is consistent with sediment cores extracted at the experimental site.
For the second case, the DIC selects the polynomial spline parameterization
for soft seabeds with smooth gradients. [Work supported by ONR.]
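The criterion itself is compact: with deviance D(theta) = -2 log L(theta), the DIC of Spiegelhalter et al. combines the mean posterior deviance with an effective-parameter penalty, and lower values are preferred. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def dic(log_liks, log_lik_at_mean):
    """Deviance information criterion from posterior samples.

    D(theta) = -2 log L(theta). Dbar is the mean deviance over the
    posterior samples; pD = Dbar - D(theta_bar), with theta_bar the
    posterior-mean parameters, estimates the effective number of
    parameters; DIC = Dbar + pD penalizes fit by model complexity.
    """
    deviances = -2.0 * np.asarray(log_liks, float)
    d_bar = float(deviances.mean())
    p_d = d_bar - (-2.0 * log_lik_at_mean)
    return d_bar + p_d
```

For example, dic([-1, -3], -1) evaluates to 6: a mean deviance of 4 plus an effective-parameter estimate of 2.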
MONDAY MORNING, 27 OCTOBER 2014
INDIANA F, 8:45 A.M. TO 11:55 A.M.
Session 1aUW
Underwater Acoustics: Understanding the Target/Waveguide System–Measurement and Modeling I
Kevin L. Williams, Chair
Applied Physics Lab., University of Washington, 1013 NE 40th St., Seattle, WA 98105
Chair’s Introduction—8:45
Invited Papers
8:50
1aUW1. Very-high-speed 3-dimensional modeling of littoral target scattering. David Burnett (Naval Surface Warfare Ctr., Code
CD10, 110 Vernon Ave., Panama City, FL 32407, david.s.burnett@navy.mil)
NSWC PCD has developed a high-fidelity 3-D finite-element (FE) modeling system that computes acoustic color templates (target
strength vs. frequency and aspect angle) of single or multiple realistic objects (e.g., target + clutter) in littoral environments. High-fidelity means that 3-D physics is used in all solids and fluids, including even thin shells, so that solutions include not only all propagating
waves but also all evanescent waves, the latter critically affecting the former. Although novel modeling techniques have accelerated the
code by several orders of magnitude, it takes about one day to compute an acoustic color template. However, NSWC PCD wants to be
able to compute thousands of templates quickly, varying target/environment features by small amounts, in order to develop statistically
robust classification algorithms. To accomplish this, NSWC PCD is implementing a radically different FE technology that has already
been developed and verified. It preserves all the 3-D physics but promises to accelerate the code another two to three orders of magnitude. Porting the code to an HPC center will accelerate it another one to two orders of magnitude, bringing performance to seconds per
template. The talk will briefly review the existing system and then describe the new technology.
9:10
1aUW2. Modeling three-dimensional acoustic scattering from targets near an elastic bottom using an interior-transmission formulation. Saikat Dey, William G. Szymczak (Code 7131, NRL, 4555 Overlook Ave. SW, Washington, DC 20375, saikat.dey@nrl.
navy.mil), Angie Sarkissian (Code 7130, NRL, Washington, DC), Joseph Bucaro (Excet Inc., Springfield, VA), and Brian Houston
(Code 7130, NRL, Washington, DC)
For targets near the sediment–fluid interface, the scattering response is fundamentally influenced by the characterization of the sediment in the model. We show that if the model consists of a three-dimensional elastic sediment with acoustic fluid on top, then the use of
perfectly matched layer (PML) approximation for the truncation of the infinite exterior domain for scattering applications has fundamental problems and gives erroneous results. We present a novel formulation using an interior-transmission representation of the scattering problem where the exterior truncation with PML does not induce errors in the result. Numerical examples will be presented to verify
the application of this formulation to scattering from elastic targets near a fluid–sediment interface.
9:30
1aUW3. The fluid–structure interaction technique specialized to axially symmetric targets. Ahmad T. Abawi (HLS Res., 3366
North Torrey Pines Court, Ste. 310, La Jolla, CA 92037, abawi@hlsresearch.com) and Petr Krysl (Structural Eng., Univ. of California,
San Diego, La Jolla, CA)
The fluid–structure interaction technique provides a paradigm for solving scattering from elastic targets embedded in a fluid by a
combination of finite and boundary element methods. In this technique, the finite element method is used to compute the target’s impedance matrix and the Helmholtz–Kirchhoff integral with the appropriate Green’s function is used to represent the field in the exterior medium. The two equations are coupled at the surface of the target by imposing the continuity of pressure and normal displacement. This
results in a Helmholtz–Kirchhoff boundary element equation that can be used to compute the scattered field anywhere in the surrounding
environment. This method reduces a finite element problem to a boundary element one with drastic reduction in the number of
unknowns, which translates to a significant reduction in numerical cost. This method was developed and tested for general 3D targets. In
this paper, the method is specialized to axially symmetric targets, which provides further reduction in numerical cost, and validated
using benchmark solutions.
9:50
1aUW4. A new T matrix for acoustic target scattering by elongated
objects in free-field and in bounded environments. Raymond Lim (Code
X11, NSWC Panama City Div., 110 Vernon Ave., Code X11, Panama City,
FL 32407-7001, raymond.lim@navy.mil)
The transition (T) matrix of Waterman has been very useful for
computing fast, accurate acoustic scattering predictions for axisymmetric elastic objects, but this technique is usually limited to fairly smooth objects that are not too aspherical unless complex basis functions or stabilization schemes are used. To remove this difficulty, a spherical-basis formulation adapted from approaches proposed recently by Waterman [J. Acoust. Soc. Am. 125, 42–51 (2009)] and Doicu et al. [Acoustic & Electromagnetic Scattering Analysis Using Discrete Sources, Academic Press, London, 2000] is suggested. The new method is implemented by simply transforming the high-order outgoing spherical basis functions within standard T-matrix formulations to low-order functions distributed along the object’s symmetry axis. A free-field T-matrix is produced in a nonstandard form, but computations with it become much more stable for aspherical shapes. Some advantages of this approach over Waterman’s and Doicu et al.’s approaches are noted and, despite its nonstandard form, the feasibility of extension to objects in a plane-stratified environment is demonstrated. Sample calculations for an elongated spheroid demonstrate the enhanced stability.
10:05–10:20 Break
10:20
1aUW5. Kirchhoff approximation for spheres and cylinders partially exposed at flat surfaces and application to the interpretation of backscattering. Aaron M. Gunderson, Anthony R. Smith, and Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, aaron.gunderson01@gmail.com)
For cylinders partially exposed at flat surfaces, the Kirchhoff approximation was previously evaluated analytically and compared with measured backscattering at a free surface as a function of exposure [K. Baik and P. L. Marston, IEEE J. Ocean. Eng. 33, 386–396 (2008)]. In the present research, this approach is extended to the cases of numerical integration for high-frequency backscattering by partially exposed spheres and cylinders. The cylinder case was limited to broadside illumination at grazing incidence, for which one-dimensional integration is sufficient and the limits of integration were previously discussed by Baik and Marston. In the corresponding sphere case, however, two-dimensional integration is required and the limits of integration become complicated functions of the amount of exposure and the grazing angle of the illumination. These approximations of the backscattering, while they omit Franz wave and elastic contributions, are useful for modeling the evolution of how the reflected scattering contributions depend on the target exposure. They are also useful for understanding the time evolution of specular scattering contributions. The sphere case was compared with the exact analysis of backscattering by a half-exposed rigid sphere at a free surface that also displays partially reflected Franz wave contributions. [Work supported by ONR.]
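The numerical Kirchhoff integration involved can be illustrated with a minimal free-field check, assuming one common form of the rigid-target backscattering integral, f = (ik/2pi) times the integral of (khat dot nhat) exp(-2ika cos theta) over the insonified surface; for a fully insonified sphere its magnitude should approach the geometric value a/2 at high ka. The prefactor convention, grid sizes, and parameters are assumptions for illustration only; partial exposure would change only the integration limits, which is the complicated part described in the abstract.

```python
import numpy as np

def kirchhoff_backscatter_sphere(a, k, n_theta=2000, n_phi=200):
    """Kirchhoff-approximation backscattering amplitude for a rigid sphere.

    Midpoint quadrature of f = (ik/2pi) * Int (khat.nhat) exp(-2ika cos(th)) dS
    over the insonified hemisphere (an assumed sign/prefactor convention).
    """
    d_theta = 0.5 * np.pi / n_theta
    d_phi = 2.0 * np.pi / n_phi
    theta = (np.arange(n_theta) + 0.5) * d_theta
    phi = (np.arange(n_phi) + 0.5) * d_phi
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    # khat . nhat = cos(theta); surface element dS = a^2 sin(theta) dtheta dphi
    integrand = np.cos(th) * np.sin(th) * np.exp(-2j * k * a * np.cos(th)) * a**2
    return 1j * k / (2.0 * np.pi) * integrand.sum() * d_theta * d_phi

f = kirchhoff_backscatter_sphere(a=1.0, k=50.0)
print(abs(f))  # approaches the geometric value a/2 at high ka
```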
Invited Papers
10:35
1aUW6. Acoustic ray model for the scattering from an object on the sea floor. Steven G. Kargl, Aubrey L. Espana, and Kevin L.
Williams (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, kargl@uw.edu)
Target scattering within a waveguide is recast into a ray model, where time-of-flight wave packets are tracked. The waveguide is
replaced by an equivalent set of image sources and receivers, where rays are associated with these images and interactions with the
waveguide’s boundaries are taken into account. By transforming wave packets into the frequency domain, scattering becomes a multiplication of a wave packet’s spectrum at the target location and the target’s free-field scattering amplitude. Data- and model-model comparisons for an aluminum replica of a 100-mm unexploded ordnance will be discussed. For the data-model comparisons, synthetic aperture
sonar (SAS) data were collected during Pond Experiment 2010 from this replica, where it was placed on a water-sand sediment boundary.
The model-model comparisons use the results from a hybrid 2-D/3-D model. The hybrid model combines a 2D finite-element model to
predict the scattered pressure and its derivatives in the near-field of the target, and then a 3D Helmholtz integral to propagate the pressure to the far field. The data- and model-model comparisons demonstrate the viability of using the ray model to quickly generate realistic pings suitable for both SAS and acoustic color template processing. [Research supported by SERDP and ONR.]
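The frequency-domain step of such a ray model (echo spectrum equals the incident wave-packet spectrum multiplied by the free-field scattering amplitude, summed over image paths) can be sketched as follows. The Gaussian tone burst, flat placeholder form function, path lengths, and boundary weights are illustrative assumptions, not the authors' target model or the Pond Experiment geometry.

```python
import numpy as np

fs = 100_000.0                       # sample rate, Hz (illustrative)
t = np.arange(4096) / fs
# Gaussian-windowed tone burst standing in for the incident wave packet
pulse = np.exp(-0.5 * ((t - 5e-3) / 2e-4) ** 2) * np.sin(2 * np.pi * 20_000 * t)

spec = np.fft.rfft(pulse)
freqs = np.fft.rfftfreq(len(pulse), 1 / fs)
form_function = np.ones_like(spec)   # placeholder free-field scattering amplitude

c = 1500.0                           # sound speed, m/s
# (round-trip path length in m, boundary reflection factor):
# a direct path plus one surface-image path carrying a -1 reflection
paths = [(20.0, 1.0), (20.6, -1.0)]
echo_spec = np.zeros_like(spec)
for length, weight in paths:
    echo_spec += weight * spec * form_function * np.exp(-2j * np.pi * freqs * length / c)

echo = np.fft.irfft(echo_spec, n=len(pulse))   # received ping, time domain
```

Replacing the placeholder form function with a modeled free-field scattering amplitude, and the two-path list with the full set of image source/receiver combinations, is what lets a ray model of this kind generate realistic pings quickly.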
10:55
1aUW7. Orientation dependence for backscattering from a solid cylinder near an interface: Imaging and spectral properties.
Daniel Plotnick, Philip L. Marston (Washington State Univ., 1510 NW Turner Dr., Apt. 4, Pullman, WA 99163, dsplotnick@gmail.
com), Aubrey Espana, and Kevin L. Williams (Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
When a solid cylinder lies proud on horizontal sand sediment, significant contributions to backscattering, specular and elastic, involve
multipath reflections from the cylinder and interface. The scattering structure and resulting spectrum versus azimuthal angle, the
“acoustic template,” may be understood using a geometric model [K. L. Williams et al., J. Acoust. Soc. Am. 127, 3356–3371 (2010)]. If
the cylinder is tilted such that the cylinder axis is no longer parallel to the interface, the multipath structure is modified. Some changes in
the acoustic template can be approximately modeled using a combination of geometric and physical acoustics. For near broadside scattering the analysis gives a simple expression relating certain changes in the template to the orientation of the cylinder and the source geometry. These changes are useful for inferring the cylinder orientation from the scattering. Changes to the template at end-on and
intermediate angles are also examined. The resulting acoustic images show strong dependence on the cylinder orientation in agreement
with this model. A similar model applies to a metallic cylinder adjacent to a flat free surface and was confirmed in tank experiments.
The effect of vertical tilt on the acoustic image was also investigated. [Work supported by ONR.]
2087
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2087
1a MON. AM
Contributed Papers
11:15
1aUW8. Acoustic scattering enhancements for partially exposed cylinders in sand and at a free surface caused by Franz waves
and other processes. Anthony R. Smith, Aaron M. Gunderson, Daniel S. Plotnick, Philip L. Marston (Phys. and Astronomy Dept.,
Washington State Univ., Pullman, WA, spacetime82@gmail.com), and Grant C. Eastland (NW Fisheries Sci. Ctr., Frank Orth & Assoc.
(NOAA Affiliate), Seattle, WA)
Creeping waves on solid cylinders having slightly subsonic phase velocities and large radiation damping are described as Franz
waves because of association with complex poles investigated by Franz. For free-field high frequency broadside backscattering in water,
the associated echoes are weak due to radiation damping. It was recently demonstrated, however, that for partially exposed solid metal
cylinders at a free surface viewed at grazing incidence, the Franz wave echo can be large relative to the specular echo when the grazing
angle is sufficiently small [G. C. Eastland and P. L. Marston, J. Acoust. Soc. Am. 135, 2489–2492 (2014)]. The Fresnel zone associated
with the specular echo is occluded making it weak while the Franz wave is partially reflected at the interface behind the cylinder. This
hypothesis is also supported by calculating the exact backscattering by half-exposed infinitely long rigid cylinders viewed over a range
of grazing angles. Additional experiments concern the high frequency backscattering by cylinders partially buried in sand viewed at
small grazing angles. From the time evolution of the associated backscattering by short tone bursts, situations have been identified for
which partially reflected Franz wave contributions become significant. Franz waves may contribute to sonar clutter from rocks. [Work
supported by ONR.]
11:35
1aUW9. Pressure gradient coupling to an asymmetric cylinder at an interface. Christopher Dudley (NSWC PCD, 110 Vernon Ave.,
Panama City, FL 32407, mhhd@hotmail.com)
Invited Abstract Special session: “Investigation of target response near interfaces, where coupling between target and environmental
properties are important.” Acoustic scattering results from solid and hollow notched aluminum cylinders are presented as a function of
the incident angle. This flat machined into the circular cylinder resembles the topography(geometry) of an finned unexploded ordnance
(UXO). Prior experiments have shown selective coupling to modes of a flat ended cylinder and the effect of pressure nodes to coupling
to a similar notched cylinder [Espana et al., J. Acoust. Soc. Am. 126, 2187 (2009) and Marston & Marston, J. Acoust. Soc. Am. 127,
1750 (2010)]. The wavefront crossing the flat face of the notch in the paddle has a pressure gradient when not co-linear with the normal
to the flat face of the notch. This pressure gradient applies a torque to the cylinder. Torsional modes can be setup in multiple scaled version of the pseudo-UXOs. Analysis of scattering experiments in the Gulf of Mexico and laboratory scale water tanks indicate robust
returns form these fin like targets.
MONDAY AFTERNOON, 27 OCTOBER 2014
MARRIOTT 7/8, 1:00 P.M. TO 5:15 P.M.
Session 1pAA
Architectural Acoustics: Computer Auralization as an Aid to Acoustically Proper Owner/Architect Design
Decisions
Robert C. Coffeen, Cochair
Architecture, University of Kansas, 4721 Balmoral Drive, Lawrence, KS 66047
Kevin Butler, Cochair
Henderson Engineers, Inc., 8345 Lenexa Dr., #300, Lenexa, KS 66214
Chair’s Introduction—1:00
Invited Papers
1:05
1pAA1. The impact of auralization on design decisions for the House of Commons of the Canadian Parliament. Ronald Eligator
(Acoustic Distinctions, 145 Huguenot St., New Rochelle, NY 10801, religator@ad-ny.com)
The House of Commons of the Canadian Parliament will be temporarily relocated to a 27,000 m³ glass-enclosed atrium with stone
and glass walls while their home Chamber is being renovated and restored. Acoustic goals include excellent speech intelligibility for
Members and guests in the room, and production of high-quality audio recordings of all proceedings for live and recorded streaming and
broadcast. Room modeling and auralization using CATT Acoustic has been used to evaluate the acoustic environment of the temporary
location during design. Modeling and testing of the current House Chamber has also been performed to validate the results and conclusions drawn from the model of the new space. The use of auralizations has helped the Owner and Architect understand the impact of
design choices on the achievement of the acoustic performance goals, and smoothed the path for the integration of design features that
might otherwise have been difficult for them to accept. Measured and calculated data as well as audio examples will be presented.
1:25
1pAA2. Cost effective auralizations to help architects and owners make informed decisions for sound isolating assemblies. David
Manley and Ben Bridgewater (D.L. Adams Assoc., Inc., 1536 Ogden St., Denver, CO 80218, dmanley@dlaa.com)
For an acoustical consultant, subjective descriptions of noise environments only go so far. For example, it can be difficult for an
Architect to qualify the difference between STC 35 and STC 40 windows on a given office space next to a highway. Often, justifying
the increased cost for the increased sound isolation performance is at the forefront of the decision making process for the Owner and
Architect. To help them understand the relative difference in performance, DLAA uses a simplified auralization process to create audio
demonstrations of the difference between sound isolating assemblies. This presentation will discuss the process of creating the auralizations and review case studies where the auralizations helped the client make a more informed decision.
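One simplified way to build such a demonstration, sketched here with assumed numbers rather than DLAA's actual process, is to shape a source signal by an octave-band transmission-loss curve for each candidate assembly. The TL values below are illustrative and do not represent measured STC 35 or STC 40 windows.

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(1)
source = rng.standard_normal(fs)                   # 1 s of noise standing in for audio

bands = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0])
tl_db = np.array([18.0, 24.0, 30.0, 35.0, 38.0, 40.0])   # illustrative TL curve, dB

spec = np.fft.rfft(source)
freqs = np.fft.rfftfreq(len(source), 1 / fs)
# Interpolate the band TL values onto FFT bins (linear in log-frequency, clamped at the ends)
tl_interp = np.interp(np.log(np.maximum(freqs, 1.0)), np.log(bands), tl_db)
transmitted = np.fft.irfft(spec * 10 ** (-tl_interp / 20), n=len(source))

level_drop = 20 * np.log10(np.std(source) / np.std(transmitted))
print(f"overall level reduction ~{level_drop:.0f} dB")
```

Rendering `transmitted` for two different TL curves and playing them back-to-back is the kind of A/B comparison such a demonstration relies on.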
1:45
1pAA3. Using auralization to aid in decision making to meet customer requirements for room response and speech intelligibility.
Thomas Tyson (Professional Systems Div., Bose, 5160 South Deborah Ct., Springfield, MO 65810, Tom_Tyson@bose.com)
To meet specific design goals, such as a high degree of speech intelligibility along with a targeted reverberation time, the presenter
will show how the use of auralization can help determine the effectiveness of acoustic treatments and loudspeaker directivity types,
beyond just the use of predicted numerical data.
2:05
1pAA4. Bridging the gap between eyes and ears with auralization. Robin S. Glosemeyer Petrone, Scott D. Pfeiffer (Threshold Acoust., 53 W Jackson Blvd., Ste. 815, Chicago, IL 60604, robin@thresholdacoustics.com), and Marcus Mayell (Judson Univ., Elgin, IL)
Ray trace animation, level plots, and impulse responses, while all useful tools in providing a visual representation of sound, do not
always bridge the gap between the eye and ear. Threshold utilized auralization to inform decisions for an upcoming theater renovation
with the goal of improving the room’s acoustic support of orchestral performance. To achieve the desired acoustic response, the renovation will require major modifications to the shaping of a hall with a very distinctive architectural vernacular; a distinctive vernacular that
will need to be preserved in some form to maintain the facility’s identity. Along with other modeling tools, auralization provided useful
support, reassuring both the client and the design team of the validity of the concepts.
2:25
1pAA5. Extended tools for simulated impulse responses. Wolfgang Ahnert and Stefan Feistel (Ahnert Feistel Media Group, Arkonastr. 45-49, Berlin D-13189, Germany, wahnert@ada-amc.eu)
Impulse responses have been calculated by simulation for more than 25 years. Early routines allowed only simple calculations without scattered sound components; these components are now always included. Today, sophisticated routines calculate frequency-dependent full impulse responses comparable with measured ones. In parallel, auralization routines were developed, first for monaural and binaural reproduction; nowadays, ambisonic signals are created in B-format of first and second order. Reproduced over an ambisonic playback configuration, these signals make the distribution of wall and ceiling reflections in EASE computer models audible. Besides the acoustic detection of desired or unwanted reflections, which always requires correct reproduction of the ambisonic signals, visualization of the reflection distribution is also desired. In EASE, a new tool has been implemented to correlate the reflections in an impulse response with their positions in a 3D presentation. This new hedgehog presentation of full impulse responses is angle-correlated with the viewing position in the model, so any wanted or unwanted reflections may be identified quickly. A comparison with ambisonic signals via auralization is possible.
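Although the internals of the EASE tool are not described here, the core idea of attaching a direction to each reflection in a first-order B-format impulse response can be sketched with the standard pseudo-intensity estimate. This is a hypothetical stand-alone example using a synthetic two-arrival response, not the EASE implementation.

```python
import numpy as np

# A first-order (B-format) impulse response carries enough information to
# attach a direction to each reflection: the instantaneous pseudo-intensity
# vector is proportional to w(t) * [x(t), y(t), z(t)].

fs = 48000
n = 4800
w = np.zeros(n); x = np.zeros(n); y = np.zeros(n); z = np.zeros(n)

def add_arrival(sample, gain, azimuth, elevation):
    """Encode an idealized plane-wave arrival into B-format (SN3D-style gains assumed)."""
    w[sample] += gain
    x[sample] += gain * np.cos(elevation) * np.cos(azimuth)
    y[sample] += gain * np.cos(elevation) * np.sin(azimuth)
    z[sample] += gain * np.sin(elevation)

add_arrival(480, 1.0, 0.0, 0.0)            # direct sound from the front
add_arrival(960, 0.5, np.pi / 2, 0.0)      # lateral reflection from the left

peaks = np.argsort(np.abs(w))[-2:]         # two strongest arrivals
for s in sorted(peaks):
    ivec = w[s] * np.array([x[s], y[s], z[s]])    # pseudo-intensity direction
    az = np.degrees(np.arctan2(ivec[1], ivec[0]))
    print(f"sample {s}: azimuth {az:.0f} deg")
```

Plotting one such direction vector per detected reflection, scaled by its strength, produces exactly the kind of hedgehog picture described above.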
2:45
1pAA6. Auralization as an aid in decision-making: Examples from professional practice. Benjamin Markham, Robert Connick, and
Jonah Sacks (Acentech Inc., 33 Moulton St., Cambridge, MA 02138, bmarkham@acentech.com)
The authors and our colleagues have presented dozens of auralizations in the service of our architectural acoustics consulting work,
on projects ranging from large atriums to classrooms to sound isolation between nightclubs and surrounding facilities (and many others).
The aim of most of these presentations is to communicate the relative efficacy of design alternatives or acoustical treatment options. In
some cases, the effects are profound; in others, the acoustical impact may be rather subtle. Without perfect correlation, we have noted a
general trend: when the observable change in acoustical attributes presented in the auralization is substantial, so too is the interest on the
part of the owner to invest in significant or even aggressive acoustical design alternatives; by contrast, subtler changes in perceived
acoustical character often leave owners and architects less inclined to dedicate design resources to pursue alternatives that differ from
the architect or owner’s original vision. Examples of auralizations following (and contradicting) this trend will be presented, along with
descriptions of the design direction taken following meetings and discussions that accompanied the auralizations.
3:05–3:20 Break
3:20
1pAA7. Auralization and the real world. Shane J. Kanter (Threshold Acoust., 53 W. Jackson Blvd., Ste. 815, Chicago, IL 60604, skanter@thresholdacoustics.com), Ben Bridgewater (D.L. Adams, Denver, CO), and Robert C. Coffeen (School of Architecture, Design &
Planning, The Univ. of Kansas, Lawrence, KS)
Architects value their senses and strive to design spaces that engage all five of them. However, architects typically make design
decisions based primarily on how spaces appear and feel, as opposed to acousticians who normally justify design intent with the use of
numbers, graphs, and charts. Although the data are clear to acousticians, auralizations are a useful tool to engage architects, building
owners, and other clients and their sense of hearing to help them make informed decisions. If auralizations are used to demonstrate the
effect of design decisions based on acoustics, there must be confidence in the accuracy and realism of these audio simulations. In order
to better understand the accuracy and realism of auralizations, a study was conducted comparing auralizations created from models of an
existing facility to listening within the facility. Listeners were asked to compare the “real world” sound to the auralizations of this sound
by completing a survey with questions focusing on such comparisons. By presenting the actual sound and the auralizations in the same
space, a direct comparison can be made and the accuracy and realism of the auralizations can be determined. Results and observations
from the study will be presented.
3:40
1pAA8. Directing room acoustic decisions for a college auditorium renovation by using auralization. Robert C. Coffeen (Architecture, Univ. of Kansas, 4721 Balmoral Dr., Lawrence, KS 66047, rcoffeen@ku.edu)
From an acoustical viewpoint, the renovation of a multipurpose college auditorium was predicted by music and theater faculty to be
a compromise not suitable for either music or theater. It was obvious that either variable sound absorption or active acoustics would be
required to satisfy the multipurpose uses of the auditorium. Active acoustics was rejected by the college due to cost and an experience
by one faculty member. And the faculty committee was not familiar with variable sound absorption. Using a computer model of the auditorium it was determined that the volume of the venue could be established to produce the desired maximum reverberation time for
music and that vertical rising drapery could produce the desired reverberation time for drama. Auralization was used to demonstrate to
the faculty committee that with variable sound absorption the auditorium could properly accommodate music of various types and theatrical performances including drama.
Contributed Papers
4:00
1pAA9. “Illuminating” reflection orders in architectural acoustics using SketchUp and light rendering. J. Parkman Carter (Architectural Acoust., Rensselaer Polytechnic Inst., 32204 Waters View Circle, Cohoes, NY 12047, cartej8@rpi.edu)
The conventional architecture workflow tends to—quite literally—“overlook” matters of sound, given that the modeling tools of architectural design are almost exclusively visual in nature. The modeling tools used by architectural acousticians, however, produce visual representations, which are, frankly, less than inspirational for the design process. This project develops a simple scheme to visualize acoustic reflection orders using light rendering in the freely available and widely used Trimble SketchUp 3D modeling software. In addition to allowing architectural designers to visualize acoustic reflections in a familiar modeling environment, this scheme also works easily with complex geometry. The technique and examples will be presented.
4:15
1pAA10. Using auralization to evaluate the decay characteristics that impact intelligibility in a school auditorium. Bruce C. Olson (Ahnert Feistel Media Group, 8717 Humboldt Ave. North, Brooklyn Park, MN 55444, bcolson@afmg.eu) and Bruce C. Olson (Olson Sound Design, Brooklyn Park, MN)
Auralization was used to evaluate the effectiveness of the loudspeaker design in a high school auditorium to provide good speech intelligibility when used for lectures. The goals of this project were to offer an aural impression that enhances the visual printouts of the simulation results from the 3D model of the space in EASE using the Analysis Utility for Room Acoustics. The process used will be described and some of the results will be presented.
4:30
1pAA11. Vibrolization: Simulating whole-body structural vibration for clients and colleagues with the Motion Platform. Clemeth Abercrombie (Acoust., Arup, New York, NY), Tom Wilcock (Adv. Tech. and Res., Arup, New York, NY), and Andrew Morgan (Acoust., Arup, 77 Water St., Arup, New York, NY 10005, andrew.morgan@arup.com)
Arup has recently introduced an experiential design tool for demonstrating whole-body vibration. The Motion Platform, a bespoke simulator, moves vertically and can reproduce structural vibration in buildings, transport, and any other situations that involve shaking. Beyond humans, the platform can also shake objects—opening the door for developing new vibration criteria for devices such as video cameras and projectors. We will share our experience in developing the platform and how it has helped us communicate design ideas to clients and design team members.
4:45
1pAA12. The role of auralization utilizing the end user source signal in determining final material finishes for the Chapel at St. Dominics. David S. Woolworth (Oxford Acoust., 356 CR 102, Oxford, MS 38655, dave@oxfordacoustics.com)
The Chapel at St. Dominics Hospital in Jackson, Mississippi, was created for religious services, prayer time, and to serve other spiritual needs of the hospital’s patients, employees, medical staff, hospital visitors, and the greater community. It is an intimate space seating up to 100 people and is used daily by the Dominican Sisters, who first started the Jackson Infirmary in 1946. This paper outlines the process used to record the voices of the sisters and then use them to generate auralizations, which helped drive decisions regarding acoustic finishes.
5:00
1pAA13. The construction and implementation of a multichannel loudspeaker array for accurate spatial reproduction of sound fields. Matthew T. Neal, Colton D. Snell, and Michelle C. Vigeant (Graduate Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, mtn5048@psu.edu)
The spatial distribution of sound has a strong impact upon a listener’s overall impression of a room and must be reproduced accurately for auralization. In concert hall acoustics, directionally independent metrics such as reverberation time and clarity index simply do not predict this impression. Late lateral energy level, lateral energy fraction, and the interaural correlation coefficient are measures of spatial impression, but more work is needed before we fully understand how the directional distribution of sound should influence architectural design decisions. A three-dimensional array of 28 loudspeakers and two subwoofers has been constructed in a hemi-anechoic chamber at PSU, allowing for accurate reproduction of sound fields. For the array, closed-box loudspeakers were built and digitally equalized to ensure a flat frequency response. With this facility, subjective studies investigating spatial sound in concert halls can be conducted using measured sound fields and perceptually motivated auralizations, not tied to a physical room. Such a facility is instrumental in understanding and communicating subtle differences in sound fields to listeners, whether they be musicians, architects, or clients. The flexibility and versatility of this system will facilitate room acoustics research at Penn State for years to come. [Work supported by NSF Award 1302741.]
MONDAY AFTERNOON, 27 OCTOBER 2014
LINCOLN, 1:00 P.M. TO 5:00 P.M.
Session 1pAB
Animal Bioacoustics and Signal Processing in Acoustics: Array Localization of Vocalizing Animals
Michelle Fournet, Cochair
College of Earth, Ocean, and Atmospheric Sciences, Oregon State University, 425 SE Bridgeway Ave., Corvallis, OR 97333
David K. Mellinger, Cochair
Coop. Inst. for Marine Resources Studies, Oregon State University, 2030 SE Marine Science Dr., Newport, OR 97365
Chair’s Introduction—1:00
Invited Papers
1:05
1pAB1. Exploiting the sound-speed minimum to extend tracking ranges of vertical arrays in deep water environments. Aaron
Thode, Delphine Mathias (SIO, UCSD, 9500 Gilman Dr., MC 0238, La Jolla, CA 92093-0238, athode@ucsd.edu), Janice Straley (Univ.
of Alaska, Southeast, Sitka, AK), Russel D. Andrews (Alaska SeaLife Ctr., Seward, AK), Chris Lunsford, John Moran (Auke Bay
Labs., NOAA, Juneau, AK), Jit Sarkar, Chris Verlinden, William Hodgkiss, and William Kuperman (SIO, UCSD, La Jolla, CA)
Underwater acoustic vertical arrays can localize sounds by measuring the vertical elevation angles of various multipath arrivals generated by reflections from the ocean surface and bottom. This information, along with measurements of the relative arrival times of the multipath, can be sufficient for obtaining the range and depth of an acoustic source. At ranges beyond a few kilometers ray refraction effects
add additional multipath possibilities; in particular, the existence of a sound-speed minimum in deeper waters permits purely refracted
ray arrivals to be detected and distinguished on an array, greatly extending the tracking range for short-aperture systems. Here, two experimental vertical array deployments are presented. The first is a simple two-element system, deployed using longline fishing gear off Sitka,
AK. By tracking a tagged sperm whale, this system demonstrated an ability to localize this species out to 35 km range and to provide estimates of the detection range of these animals as a function of sea state. The second deployment—a field trial of a 128-element, mid-frequency vertical array system off Southern California—illustrates how multi-element array gain can further extend the detection and
tracking ranges of sperm and humpback whales in deep-water environments. [Work supported by NPRB, NOAA, and ONR.]
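For the simplest isovelocity picture (ignoring the refracted arrivals that are the focus above), the multipath observables measured by such a vertical array follow from image sources reflected in the surface and bottom. This hypothetical sketch, with made-up geometry, computes the arrival times and elevation angles that a range/depth inversion would fit.

```python
import math

def multipath(r, zs, zr, depth, c=1500.0):
    """Image-method arrival time (s) and elevation angle (deg) for three paths.

    Depths are positive downward; the surface- and bottom-reflected paths come
    from image sources at -zs and 2*depth - zs.  Isovelocity water is assumed.
    """
    arrivals = {}
    for name, z_img in [("direct", zs), ("surface", -zs), ("bottom", 2 * depth - zs)]:
        dz = z_img - zr
        length = math.hypot(r, dz)
        elevation = math.degrees(math.atan2(dz, r))   # positive: arriving from below
        arrivals[name] = (length / c, elevation)
    return arrivals

# Made-up geometry: 5 km range, 800 m source depth, 300 m receiver, 1200 m water
arr = multipath(r=5000.0, zs=800.0, zr=300.0, depth=1200.0)
for name, (t, el) in arr.items():
    print(f"{name}: {t * 1000:.1f} ms, elevation {el:+.1f} deg")
```

Inverting this forward model, given measured relative arrival times and elevation angles, is what yields the source range and depth; the refracted arrivals exploited in the abstract extend the same idea beyond the isovelocity assumption.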
1:25
1pAB2. Arrayvolution—An overview of array systems to study bats and toothed whales. Jens C. Koblitz (German Oceanographic
Museum, Katharinenberg 14-20, Stralsund 18439, Germany, Jens.Koblitz@meeresmuseum.de), Magnus Wahlberg (Dept. of Biology,
RMIT Univ., Odense, Denmark), Peter Stilz (Freelance Biologist, Hechingen, Germany), Jamie MacAulay (Sea Mammal Res. Unit,
Univ. of St Andrews, St. Andrews, United Kingdom), Simone Götze, Anna-Maria Seibert (Animal Physiol., Inst. for Neurobiology, Univ. of Tübingen, Tübingen, Germany), Kristin Laidre (Polar Sci. Ctr., Appl. Phys. Lab, Univ. of Washington, Seattle, WA), Hans-Ulrich Schnitzler (Animal Physiol., Inst. for Neurobiology, Univ. of Tübingen, Tübingen, Germany), and Harald Benke (German Oceanographic Museum, Stralsund, Germany)
Some echolocation signal parameters can be studied using a single receiver. However, studying parameters such as source level,
directionality, and direction of signal emission require the use of multi-receiver arrays. Acoustic localization allows for determination of
the position of echolocators at the time of signal emission, and when multiple animals are present, calls can be assigned to individuals
based on their location. This combination makes large multi-receiver arrays a powerful tool. Here we present an overview of different
array configurations used to study both toothed whales and bats, using a suite of systems ranging from semi-3D minimum-receiver-number arrays (3D-MINNAs) and linear 2D overdetermined arrays (2D-ODAs) to 3D overdetermined arrays (3D-ODAs). We discuss approaches to process and summarize the usually large amounts of data. In some studies, the absolute position of an echolocator, and not only its position relative to the array, is crucial. Combining acoustic localizations from a source with geo-referenced receivers allows for
determining geo-referenced movements of an echolocator. Combining these animal tracks with other geo-referenced data such as hydrographic parameters will allow new insights into habitat use.
1:45
1pAB3. Tracking Cuvier’s beaked whales using small aperture arrays. Martin Gassmann, Sean M. Wiggins, and John Hildebrand
(Scripps Inst. of Oceanogr., Univ. of California San Diego, 9152 Regents Rd., Apt. L, La Jolla, CA 92037, mgassmann@ucsd.edu)
Cuvier's beaked whales are deep-diving animals that produce strongly directional sounds using high frequencies (>30 kHz) at which
attenuation due to absorption and scattering is high (>8 dB/km). This makes it difficult to track beaked whales in three dimensions with
standard large-aperture hydrophone arrays. By embedding two volumetric small-aperture (~1 m element spacing) arrays into a large-aperture (~1 km element spacing) array of five nodes, individuals and even groups of Cuvier's beaked whales were tracked in three dimensions continuously for up to one hour within an area of 10 km² in the Southern California Bight. This passive acoustic tracking technique provides a tool to study the characteristics of beaked whale echolocation and their behavior during deep dives.
2:05
1pAB4. Using ocean bottom seismometer networks to better understand fin whale distributions at different spatial scales.
Michelle Weirathmueller, William S. D. Wilcock, and Dax C. Soule (Univ. of Washington, 1503 NE Boat St., Seattle, WA 98105,
michw@uw.edu)
Ocean bottom seismometers (OBSs) are designed to monitor ground motion caused by earthquakes, but they also record low frequency vocalizations of fin and blue whales. Seismic networks used for opportunistic whale datasets are rarely optimized for acoustic
localization of marine mammals. We demonstrate the use of OBSs for studying fin whales using two different networks. The first example is a small, closely spaced network of 8 OBSs deployed on the Juan de Fuca Ridge from 2003 to 2006. An automated method for
identifying arrival times and locating fin whale calls using a grid search was applied to obtain 154 individual fin whale tracks over one
year, revealing information on swimming patterns and spatial distribution in the vicinity of a mid ocean ridge. The second example is a
network with widely spaced OBSs, such that a given call can only be detected on one instrument. The Cascadia Initiative Experiment is
a sparse array of 70 OBSs covering the Juan de Fuca Plate from 2011 to 2015. Localization methods based on differential arrival times
are not possible but techniques to locate the range and bearing to fin whales with a single OBS can be applied to constrain larger scale
spatial distributions by comparing call densities in different regions.
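A grid search of the kind mentioned for the first network can be sketched as follows, assuming straight-line propagation at a constant sound speed and a made-up station layout; at each trial node the unknown call origin time drops out because the variance of the arrival-time residuals is insensitive to a common offset.

```python
import numpy as np

c = 1500.0                                        # m/s, assumed constant
stations = np.array([[0, 0], [4000, 0], [0, 4000],
                     [4000, 4000], [2000, 2000]], float)   # hypothetical OBS layout, m
true_pos = np.array([1200.0, 2600.0])             # synthetic call location
t0_true = 7.0                                     # unknown origin time, s
obs = t0_true + np.linalg.norm(stations - true_pos, axis=1) / c

best = (np.inf, None)
for x in np.arange(0.0, 4001.0, 25.0):
    for y in np.arange(0.0, 4001.0, 25.0):
        tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / c
        misfit = np.var(obs - tt)                 # variance removes the origin time
        if misfit < best[0]:
            best = (misfit, (x, y))

print(best[1])   # grid node nearest the synthetic call location
```

Chaining one such fix per call, detection after detection, is what builds up the whale tracks described above.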
2:25
1pAB5. Baleen whale localization using hydrophone streamers during seismic reflection surveys. Shima H. Abadi (Lamont–Doherty Earth Observatory, Columbia Univ., 122 Marine Sci. Bldg., University of Washington 1501 NE Boat St., Seattle, Washington
98195, shimah@ldeo.columbia.edu), Maya Tolstoy (Lamont–Doherty Earth Observatory, Columbia Univ., Palisades, NY), William S.
D. Wilcock (School of Oceanogr., Univ. of Washington, Seattle, WA), Timothy J. Crone, and Suzanne M. Carbotte (Lamont–Doherty
Earth Observatory, Columbia Univ., Palisades, NY)
Seismic reflection surveys use acoustic energy to image the structure beneath the seafloor, but concern has been raised about their
potential impact on marine animals. Most of the energy from seismic surveys is low frequency, so the concern about their impact is
focused on baleen whales, which communicate in the same frequency range. To mitigate this impact, safety radii are established based on the criteria defined by the National Marine Fisheries Service. Marine mammal observers use visual and acoustic techniques to monitor safety radii during each experiment. However, additional acoustic monitoring, in particular locating marine mammals,
could demonstrate the effectiveness of the observations, and help us understand animal responses to seismic experiments. A novel sound
source localization technique using a seismic streamer has been developed. Data from seismic reflection surveys conducted with the R/V
Langseth are being analyzed with this method to locate baleen whales and verify the accuracy of visual detections during experiments.
The streamer is 8 km long with 636 hydrophones sampled at 500 Hz. The work focuses on time intervals when only a mitigation gun is
firing because of marine mammal sightings. [Sponsored by NSF.]
2:45
1pAB6. Faster than real-time automated acoustic localization and call association for humpback whales on the Navy’s Pacific
Missile Range Facility. Tyler A. Helble (SSC-PAC, 2622 Lincoln Ave., San Diego, CA 92104, tyler.helble@gmail.com), Glenn Ierley,
Gerald D’Spain (Scripps Inst. of Oceanogr., San Diego, CA), and Stephen Martin (SSC-PAC, San Diego, CA)
Optimal time difference of arrival (TDOA) methods for acoustically localizing multiple marine mammals have been applied to the
data from the Navy’s Pacific Missile Range Facility in order to localize and track humpback whales. Modifications to established methods were necessary in order to simultaneously track multiple animals on the range without the need for post-processing and in a fully
automated way, while minimizing the number of incorrect localizations. The resulting algorithms were run with no human intervention
at computational speeds faster than the data recording speed on over 40 days of acoustic recordings from the range, spanning several
years and multiple seasons. Spatial localizations based on correlating sequences of units originating from within the range produce estimates having a standard deviation typically 10 m or less (due primarily to TDOA measurement errors), and a bias of 20 m or less (due
to sound speed mismatch). Acoustic modeling and Monte Carlo simulations play a crucial role in minimizing both the variance and bias
of TDOA localization methods. These modeling and simulation techniques will be discussed for optimizing array design, and for maximizing the quality of localizations from existing data sets.
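As an illustrative aside (not the authors' range algorithm), the TDOA principle underlying this abstract can be sketched as a brute-force least-squares grid search; the sensor layout, sound speed, and source position below are invented for the demonstration:

```python
import numpy as np

def tdoa_localize(sensors, tdoas, c=1500.0,
                  grid=np.linspace(-2000.0, 2000.0, 201)):
    """Least-squares grid search: find the (x, y) whose predicted TDOAs
    (relative to sensor 0) best match the measured ones."""
    best, best_err = None, np.inf
    for x in grid:
        for y in grid:
            r = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
            err = np.sum(((r[1:] - r[0]) / c - tdoas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return np.array(best)

# Synthetic check: four hydrophones, one whale-like source
sensors = np.array([[0.0, 0.0], [1000.0, 0.0],
                    [0.0, 1000.0], [1000.0, 1000.0]])
src = np.array([300.0, -200.0])
r = np.hypot(sensors[:, 0] - src[0], sensors[:, 1] - src[1])
meas = (r[1:] - r[0]) / 1500.0
est = tdoa_localize(sensors, meas)
```

Real implementations replace the grid search with iterative solvers and, as the abstract notes, must model sound-speed mismatch, which biases the recovered position.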
2092
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2092
3:05
1pAB7. Applications of an adaptive back-propagation method for passive acoustic localizations of marine mammal sounds. Ying-Tsong Lin,
Arthur E. Newhall, and James F. Lynch (Appl. Ocean Phys. and Eng.,
Woods Hole Oceanographic Inst., Bigelow 213, MS#11, WHOI, Woods
Hole, MA 02543, ytlin@whoi.edu)
An adaptive back-propagation localization method utilizing the dispersion relation of the acoustic modes of low-frequency sound signals is
reviewed in this talk. This method employs an adaptive array processing
technique (the maximum a posteriori mode filter) to extract the acoustic
modes of sound signals, and it is capable of separating signals from noisy
data. The concept of the localization algorithm is to back-propagate modes
to a location where the modes align with each other. Gauss-Markov inverse
theory is applied to make the normal mode back-propagator adaptive to the
signal-to-noise ratio (SNR). When the SNR is high, the localization procedure will push the algorithm to achieve high resolution. On the other hand,
when the SNR is low, the procedure will try to retain its robustness and
reduce the noise effects. Examples will be shown in the talk to demonstrate
the localization performance with comparisons to other methods. Applications to baleen whale sounds collected in Cape Cod Bay, Massachusetts,
will also be presented. Lastly, population density estimation using this passive acoustic localization method will be discussed.
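The "back-propagate modes until they align" idea can be illustrated with a toy model in which each mode is reduced to a single group speed; the speeds, range, and emission time below are hypothetical, and the actual method operates on mode-filtered waveforms rather than scalar arrival times:

```python
import numpy as np

# Hypothetical modal group speeds [m/s] for a shallow-water waveguide
group_speeds = np.array([1490.0, 1470.0, 1440.0])

def range_from_mode_arrivals(arrivals, v):
    """Search for the range r at which the arrivals, shifted back by the
    modal travel times r/v_m, collapse to a single emission time
    (minimum spread of the back-propagated times)."""
    ranges = np.arange(100.0, 50000.0, 10.0)
    spreads = np.array([np.var(arrivals - r / v) for r in ranges])
    return ranges[int(np.argmin(spreads))]

true_range, emit_time = 12000.0, 5.0
arrivals = emit_time + true_range / group_speeds
est_range = range_from_mode_arrivals(arrivals, group_speeds)
```

The Gauss-Markov weighting described in the abstract effectively makes such a search adaptive to SNR, trading resolution for robustness when the modes are noisy.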
3:20–3:45 Break
3:45
1pAB8. Tracking porpoise underwater movements in tidal rapids using
drifting hydrophone arrays. Jamie D. Macaulay, Doug Gillespie, Simon
Northridge, and Jonathan Gordon (SMRU, Univ. of St Andrews, 15 Crichton St., Anstruther, Fife KY10 3DE, United Kingdom, jdjm@st-andrews.ac.uk)
The growing interest in generating electrical power from tidal currents
using tidal turbine generators raises a number of environmental concerns,
including the risk that cetaceans might be injured or killed through collision
with rotating turbine blades. To understand this risk we need better information on how cetaceans use tidal rapid habitats and in particular their underwater movements and dive behavior. Focusing on harbor porpoises, a
European protected species, we have developed an approach which uses
time of arrival differences of narrow band high frequency (NBHF) clicks
detected on large aperture hydrophone arrays drifting in tidal rapids, to
determine dive tracks of porpoises underwater. Probabilistic localization
algorithms have been developed to filter echoes and provide accurate 2D or
geo-referenced 3D locations. Calibration trials have been carried out that
show that the system can provide depth and location data with submeter
errors. Data collected over three seasons in tidal races around Scotland have provided new insights into how harbor porpoises use these unique habitats, information vital for assessing the risk tidal turbines may pose.
4:00
1pAB9. Using a coherent hydrophone array for observing sperm whale
range, classification, and shallow-water dive profiles. Duong D. Tran,
Wei Huang, Alexander C. Bohn, Delin Wang (Elec. and Comput. Eng.,
Northeastern Univ., 006 Hayden Hall, 370 Huntington Ave., Boston, MA
02115, wang.del@husky.neu.edu), Zheng Gong, Nicholas C. Makris (Mech.
Eng., Massachusetts Inst. of Technol., Cambridge, MA), and Purnima Ratilal (Elec. and Comput. Eng., Northeastern Univ., Boston, MA)
Sperm whales in the New England continental shelf and slope were passively localized, in both range and bearing, and classified using a single
low-frequency (<2500 Hz), densely sampled, towed horizontal coherent
hydrophone array system. Whale bearings were estimated using time-
domain beamforming that provided high coherent array gain in sperm whale
click signal-to-noise ratio. Whale ranges from the receiver array center were
estimated using the moving array triangulation technique from a sequence
of whale bearing measurements. Multiple concurrently vocalizing sperm
whales, in the far-field of the horizontal receiver array, were distinguished
and classified based on their horizontal spatial locations and the inter-pulse
intervals of their vocalized click signals. The dive profile was estimated for
a sperm whale in the shallow waters of the Gulf of Maine, with 160 m water-column depth, located close to the array's near-field, where depth estimation
was feasible by employing time difference of arrival of the direct and multiply reflected click signals received on the horizontal array. By accounting
for transmission loss modeled using an ocean waveguide-acoustic propagation model, the sperm whale detection range was found to exceed 60 km in
low to moderate sea state conditions after coherent array processing.
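A minimal sketch of the geometry behind bearing-only triangulation from a moving array (the core of the moving array triangulation technique named above); positions and bearings here are synthetic:

```python
import numpy as np

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing lines: solve p1 + s*d1 = p2 + t*d2 for s, t,
    where d_i is the unit vector along bearing b_i (radians from +x axis)."""
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    s, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + s * d1

# Array moves 500 m east between two bearing fixes on a source at (2000, 3000)
whale = np.array([2000.0, 3000.0])
pos_a = np.array([0.0, 0.0])
pos_b = np.array([500.0, 0.0])
bear_a = np.arctan2(whale[1] - pos_a[1], whale[0] - pos_a[0])
bear_b = np.arctan2(whale[1] - pos_b[1], whale[0] - pos_b[0])
fix = triangulate(pos_a, bear_a, pos_b, bear_b)
```

In practice a sequence of noisy bearings is combined in a least-squares sense rather than intersecting just two, and beamforming supplies the bearings themselves.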
4:15
1pAB10. Testing the beam focusing hypothesis in a false killer whale
using hydrophone arrays. Laura N. Kloepper (Dept. of Neurosci., Brown
Univ., 185 Meeting St. Box GL-N, Providence, RI 02912, laura_kloepper@
brown.edu), Paul E. Nachtigall, Adam B. Smith (Zoology, Univ. of Hawaii,
Honolulu, HI), John R. Buck (Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth, Dartmouth, MA), and Jason E. Gaudette (Neurosci., Brown
Univ., Providence, RI)
The odontocete sound production system is complex and composed of
tissues, air sacs, and a fatty melon. Previous studies suggested that the emitted sonar beam might be actively focused, narrowing depending on target
distance. In this study, we further tested this beam focusing hypothesis in a
false killer whale (Pseudorca crassidens) in a laboratory setting. Using three
linear arrays, we recorded the same emitted click at 2, 4, and 7 m distance
while the animal performed a target detection task with the target distance
varying between 2, 4, and 7 m. For each click, we calculated the beamwidth,
intensity, center frequency, and bandwidth as recorded on each array. As the
distance from the whale to the array increased, the received click intensity
was higher than predicted by spreading loss. Moreover, the beamwidth varied with range as predicted by the focusing model and contrary to a piston
model or spherical spreading. These results support the hypothesis that the
false killer whale adaptively focuses its sonar beam according to target
range. [Work supported by ONR and NSF.]
4:30
1pAB11. Sei whale localization and tracking using a moored, combined
horizontal and vertical line array near the New Jersey continental shelf.
Arthur E. Newhall, Ying-Tsong Lin, James F. Lynch (Appl. Ocean Phys.
and Eng., Woods Hole Oceanographic Inst., 210 Bigelow Lab. MS11,
Woods Hole, MA 02543, anewhall@whoi.edu), and Mark F. Baumgartner
(Biology, Woods Hole Oceanographic Inst., Woods Hole, MA)
In 2006, a multidisciplinary experiment was conducted on the Mid-Atlantic continental shelf off the New Jersey coast. During a 2-day period in mid-September 2006, more than 200 unconfirmed but identifiable sei whale (Balaenoptera borealis) calls were collected on a moored, combined
horizontal and vertical line hydrophone array. Sei whale movements were
tracked over long distances (up to tens of kilometers) using a normal mode
back propagation method. This approach uses low-frequency, broadband
passive sei whale call receptions from a single-station, two-dimensional
hydrophone array to perform long distance localization and tracking by
exploiting the dispersive nature of propagating acoustic modes in a shallow
water environment. Source depth information and the source signal can also
be determined from the localization application. This passive whale tracking, combined with the intensive oceanography measurements performed
during the experiment, was also used to examine sei whale movements in
relation to oceanographic features observed in this region.
Contributed Papers
4:45
1pAB12. Obtaining underwater acoustic impulse responses via blind
channel estimation. Brendan P. Rideout, Eva-Marie Nosal (Dept. of Ocean
and Resources Eng., Univ. of Hawaii at Manoa, 2540 Dole St., Holmes Hall
402, Honolulu, HI 96822, bprideou@hawaii.edu), and Anders Høst-Madsen
(Dept. of Elec. Eng., Univ. of Hawaii at Manoa, Honolulu, HI)
Blind channel estimation is the process of obtaining the impulse
responses between a source and multiple (arbitrarily placed) receivers without prior knowledge about the source characteristics or the environment.
This approach could simplify localization of non-impulsive submerged
sound sources (e.g., pinnipeds or cetaceans); the process of picking arrivals
(direct and reflected) could be carried out on the estimated impulse
responses rather than on the recorded waveforms, thus facilitating the use of
time of arrival-based localization approaches. Blind channel estimation
could also be useful in estimating the original source signal of a vocalizing
animal through deconvolution of the estimated channel impulse responses
and the recorded waveforms. In this paper, simulation and controlled pool
studies will be used to explore requirements on source and environment
characteristics and to quantify blind channel estimation performance for
underwater passive acoustic applications.
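One classic route to two-channel blind channel estimation is the cross-relation method of Xu, Liu, Tong, and Kailath; the abstract does not specify the estimator used, so the noiseless sketch below is illustrative only:

```python
import numpy as np

def conv_matrix(x, L):
    """Toeplitz matrix C with C @ h == np.convolve(x, h) for len(h) == L."""
    C = np.zeros((len(x) + L - 1, L))
    for j in range(L):
        C[j:j + len(x), j] = x
    return C

def cross_relation_estimate(x1, x2, L):
    """Blind two-channel estimate via the cross-relation x1*h2 - x2*h1 = 0:
    the stacked vector [h2; h1] spans the null space of [C(x1) | -C(x2)]."""
    A = np.hstack([conv_matrix(x1, L), -conv_matrix(x2, L)])
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]                 # right singular vector of smallest sigma
    return v[L:], v[:L]        # (h1, h2) estimates, up to one common scale

rng = np.random.default_rng(0)
s = rng.standard_normal(256)               # unknown source signal
h1 = np.array([1.0, 0.5, -0.3])            # two coprime test channels
h2 = np.array([0.8, -0.2, 0.4])
x1, x2 = np.convolve(s, h1), np.convolve(s, h2)
e1, e2 = cross_relation_estimate(x1, x2, len(h1))
scale = h1 @ e1 / (e1 @ e1)                # resolve the scalar ambiguity
```

The scale ambiguity is inherent to blind estimation; for the arrival-picking application above, only the relative timing of peaks in the estimated impulse responses matters.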
MONDAY AFTERNOON, 27 OCTOBER 2014
INDIANA A/B, 1:15 P.M. TO 5:30 P.M.
Session 1pBA
Biomedical Acoustics: Medical Ultrasound
Robert McGough, Chair
Department of Electrical and Computer Engineering, Michigan State University, 2120 Engineering Building,
East Lansing, MI 48824
Contributed Papers
1:15
1pBA1. Investigation of fabricated 1 MHz lithium niobate transfer
standard ultrasonic transducer. Patchariya Petchpong (Acoust. and Vib.
Dept., National Inst. Metrology of Thailand, 75/7 Rama VI Rd., Thungphayathai, Rajthevi, Bangkok 10400, Thailand, patchariya@nimt.or.th) and
Yong Tae Kim (Div. of Convergence Technol., Korea Res. Inst. of Standards and Sci., Daejeon, South Korea)
This paper focuses on the fabrication of a single-element transducer made from lithium niobate (LiNbO3) operating at 1 MHz. The air-backed LiNbO3 transducer is developed for use as a transfer-standard ultrasonic transducer for calibrating ultrasound power meters, which measure the total acoustic power radiated by medical equipment. To verify the precision of the acoustic power, the primary standard calibration measurement (radiation force balance, RFB) based on IEC 61161 is used to investigate the fabricated transducer. The geometry of the piezoelectric active element was first designed using the Krimholtz, Leedom, and Matthaei (KLM) simulation technique. The electrical impedance of the LiNbO3 element, before and after assembly into the transducer, was measured and compared. The impedance results show that the operating frequency ranges from 1 MHz to 10 MHz through harmonics. The total emitted power and radiation conductance of the fabricated transducer were also evaluated. The measured acoustic power responded up to 2.1 W and can be assessed within 6% expanded uncertainty (k = 2).
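For a perfectly absorbing target at normal incidence, the RFB relation P = F·c converts a balance reading directly to acoustic power; a sketch with an invented balance reading chosen to land near the 2.1 W mentioned above:

```python
# Radiation force balance (RFB): for an absorbing target at normal
# incidence, total acoustic power P = F * c, with F the radiation force
# and c the sound speed.  Balance readings are typically in mass units.

G = 9.80665          # standard gravity [m/s^2]
C_WATER = 1482.0     # sound speed in degassed water near 20 C [m/s]

def power_from_balance_mg(reading_mg):
    """Convert an RFB reading in milligrams-force to acoustic power [W]."""
    force = reading_mg * 1e-6 * G          # mg -> kg -> N
    return force * C_WATER

p = power_from_balance_mg(144.5)           # hypothetical 144.5 mg reading
```

Reflecting (conical) targets need an extra geometric factor, and IEC 61161 adds corrections for water temperature and target imperfection.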
1:30
1pBA2. Sustained acoustic medicine for stimulation of wound healing:
A translational research report. Matthew D. Langer and George K. Lewis
(ZetrOZ, 56 Quarry Rd., Trumbull, CT 06611, mlanger@zetroz.com)
The healing of both acute and chronic wounds is a challenging clinical
issue affecting more than 6.5 million Americans. The regeneration phase of
wound healing is critical to restoration of function, but is often prolonged
because of the adverse environment for cell growth. Therapeutic ultrasound
increases nutrient absorption by cells, accelerates cellular metabolism, and
stimulates production of ECM proteins, which all increase the rate of wound
healing. To test the effect of long duration ultrasound exposure, an initial
study of wound healing was conducted in a rat model, with wounds sutured
to prevent closure via contraction. In this study, a 6 mm wound healed in
9 ± 2 days when exposed to 6 hours of ultrasound therapy, and in 15 ± 1 days
with a placebo device (p<0.01). Following IRB approval of a similar protocol for use in humans, a case study was performed on the wound closure of
a chronic wound. Four weeks of daily LITUS therapy reduced the wound
size by 90% from its size after 21 days of treatment with standard of care.
These results demonstrate the efficacy of long duration LITUS for healing
wounds in an animal model and an initial case of healing in a human
subject.
1:45
1pBA3. Long duration ultrasound facilitates delivery of a therapeutic
agent. Kelly Stratton, Rebecca Taggart, and George K. Lewis (ZetrOZ, 56
Quarry Rd., Trumbull, CT 06611, george@zetroz.com)
The ability of ultrasound to enhance drug delivery through the skin has been established in an animal model. This research tested the delivery of a therapeutic agent into human skin using sustained ultrasonic application over multiple hours. An IRB-approved pilot study was conducted using hyaluronan, a polymer found in the skin and associated with hydration. To assess the effectiveness of the delivery, a standard protocol was applied to measure moisture of the volar forearm with a corneometer. Fifteen subjects applied the hyaluronan to their forearms daily. One location was then treated
with a multi-hour ultrasonic treatment, and the other was not. Baseline skin
hydration measurements were taken for one week, followed by daily treatments with moisturizer and corneometer measurements twice per week for
three weeks. Subjects experienced double the increase in sustained moisture
when ultrasound was used in conjunction with a moisturizer when compared
to moisturizer alone (p<0.001) over the four weeks. This study successfully
demonstrated ultrasound treatment enhanced delivery of a therapeutic agent
into the skin.
2:00

1pBA4. Characterizing the pressure field in a modified flow cytometer quartz flow cell: A combined measurement and model approach to validate the internal pressure. Camilo Perez (BioEng. and Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105-6698, camipiri@uw.edu), Chenghui Wang (Inst. of Acoust., College of Phys. & Information Technol., Shaanxi Normal Univ., Xi'an, Shaanxi, China), Brian MacConaghy (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Juan Tu (Key Lab. of Modern Acoust., Nanjing Univ., Nanjing, Jiangsu, China), Jarred Swalwell (Oceanogr., Univ. of Washington, Seattle, WA), and Thomas J. Matula (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA)

We incorporated an ultrasound transducer into a flow cytometer to "activate" microbubbles passing the laser interrogation zone [J. Acoust. Soc. Am. 126, 2954–2962 (2009)]. This system allows high-throughput recording of the volume oscillations of microbubbles and has led to a new bubble dynamics model that incorporates shear thinning [Phys. Med. Biol. 58, 985–998 (2013)]. Important parameters in the model include the ambient microbubble size, R0, the driving pressure, PA, and the shell elasticity and viscosity parameters. R0 is obtained by calibrating the cytometer. Pressure calibration is difficult because the flow channel width (<200 µm) is too small to insert a hydrophone. The objective of this study was to develop a calibration method for a 20-cycle, 1-MHz transient pressure field. The pressure field propagating through the channel and into water was compared to a 3-D FEM model. After validation, the model was used to simulate the driving pressure as input for the bubble dynamics model, leaving only the two shell parameters as free variables. This approach was used to determine the mechanical properties of different bubbles (albumin, lipid, and lysozyme shells). Excellent fits were obtained in many cases, but not all, suggesting heterogeneity in microbubble shell parameters.

2:15

1pBA5. Entropy based detection of molecularly targeted nanoparticle ultrasound contrast agents in tumors. Michael Hughes (Int. Med./Cardiology, Washington Univ. School of Medicine, 1632 Ridge Bend Dr., St. Louis, MO 63108, mshatctrain@gmail.com), John McCarthy (Dept. of Mathematics, Washington Univ., St. Louis, MO), Jon Marsh, and Samuel Wickline (Int. Med./Cardiology, Washington Univ. School of Medicine, Saint Louis, MO)

In this study, we demonstrate that the "joint entropy" of two random variables (X, Y) can be applied to markedly improve tumor conspicuity (where X = f(t) = backscattered waveform and Y = g(t) = a reference waveform, both differentiable functions). Previous studies have shown that a good initial choice of reference is a reflection of the original insonifying pulse taken from a stainless-steel reflector. Using this choice, joint entropy analysis is more sensitive to accumulation of targeted contrast agents than conventional gray-scale or signal energy analysis by roughly a factor of 2 [Hughes, M. S., et al., J. Acoust. Soc. Am., 133(1), p. 283, 2013]. We now derive an improved reference that is applied to three groups of (MDA-435, breast tumor) flank tumor-implanted athymic nude mice to identify tumor vasculature after binding perfluorocarbon nanoparticles (~250 nm) to neovascular αvβ3 integrins. Five mice received i.v. αvβ3-targeted nanoparticles, five received nontargeted nanoparticles, and five received saline at a dose of 1 ml/kg, which was allowed to circulate for up to two hours prior to imaging. Three analogous groups of nonimplanted mice were imaged in the same region following the same imaging protocol. Our results indicate an improvement in contrast by a factor of 2.5 over previously published results. Thus, judicious selection of the reference waveform is critical to improving contrast-to-noise in tumor environments when attempting to detect targeted nanostructures for molecular imaging of sparse features.

2:30

1pBA6. Effects of fluid medium flow and spatial temperature variation on acoustophoretic motion of microparticles in microfluidic channels. Zhongzheng Liu and Yong-Joe Kim (Texas A&M Univ., 3123 TAMU, College Station, TX 77843, liuzz008@tamu.edu)

Current state-of-the-art models of the acoustophoretic forces applied to microparticles suspended in fluid media inside microfluidic channels, and of the acoustic streaming velocities inside those channels, have mainly been derived under the assumption of "static" fluid media with uniform temperature distributions. It has therefore been challenging to understand the effects of "moving" fluid media and of fluid medium temperature variation on acoustophoretic microparticle motion in the microfluidic channels. Here, a numerical modeling method to accurately predict the acoustophoretic motion of compressible microparticles in the microfluidic channels is presented to address this challenge. In the proposed method, the mass, momentum, and energy conservation equations and the equation of state are decomposed, using a perturbation method, into zeroth- to second-order equations. The fluid medium flow and temperature variation are considered in the zeroth-order equations, and the solutions of the zeroth-order equations (i.e., the zeroth-order fluid medium velocities and temperature distribution) are propagated into the higher-order equations, ultimately affecting the second-order acoustophoretic forces and acoustic streaming velocities. The effects of viscous fluid medium flow and medium temperature variation on the acoustophoretic forces and acoustic streaming velocities were then studied using the proposed numerical modeling method.
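For context, the classical static-medium baseline that this abstract generalizes is the textbook expression for the primary radiation force on a small compressible sphere in a 1-D standing wave; the material values below are illustrative, not from the paper:

```python
import numpy as np

def contrast_factor(rho_p, kappa_p, rho_0, kappa_0):
    """Acoustic contrast factor Phi = f1/3 + f2/2 for a small compressible
    sphere in an inviscid, static fluid (the classical baseline)."""
    f1 = 1.0 - kappa_p / kappa_0
    f2 = 2.0 * (rho_p - rho_0) / (2.0 * rho_p + rho_0)
    return f1 / 3.0 + f2 / 2.0

def standing_wave_force(a, phi, E_ac, k, z):
    """Primary radiation force F = 4*pi*Phi*k*a^3*E_ac*sin(2kz)."""
    return 4.0 * np.pi * phi * k * a**3 * E_ac * np.sin(2.0 * k * z)

# Illustrative values: polystyrene-like bead in water, 2-MHz standing wave
rho_w, c_w = 998.0, 1483.0
kappa_w = 1.0 / (rho_w * c_w**2)            # water compressibility
phi = contrast_factor(1050.0, 2.49e-10, rho_w, kappa_w)
k = 2.0 * np.pi * 2.0e6 / c_w
F = standing_wave_force(5e-6, phi, 10.0, k, np.pi / (4.0 * k))
```

A positive Phi drives particles to the pressure node; the abstract's point is that mean flow and temperature gradients perturb the zeroth-order fields on which this expression is built.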
2:45
1pBA7. Thrombolytic efficacy and cavitation activity of rt-PA echogenic liposomes versus Definity® exposed to 120-kHz ultrasound. Kenneth B. Bader, Guillaume Bouchoux, Christy K. Holland (Internal Medicine, Univ. of Cincinnati, 231 Albert Sabin Way, CVC 3933, Cincinnati, OH 45267-0586, Kenneth.Bader@uc.edu), Tao Peng, Melvin E. Klegerman, and David D. McPherson (Internal Medicine, Univ. of Texas Health Sci. Ctr., Houston, TX)

Echogenic liposomes can be used as a vector for co-encapsulation of the thrombolytic drug rt-PA and microbubbles. These agents can be acoustically activated for localized cavitation-enhanced drug delivery. The objective of our study was to characterize the thrombolytic efficacy and sustained cavitation nucleation and activity of rt-PA-loaded echogenic liposomes (t-ELIP). A spectrophotometric method was used to determine the enzymatic activity of rt-PA released from t-ELIP, which was compared to unencapsulated rt-PA. The thrombolytic efficacy of t-ELIP, rt-PA alone, or rt-PA and the commercial contrast agent Definity® exposed to sub-megahertz ultrasound was determined in an in vitro flow model. Ultraharmonic (UH) emissions from stable cavitation were recorded during insonation. Both UH emissions and thrombolytic efficacy were significantly greater for rt-PA and Definity® than for either rt-PA alone or t-ELIP with equivalent rt-PA loading. Furthermore, the enzymatic activity of t-ELIP was significantly lower than that of free rt-PA. When the dosage of t-ELIP was adjusted to compensate for the lower enzymatic activity, similar thrombolytic efficacy was found for t-ELIP and for Definity® and rt-PA. However, sustained ultraharmonic emissions were not observed for t-ELIP in the flow phantom.
3:00
1pBA8. Temporal stability evaluation of fluorescein-nanoparticles
loaded on albumin-coated microbubbles. Marianne Gauthier (Dept. of
Elec. and Comput. Eng., BioAcoust. Res. Lab., Univ. of Illinois at UrbanaChampaign, 4223 Beckman Inst.,405 N. Mathews, Urbana, IL 61801,
frenchmg@illinois.edu), Jamie R. Kelly (Dept. of BioEng., BioAcoust. Res.
Lab., Univ. of Illinois at Urbana-Champaign, Urbana, IL), and William D.
O’Brien (Dept. of Elec. and Comput. Eng., BioAcoust. Res. Lab., Univ. of
Illinois at Urbana-Champaign, Urbana, IL)
Purpose: This study aims to evaluate the temporal stability of newly designed FITC-nanoparticles (NPs) loaded on albumin-coated microbubbles (MBs) to be used for future drug delivery purposes. Materials and Methods: MBs (3.6 × 10^8 MB/mL) were obtained by sonicating 5% bovine serum
albumin and 15% dextrose solution. NPs (5 mg/mL) were produced from fluorescein (FITC)-PLA polymers and functionalized using EDC/NHS. NP-loaded MBs resulted from the covalent linking of functionalized NPs to MBs via a carbodiimide technique. Three parameters were quantitatively monitored over a 4-week duration at 8 time points: MB diameter was determined using a circle detection routine based on the Hough transform, MB number density was evaluated using a hemocytometer, and NP-loading yield was assessed based on the loaded-MB fluorescence uptake. Depending on the hypotheses, analyses of variance or Kruskal-Wallis tests were run to evaluate the stability of these physical parameters over the time of the experiment. Results: Statistical analysis exhibited no significant differences in NP-loaded MB mean sizes, number densities, or loading yields over time (p > 0.05). Conclusion: The newly designed NP-loaded MBs are stable over at least a 4-week duration and can be used without extra precautions concerning their temporal stability. [This work was supported by NIH R37EB002641.]
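A minimal circle Hough transform of the kind mentioned above for MB sizing can be sketched for a known radius; the synthetic rim points below stand in for the edge map a real image pipeline would produce first:

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Known-radius circle Hough transform: each edge point votes for all
    candidate centers lying at `radius` from it; the accumulator peak is
    the estimated center (row, col)."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # unbuffered voting
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic microbubble rim: radius-12 circle centered at (40, 55)
angles = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
edges = [(40.0 + 12.0 * np.sin(a), 55.0 + 12.0 * np.cos(a)) for a in angles]
cy, cx = hough_circle_center(edges, 12.0, (100, 100))
```

Sizing, as in the study, repeats the vote over a range of candidate radii and takes the (center, radius) pair with the strongest peak.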
3:15–3:30 Break
3:30
1pBA9. Chronotropic effect in the rat heart caused by pulsed ultrasound. Olivia C. Coiado and William D. O'Brien, Jr. (Dept. of Elec. and Comput. Eng., Univ. of Illinois at Urbana-Champaign, 405 N. Mathews, 4223 Beckman Inst., Urbana, IL 61801, oliviacoiado@hotmail.com)

This study investigated the dependence of the chronotropic effect on an increasing/decreasing sequence of pulse repetition frequencies (PRFs) via the application of 3.5-MHz pulsed ultrasound (US) to the rat heart. The experiments used three groups of 3-month-old female rats (n = 4 each): control, PRF increase, and PRF decrease. Rats were exposed to transthoracic ultrasonic pulses at ~0.50% duty factor and 2.0-MPa peak rarefactional pressure amplitude. For the PRF increase group, the PRF started lower than the rat's heart rate and was increased sequentially in 1-Hz steps every 5 s (i.e., 4, 5, and 6 Hz) for a total duration of 15 s. For the PRF decrease group, the PRF started greater than the rat's heart rate and was decreased sequentially in 1-Hz steps every 5 s (i.e., 6, 5, and 4 Hz). For the PRF decrease and control groups, the ultrasound application resulted in a significant negative chronotropic effect (~11%) after ultrasound exposure. For the PRF increase group, however, a significant but smaller decrease in heart rate (~3%) was observed after ultrasound exposure. Thus, the ultrasound application caused a negative chronotropic effect after US exposure for both the PRF increase and PRF decrease groups. [Support: NIH Grant R37EB002641.]
3:45
1pBA10. Ultrasonic welding in orthopedic implants. Kristi R. Korkowski
and Timothy Bigelow (Mech. Eng., Iowa State Univ., 2201 Coover Hall,
Ames, IA 50011, korkowsk@iastate.edu)
A critical event in hip replacement is the occurrence of osteolysis. Cemented hip replacements most commonly use polymethylmethacrylate (PMMA), not as an adhesive but rather as a filler to limit micromotion and provide stability. PMMA, however, contributes to osteolysis through both a thermal response during curing and implant wear debris. To mitigate the occurrence of osteolysis, we are exploring ultrasonic welding as a means of attachment. Weld strength was assessed using ex vivo bovine rib and femur bones. A flat end mill provided 20 site locations for insertion of an acrylonitrile butadiene styrene (ABS) pin. Each location was characterized by topography, porosity, discoloration, and any other notable features. Each site was welded using a Branson 2000iw ultrasonic welder (20 kHz, 1100 W). Machine parameters included weld force, weld time, and hold time. The bond strength was determined using a tensile tester. Tensile testing showed a negative correlation between porosity and bond strength. Further evaluation and characterization of bone properties relative to bond strength will enable appropriate selection of welding parameters to ensure a superior bond.
4:00
1pBA11. Estimation of subsurface temperature profiles from infrared
measurements during ultrasound ablation. Tyler R. Fosnight, Fong Ming
Hooi, Sadie B. Colbert, Ryan D. Keil, and T. Douglas Mast (Biomedical
Eng., Univ. of Cincinnati, 3938 Cardiovascular Res. Ctr., 231 Albert Sabin
Way, Cincinnati, OH 45267-0586, doug.mast@uc.edu)
Measurement of in situ spatiotemporal temperature profiles would be
useful for developing and validating thermal ablation methods and therapy
monitoring approaches. Here, finite difference and analytic solutions to
Pennes’ bio-heat transfer equation were used to determine spatial correlations between temperature profiles on parallel planes. Time delays and scale
factors for correlated profiles were applied to infrared surface-temperature
measurements to estimate subsurface temperatures. To test this method,
ex vivo bovine liver tissue was sonicated by linear image-ablate arrays with
1–6 pulses of 5.0 MHz unfocused (7.5 s, 64.4–92.0 W/cm2 in situ ISPTP) or
focused (1 s, 562.7–799.6 W/cm2 in situ ISPTP, focus depth 10 mm) ultrasound. Temperature was measured on the liver surface by an infrared camera at 1 fps and extrapolated to the imaging/ablation plane, 3 mm below the
surface. Echo decorrelation maps were computed from pulse-echo signals
captured at 118 fps during 5.0-s rest periods beginning 1.1 s after each sonication pulse. Tissue samples were frozen at −80 °C, sectioned, vitally stained, imaged, and segmented for analysis. Estimated thermal dose profiles showed correspondence with segmented tissue histology, while thresholded temperature profiles corresponded with measured echo decorrelation.
These results suggest utility of this method for thermal ablation research.
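A toy 1-D explicit finite-difference step of Pennes' bio-heat equation, the model used above to relate temperature profiles on parallel planes; all tissue parameters here are illustrative, not the study's values:

```python
import numpy as np

def pennes_1d(T0, dx, dt, steps, k=0.5, rho=1050.0, c=3600.0,
              w_b=0.5, c_b=3600.0, T_a=37.0):
    """Explicit finite differences for Pennes' bio-heat equation in 1-D:
    rho*c*dT/dt = k*d2T/dx2 - w_b*c_b*(T - T_a).
    Dirichlet boundaries held at the arterial temperature T_a."""
    T = T0.copy()
    alpha = k / (rho * c)                       # thermal diffusivity
    for _ in range(steps):
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt * (alpha * lap
                         - w_b * c_b * (T[1:-1] - T_a) / (rho * c))
        T[0] = T[-1] = T_a
    return T

# Hot ablation spot relaxing by conduction and perfusion over 5 s
x = np.linspace(0.0, 0.02, 101)                 # 2 cm of tissue
T_init = 37.0 + 30.0 * np.exp(-((x - 0.01) / 0.002) ** 2)
T_end = pennes_1d(T_init, dx=x[1] - x[0], dt=0.01, steps=500)
```

The chosen dt satisfies the explicit stability bound dt < dx²/(2α); the subsurface-estimation scheme above effectively learns time delays and scale factors between solutions like this on adjacent planes.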
4:15
1pBA12. Temperature dependence of harmonics generated by nonlinear ultrasound beam propagation in water. Borna Maraghechi, Michael
C. Kolios, and Jahan Tavakkoli (Phys., Ryerson Univ., 350 Victoria St., Toronto, ON M5B 2K3, Canada, borna.maraghechi@ryerson.ca)
Ultrasound thermal therapy is used for noninvasive treatment of cancer.
For accurate ultrasound based temperature monitoring in thermal therapy,
the temperature dependence of acoustic parameters is required. In this study,
the temperature dependence of acoustic harmonics was investigated in
water. The pressure amplitudes of the transmitted fundamental frequency
(p1), and its harmonics (second (p2), third (p3), fourth (p4), and fifth (p5))
generated by nonlinear ultrasound propagation were measured by a calibrated hydrophone in water. The hydrophone was placed at the focal point
of a focused 5-MHz transducer (f-number 4.5) to measure the acoustic pressure. Higher harmonics were generated by transmitting a 5-MHz 15-cycle
pulse that resulted in a focal positive peak pressure of approximately 0.26
MPa in water. The water temperature was increased from 26 °C to 52 °C in increments of 2 °C. Due to this temperature elevation, the value of p1 decreased by 9% ± 1.5% (compared to its value at 26 °C) and the values of p2, p3, p4, and p5 increased by 5% ± 2%, 22% ± 8%, 44% ± 7%, and 55% ± 5%, respectively. The results indicate that the nonlinear harmonics are highly temperature dependent and that their temperature sensitivity increases with the harmonic number. It is concluded that the nonlinear harmonics could potentially be used for ultrasound-based thermometry.
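The harmonic amplitudes p1 through p5 can be read off a hydrophone record by taking the FFT at integer multiples of the fundamental; a synthetic check with invented amplitudes (an exact number of periods in the window avoids spectral leakage):

```python
import numpy as np

fs, f0, ncyc = 100e6, 5e6, 20            # sample rate, fundamental, cycles
n = int(fs / f0 * ncyc)                  # integer number of periods
t = np.arange(n) / fs
amps = {1: 1.0, 2: 0.30, 3: 0.10, 4: 0.05, 5: 0.02}  # known test values
p = sum(a * np.sin(2 * np.pi * m * f0 * t) for m, a in amps.items())

spec = np.abs(np.fft.rfft(p)) * 2.0 / n  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(n, 1.0 / fs)
measured = {m: spec[np.argmin(np.abs(freqs - m * f0))] for m in amps}
```

With real hydrophone data, windowing and averaging over repeated pulses would be needed before comparing harmonic levels across temperatures.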
4:30
1pBA13. Implementation of a perfectly matched layer in nonlinear continuous wave ultrasound simulations. Xiaofeng Zhao and Robert
McGough (Dept. of Elec. and Comput. Eng., Michigan State Univ., East
Lansing, MI, zhaoxia6@msu.edu)
FOCUS, the "Fast Object-Oriented C + + Ultrasound Simulator" (http://
www.egr.msu.edu/~fultras-web), simulates nonlinear ultrasound propagation by numerically evaluating the Khokhlov–Zabolotskaya–Kuznetsov
(KZK) equation. For continuous-wave excitations, KZK simulations in
FOCUS previously required that the simulations extend over large radial
distances relative to the aperture radius, which reduced the effect of reflections from the boundary on the main beam. To reduce the size of the grid
required for these calculations, a perfectly matched layer (PML) was
recently added to the KZK simulation routines in FOCUS. Simulations of
the linear pressure fields generated by a spherically focused transducer with
an aperture radius of 1.5 cm and a radius of curvature of 6 cm are evaluated
for a peak surface pressure of 0.5 MPa and a 1 MHz fundamental frequency.
Results of linear KZK simulations with and without the PML are compared to an analytical solution of the linear KZK equation on-axis, and the results show that simulations without the PML require a radial boundary that is at least seven times the aperture radius, whereas the PML enables accurate simulations for a radial boundary that is only two times the aperture radius. [This work was supported in part by NIH Grant R01 EB012079.]
168th Meeting: Acoustical Society of America
2096
4:45
1pBA14. An improved time-base transformation scheme for computing
waveform deformation during nonlinear propagation of ultrasound.
Boris de Graaff, Shreyas B. Raghunathan, and Martin D. Verweij (Acoust.
Wavefield Imaging, Delft Univ. of Technol., Lorentzweg 1, Delft 2628CJ,
Netherlands, m.d.verweij@tudelft.nl)
Nonlinear propagation plays an important role in various applications of
medical ultrasound, like higher harmonic imaging and high intensity focused
ultrasound (HIFU) treatment. Simulation of nonlinear ultrasound fields can
greatly assist in explaining experimental observations and in predicting the
performance of novel procedures and devices. Many numerical simulations
are based on the generic split-step approach, which takes the ultrasound field
at the transducer plane and propagates this forward over successive parallel
planes. Usually, the spatial steps between the planes are small and the diffraction, attenuation, and nonlinear deformation may be treated as separate
substeps. For the majority of methods, e.g., for all KZK-type methods, the
nonlinear substep relies on the implicit solution of the one-dimensional Burgers equation, which is implemented using a time-base transformation. This
generally works fine, but when the shock wave regime is approached, reduced spatial steps are required to prevent time points from "crossing over," and the method can become notoriously slow. This paper analyses the fundamental difficulty with the common time-base transformation and provides an alternative that does not suffer from the mentioned slowdown. Numerical
results will be shown to demonstrate that this alternative will allow much
larger spatial steps without compromising the numerical accuracy.
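The classic time-base transformation that the abstract improves upon can be sketched as follows for the lossless case: each sample keeps its amplitude, but its time point is shifted in proportion to the local pressure (the implicit Poisson/Earnshaw solution of the Burgers equation), and the waveform is then resampled onto the uniform grid. This is a generic sketch with assumed parameter values, not the authors' improved scheme; the monotonicity check is exactly the "cross over" failure mode that forces small steps.

```python
import numpy as np

def nonlinear_substep(t, p, dz, beta=3.5, rho0=1000.0, c0=1500.0):
    """Propagate waveform p(t) over spatial step dz via the time-base
    transformation; returns p resampled on the original uniform grid,
    or raises if the warped time points cross over."""
    t_warped = t - beta * p * dz / (rho0 * c0**3)
    if np.any(np.diff(t_warped) <= 0):
        # Shock regime: warped time base is no longer monotonic, so dz
        # must be reduced -- the slowdown discussed in the abstract.
        raise ValueError("time points crossed over; reduce dz")
    return np.interp(t, t_warped, p)

# Assumed example: a 1 MHz, 1 MPa tone propagated over a 1 cm substep.
f0 = 1e6
t = np.linspace(0, 5 / f0, 2048)
p = 1e6 * np.sin(2 * np.pi * f0 * t)
p_out = nonlinear_substep(t, p, dz=0.01)
```

For large dz (or near shock formation) the monotonicity check trips and the step must be subdivided, which is the cost the proposed alternative avoids.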
5:00
1pBA15. An error reduction algorithm for numeric calculation of the
spatial impulse response. Nils Sponheim (Inst. of Industrial Dev., Faculty
of Technol., Art and Design, Oslo and Akershus Univ. College of Appl.
Sci., Pilestredet 35, P.O. Box 4, St. Olavs plass, Oslo NO-0130, Norway,
nils.sponheim@hioa.no)
The most frequently used method for calculation of the pulsed pressure
field of ultrasonic transducers is the spatial impulse response (SIR)
method. This paper presents a new numeric approach that reduces the numeric error by weighting the contribution of each source element into the SIR time array, considering the exact time of arrival of each contribution. The resolution of the time array Δt must be finite, which results in a travel-time error of ±Δt/2. However, since the exact travel time is known, the contribution from each source element can be shared between the two closest time elements so that the average time corresponds to the exact travel time, thereby reducing the numeric error. This
study compares the old and the new numeric algorithm with the analytic
solution for a planar circular disk because it has a simple analytic solution.
The paper presents calculations of the SIR for selected points in space and
calculations of the RMS-error between the numeric algorithms and the
analytic solution. The proposed new numeric algorithm decreases the numeric noise, or error, by a factor of 5 compared to the old numeric algorithm.
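The two-bin weighting described above can be sketched in a few lines: each contribution is split between the two nearest samples of the SIR time array so that the amplitude-weighted mean arrival time equals the exact travel time. Array sizes and amplitudes below are illustrative, not from the paper.

```python
import numpy as np

def accumulate_sir(sir, t0, dt, arrival_times, amplitudes):
    """Add contributions into sir (uniform time array starting at t0 with
    spacing dt), sharing each between its two closest bins so the weighted
    mean time equals the exact arrival time."""
    idx_f = (arrival_times - t0) / dt      # fractional bin index
    i = np.floor(idx_f).astype(int)
    w_hi = idx_f - i                       # weight for the upper bin
    np.add.at(sir, i, amplitudes * (1.0 - w_hi))
    np.add.at(sir, i + 1, amplitudes * w_hi)
    return sir

sir = np.zeros(10)
accumulate_sir(sir, t0=0.0, dt=1.0,
               arrival_times=np.array([3.25]), amplitudes=np.array([1.0]))
# The contribution is split 0.75/0.25 between bins 3 and 4, so the
# weighted mean arrival time is 3*0.75 + 4*0.25 = 3.25, the exact value.
```

The "old" algorithm corresponds to rounding each arrival to a single nearest bin, which carries the ±Δt/2 error the abstract discusses.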
5:15
1pBA16. Teaching auscultation visually with low cost system, is it
feasible? Sergio L. Aguirre (Universidade Federal de Santa Maria, Rua
Professor Heitor da Graça Fernandes, Avenida Roraima 1000 Centro de
Tecnologia, Santa Maria, Rio Grande do Sul 97105-170, Brazil, sergio.
aguirre@eac.ufsm.br), Ricardo Brum, Stephan Paul, Bernardo H. Murta,
and Paula P. Jardin (Universidade Federal de Santa Maria, Santa Maria,
RS, Brazil)
Cardiac auscultation can generate important information for the diagnosis of diseases. The sounds that the cardiac system produces lie within the frequency range of human hearing, but in a region of low sensitivity.
This project aims to build a low cost didactic software/hardware set for
teaching cardiac auscultation technique in Brazilian universities. The frequencies of interest to describe the human cardiac cycle were found in the
range of 20 Hz to 1 kHz which includes low frequencies where available
low-cost transducers usually have large errors. To create the system, an optimization of the chestpiece geometry is being performed with finite element simulations; meanwhile, digital filters for specific frequencies of interest and a MATLAB-based interface are being developed.
Filters were needed for the gallops (20 to 70 Hz), heartbeats (20 to
100 Hz), ejection murmurs (100 to 500 Hz), mitral stenosis (30 to 80 Hz),
and regurgitations (200 to 900 Hz). The FEM simulation of a chestpiece
demonstrates high signal levels in the desired frequency range, which can be used with the filters to obtain specific information. Furthermore, the ideal signal-recording equipment will be defined, implemented, and tested.
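The band-pass filters listed above (gallops, heartbeats, murmurs, stenosis, regurgitations) can be sketched generically with zero-phase Butterworth sections; the filter order and sampling rate below are assumptions for illustration, not values from the project (whose interface is MATLAB-based).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # Hz, assumed sampling rate
BANDS = {  # pass bands from the abstract, in Hz
    "gallops": (20, 70),
    "heart beats": (20, 100),
    "ejection murmurs": (100, 500),
    "mitral stenosis": (30, 80),
    "regurgitations": (200, 900),
}

def band_filter(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass for one band of interest."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Synthetic test signal: a 50 Hz "gallop-band" tone plus a 700 Hz
# "regurgitation-band" tone.
t = np.arange(0, 1.0, 1 / FS)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 700 * t)
gallop_band = band_filter(x, *BANDS["gallops"])
regurg_band = band_filter(x, *BANDS["regurgitations"])
```

Each filtered band isolates the corresponding diagnostic content before it is visualized for teaching.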
MONDAY AFTERNOON, 27 OCTOBER 2014
MARRIOTT 3/4, 12:55 P.M. TO 3:20 P.M.
Session 1pNS
Noise and Physical Acoustics: Metamaterials for Noise Control II
Olga Umnova, Cochair
University of Salford, The Crescent, Salford M5 4WT, United Kingdom
Keith Attenborough, Cochair
DDEM, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom
Chair’s Introduction—12:55
Invited Paper
1:00
1pNS1. Sound propagation in the presence of a resonant surface. Logan Schwan (Univ. of Salford, The Crescent, Salford M5 4WT,
United Kingdom, logan.schwan@gmail.com) and Olga Umnova (Acoust. Res. Ctr., Univ. of Salford, Salford, United Kingdom)
The interactions between acoustic waves and an array of resonators are studied. The resonators are arranged periodically on an impedance surface so that scale separation between the sound wavelength and the array period is achieved. An asymptotic multi-scale model
which accounts for viscous and thermal losses in the resonators is developed and is used to derive an effective surface admittance. It is
shown that the boundary conditions at the surface are substantially modified around the resonance frequency. The pressure field on the surface is nearly canceled leading to a phase shift between the reflected and the incident waves. The array can also behave as an absorbing
layer. The predictions of the homogenized model are compared with multiple scattering theory (MST) applied to a finite size array and the
limitations of the former are identified. The influence of the surface roughness and local scattering on the reflected wave is discussed.
Contributed Papers
1:20
1pNS2. Flexural wave induced coherent scattering in arrays of cylindrical shells in water. Alexey S. Titovich and Andrew N. Norris (Mech. and
Aerosp. Eng., Rutgers Univ., 98 Brett Rd., Piscataway, NJ 08854,
alexey17@eden.rutgers.edu)
A periodic array of elastic shells in water is a sonic crystal with local
resonances in the form of flexural vibrations. This acoustic metamaterial has
seen application in wave steering by grading the index in the array, as well as
acoustic filters manifested by Bragg scattering. The primary reason for using
shells is that they can be tuned quasi-statically to have water-like effective
acoustic properties. The issue is that the modally dense flexural resonances
can form pseudogaps in the frequency response resulting in total reflection
from the array. Furthermore, if a flexural resonance falls in the Bragg band
gap, total transmission is possible at that frequency. Although the scattered
wave due to low order flexural vibration of a thin shell is evanescent, when
several shells are closely spaced, the effect on the far-field response is dramatic. In this paper, the interaction of neighboring shells is investigated theoretically using the Love-Timoshenko shell theory and multiple scattering. A
simple model is offered to describe the interaction of modes based on the analytical work. The directionality of the lowest flexural modes is also discussed
as it can lead to phasing between neighboring shells.
1:35
1pNS3. A thin-panel underwater acoustic absorber. Ashley J. Hicks, Michael R. Haberman, and Preston S. Wilson (Mech. Eng. and Appl. Res.
Labs, Univ. of Texas at Austin, 3607 Greystone Dr., Apartment 1410, Austin, TX 78731, a.jean.hicks@utexas.edu)
We present experimental results on the acoustic behavior of thin-panel
underwater sound absorbers composed of a sub-wavelength layered
structure. The panels are formed using an inner layer of Delrin or PLA plastic with circular air-filled holes sandwiched between two rubber outer
layers. The panel structure mimics a planar encapsulated bubble screen
exactly one bubble thick, but displays performance that is significantly more
broadband than a comparable bubble screen, which is only useful near the
resonance frequency of the bubble. Initial results indicate 10 dB of insertion
loss in the frequency range 1 kHz to 5 kHz for a panel that is about 1/250th
of a wavelength in thickness at the lowest frequency. The effect of air volume fraction and the use of a 3-D printed (porous) inner layer on insertion
loss will be presented and discussed. [Work supported by ONR.]
1:50
1pNS4. Micromechanical effective medium modeling of metamaterials
of the Willis form. Michael B. Muhlestein, Michael R. Haberman, and
Preston S. Wilson (Appl. Res. Labs. and Dept. of Mech. Eng., Univ. of
Texas at Austin, 3201 Duval Rd. #928, Austin, TX 78759, mimuhle@gmail.
com)
The unique behavior of acoustic metamaterials (AMM) results from
deeply sub-wavelength structures with hidden degrees of freedom rather
than the inherent material properties of their constituents. This distinguishes
AMM from classical composite or cellular materials and also complicates
attempts to model their overall response. This is especially true when subwavelength structures yield anisotropic effective material response, a key
feature of AMM devices designed using transformation acoustics. Further,
previous work has shown that the dynamic response of heterogeneous materials must include coupling between the overall strain and momentum fields
[Milton and Willis, Proc. R. Soc. A 463, 855–880, (2007)]. A micromechanical homogenization model of the overall Willis constitutive equations is
presented to address these difficulties. The model yields a low-volume-fraction estimate of anisotropic and frequency-dependent effective properties in
the long-wavelength limit. The model employs volume averages of the dyadic Green's function calculating the particle displacement resulting from a unit force source. This Green's function is shown to be analogous to one that determines the particle velocity in a fluid resulting from a unit dipole moment. The predicted effective properties for isotropic materials with spherical inclusions fall within the Hashin-Shtrikman bounds and agree with self-consistent estimates. [Work supported by ONR.]
2:05
1pNS5. Acoustic metamaterial homogenization based on equivalent
fluid media with coupled field response. Caleb F. Sieck (Appl. Res. Labs.
and Dept. of Elec. & Comput. Eng., The Univ. of Texas at Austin, 4021
Steck Ave #115, Austin, TX 78759, cfsieck@utexas.edu), Michael R. Haberman (Appl. Res. Labs. and Dept. of Mech. Eng., The Univ. of Texas at
Austin, Austin, TX), and Andrea Alù (Dept. of Elec. & Comput. Eng., The Univ. of Texas at Austin, Austin, TX)
Homogenization schemes for wave propagation in heterogeneous electromagnetic (EM) and elastic materials indicate that EM bianisotropy and elastic momentum-strain and stress-velocity field coupling are required to correctly describe the effective behavior of the medium [Alù, Phys. Rev. B,
84, 075153 (2011); Milton and Willis, Proc. R. Soc. A, 463, 855–880,
(2007)]. Further, the determination of material coupling terms in EM
resolves apparent violations of causality and passivity that are present in earlier models [A. Alù, Phys. Rev. B, 83, 081102(R) (2011)]. These details
have not received much attention in fluid acoustics, but they are important
for a proper description of acoustic metamaterial behavior. We derive
expressions for effective properties of a heterogeneous fluid medium from
expressions for the conservation of mass, the conservation of momentum,
and the equation of state and find a physically meaningful effective material
response from first-principles. The results show inherent coupling between
the ensemble averaged volume strain-momentum and pressure-velocity
field. The approach is valid for an infinite periodic lattice of heterogeneities
and employs zero-, first-, and second-order tensorial Green’s functions to
relate point-discontinuities in compressibility and density to far field pressure and particle velocity fields. [This work was supported by the Office of
Naval Research.]
2:20
1pNS6. Nonlinear behavior of a coupled multiscale material containing
snapping acoustic metamaterial inclusions. Stephanie G. Konarski, Michael R. Haberman, and Mark F. Hamilton (Appl. Res. Labs., The Univ. of
Texas at Austin, P.O. Box 8029, Austin, TX 78713-8029, skonarski@
utexas.edu)
Snapping acoustic metamaterial (SAMM) inclusions are engineered subwavelength structures that exhibit regimes of both positive and negative
stiffness. Snapping is defined as large, rapid deformations resulting from the
application of an infinitesimal change in externally applied pressure. This
snapping leads to a large hysteretic response at the inclusion scale and is
thus of interest for enhancing absorption of energy in acoustic waves. The
research presented here models the forced dynamics of a multiscale material
consisting of SAMM inclusions embedded in a nearly incompressible viscoelastic matrix material to explore the influence of small-scale snapping on
enhanced macroscopic absorption. The microscale is characterized by a single SAMM inclusion, while the macroscale is sufficiently large to encompass a low volume fraction of non-interacting SAMM inclusions within the
nearly incompressible matrix. A model of the forced dynamical response of
this heterogeneous material is achieved by coupling the two scales in time
and space using a generalized Rayleigh-Plesset analysis, which has been
adapted from the field of bubble dynamics. A loss factor for the heterogeneous medium is examined to characterize energy dissipation due to the forced
behavior of these metamaterial inclusions. [Work supported by the ARL:UT
McKinney Fellowship in Acoustics and Office of Naval Research.]
2:35
1pNS7. Cloaking of an acoustic sensor using scattering cancelation. Matthew D. Guild (Dept. of Electronics Eng., Universitat Politècnica de València, Camino de Vera s/n (Edificio 7F), Valencia 46022, Spain, mdguild@utexas.edu), Andrea Alù (Dept. of Elec. and Comput. Eng., Univ. of Texas
at Austin, Austin, TX), and Michael R. Haberman (Appl. Res. Labs. and
Dept. of Mech. Eng., Univ. of Texas at Austin, Austin, TX)
Acoustic scattering cancelation (SC) is an approach enabling the elimination of the scattered field from an object, thereby cloaking it, without
restricting the incident wave from interacting with the object. This aspect of
an SC cloak lends itself well to applications in which one wishes to extract
energy from the incident field with minimal scattering, such as for sensing
and noise control. In this work, an acoustic cloak designed based on the
scattering cancelation method, and made of two effective fluid layers, is
applied to the case of an acoustic sensor consisting of a hollow piezoelectric
shell with mechanical absorption, providing a 20–50 dB reduction in the
scattering strength. The cloak is shown to increase the range of frequencies
over which there is nearly perfect phase fidelity between the acoustic signal
and the voltage generated by the sensor, while remaining within the physical
bounds of a passive absorber. The feasibility of achieving the necessary
fluid layer properties is demonstrated using sonic crystals with the use of
readily available acoustic materials. [Work supported by the US ONR and
Spanish MINECO.]
2:50
1pNS8. Cloaking non-spherical objects and collections of objects using
the scattering cancelation method. Ashley J. Hicks (Appl. Res. Labs. and
Dept. of Mech. Eng., The Univ. of Texas at Austin, Appl. Res. Labs., 10000
Burnet Rd., Austin, TX 78758, ahicks@arlut.utexas.edu), Matthew D. Guild
(Wave Phenomena Group, Dept. of Electronics Eng., Universitat Politècnica
de València, Valencia, Spain), Michael R. Haberman (Appl. Res. Labs. and
Dept. of Mech. Eng., The Univ. of Texas at Austin, Austin, TX), Andrea
Alù (Dept. of Elec. and Comput. Eng., The Univ. of Texas at Austin, Austin,
TX), and Preston S. Wilson (Appl. Res. Labs. and Dept. of Mech. Eng., The
Univ. of Texas at Austin, Austin, TX)
Acoustic cloaks can be designed using transformation acoustics (TA) to
guide acoustic disturbances around an object. TA cloaks, however, require
the use of exotic materials such as pentamode materials [Proc. R. Soc. A.
464, pp. 2411–2434, (2008)]. Alternatively, the scattering cancelation (SC)
method allows the cloaked object to interact with the acoustic wave and can
be realized with isotropic materials [Phys. Rev. B., 86, 104302 (2012)].
Unfortunately, SC cloaking performance may be degraded if the shape of
the cloaked object diverges from the one for which the cloak was originally
designed. This study investigates the design of two-layer SC cloaks for
imperfect spherical objects. The cloaking material properties are determined
by minimizing the scattered field from a model of the imperfect object
approximated as a series of concentric shells. Predictions from this approximate analytical model are compared with three-dimensional finite element
(FE) models of the cloaked and uncloaked non-spherical shapes. Analytical
and FE results are in good agreement for ka ≤ 5, indicating that the SC
method is robust to object imperfections. Finally, FE models are used to
explore SC cloak robustness to multiple-scattering by investigating linear
arrays of cloaked objects for different incident angles. [Work supported by
ONR.]
3:05
1pNS9. Parity-time symmetric metamaterials and metasurfaces for
loss-immune and broadband acoustic wave manipulation. Romain
Fleury, Dimitrios Sounas, and Andrea Alù (ECE Dept., The Univ. of Texas
at Austin, 1 University Station C0803, Austin, TX 78712, romain.fleury@
utexas.edu)
We explore the largely uncharted scattering properties of acoustic systems that are engineered to be invariant under a special kind of space-time
symmetry, which consists of taking their mirror image and running time backwards. Known as parity-time (PT) symmetry, this special condition is shown here to lead to acoustic metamaterials that possess a balanced distribution of gain (amplifying) and loss (absorbing) media, which is the basis of ideal loss compensation and, under certain conditions, unidirectional invisibility. We have
designed and built the first acoustic metamaterial with parity-time symmetric properties, obtained by pairing the acoustic equivalent of a lasing system
with a coherent perfect acoustic absorber, implemented using electro-
acoustic resonators loaded with non-Foster electrical circuits. The active
system can be engineered to be fully stable and, in principle, broadband. We
discuss the underlying physics and present the realization of a unidirectional
invisible acoustic sensor with unique sensing properties. We also discuss the
potential of PT acoustic metamaterials and metasurfaces for a variety of
metamaterial-related applications, which we obtain in a loss-immune and
broadband fashion, including perfect cloaking of sensors, planar focusing,
and unidirectional cloaking of large objects.
MONDAY AFTERNOON, 27 OCTOBER 2014
INDIANA C/D, 1:15 P.M. TO 4:45 P.M.
Session 1pPA
Physical Acoustics and Noise: Jet Noise Measurements and Analyses II
Richard L. McKinley, Cochair
Battlespace Acoustics, Air Force Research Laboratory, 2610 Seventh Street, Wright-Patterson AFB, OH 45433-7901
Kent L. Gee, Cochair
Brigham Young University, N243 ESC, Provo, UT 84602
Alan T. Wall, Cochair
Battlespace Acoustics Branch, Air Force Research Laboratory, Bldg. 441, Wright-Patterson AFB, OH 45433
Chair’s Introduction—1:15
Invited Papers
1:20
1pPA1. Considerations for array design and inverse methods for source modeling of full-scale jets. Alan T. Wall (Battlespace
Acoust. Branch, Air Force Res. Lab., Bldg. 441, Wright-Patterson AFB, OH 45433, alantwall@gmail.com), Blaine M. Harker, Trevor
A. Stout, Kent L. Gee, Tracianne B. Neilsen (Dept. of Phys. and Astronomy, Brigham Young Univ., Provo, UT), Michael M. James
(Blue Ridge Res. and Consulting, Asheville, NC) and Richard L. McKinley (Air Force Res. Lab., Boston, OH)
Microphone array-based measurements of full-scale jet noise sources necessitate the adaptation and incorporation of advanced array
processing methodologies. Arrays for full-scale jet measurements can require large apertures, high spatial sampling densities, and strategies to account for partially coherent fields. Many approaches have been taken to sufficiently capture radiated noise in past jet noise
investigations, including patch-and-scan measurements with a small dense array, one-dimensional measurements along the extent of the
jet in conjunction with an axisymmetric assumption, and full two-dimensional source coverage with a large microphone set. Various
measurement types are discussed in context of physical jet noise field properties, such as spatial coherence, source stationarity, and frequency content.
1:40
1pPA2. Toward the development of a noise and performance tool for supersonic jet nozzles: Experimental and computational
results. Christopher J. Ruscher (Spectral Energies, LLC, 2654 Solitaire Ln. Apt. #3, Beavercreek, OH 45431, cjrusche@gmail.com),
Barry V. Kiel (RQTE, Air Force Res. Lab., Dayton, OH), Sivaram Gogineni (Spectral Energies, LLC, Dayton, OH), Andrew S. Magstadt, Matthew G. Berry, and Mark N. Glauser (Dept. of Mech. and Aerosp. Eng., Syracuse Univ., Syracuse, NY)
Modal decomposition of experimental and computational data for a range of two- and three-stream supersonic jet nozzles will be
conducted to study the links between the near-field flow features and the far-field acoustics. This is accomplished by decomposing nearfield velocity and pressure data using proper orthogonal decomposition (POD). The resultant POD modes are then used with the far-field
sound to determine a relationship between the near-field modes and portions of the far-field spectra. A model will then be constructed
for each of the fundamental modes, which can then be used to predict the entire far-field spectrum for any supersonic jet. The resultant
jet noise model will then be combined with an existing engine performance code to allow parametric studies to optimize thrust, fuel consumption, and noise reduction.
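The proper orthogonal decomposition (POD) step described above can be sketched with the standard snapshot method via an SVD; this is a generic illustration with synthetic stand-in data, not the authors' pipeline.

```python
import numpy as np

# Snapshot-POD sketch: columns are flow-field snapshots; the left singular
# vectors are the POD modes and the squared singular values give the modal
# energy. The data here are synthetic stand-ins for measured fields.
rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 64

x = np.linspace(0, 2 * np.pi, n_points)
a1 = rng.standard_normal(n_snapshots)          # mode-1 amplitudes
a2 = 0.3 * rng.standard_normal(n_snapshots)    # weaker mode-2 amplitudes
snapshots = (np.outer(np.sin(x), a1) + np.outer(np.cos(2 * x), a2)
             + 0.01 * rng.standard_normal((n_points, n_snapshots)))

# Subtract the mean field, then decompose the fluctuations.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)
energy = sing_vals**2 / np.sum(sing_vals**2)
# The two planted coherent structures dominate the modal energy.
```

The mode amplitudes (rows of the right singular matrix) are what would then be correlated against the far-field sound to build the low-order model.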
2:00
1pPA3. Finely resolved spatial variation in F-22 spectra. Tracianne B. Neilsen, Kent L. Gee, Hsin-Ping C. Pope, Blaine Harker
(Brigham Young Univ., N311 ESC, Provo, UT 84602, tbn@byu.edu), and Michael M. James (Blue Ridge Res. and Consulting, LLC,
Asheville, NC)
Examination of the spatial variation in the spectrum from ground-based microphones near an F-22 Raptor has revealed spectral features at high engine power that are not seen at intermediate power or in laboratory-scale jet noise. At military and afterburner powers, a
double-peaked spectrum is detected around the direction of maximum radiation. In this region, there is not a continuous variation in peak frequency with downstream distance, as seen in lab-scale studies, but a transition between the relative levels of two discrete one-third octave bands. Previous attempts to match similarity spectra for turbulent mixing noise to a few of these measurements split the difference between the two peak frequencies [Neilsen et al., J. Acoust. Soc. Am. 133, 2116–2125 (2013)]. The denser spatial resolution
afforded by examining the spectral variation on all 50 ground-based microphones, located 11.6 m to the sideline and spanning 30 m, provides the opportunity to further investigate this phenomenon and propose a more complete formulation of expected spectral shapes. Special care must be given to account for the relative amount of waveform steepening, which varies with level, distance, and angular
position. [Work supported by ONR.]
2:20
1pPA4. Experimental and computational studies of noise reduction for tactical fighter aircraft. Philip Morris, Dennis K. McLaughlin, Russell Powers, Nidhi Sikarwar, and Matthew Kapusta (Aerosp. Eng., Penn State Univ., 233C Hammond Bldg., University Park, PA
16802, pjm@psu.edu)
The noise levels generated by tactical fighter aircraft can result in Noise Induced Hearing Loss for Navy personnel, particularly those
involved in carrier deck operations. Reductions in noise source levels are clearly necessary, but these must be achieved without a loss in
aircraft performance. This paper describes an innovative noise reduction technique that has been shown in laboratory scale measurements to provide significant reductions in both mixing as well as broadband shock-associated noise. The device uses the injection of relatively low pressure and low mass flow rate air into the diverging section of the military-style nozzle. This injection generates “fluidic
inserts” that change the effective nozzle area ratio and generate streamwise vorticity that breaks up the large scale turbulent structures in
the jet exhaust that are responsible for the dominant mixing noise. The paper describes noise measurements with and without forward
flight that demonstrate the noise reduction effectiveness of the inserts. The experiments are supported by computations that help to
understand the flow field generated by the inserts as well as help to optimize the distribution and strength of the flow injection.
2:40
1pPA5. Detection and analysis of shock-like waves emitted by heated supersonic jets using shadowgraph flow visualization. Nathan
E. Murray (National Ctr. for Physical Acoust., The Univ. of MS, 1 Coliseum Dr., University, MS 38677, nmurray@olemiss.edu)
Shock-like waves in the acoustic field adjacent to the shear layer formed by a supersonic, heated jet are observed using the method
of retro-reflective shadowgraphy. The two-inch-diameter jet issued from a converging–diverging nozzle at a pressure ratio of 3.92 with a
temperature ratio of 3.3. Image sets were obtained near the jet exit and in the post-potential core region. In both locations, shock-like
waves can be observed immediately adjacent to the jet shear layer. Each image is subdivided into a set of overlapping tiles. A Radon
transform is applied to the auto-correlation of each tile providing a quantitative measure of the dominant propagation direction of waves
in each sub-region. The statistical distribution of propagation angles over the image space provides a measure of the distribution of
source convection speeds and source locations in the jet shear layer. Results show general agreement with a convection speed on the
order of 70 percent of the jet velocity.
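The tile-analysis idea above — autocorrelate each tile, apply a Radon transform, and take the angle whose projection is most structured as the dominant wavefront orientation — can be sketched as follows. The implementation details (FFT-based circular autocorrelation, rotate-and-sum Radon, variance criterion) are assumptions for illustration, not the authors' exact processing.

```python
import numpy as np
from scipy.ndimage import rotate

def autocorrelation(tile):
    """Circular autocorrelation of a tile via the FFT, centered."""
    f = np.fft.fft2(tile - tile.mean())
    return np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real)

def dominant_angle(tile, angles=np.arange(0, 180, 2)):
    """Angle (deg) whose rotate-and-sum projection of the tile's
    autocorrelation has maximum variance: the stripe orientation."""
    ac = autocorrelation(tile)
    variances = [rotate(ac, a, reshape=False, order=1).sum(axis=0).var()
                 for a in angles]
    return angles[int(np.argmax(variances))]

# Synthetic tile: a plane-wave pattern with wavefronts set by a 30-degree
# wavevector, standing in for a shadowgraph sub-region.
y, x = np.mgrid[0:64, 0:64]
tile = np.sin(2 * np.pi * (x * np.cos(np.deg2rad(30))
                           + y * np.sin(np.deg2rad(30))) / 8.0)
theta = dominant_angle(tile)
```

Collecting `theta` over all tiles and images gives the statistical distribution of propagation angles from which source convection speeds are inferred.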
3:00–3:20 Break
3:20
1pPA6. Where are the nonlinearities in jet noise? Charles E. Tinney (Ctr. for AeroMech. Res., The Univ. of Texas at Austin, ASE/
EM, 210 East 24th St., Austin, TX 78712, cetinney@utexas.edu) and Woutijn J. Baars (Mech. Eng., The Univ. of Melbourne, Parkville,
VIC, Australia)
For some time now it has been theorized that spatially evolving instability waves in the irrotational near-field of jet flows couple
both linearly and nonlinearly to generate far-field sound [Sandham and Salgado, Philos. Trans. R. Soc. A 366 (2008); Suponitsky, J. Fluid Mech. 658 (2010)]. An exhaustive effort at The University of Texas at Austin was initiated in 2008 to better understand this phenomenon, which included the development of a unique analysis technique for quantifying their coherence [Baars et al., AIAA Paper
2010–1292 (2010); Baars and Tinney, Phys. Fluids 26, 055112 (2014)]. Simulated data have shown this technique to be effective, although insurmountable failures arise when it is exercised on real laboratory measurements. The question that we seek to address is: how might jet
flows manifest nonlinearities? Both subsonic and supersonic jet flows are considered with simulated and measured data sets encompassing near-field and far-field pressure signals. The focus then turns to considering nonlinearities in the form of cumulative distortions, and
the conditions required for them to be realized in a laboratory scale facility [Baars, et al., J. Fluid Mech. 749 (2014)].
3:40
1pPA7. Characterization of supersonic jet noise and its control. Ephraim Gutmark, Dan Cuppoletti, Pablo Mora, Nicholas Heeb, and
Bhupatindra Malla (Aerosp. Eng. and Eng. Mech., Univ. of Cincinnati, 799 Rhodes Hall, Cincinnati, OH 45221, gutmarej@ucmail.uc.edu)
As supersonic aircraft and their turbojet engines become more powerful, they emit more noise. The principal physical difference between the jets emanating from supersonic engines and those from subsonic ones is the presence of shocks in the supersonic case. This paper
summarizes a study of noise reduction technologies applied to supersonic jets. The measurements are performed with a simulated
exhaust of a supersonic nozzle representative of supersonic aircraft. The nozzle has a design Mach number of 1.56 and is examined at
design and off-design conditions. Several components of noise are present including mixing noise, screech, broadband shock associated
noise, and crackle. Chevrons and fluidic injection by microjets and a combination of them are shown to reduce the noise generated by
the main jet. These techniques provide significant reduction in jet noise. PIV provides detailed information of the flow and brings out the
physics of the noise production and reduction process.
Contributed Papers
4:00
1pPA8. Influence of windscreen on impulsive noise measurement. Per Rasmussen (G.R.A.S. Sound & Vib. A/S, Skovlytoften 33, Holte 2840, Denmark, pr@gras.dk)
The nearfield noise from jet engines may contain impulsive sound signals with high crest factors. Most jet engine noise measurements are performed outside in potentially windy conditions, and it may, therefore, be
necessary to use windscreens on microphones to reduce the influence of
wind induced noise on the microphone. The windscreen will, however,
influence the frequency response of the microphone especially at high frequencies. This will change both the magnitude and the phase response and,
therefore, change the measured impulse. The effect of different sizes of
windscreen is investigated and the effect on impulsive type signals is evaluated both in the time domain and the frequency domain.
4:15
1pPA9. Comparison of nonlinear, geometric, and absorptive effects in
high-amplitude jet noise propagation. Brent O. Reichman, Kent L. Gee,
Tracianne B. Neilsen, Joseph J. Thaden (Brigham Young Univ., 453 E 1980
N, #B, Provo, UT 84604, brent.reichman@byu.edu), and Michael M. James
(Blue Ridge Research and Consulting, LLC, Asheville, NC)
In recent years, understanding of nonlinearity in noise from high-performance jet aircraft has increased, with successful modeling of nonlinear
propagation in the far field. However, the importance and characteristics of
nonlinearity in the near field are still debated. An ensemble-averaged, frequency-domain version of the Burgers equation can be inspected to directly
compare the effects of nonlinearity on the sound pressure level with the
effects of atmospheric absorption and geometric spreading on a decibel
scale. This nonlinear effect is calculated using the quadspectrum of the pressure and the squared pressure waveforms. Results from applying this analysis to F-22A data at various positions in the near field reveal that in the near
field the nonlinear effects are of the same order of magnitude as geometric
spreading and that both of these effects are significantly greater than absorption in the area of maximum radiation. [Work supported by ONR and an
ORISE fellowship through AFRL.]
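The term-by-term comparison sketched in the abstract can be written schematically. The evolution equation below is a hedged reconstruction in the spirit of the Morfey–Howell quadspectrum analysis, not necessarily the authors' exact formulation; signs and normalizations depend on the quadspectrum convention adopted:

```latex
% Evolution of the pressure autospectral density S_{pp}(r,\omega)
% along a spherically spreading ray:
\frac{\partial S_{pp}}{\partial r}
  = \underbrace{-\frac{2}{r}\,S_{pp}}_{\text{geometric spreading}}
    \underbrace{-\,2\alpha(\omega)\,S_{pp}}_{\text{absorption}}
    \underbrace{+\,\frac{\omega\beta}{\rho_{0}c_{0}^{3}}\,Q_{pp^{2}}}_{\text{nonlinearity}}
```

where \(\alpha\) is the absorption coefficient, \(\beta\) the coefficient of nonlinearity, and \(Q_{pp^{2}}\) the quadspectrum of the pressure and squared-pressure waveforms; dividing each right-hand term by \(S_{pp}\) and converting to decibels per unit distance gives the level-based comparison the abstract describes.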
4:30
1pPA10. Correlation lengths in deconvolved cross-beamforming measurements of military jet noise. Blaine M. Harker, Kent L. Gee, Tracianne
B. Neilsen (Dept. of Phys. and Astronomy, Brigham Young Univ., N283
ESC, Provo, UT 84602, blaineharker@byu.net), Alan T. Wall (Battlespace
Acoust. Branch, Air Force Res. Lab., Wright-Patterson Air Force Base,
OH), and Michael M. James (Blue Ridge Research and Consulting, LLC,
Asheville, NC)
Beamforming algorithms have been applied in multiple contexts in aeroacoustic applications, but difficulty arises when applying them to the partially correlated, distributed sources found in jet noise. To measure and more accurately distinguish correlated sources, cross-beamforming methods are employed to incorporate correlation information. Deconvolution methods such as DAMAS-C, an extension of the deconvolution approach for the mapping of acoustic sources (DAMAS), remove array effects from cross-beamforming applications and further resolve beamforming results. While DAMAS-C results provide insight into the correlation between sources, the extent to which these results relate to source correlation remains to be analyzed.
Numerical simulations of sources with varying degrees of correlation are
provided to benchmark the DAMAS-C results. Finally, correlation lengths
are established for DAMAS-C results from measurements for full-scale
military jet noise sources. [Work supported by ONR.]
MONDAY AFTERNOON, 27 OCTOBER 2014
MARRIOTT 1/2, 1:00 P.M. TO 5:00 P.M.
Session 1pSCa
Speech Communication and Biomedical Acoustics: Findings and Methods in Ultrasound Speech
Articulation Tracking
Keith Johnson, Cochair
Linguistics, University of California, Berkeley, 1203 Dwinelle Hall, Berkeley, CA 94720
Susan Lin, Cochair
UC Berkeley, 1203 Dwinelle Hall, UC Berkeley, Berkeley, CA 94720
Chair’s Introduction—1:00
Invited Papers
1:05
1pSCa1. Examining suprasegmental and morphological effects on constriction degree with ultrasound imaging. Lisa Davidson
(Linguist, New York Univ., 10 Washington Pl., New York, NY 10003, lisa.davidson@nyu.edu)
Two case studies of ultrasound imaging use tongue shape differences to investigate whether suprasegmental influences affect the articulatory implementation of otherwise equivalent phonemic sequences. First, we examine whether word-medial and word-final stop codas
have the same degree of constriction (e.g., "blacktop" vs. "black top"). Previous research on syllable position effects on articulatory implementation has conflated syllable position with word position, and this study investigates whether each prosodic factor has an independent
contribution. Results indicate that where consistent differences are found, they are due not to the prosodic position but to speaker-specific
implementation. Second, we examine whether morphological status influences the darkness of American English /l/ in comparing words
like "tallest" and "flawless." While the intervocalic /l/s in "tall-est" and "flaw-less" are putatively assigned the same syllabic status, the /l/ in
"tallest" corresponds to the coda /l/ of the stem "tall" whereas that of "flawless" is the onset of the affix "-less." Results indicate that /l/ is
darker—the tongue is lower and more retracted—when corresponding to the coda of the stem word. Data in both studies were analyzed
with smoothing spline ANOVA, an effective statistical technique for examining differences between whole tongue curves.
1:25
1pSCa2. Imaging dynamic lingual movements that we could previously only imagine. Amanda L. Miller (Linguist, The Ohio State
Univ., 222 Oxley Hall, 1712 Neil Ave., Columbus, OH 43210-1298, amiller@ling.osu.edu)
Pioneering lingual ultrasound studies of speech demonstrated that almost the entire tongue could be imaged (McKay 1957). Early
studies contributed to our knowledge of tongue shape and tongue bracing in vowels (Morrish et al. 1984; Stone et al. 1987). However,
until recently, lingual ultrasound studies have been limited to standard video frame rates of 30 fps, which are sufficient only for imaging
stable speech sounds such as vowels and liquids. High frame rate lingual ultrasound (>100 fps) allows us to view the production of
dynamic speech sounds, such as stop consonants, and even click consonants. The high sampling rate, which yields an image of the
tongue every 8–9 ms, improves image quality by decreasing temporal smear, allowing even tongue tip movements to be visualized to a
greater extent than was previously possible. Results from several high frame rate ultrasound studies (114 fps) of consonants that were
collected and analyzed using the CHAUSA method (Miller and Finch 2011) are presented. The studies elucidate (a) tongue dorsum and
root gestures in velar and uvular pulmonic consonants; (b) tongue coronal, dorsal, and root gestures in four contrastive click consonants;
and (c) lingual gestures in pulmonic fricatives.
1:45
1pSCa3. Ultrasound evidence for place of articulation of the mora nasal /N/ in Japanese. Ai Mizoguchi (The Graduate Ctr., City
Univ. of New York, 365 Fifth Ave., Rm. 7304, New York, NY 10016, amizoguchi@gc.cuny.edu) and Douglas H. Whalen (Haskins
Labs., New Haven, CT)
The Japanese mora nasal /N/, which occurs in syllable-final position, takes its place of articulation from the following segment if
there is one. However, the mora nasal in utterance-final position is often transcribed as velar, uvular, or even placeless. The present study
examines the tongue shapes in Japanese using ultrasound imaging to investigate whether Japanese mora nasal /N/ is placeless and to
assess whether assimilation to following segments is gradient or categorical. Preliminary results from ultrasound imaging from one
native speaker of Tokyo dialect showed three shapes for final /N/, even though the researchers could not distinguish them perceptually.
Results from assimilation contexts showed that the velar gesture for /N/ was not deleted. All gestures remained and assimilation was not
categorical, even though perceptually, it was. The velar gesture for /N/ might be expected to be deleted before an alveolar /n/ because
they are both lingual, but a blending of the two tongue gestures occurred instead. Variability in place of articulation in final position
occurred even within one speaker. Categorical assimilation was not observed in any phonological environments studied. The mora nasal
may vary across speakers, so further research is needed to determine whether it behaves similarly for more speakers.
2:05
1pSCa4. A multi-modal imaging system for simultaneous measurement of speech articulator kinematics for bedside applications
in clinical settings. David F. Conant (Neurological Surgery, UCSF, 675 Nelson Rising Ln., Rm. 635, San Francisco, CA 94143, dfconant@gmail.com), Kristofer E. Bouchard (LBNL, San Francisco, CA), Anumanchipalli K. Gopala, Ben Dichter, and Edward F. Chang
(Neurological Surgery, UCSF, San Francisco, CA)
A critical step toward a neurological understanding of speech generation is to relate neural activity to the movement of articulators.
Here, we describe a noninvasive system for simultaneously tracking the movement of the lips, jaw, tongue, and larynx for human neuroscience research carried out at the bedside. We combined three methods previously used separately: videography to track the lips and
jaw, electroglottography to monitor the larynx, and ultrasonography to track the tongue. To characterize this system, we recorded articulator positions and acoustics from six speakers during production of nine American English vowels. We describe processing methods for
the extraction of kinematic parameters from the raw signals and methods to account for artifacts across recording conditions. To understand the relationship between kinematics and acoustics, we used regularized linear regression between the vocal tract kinematics and
speech acoustics to identify which, and how many, kinematic features are required to explain both across-vowel and within-vowel acoustics. Furthermore, we used unsupervised matrix factorization to derive "prototypical" articulator shapes and used them as a basis for articulator analysis. These results demonstrate a multi-modal system to non-invasively monitor speech articulators for clinical human
neuroscience applications and introduce novel analytic methods for understanding articulator kinematics.
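The regularized linear regression from kinematics to acoustics described above can be sketched as follows; the closed-form ridge estimator is a standard choice, and the array names, dimensions, and random stand-in data are illustrative, not the authors' actual pipeline:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Illustrative stand-in data: 500 frames of 12 kinematic features
# (lip, jaw, tongue, larynx parameters) and 2 acoustic targets (e.g., F1, F2).
rng = np.random.default_rng(1)
kinematics = rng.standard_normal((500, 12))
true_w = rng.standard_normal((12, 2))
formants = kinematics @ true_w + 0.01 * rng.standard_normal((500, 2))

w_hat = ridge_fit(kinematics, formants, lam=0.1)   # (12, 2) weight matrix
predicted = kinematics @ w_hat                      # acoustics explained by kinematics
```

Sweeping `lam` and inspecting which coefficients survive is one simple way to ask which, and how many, kinematic features are needed to explain the acoustics.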
2:25
1pSCa5. A study of tongue trajectories for English /æ/ using articulatory signals automatically extracted from lingual ultrasound
video. Jeff Mielke, Christopher Carignan, and Robin Dodsworth (English, North Carolina State Univ., 221 Tompkins Hall, Campus Box
8105, Raleigh, NC 27695-8105, ccarign@ncsu.edu)
While ultrasound imaging has made articulatory phonetics more accessible, quantitative analysis of ultrasound data often reduces
speech sounds to tongue contours traced from single video frames, disregarding the temporal aspect of speech. We propose a tracing-free method for directly converting entire ultrasound videos to phonetically interpretable articulatory signals using Principal Component
Analysis of image data (Hueber et al. 2007). Once a batch of ultrasound images (e.g., 36,000 frames from 10 min at 60 fps) has been
reduced to 20 principal components, numerous techniques are available for deriving temporally changing articulatory signals that are
both phonetically meaningful and comparable across speakers. Here we apply a regression model to find the linear combination of PCs
that is the lingual articulatory analog of the front diagonal of the acoustic vowel space (Z2-Z1). We demonstrate this technique with a
study of /æ/ tensing in 20 speakers of North American English varieties with different tensing environments (Labov 2005). Our results
show that /m n/ condition a tongue raising gesture that is aligned to the vowel nucleus, while /g/ conditions anticipatory raising toward
the velar target. /ŋ/ patterns consistently with the other velars rather than with the other nasals.
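The tracing-free pipeline described above (PCA over raw frame pixels, then a regression onto an acoustic dimension) can be sketched roughly as below; the data are random stand-ins and the dimensions are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a batch of flattened grayscale ultrasound frames.
frames = rng.random((600, 32 * 32))          # (n_frames, n_pixels)
z2_minus_z1 = rng.random(600)                # acoustic front diagonal, per frame

# 1. Reduce each frame to its first 20 principal components.
centered = frames - frames.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ Vt[:20].T                   # (n_frames, 20) articulatory signals

# 2. Regression: the linear combination of PCs that best tracks the
#    acoustic dimension (Z2-Z1) is itself an articulatory signal.
w, *_ = np.linalg.lstsq(pcs, z2_minus_z1, rcond=None)
articulatory_signal = pcs @ w                # one value per video frame
```

Because the PC weights are learned per speaker but the target dimension is acoustic, signals derived this way remain comparable across speakers, as the abstract notes.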
2:45–3:05 Break
3:05
1pSCa6. Combined analysis of real-time three-dimensional tongue ultrasound and digitized three-dimensional palate impressions: Methods and findings. Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., 4789 N White River Dr., Bloomington, IN
47404, slulich@indiana.edu)
Vocal tract and articulatory imaging has a long and rich history using a wide variety of techniques and equipment. This presentation
focuses on combining real-time 3D ultrasound with high-resolution 3D digital scans of palate impressions. Methods for acquiring and
analyzing these data will be presented, including efforts to accomplish 3D registration of the tongue and hard palate. Findings from an
experiment investigating inter-speaker variability in palate shape and vowel articulation will also be presented.
3:25
1pSCa7. AutoTrace: An automatic system for tracing tongue contours. Gustave V. Hahn-Powell (Linguist, Univ. of Arizona, 2850
N Alvernon Way, Apt. 17, Tucson, AZ 85712, hahnpowell@email.arizona.edu) and Diana Archangeli (Linguist, Univ. of Hong Kong,
Tucson, Arizona)
Ultrasound imaging of the tongue is used for analyzing the articulatory features of speech sounds. In order to be able to study the
movements of the tongue, the tongue surface contour has to be traced for each recorded image. In order to capture the details of the
tongue’s movement during speech, the ultrasound video is generally recorded at the highest frame rate available. Detail comes at a price.
The number of frames produced from even a single non-trivial experiment is often far too large to trace manually. The Arizona Phonological Imaging Lab (APIL) at the University of Arizona has developed a suite of tools to simplify the labeling and analysis of tongue
contours. AutoTrace is a state-of-the-art automatic method for tracing tongue contours that is robust across speakers and languages and
operates independently of frame order. The workshop will outline the software installation procedure, introduce the included tools for
selecting and preparing training data, provide instructions for automated tracing, and overview a method for measuring the network’s accuracy using the Mean Sum of Distances (MSD) metric described by Li et al. (2005).
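The MSD accuracy metric mentioned at the end can be sketched as a symmetric mean of nearest-point distances between a hand-traced and an automatically traced contour; this is a simplified reading of Li et al. (2005), whose exact normalization may differ:

```python
import numpy as np

def mean_sum_of_distances(contour_a, contour_b):
    """Mean nearest-point distance between two (n, 2) tongue contours,
    averaged over both directions so the metric is symmetric."""
    diffs = contour_a[:, None, :] - contour_b[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)       # all pairwise point distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# A contour and a copy shifted 1 px to the right disagree by exactly 1 px.
a = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
b = a + np.array([1.0, 0.0])
```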
Contributed Papers
3:45
1pSCa8. UATracker: A tool for ultrasound data management. Mohsen
Mahdavi Mazdeh and Diana B. Archangeli (Linguist, Univ. of Arizona, 3150
E Bellevue St., #16, Tucson, AZ 85716, mahdavi@email.arizona.edu)
This presentation introduces TraceTracker, a tool for efficiently managing language ultrasound data. Ultrasound imaging of the tongue is used for
analyzing the articulatory features of speech sounds. Most analyses involve
finding data points from individual images. The number of image frames
and the volume of secondary data associated with them tend to grow quickly
in speech analysis studies of this type, making it very hard to handle them
manually. TraceTracker is a data management tool for organizing, modifying, and performing advanced searches over ultrasound tongue images and
the data associated with those images. The setup operation of the program
automatically iterates through file systems and generates a comprehensive
database containing the image files and information such as the speaker, the
video each frame is extracted from, an index, how they have been traced,
etc. The program also automatically reads Praat-format TextGrid files and associates specific image frames with the corresponding words and speech segments based on the annotations in the grids. Once the database is populated, TraceTracker can be used to tag images, generate copies, and perform advanced search operations over the images based on the aforementioned criteria, including the specific sequence of segments in which a frame lies.
4:00
1pSCa9. Optical flow analysis for measuring tongue motion. Adriano V.
Barbosa (Electron. Eng., Federal Univ. of Minas Gerais, Belo Horizonte,
Brazil) and Eric Vatikiotis-Bateson (Linguist, Univ. Br. Columbia, 2613
West Mall, Vancouver, BC V6N2W4, Canada, evb@mail.ubc.ca)
Most attempts to measure motion of the tongue have focused on locating
the upper surface of the tongue or specific points on that surface. Recently,
we have used our software implementation of optical flow analysis, FlowAnalyzer, to extract measures of tongue motion. The software allows identification of multiple regions of interest, consisting of rectangles whose
dimensions and location are user-definable. For example, a large region
encompassing the visible tongue body provides general information about
the amount and direction (2D) of motion through time, while narrow vertical rectangles can measure the time-varying changes of tongue height at various locations. We will demonstrate the utility of the software, which is
freely available upon request to the authors.
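A minimal single-translation flow estimate over one rectangular region of interest, in the spirit of the analysis described, can be sketched as below. This is a Lucas–Kanade-style least-squares solution of the brightness-constancy equation, not the authors' FlowAnalyzer implementation:

```python
import numpy as np

def roi_flow(frame1, frame2, rows, cols):
    """Estimate one 2D motion vector (vx, vy) for the region
    frame1[rows, cols] by solving Ix*vx + Iy*vy = -It in the
    least-squares sense over all ROI pixels."""
    Iy, Ix = np.gradient(frame1)             # spatial gradients (rows, cols)
    It = frame2 - frame1                     # temporal derivative
    r, c = np.ix_(rows, cols)
    A = np.stack([Ix[r, c].ravel(), Iy[r, c].ravel()], axis=1)
    b = -It[r, c].ravel()
    (vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return vx, vy

# Synthetic check: a Gaussian blob shifted 0.5 px to the right between frames.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 32) ** 2) / (2 * 6.0 ** 2))
vx, vy = roi_flow(blob(32.0), blob(32.5), np.arange(16, 48), np.arange(16, 48))
```

One such vector per ROI per frame pair yields the time-varying motion signals (amount and 2D direction) that the abstract describes.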
4:15
1pSCa10. An acoustic profile of Spanish trill /r/. Ahmed Rivera-Campos
and Suzanne E. Boyce (Commun. Sci. and Disord., Univ. of Cincinnati,
3202, Eden Ave., Cincinnati, OH 45267, riveraam@mail.uc.edu)
In contrast to the English rhotic, there are limited data on the acoustic profile of the Spanish trill /r/. It is well known that one key aspect of the English rhotic /ɹ/ is the lowering of the F3 formant, but little information is available on whether the Spanish trill shares the same characteristic. Although it has been reported that lowering of F3 does not characterize /r/ production and that F3 values fall within certain ranges delimited by vowel contexts,
analysis of F3 values has not been done using a large sample of native
speakers of Spanish. The present study analyzed the F3 values of /r/ produced by 20 native speakers of Spanish from different regions of Latin America and the Caribbean. Analysis of the F3 values of /r/ provides information about the articulatory requirements for adequate /r/ production. This information will benefit professionals who serve individuals with articulatory difficulties or who are learning Spanish as a second language.
4:30
1pSCa11. Investigation of the role of the tongue root in Kazakh vowel
production using ultrasound. Jonathan N. Washington (Linguist, Indiana
Univ.,
Bloomington, IN 47403-2608, jonwashi@indiana.edu)
It has been argued that Kazakh primarily distinguishes its anterior
("front") vowels from its posterior ("back") vowels through retraction of the
tongue root. This analysis is at odds with the traditional assumption that the
anteriority of Kazakh vowels is contrasted by tongue body position. The
present study uses ultrasound imaging to investigate the extent to which the
position of the tongue root and the tongue body are involved in the anteriority contrast in Kazakh. Native speakers of Kazakh were recorded reading
words (in carrier sentences) containing target vowels, which were controlled
for adjacent consonants and metrical position. An audio recording was also
made of these sessions. Frames containing productions of the target vowels
were extracted from the ultrasound video and the imaged surface of the
tongue was manually traced. Tongue root and tongue body positions were analyzed for each vowel and will be presented together with formant
measurements from the audio recordings.
4:45
1pSCa12. Vowel production in sighted children and congenitally blind children. Lucie Ménard and Christine Turgeon (Linguist, Université du Québec à Montréal, CP 8888, succ. Centre-Ville, Montréal, QC H3C 3P8, Canada, menard.lucie@uqam.ca)
It is well known that vision plays an important role in speech perception.
At the production level, we have recently shown that speakers with congenital visual deprivation produce smaller displacements of the lips (a visible articulator) compared to their sighted peers [L. Menard, C. Toupin, S. Baum,
S. Drouin, J. Aubin, and M. Tiede, J. Acoust. Soc. Am. 134, 2975-2987
(2013)]. To further investigate the impact of visual experience on the articulatory gestures used to produce intelligible speech, a speech production
study was conducted with blind and sighted school-aged children. Eight
congenitally blind children (mean age: 7 years old, from 5 years to 11 years) and eight sighted children (mean age: 7 years old, from 5 years to 11 years)
were recorded using a synchronous ultrasound and Optotrak imaging system
to record tongue and lip positions. Repetitions of the French vowels /i/, /a/,
and /u/ were elicited in a /bVb/ sequence in two prosodic conditions:
neutral and under contrastive focus. Tongue contours, lip positions, and
formant values were extracted. Acoustic data show that focused syllables
are less differentiated from their unfocused counterparts in blind children
than in sighted children. Trade-offs between lip and tongue positions are
examined.
MONDAY AFTERNOON, 27 OCTOBER 2014
MARRIOTT 5, 1:00 P.M. TO 5:00 P.M.
Session 1pSCb
Speech Communication: Issues in Cross Language and Dialect Perception (Poster Session)
Tessa Bent, Chair
Dept. of Speech and Hearing Sciences, Indiana Univ., Bloomington, IN 47405
All posters will be on display from 1:00 p.m. to 5:00 p.m. To allow contributors an opportunity to see other posters, contributors of odd-numbered papers will be at their posters from 1:00 p.m. to 3:00 p.m. and contributors of even-numbered papers will be at their posters
from 3:00 p.m. to 5:00 p.m.
Contributed Papers
1pSCb1. Cross-language identification of non-native lexical tone. Jennifer Alexander and Yue Wang (Dept. of Linguist, Simon Fraser Univ., 9201
Robert C Brown Hall Bldg., 8888 University Dr., Burnaby, BC V5A 1S6,
Canada, jennifer_alexander@sfu.ca)
We extend to lexical-tone systems a model of second-language perception, the Perceptual Assimilation Model (PAM) (Best & Tyler, 2007), to
examine whether native-language lexical-tone experience influences identification of novel tone. Native listeners of Cantonese, Thai, Mandarin, and
Yoruba hear six CV syllables, each produced with the three phonemic
Yoruba tones (High-level/H, Mid-level/M, Low-level/L), presented randomly three times. In a 3-AFC task, participants indicate a syllable’s tone
by selecting from a set of arrows the one that illustrates its pitch trajectory.
Accuracy scores (proportion correct) were submitted to a two-way
rANOVA with L1-Group (x4) as the between-subjects factor and Tone (x3)
as the within-subjects factor. There was no main effect of Tone or Group.
The Tone-by-Group interaction was significant (p = 0.031) but driven by
one group: Thai listeners identified H and M more accurately than L (both p
< 0.05), though L accuracy was above chance (59%; chance = 33.33%).
Tone-error patterns indicate that Thai listeners primarily confused L with M
(two-way L1-Group x Response-pattern rANOVA p < 0.05). Overall, despite their different tonal-L1 backgrounds, listeners performed comparably.
As predicted by the PAM, listeners attended to gradient phonetic detail and
acoustic cues relevant to L1 phoneme distinctions (F0 height/direction) in
order to classify non-native contrasts. [NSF grant #0965227.]
1pSCb2. Spectral and duration cues of English vowel identification for
Chinese-native listeners. Sha Tao, Lin Mi, Wenjing Wang, Qi Dong (Cognit. Neurosci. and Learning, Beijing Normal Univ., State Key Lab for Cognit. Neurosci. and Learning, Beijing Normal University, Beijing 100875,
China, taosha@bnu.edu.cn), and Chang Liu (Commun. Sci. and Disord.,
The Univ. of Texas at Austin, Austin, TX)
This study investigated how Chinese-native listeners use spectral and duration cues for English vowel identification. The first experiment examined whether Chinese-native listeners’ English vowel perception was
related to their sensitivity to the change of vowel formant frequency that is a
critical spectral cue to vowel identification. Identification of 12 isolated
American English vowels was measured for 52 Chinese college students in
Beijing. Thresholds of vowel formant discrimination were also examined
for these students. Results showed that there was a significantly moderate
correlation between Chinese college students’ English vowel identification
and their thresholds of vowel formant discrimination. That is, the lower a listener’s formant-discrimination threshold, the better the vowel identification. However, the moderate size of this correlation suggested that other factors also account for the individual
variability in English vowel identification for Chinese-native listeners. In
Experiment 2, vowel identification was measured with and without duration
cues, showing that vowel identification was reduced by 5.1% when the duration cue was removed. Further analysis suggested that for listeners who depended less on the duration cue, better formant-discrimination thresholds were associated with higher vowel-identification scores, whereas no such correlation held for listeners who relied heavily on duration cues.
1pSCb3. The influence of lexical status in the perception of English allophones by Korean learners. Kyung-Ho Kim and Jeong-Im Han (English,
Konkuk Univ., 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701, South
Korea, gabrieltotti88@gmail.com)
This study investigated whether the allophonic contrast in the second language (L2) may require contact with the lexicon to influence perception.
Given that English medial voiceless stops occur with aspiration in stressed,
but without aspiration in unstressed syllables, Korean learners of English
were tested for aspirated and unaspirated allophones of /p/ for perceptual preference in appropriate and inappropriate stress contexts in the second syllable
of disyllabic words. The stimuli included four types of non-words and eight
pairs of real words (four pairs each for high-frequency and low-frequency
words), and participants were asked to judge the perceptual preference of
each token on a 7-point scale (1 = a bad example, 7 = a good example). The results
demonstrated that in tests with non-words, there was no significant difference
in the ratings as a function of context appropriateness (e.g., [ɪpə] vs. [ɪpʰə]), with higher ratings overall for initially-stressed words. By contrast, in real words, participants preferred the correct allophones (e.g., [kepə] vs. [kepʰə]
“caper”). The frequency of real words further showed a significant effect.
This finding suggests that allophony in L2 is driven by lexicality (Whalen et
al., 1997). Exemplar theory (Pierrehumbert 2001, 2002) provides a more
effective means of modeling this finding than do traditional approaches.
1pSCb4. The perception of English coda obstruents by Mandarin and
Korean second language learners. Yen-Chen Hao (Modern Foreign Lang.
and Literatures, Univ. of Tennessee, 510 14th St. #508, Knoxville, TN
37916, yenchenhao@gmail.com) and Kenneth de Jong (Linguist, Indiana
Univ., Bloomington, IN)
This study investigates the perception of English obstruents by learners
whose native language is either Mandarin, which does not permit coda obstruents, or Korean, which neutralizes laryngeal and manner contrasts into voiceless stop codas. The stimuli are native productions of eight English obstruents
/p b t d f v θ ð/ combined with the vowel /ɑ/ in different prosodic contexts.
Forty-one Mandarin and 40 Korean speakers identified the consonant from
the auditorily presented stimuli. The results show that the two groups do not
differ in their accuracy in the onset position, indicating that they are comparable in their proficiency. However, the Mandarin speakers are more accurate in
the coda position than the Koreans. When the fricatives and stops are analyzed separately, it shows that the two groups do not differ with fricatives, yet
1pSCb5. Effect of phonetic training on the perception of English consonants by Greek speakers in quiet and noise conditions. Angelos Lengeris
and Katerina Nicolaidis (Theor. and Appl. Linguist, Aristotle Univ. of Thessaloniki, School of English, Aristotle University, Thessaloniki 541 24,
Greece, lengeris@enl.auth.gr)
The present study employed high-variability phonetic training (multiple
words spoken by multiple talkers) to improve the identification of English
consonants by native speakers of Greek. The trainees completed five sessions of identification training with feedback for seven English consonants
(contrasting voiced vs. voiceless stops and alveolar vs. postalveolar fricatives) each consisting of 198 trials with a different English speaker in each
session. Another group of Greek speakers served as controls, i.e., completed
the pre/post test but received no training. Pre/post tests included English
consonant identification in quiet and noise. In the noise condition, participants identified consonants in the presence of a competing English speaker
at a signal-to-noise ratio of −2 dB. The results showed that training significantly improved English consonant perception, in both quiet and noise, for the group that received training but not for the control group. The results
add to the existing evidence that supports the effectiveness of the high-variability approach to second-language segmental training.
1pSCb6. Perceptual warping of phonetic space applies beyond known
phonetic categories: Evidence from the perceptual magnet effect. Bozena
Pajak (Brain & Cognit. Sci., Univ. of Rochester, 1735 N Paulina St. Apt. 509,
Chicago, Illinois 60622, bpajak@bcs.rochester.edu), Page Piccinini, and
Roger Levy (Linguist, Univ. of California, San Diego, San Diego, CA)
What is the mental representation of phonetic space? Perceptual reorganization in infancy yields a reconfigured space “warped” around native-language (L1) categories. Is this reconfiguration entirely specific to the L1 category inventory? Or does it apply to a broader range of category distinctions
that are non-native, yet discriminable due to being defined by phonetic
dimensions informative in the listener’s L1 (Bohn & Best, 2012; Pajak,
2012)? Here we address this question by studying perceptual magnets,
which involve attrition of within-category distinctions and enhancement of
distinctions across category boundaries (Kuhl, 1991). We focus on segmental length, known to yield L1-specific perceptual magnets: e.g., L1-Finnish
listeners have one for [t]/[tt], but L1-Dutch listeners, who lack (exclusively)
length-based contrasts, do not (Herren & Schouten, 2008). We tested 31 L1-Korean listeners in an AX discrimination task for [n]-[nn] and [f]-[ff] continua. Korean listeners have been shown to discriminate both (Pajak, 2012),
despite only having the former set in the inventory. We found perceptual
magnets for both continua, demonstrating that perceptual warping goes
beyond the specific L1 categories: when a phonetic dimension is informative
for contrasting some L1 categories, perceptual warping applies not only to
the tokens from those categories, but also to that dimension more generally.
1pSCb7. Language mode effects on second language categorical perception. Beatriz Lopez Prego and Allard Jongman (Linguist, Univ. of Kansas,
1145 Pennsylvania St., Lawrence, KS 66044, lopezb@ku.edu)
This study investigates the perception of the /b/-/p/ voicing contrast in
English and Spanish by native English listeners, native Spanish listeners,
and highly proficient Spanish-speaking second-language (L2) learners of
English with a late onset of acquisition (mean = 10.8 years) and at least three years’ residence in an English-speaking environment. Participants completed a
forced-choice identification task where they identified target syllables in a
Voice Onset Time (VOT) continuum as "pi" or "bi." They listened to 10
blocks of 19 equidistant steps ranging from +88 ms to −89 ms VOT.
Between blocks, subjects read and wrote responses to language background
questions, thus actively processing the target language. Monolinguals completed the task in their native language (L1). L2 learners completed the task
once in their L1 and once in their L2, thus providing a manipulation of "language mode" (Grosjean, 2001). The results showed that L2 learners’ category boundary in English did not differ from that of monolingual English
listeners, but their category boundary in Spanish differed from that of monolingual Spanish listeners and from their own category boundary in English.
These results suggest that the language mode manipulation was successful
and that L2 learners can develop new phonetic categories, but this may have
an impact on their L1 categories.
1pSCb8. Processing of English-accented Spanish voice onset time by
Spanish speakers with low English experience. Fernando Llanos (School
of Lang. and Cultures, Purdue Univ., Stanley Coulter Hall, 640 Oval Dr.,
West Lafayette, IN 47907, fllanos@purdue.edu) and Alexander L. Francis
(Speech, Lang. & Hearing Sci., Purdue Univ., West Lafayette, IN)
Previous research (Llanos & Francis, 2014) shows that the processing of
foreign accented speech sounds can be affected by listeners’ familiarity with
the language that causes the accent. Highly familiar listeners treat foreign
accented sounds as foreign sounds while less familiar listeners treat them
natively. The present study tests the hypothesis that less familiar listeners
may nevertheless be able to apply foreign categorization patterns to
accented words by recalibrating phonetic expectations according to acoustic
information provided by immediate phonetic context. Two groups of Spanish native speakers with little English experience will identify tokens drawn
from a digitally edited VOT continuum ranging from baso "glass" (-60 ms
VOT) to paso "step" (60 ms VOT). Tokens are embedded in a series of
Spanish words beginning with /b/ and /p/ to provide phonetic context. In the
English-accented condition, context words are digitally modified to exhibit
English-like VOT values for /b/ (10 ms) and /p/ (60 ms). In the Spanish condition, these tokens are edited to exhibit prototypical Spanish /b/ (-90 ms)
and /p/ (10 ms) VOT values. If listeners can accommodate foreign accented
sounds according to expectations provided by immediate phonetic context,
then listeners’ VOT boundary in the English-accented condition should be
significantly higher than in the Spanish condition.
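Both of the preceding abstracts quantify perception in terms of a category boundary: the VOT at which a listener's identification function crosses 50%. A minimal sketch of how such a boundary can be estimated from identification proportions by fitting a logistic psychometric function; the data here are synthetic and purely illustrative, not from either study:

```python
import numpy as np

# Synthetic identification data (illustrative only): proportion of voiceless
# responses at each step of a 19-step VOT continuum from -89 ms to +88 ms,
# generated from a logistic curve with a true boundary of 20 ms and a
# slope parameter of 8 ms.
vot = np.linspace(-89, 88, 19)
p_voiceless = 1.0 / (1.0 + np.exp(-(vot - 20.0) / 8.0))

# Fit a logistic psychometric function by least squares over a grid of
# candidate boundaries (mu) and slopes (s); the category boundary is the
# VOT at which the fitted curve crosses 0.5, i.e., the fitted mu.
def sse(mu, s):
    pred = 1.0 / (1.0 + np.exp(-(vot - mu) / s))
    return np.sum((pred - p_voiceless) ** 2)

mus = np.linspace(-50, 50, 201)   # candidate boundaries, 0.5 ms grid
slopes = np.linspace(1, 30, 59)   # candidate slopes, 0.5 ms grid
_, boundary, slope = min((sse(m, s), m, s) for m in mus for s in slopes)
print(boundary)  # recovers the boundary used to generate the data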
1pSCb9. Amount of exposure and its effect on perception of second language front vowels in English. Andrew Jeske (Linguist, Univ. of Pittsburgh, 3211 Brereton St., Pittsburgh, PA 15219, arjeske@gmail.com)
Experience with a second language (L2) has been shown to positively
affect learners’ perception of L2 sounds. However, few studies have focused
on how the amount of L2 exposure in foreign language classrooms impacts
perception of L2 sounds during the incipient stages of language learning in
school-age children. To determine what effect, if any, the amount of L2 exposure has on perception, 64 students from a Spanish-English bilingual elementary school and 60 students from two non-bilingual elementary schools
participated in an AX Categorical Discrimination task, which contained
tokens of five English front vowels: /i ɪ e ɛ æ/. Results show that students
from the bilingual school earned perception scores significantly higher than
those earned by the students from the non-bilingual school (p = 0.002). However, an ANOVA found there to be no significant simple main effect for grade
or significant correlation between grade level and school type. The bilingual
school students perceived all within-category word pairings (e.g., bat-bat) significantly more accurately than the non-bilingual school students, suggesting
that increased, early exposure to an L2 may heighten one’s ability to disregard
irrelevant, interpersonal phonetic differences and lead to a within-category
perceptual advantage over those with less L2 exposure early on.
1pSCb10. Does second language experience modulate perception of
tones in a third language? Zhen Qin and Allard Jongman (Linguist, Univ.
of Kansas, 1541 Lilac Ln., Blake Hall, Rm. 427, Lawrence, KS 66045, qinzhenquentin2@ku.edu)
Previous studies have shown that English speakers pay attention to pitch
height rather than direction, whereas Mandarin speakers are more sensitive
to pitch direction than height in perception of lexical tones. The present
study addresses whether a second language (L2, i.e., Mandarin) overrides the influence of a native language (L1, i.e., English) in modulating listeners' use of
pitch cues in the perception of tones in a third language (L3, i.e., Cantonese). English-speaking L2 learners (L2ers) of Mandarin constituted the target group. Mandarin speakers and English speakers without knowledge of Mandarin were included as control groups. In Experiment 1, all groups, naïve to Cantonese tones, discriminated Cantonese tones by distinguishing either a contour tone from a level tone (pitch direction pair) or a level tone from another level tone (pitch height pair). The results showed that L2ers patterned differently from both control groups with regard to pitch cues under the influence of L2 experience. The acoustics of the tones also affected all listeners' discrimination. In Experiment 2, L2ers were instructed to identify Mandarin tones to measure their sensitivity to L2 tones. The results showed that L2ers' sensitivity to L2 tones is not necessarily correlated with their perception of L3 tones.

168th Meeting: Acoustical Society of America

the Mandarin speakers are more accurate than the Koreans with stops. These findings suggest that having stop codas in their L1 does not necessarily facilitate Koreans' acquisition of the L2 sounds. Despite their L1 differences, the two groups display very similar perceptual biases in their error patterns. However, not all of them can be explained by L1 transfer or universal markedness, suggesting other language-independent factors in L2 perception.
1pSCb11. Does early foreign language learning in school affect phonemic discrimination in adulthood? Tetsuo Harada (School of Education, Waseda Univ., 1-6-1 Nishi Waseda, Shinjuku, Tokyo 169-8050, Japan, tharada@waseda.jp)

Long-term effects of early foreign language learning with a few hours' classroom contact per week on speech perception are controversial: some studies show age effects of minimal English input in childhood on phonemic perception in adulthood, but others do not (e.g., Lin et al., 2004). This study investigated effects of a younger starting age in a situation of minimal exposure on perception of English consonants under noise conditions. The listeners were two groups of Japanese university students: early learners (n = 21) who started studying English in kindergarten or elementary school, and late learners (n = 24) who began to study in junior high school. The selected target phonemes were word-medial approximants (/l, r/). Each nonword (i.e., ala, ara), produced by six native talkers, was combined with speech babble at signal-to-noise ratios (SNRs) of 8 dB (medium noise) and 0 dB (quite high noise for L2 listeners). A discrimination test was given in the ABX format. Results showed that the late learners discriminated /l/ and /r/ better than the early learners regardless of the noise conditions and talker differences (p < 0.05). A multiple regression analysis revealed that length of learning and English use could contribute to their discrimination ability.

1pSCb12. The identification of American English vowels by native speakers of Japanese before three nasal consonants. Takeshi Nozawa (Lang. Education Ctr., Ritsumeikan Univ., 1-1-1 Nojihigashi, Kusatsu 525-8577, Japan, t-nozawa@ec.ritsumei.ac.jp)

Native speakers of Japanese identified American English vowels uttered before three nasal consonants /m, n, ŋ/ and three oral stop consonants /b, d, g/. Of the seven vowels /i, ɪ, eɪ, ɛ, æ, ɑ, ʌ/, /æ/ was generally less accurately identified before nasal consonants than before oral stop consonants, and this tendency was stronger when /ŋ/ followed. This tendency is probably attributable to the extended raising of /æ/ before /ŋ/ and the Japanese listeners' limited sensitivity in differentiating the three nasal phonemes in coda position. /ɪ/, on the other hand, was identified more correctly before /ŋ/ than before the other two nasal consonants, also probably because the vowel is raised before /ŋ/. This vowel was more often misidentified as /ɛ/ before /m/ and /n/. /ɑ/ and /ʌ/ were less accurately identified before stop consonants, but after nasal consonants, /ʌ/ was more often misidentified as /ɑ/. /ɑ/ and /ʌ/ may sound alike to Japanese listeners in every context, but before nasal contexts, both of these vowels may sound closer to the Japanese vowel /o/. The results generally revealed that identification accuracy cannot be solely accounted for in terms of the place of articulation of the following consonant.

1pSCb13. Effects of beliefs about first language orthography on second language vowel perception. Mara Haslam (Dept. of Lang. Education, Stockholm Univ., S:t Ansgars väg 4, Solna 16951, Sweden, mara.haslam@gmail.com)

Recent research has identified that L1 orthography can affect perception of vowels in a second language (e.g., Escudero and Wanrooij, 2010). The present study investigates the effect that participants' beliefs about orthography have on their ability to perceive vowels in a second language. English- and Polish-speaking learners of Swedish have to encounter some new vowel sounds and also the characters used to represent them, e.g., å, ä, and ö. New survey data from native speakers of English, Polish, and Swedish confirm that L1 English speakers see characters like these as familiar letters with diacritics, while L1 Swedish and L1 Polish speakers tend to see these types of characters as different characters of the alphabet. These differing beliefs about orthography may cause English speakers to confuse the vowels represented in Swedish by the characters å, ä, and ö with vowels represented by the characters a, a, and o, respectively, while Polish speakers would not be similarly affected. Results of a Swedish vowel perception study conducted with native speakers of English and Polish after exposure to Swedish words containing these characters will be presented. These results contribute to increasing knowledge about the relationship between L1 orthography and L2 phonology.

1pSCb14. A preliminary investigation of the effect of dialect on the perception of Korean sibilant fricatives. Jeffrey J. Holliday (Second Lang. Studies, Indiana Univ., 1021 E. Third St., Memorial Hall M03, Bloomington, IN 47405, jjhollid@indiana.edu) and Hyunjung Lee (English, Hankyong National Univ., Anseong, Gyeonggi-do, South Korea)

Korean has two sibilant fricatives, /sʰ/ and /s*/, that are phonologically contrastive in the Seoul dialect but are widely believed to be phonetically neutralized in the Gyeongsang dialects spoken in southeastern South Korea, with both fricatives being acoustically realized as [sʰ]. The current study investigated the degree to which the perception of these fricatives by Seoul listeners is affected by knowledge of the speaker's dialect. In the first task, the stimuli were two fricative-initial minimal pairs (i.e., four words) produced by 20 speakers each from Seoul and Gyeongsang. Half of the 18 listeners were told that the speakers were from Seoul, and the other half were told they were from Gyeongsang. Listeners identified the 160 word-initial fricatives and provided a goodness rating for each. It was found that neither the speaker's actual dialect nor the primed dialect had a significant effect on either identification accuracy or listeners' goodness ratings. In a second task, listeners identified tokens from a seven-step continuum from [sada] to [s*ada]. It was found that listeners who were primed for Gyeongsang dialect were more likely to perceive tokens as /s*/ than listeners primed for Seoul, which may reflect a dialect-based hypercorrective perceptual bias.

1pSCb15. Language is not destiny: Task-specific factors, and not just native language perceptual biases, influence foreign sound categorization strategies. Jessamyn L. Schertz and Andrew Lotto (Univ. of Arizona, Douglass 200, Tucson, AZ 85721, jschertz@email.arizona.edu)

Listeners were trained to distinguish two novel classes of speech sounds differing in both Voice Onset Time (VOT) and fundamental frequency at vowel onset (f0). One group was shown Korean orthography during the training period ("symbols" group) and the other English orthography ("letters" group). During a subsequent test phase, listeners classified sounds with mismatched VOT and f0. The two groups relied on different cues to categorize the contrast: those exposed to symbols used f0, while those exposed to letters used VOT. A second experiment employed the same paradigm, but the two dimensions defining the contrast were closure duration (instead of f0) and VOT. In this more difficult experiment, successful listeners in the "letters" group again classified the hybrid stimuli based on VOT, while the single listener in the "symbols" group who passed the learning criterion used closure duration. In both experiments, subjects showed different categorization patterns based on the orthography used in the presentation, even though orthography was irrelevant for the experimental task. Listeners relied on VOT when the stimuli were presented with English, but not foreign, orthography, showing that task-related information (as opposed to native language biases alone) can direct attention to different acoustic cues in foreign contrast classification.
1pSCb16. Generational difference in the perception of high-toned [il] in
Seoul Korean. Sunghye Cho (Univ. of Pennsylvania, 3514 Lancaster Ave.,
Apt. 106, Philadelphia, PA 19104, csunghye@sas.upenn.edu)
A word-initial [il] is most frequently H-toned in Seoul Korean (SK) when it means one, out of three homophones, one, day, and work (Jun & Cha, 2011). However, Cho (2014) finds that 25% of teenagers always produce [il] with an H tone, regardless of its meaning. This paper examines how young SK speakers perceive the phenomenon. Thirty-seven SK speakers (aged 14–29) participated in two identification tasks, hearing only [il] in the first task and four [il]-initial minimal pairs in the second task. All target words were manipulated into five pitch levels with 30 Hz intervals. In the first task, the 20s group identified [il] as one 70% of the time at higher pitch levels, while the teenagers identified [il] as one about 50% of the time at all pitch levels. In the second task, the 20s group showed categorical perception, identifying [il]-initial words as one only at higher pitch levels, while the teenagers did not. The results suggest that the teenagers are aware that some peers always produce [il] with an H tone. This explains why the 20s group could identify the meanings of [il] depending on the pitch, while the teenagers could not.

1pSCb17. The effect of perceived talker race on phonetic imitation of pin-pen words. Qingyang Yan (Linguist, The Ohio State Univ., 591 Harley Dr., Apt. 10, Columbus, OH 43212, yan@ling.ohio-state.edu)

The current study investigated the phonetic imitation of the PIN-PEN merger by nonmerged participants. An auditory shadowing task was used to examine how participants changed their /ɪ/ and /ɛ/ productions after auditory exposure to merged and nonmerged voices. Black and white talker photos were used as visual cues to talker race. The pairing of voices (merged and nonmerged) with the talker photos (black and white) was counterbalanced across participants. A third group of participants completed the task without talker photos. Participants' explicit talker attitudes were assessed by a questionnaire, and their implicit racial attitudes were measured by an Implicit Association Task. Nonmerged participants imitated the PIN-PEN merger, and the degree of imitation varied depending on the experimental condition. The merged voice elicited more imitation when it was presented without a talker photo or with the black talker photo than with the white talker photo. No effect of explicit talker attitudes or implicit racial attitudes on the degree of imitation was observed. These results suggest that phonetic imitation of the PIN-PEN merger is more complex than an automatic response to the merged voice and that it is mediated by perceived talker race.

1pSCb18. Foreign-accent discrimination with words and sentences. Eriko Atagi (Volen National Ctr. for Complex Systems, MS 013, Brandeis Univ., 415 South St., Waltham, MA 02454-9110, eatagi@brandeis.edu) and Tessa Bent (Dept. of Speech & Hearing Sci., Indiana Univ., Bloomington, IN)

Native listeners can detect a foreign accent in very short stimuli; however, foreign-accent detection is more accurate with longer stimuli (Park, 2008; Flege, 1984). The current study investigated native listeners' sensitivity to the characteristics that differentiate between accents—both foreign versus native accents and one foreign accent versus another—in words and sentences. Listeners heard pairs of talkers reading the same word or sentence and indicated whether the talkers had the same or different native language backgrounds. Talkers included two native talkers (Midland dialect) and six nonnative talkers from three native language backgrounds (German, Mandarin, and Korean). Sensitivity varied significantly depending on the specific accent pairings and stimulus type. Listeners were most sensitive when the talker pair included a native talker, but could detect the difference between two nonnative accents. Furthermore, listeners were generally more sensitive with sentences than with words. However, for one nonnative pairing, listeners exhibited higher sensitivity with words; for another, listeners' sensitivity did not differ significantly across stimulus types. These results suggest that accent discrimination is not simply influenced by stimulus length. Sentences may provide listeners with opportunities to perceive similarities between nonnative talkers, which are not salient in single words. [Work supported by NIDCD T32 DC00012.]

1pSCb19. Stimulus length and scale label effects on the acoustic correlates of foreign accent ratings. Elizabeth A. McCullough (Linguist, Ohio State Univ., 222 Oxley Hall, 1712 Neil Ave., Columbus, OH 43210, eam@ling.ohio-state.edu)

Previous studies have investigated acoustic correlates of accentedness ratings, but methodological differences make it difficult to compare their results directly. The present experiment investigated how choices about stimulus length and rating scale labels influence the acoustic correlates of listeners' rating responses. Four conditions crossed two stimulus lengths (CV syllable vs. disyllabic word) with two sets of rating labels ("no foreign accent"/"strong foreign accent" vs. "native"/"not native"). Monolingual American English listeners heard samples of English from native speakers of American English, Hindi, Korean, Mandarin, and Spanish and indicated their responses on a continuous rating line. Regression models evaluated the correlations between listeners' ratings and a variety of acoustic properties. Patterns for accentedness and non-nativeness ratings were identical. VOT, F1, and F2 correlated with ratings on all stimuli, but vowel duration correlated with ratings on disyllabic word stimuli only. If vowel duration is interpreted as a reflection of global temporal properties, this result suggests that listeners may perceive such properties in utterances as short as two syllables. Thus, stimulus design is vital in identifying components of foreign accent perception that are related to differences between a talker's first and second languages as opposed to components that are related to general fluency.

1pSCb20. Language proficiency, context influence foreign-accent adaptation. Cynthia P. Blanco (Linguist, Univ. of Texas at Austin, 305 E. 23rd St., Austin, TX 78712, cindyblanco@utexas.edu), Hoyoung Yi (Commun. Sci. & Disord., Univ. of Texas at Austin, Austin, TX), Elisa Ferracane, and Rajka Smiljanic (Linguist, Univ. of Texas at Austin, Austin, TX)

Listeners adapt quickly to changes in accent, though an initial processing delay is typically observed (Bradlow & Bent, 2003; Clarke & Garrett, 2004; inter alia). The cause of this brief delay may be the cost of processing accented speech, or may reflect a surprise effect associated with task expectations (Floccia et al., 2009). The present study examines the link between accent familiarity and processing delays with listeners who have varying degrees of familiarity with the target languages: monolingual Texans with little or no formal exposure to Spanish, early Spanish-English bilinguals, and Korean learners of English. Participants heard four blocks of English sentences—Blocks 1 and 4 were produced by two native speakers of American English, and Blocks 2 and 3 were produced by native speakers of Spanish or Korean—and responded to written probe words. All listener groups responded more slowly after an accent change; however, the degree of delay varied with language proficiency. L1 Korean listeners were less delayed by Korean-accented speech than the other listeners, while changes to Spanish-accented speech were processed most slowly by Spanish-English bilinguals. The results suggest that adaptation to foreign-accented speech depends on language familiarity and task expectations. The processing delays are analyzed in light of intelligibility and accentedness measures.

1pSCb21. When two become one—Orthography helps link two free variants to one lexical entry. Chung-Lin Yang (Linguist, Indiana Univ.-Bloomington, Memorial Hall 322, 1021 E 3rd St., Bloomington, IN 47408, cy1@indiana.edu) and Isabelle Darcy (Second Lang. Studies, Indiana Univ.-Bloomington, Bloomington, IN)

L2 learners can become better at distinguishing an unfamiliar contrast by knowing the corresponding orthographic forms (e.g., Escudero et al., 2008). We ask whether learners could associate two free variants with the same lexical entry when the orthographic form was provided during learning. American learners learned an artificial language where [p]-[b] were in free variation (both were spelled as <p>) (test condition) while [t]-[d] were contrastive (control condition), or vice versa ([t]-[d] in test, counterbalanced across subjects). Using a word-learning paradigm modified from Hayes-Harb et al. (2010), in the learning phase, participants heard novel words paired with pictures. One subgroup of learners saw the spellings as well ("Orth+"), while another did not (i.e., auditory only, "Orth−"). Then, in a picture-auditory word matching task, the new form of the word was paired with the original picture. Orth+ learners were expected to be more accurate at accepting the variant as the correct label for the original test item than Orth− learners. The results showed that Orth+ learners detected and learned the [p]-[b] free variation significantly better than Orth− learners (p < 0.05), but not the [t]-[d] free variation. Thus, the benefit of orthography in speech learning could vary depending on the specific contrasts at hand.
MONDAY AFTERNOON, 27 OCTOBER 2014
INDIANA F, 1:25 P.M. TO 5:15 P.M.
Session 1pUW
Underwater Acoustics: Understanding the Target/Waveguide System–Measurement and Modeling II
Aubrey L. Espana, Chair
Acoustics Dept., Applied Physics Lab, Univ. of Washington, 1013 NE 40th St., Box 355640, Seattle, WA 98105
Chair’s Introduction—1:25
Invited Paper
1:30
1pUW1. Mapping bistatic scattering from spherical and cylindrical targets using an autonomous underwater vehicle in
BAYEX’14 experiment. Erin M. Fischell, Stephanie Petillo, Thomas Howe, and Henrik Schmidt (Mech. Eng., MIT, 77 Massachusetts
Ave., 5-204, Cambridge, MA 02139, emf43@mit.edu)
In May 2014, the MIT Laboratory for Autonomous Marine Sensing Systems (LAMSS) participated in the BAYEX’14 experiment
with the goal of collecting full bistatic data sets around proud spherical and cylindrical targets for use in real-time autonomous target
localization and classification. The BAYEX source was set to insonify both targets, and was triggered to ping at the start of each second
using GPS PPS. The MIT Bluefin 21 in. AUV Unicorn, fitted with a 16-element nose array, was deployed in broadside sampling behaviors to collect the bistatic scattered data set. The AUV's Chip Scale Atomic Clock was synchronized to GPS on the surface, and the data were logged using a PPS-triggered analog-to-digital conversion system to ensure synchronization with the source. The MIT LAMSS
operational paradigm allowed the vehicle to be unpacked, tested and deployed over the brief three-day interval available for operations.
MOOS-IvP and acoustic communication enabled the group to command AUV mission changes in situ based on data collection needs.
During data collection, the vehicle demonstrated real-time signal processing and target localization, and the bistatic datasets were used
to demonstrate real-time target classification in simulation. [Work supported by ONR Code 322OA.]
Contributed Papers
1:50

1pUW2. Elastic features visible on canonical targets with high frequency imaging during the 2014 St. Andrews Bay experiments. Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, marston@wsu.edu), Timothy M. Marston, Steven G. Kargl (Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Daniel S. Plotnick (Phys. and Astronomy, Washington State Univ., Pullman, WA), Aubrey Espana, and Kevin L. Williams (Appl. Phys. Lab., Univ. of Washington, Seattle, WA)

During the 2014 St. Andrews Bay experiments some canonical metallic targets (a hollow sphere and some circular cylinders) were viewed with a synthetic aperture sonar (SAS) capable of acquiring data using a 110–190 kHz chirped source. The targets rested on mud-covered sand and were typically at a range of 20 m. Fast reversible SAS processing using an extension of line-scan quasi-holography [K. Baik, C. Dudley, and P. L. Marston, J. Acoust. Soc. Am. 130, 3838–3851 (2011)] was used to extract relevant signal content from images. The significance of target elastic responses in extracted signals was evident from the frequency response and/or the time-domain response. For example, the negative group velocity guided wave enhancement of the backscattering by the sphere was clearly visible near 180 kHz. [For a ray model of this type of enhancement see: G. Kaduchak, D. H. Hughes, and P. L. Marston, J. Acoust. Soc. Am. 96, 3704–3714 (1994).] In another example, the timing of a sequence of near-broadside echoes from a solid aluminum cylinder was consistent with reflection and internal reverberation of elastic waves. These observations support the value of combining reversible imaging with models interpreted using rays. [Work supported by ONR and SERDP.]

2:05

1pUW3. Boundary enhanced coupling processes for rotated horizontal solid aluminum cylinders: Helical rays, synthetic aperture sonar images, and coupling conditions. Jon R. La Follett (Shell International Exploration and Production Inc., Houston, TX) and Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, marston@wsu.edu)

Experiments with solid aluminum cylinders placed near a flat free surface provide insight into scattering processes relevant to other flat reflecting boundaries [J. R. La Follett, K. L. Williams, and P. L. Marston, J. Acoust. Soc. Am. 130, 669–672 (2011); J. R. La Follett, Ph.D. thesis, WSU (2010)]. This presentation concerns the coupling to surface guided leaky Rayleigh waves, which have been shown to contribute significantly to backscattering by solid metallic cylinders [K. Gipson and P. L. Marston, J. Acoust. Soc. Am. 106, 1673–1689 (1999)]. The emphasis here is on horizontal cylinders rotated about a vertical axis away from broadside and viewed at grazing incidence. The range of rotation angles for which helical rays can contribute is limited in the free field by the cylinder's length [F. J. Blonigen and P. L. Marston, J. Acoust. Soc. Am. 112, 528–536 (2002)]. Some examples of surface-enhanced backscattering may be summarized as follows. In agreement with geometrical considerations, the angular range for coupling to helical rays may be significantly extended when a short cylinder is adjacent to a flat surface. In addition, the presence of a flat surface splits synthetic aperture sonar (SAS) image features from various guided wave mechanisms on rotated cylinders. [Work supported by ONR.]
Invited Papers
2:20
1pUW4. Denoising structural echoes of elastic targets using spatial time–frequency distributions. Karim G. Sabra (Mech. Eng.,
Georgia Inst. of Technol., 771 Ferst Dr., NW, Atlanta, GA 30332-0405, karim.sabra@me.gatech.edu)
Structural echoes of underwater elastic targets, used for detection and classification purposes, can be highly localized in the time–frequency domain and can be aspect-dependent. Hence, such structural echoes recorded along a distributed (synthetic) aperture, e.g., using
a moving receiver platform, would not meet the stationarity and multiple snapshots requirements of common subspace array processing
methods used for denoising array data based on their estimated covariance matrix. To handle these scenarios, a generalized space–time–
frequency covariance matrix can be computed from the single-snapshot data using Cohen’s class time-frequency distributions between
all sensor data pairs. This space–time–frequency covariance matrix automatically accounts for the inherent coherence across the timefrequency plane of the received nonstationary echoes emanating from the same target. Hence, identifying the signal’s subspace from the
eigenstructure of this space–time–frequency covariance matrix provides a means for denoising these non-stationary structural echoes by
spreading the clutter and noise power in the time–frequency domain. The performance of the proposed methodology will be demonstrated using numerical simulations and at-sea data.
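The processing chain described in this abstract (time–frequency cross terms between sensor pairs, a single-snapshot covariance, eigendecomposition, projection onto the signal subspace) can be illustrated in miniature. The sketch below is an assumption-laden toy, not the author's implementation: a short-time Fourier accumulation stands in for a general Cohen's-class distribution, and the array size, echo, and noise level are all invented for the example:

```python
import numpy as np

# Toy single-snapshot subspace denoising across an 8-sensor aperture.
# A chirp-like "structural echo" (rank one across sensors) is buried in
# independent noise; all parameters here are invented for illustration.
rng = np.random.default_rng(0)
n_sensors, n_samp = 8, 512
t = np.arange(n_samp)
echo = np.sin(2 * np.pi * (0.02 + 0.0002 * t) * t) * np.exp(-((t - 256) / 60.0) ** 2)
gains = np.linspace(1.0, 2.0, n_sensors)  # sensor-dependent amplitudes
clean = np.outer(gains, echo)
data = clean + 0.8 * rng.standard_normal((n_sensors, n_samp))

# Cross-sensor covariance accumulated over the time-frequency plane
# (short-time FFT frames of this single snapshot), rather than over
# multiple snapshots as in conventional subspace processing.
frame, hop = 64, 32
C = np.zeros((n_sensors, n_sensors))
for start in range(0, n_samp - frame + 1, hop):
    F = np.fft.rfft(data[:, start:start + frame], axis=1)
    C += (F @ F.conj().T).real

# Signal subspace: dominant eigenvector of the covariance. Projecting the
# sensor data onto it spreads uncorrelated noise power out of the estimate.
eigvals, eigvecs = np.linalg.eigh(C)
u = eigvecs[:, -1]
denoised = np.outer(u, u) @ data

err_raw = np.linalg.norm(data - clean)
err_den = np.linalg.norm(denoised - clean)
```

In this toy, the projected estimate comes out closer to the clean echo than the raw data, which is the qualitative effect the abstract targets with full space–time–frequency covariances on nonstationary echoes.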
2:40
1pUW5. Measurements and modeling of acoustic scattering from targets in littoral environments. Harry J. Simpson (Physical Acoust. Branch, Naval Res. Lab., 4555 Overlook Ave. SW, Washington, DC 20375, harry.simpson@nrl.navy.mil), Zackary J. Waters, Timothy J. Yoder, Brian H. Houston (Physical Acoust. Branch, Naval Res. Lab., Washington, DC), Kyrie K. Jig, Roger R. Volk (Sotera Defense Solution, Crofton, MD), and Joseph A. Bucaro (Excet, Inc., Springfield, VA)

Broadband laboratory and at-sea measurement systems have been built by NRL to quantify the acoustic target strength of objects sitting on or in the bottom of littoral environments. Over the past decade, these measurements and the subsequent modeling of the target strength have helped to develop an understanding of how the environment, especially near the bottom interface, impacts the structural acoustic response of a variety of objects. In this talk we will present a set of laboratory, at-sea rail-based, and AUV-based backscatter, forward-scatter, and propagation measurements with subsequent analysis to understand the impact of the littoral environment. Simple targets such as spheres, along with UXO targets, will be discussed. The analysis will be focused on quantifying the changes to target strength as a result of being near the bottom interface. In addition to the traditional backscatter or monostatic target strength, we focus upon efforts to investigate the multi-static scattering from targets. [Work supported by ONR.]
3:00–3:15 Break
Contributed Papers
3:15

1pUW6. TREX13 target experiments and case study: Comparison of aluminum cylinder data to combined finite element/physical acoustics modeling. Kevin Williams, Steven G. Kargl, and Aubrey L. Espana (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, williams@apl.washington.edu)

The apparatus and experimental procedure used during the target portion of TREX13 are described. A primary goal of the TREX13 target experiments was to test the high-speed modeling methods developed and previously tested as part of efforts in more controlled environments where the sediment/water interface was flat. At issue is to what extent the simplified physics used in our models can predict the changes seen in acoustic templates (target strength versus angle and frequency) as a function of grazing angle, i.e., the Target-In-the-Environment-Response (TIER), for a target proud on an unprepared "natural" sand sediment interface. Data/model comparisons for a 3 ft. long, 1 ft. diameter cylinder are used as a case study. These comparisons indicate that much of the general TIER dependence is indeed captured and allow one to understand/predict geometries where the broadest band of TIER information can be obtained. This case study indicates the predictive utility of dissecting the target physics at the expense of making the model results "inexact" from a purely finite element, constitutive equation standpoint. [Work supported by ONR and SERDP.]

3:30

1pUW7. Predicting the acoustic response of complicated targets in complicated environments using a hybrid finite element/propagation model. Aubrey L. Espana, Kevin L. Williams, Steven G. Kargl (Acoust. Dept., Appl. Phys. Lab. - Univ. of Washington, 1013 NE 40th St., Box 355640, Seattle, WA 98105, aespana@apl.washington.edu), Marten J. Nijhof (Acoust. and Sonar, TNO, Den Haag, Netherlands), Daniel S. Plotnick, and Philip L. Marston (Phys. and Astronomy, Washington State Univ., Pullman, WA)
Previous work has shown that hybrid finite element (FE)/propagation
models are a viable tool for estimating the Target-In-The-EnvironmentResponse, or TIER, for simple shapes such as cylinders and pipes on a flat,
undisturbed sand/water interface [K. L. Williams et al., J. Acoust. Soc. Am
127, 3356–3371 (2010)]. Here we examine their use for more complicated
targets located in complicated ocean environments. The targets examined
include various munitions and ordnance-like targets, with intricate internal
structure and filled with either air or water. A hybrid FE/propagation model
is used to predict their TIER on flat, undisturbed sand. Data acquired during
the target portion of TREX13 is used to validate the model results. Next, the
target response is investigated in a more complicated environment, being
partially buried with their axis tilted w.r.t. the flat sand interface. Again
model results are validated using TREX13 data, as well as data acquired in
a controlled tank experiment. These comparisons highlight the feasibility of
using hybrid models for complex target/environment configurations, as well
possible limitations due to the effects of multiple scattering.
168th Meeting: Acoustical Society of America
2111
Invited Papers
3:45
1pUW8. A correlation analysis of the Naval Surface Warfare Center Panama City Division’s (NSWC PCD) database of simulated and collected target scattering responses focused on automated target recognition. Raymond Lim, David E. Malphurs, James
L. Prater, Kwang H. Lee, and Gary S. Sammelmann (Code X11, NSWC Panama City Div., 110 Vernon Ave, Code X11, Panama City,
FL 32407-7001, raymond.lim@navy.mil)
Recently, NSWC PCD participated in a number of computational and experimental efforts aimed at assembling a database of sonar
scattering responses encompassing a variety of objects including UXO, cylindrical shapes, and other clutter-type objects. The range of
data available on these objects consists of a simulated component generated with 3D finite element calculations coupled to a fast Helmholtz-equation-based propagation scheme, a well-controlled experimental component collected in NSWC PCD’s pond facilities, and a
component of measurements in realistic underwater environments off Panama City, FL (TREX13 and BayEX14). The goal is to use the
database to test schemes for automating reliable separation of these objects into desired classes. Here, we report on an initial correlation
analysis of the database, projected onto the target-aspect versus frequency plane, to assess the fidelity of the simulated component against
the measured ones, to investigate some basic questions regarding environmental and range effects on class separation, and to try to
identify phenomena in this plane useful for classification. [Work supported by ONR and SERDP.]
4:05
1pUW9. Identifying buried unexploded ordnance with structural acoustics based numerically trained classifiers: Laboratory
demonstrations. Zachary J. Waters, Harry J. Simpson, Brian H. Houston (Physical Acoust. - Code 7130, Naval Res. Lab., 4555 Overlook Ave. SW, Bldg 2. Rm. 186, Washington, DC 20375, zachary.waters@nrl.navy.mil), Kyrie Jig, Roger Volk, Timothy J. Yoder
(Sotera Defense Solutions Inc., Crofton, MD), and Joseph A. Bucaro (Excet Inc., Springfield, VA)
Strategies for the automated detection and classification of underwater unexploded ordnance (UXO), based upon structural-acoustics-derived features, are currently being transitioned to autonomous-underwater-vehicle-based sonar systems. The foundation for this transition arose, in part, from extensive laboratory investigations conducted at the Naval Research Laboratory. We discuss the evolution of structural-acoustics-based methodologies, including research into understanding the free-field scattering response of UXO and the coupling of these objects, under varying stages of burial, to water-saturated sediments. In addition to providing a physics-based understanding of the mechanisms contributing to the scattering response of objects positioned near the sediment–water interface, this research supports the validation of three-dimensional finite-element-based models for large-scale structural-acoustics problems. These efforts have recently culminated in the successful classification of a variety of buried UXO targets using a numerically trained relevance vector machine (RVM) classifier and the discrimination of these targets, under various burial orientations, from several objects representing both natural and manmade clutter. We conclude that this demonstration supports the transition of structural acoustic processing methodologies to maritime sonar systems for the classification of challenging UXO targets. [Work supported by ONR and SERDP.]
4:25
1pUW10. Detection and classification of marine targets buried in the sediment using structural acoustic features. Joseph Bucaro
(Excet, Inc. @ Naval Res. Lab., 4555 Overlook Ave SW, Naval Res. Lab., Washington, DC 20375, joseph.bucaro.ctr@nrl.navy.mil),
Brian Houston, Angie Sarkissian, Harry Simpson, Zack Waters (Naval Res. Lab., Washington, DC), Timothy Yoder (Sotera Inc. @ Naval Res. Lab., Washington, DC), and Dan Amon (Naval Res. Lab., Washington, DC)
We present research on detection and classification of underwater targets buried in a saturated sediment using structural acoustic features. These efforts involve simulations using NRL's STARS3D structural acoustics code and measurements in the NRL free-field and
sediment pool facilities, off the coast of Duck, NC, and off the coast of Panama City, FL. The measurements in the sediment pool demonstrated RVM classifiers trained using numerical data on two features: target-strength correlation and elastic highlight image symmetry. Measurements off the coast of Duck were inconclusive owing to tropical storms that damaged the projector. Extensive
measurements were then carried out in 60 ft. of water in the Gulf using BOSS, an autonomous underwater vehicle with 40 receivers on
its wings. The target field consisted of nine simulant-filled UXO and two false targets buried in the sediment and twenty proud targets.
The AUV collected scattering data during north/south, east/west, and diagonal flights. We discuss the data analyzed so far from which
we have extracted 3-D images and acoustic color constructs for 18 of the targets and demonstrated UXO/false target separation using a
high dimensional acoustic color feature. Finally, we present related work involving targets buried in non-saturated elastic sediments.
[This work is supported by ONR and SERDP.]
Contributed Papers
4:45
1pUW11. Performance metrics for depth-based signal separation using deep vertical line arrays. John K. Boyle, Gabriel P. Kniffin, and Lisa M. Zurk (Northwest Electromagnetics and Acoust. Res. Lab. (NEAR-Lab), Dept. of Elec. & Comput. Eng., Portland State Univ., 1900 SW 4th Ave., Ste. 160, Portland, OR 97201, jboyle@pdx.edu)
A publication [McCargar & Zurk, 2013] presented a method for passive depth-separation of signals received on vertical line arrays (VLAs) deployed below the critical depth in the deep ocean. This method, based on a modified Fourier transform of the received signals from submerged targets, makes use of the depth-dependent modulation inherent in the signals due to interference between the direct and surface-reflected acoustic arrivals. Examination of the transform is necessary to determine the performance of the algorithm in terms of the minimum target depth and range, array aperture, and temporal sampling. However, traditional expressions for signal sampling requirements (the Nyquist sampling theorem) do not directly apply to the measured signal along a target trace due to uneven sampling in vertical angle imposed by the spatiotemporal evolution of the target track as observed on the VLA. In this paper, the effects of this uneven sampling on the ambiguity in the estimated depth (i.e., aliasing) are discussed, and expressions for the maximum snapshot length are presented and validated using simulated data produced with a normal-mode propagation model. Initial results are presented to show the requirements for snapshot lengths and target trajectories for successful depth separation of slow moving targets at low frequencies.
5:00
1pUW12. Wideband imaging with the decomposition of time reversal operator. Chunxiao Li, Mingfei Guo, and Huancai Lu (Zhejiang Univ. of Technol., 18# ChaoWang Rd., Hangzhou, Zhejiang 310014, China, chunxiaoli@zju.edu.cn)
It has been shown that the decomposition of the time reversal operator (DORT) is effective for detection of and selective focusing on pointlike scatterers. Moreover, the multiplicity of the invariants of the time reversal operator for a single extended (non-pointlike) scatterer has also been revealed. In this paper, we investigate the characterization and imaging of the scatterers when an extended scatterer and a pointlike scatterer are simultaneously present. The relationship between the quality of focusing and frequency is investigated by backpropagation of singular vectors using a model of the waveguide in each frequency bin. When the extended scatterer is present, it is shown that the second singular vector can also focus on the target. However, focusing can only be achieved in frequency bins with relatively large singular values. When both scatterers are simultaneously present, the singular vectors are a linear combination of the transfer vectors from each scatterer. The first singular vector can achieve focusing on the extended scatterer in frequency bins with relatively large singular values. The second singular vector can approximately focus on the pointlike scatterer in frequency bins where its scattering coefficients are relatively high and those of the extended scatterer are relatively low.
MONDAY EVENING, 27 OCTOBER 2014
HILBERT CIRCLE THEATER, 7:00 P.M. TO 9:00 P.M.
Session 1eID
Interdisciplinary: Tutorial Lecture on Musical Acoustics: Science and Performance
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, Terre Haute, IN 47803-2374
Note: Payment of separate fee required to attend
Invited Paper
7:00
1eID1. The physics of musical instruments with performance illustrations and a concert. Uwe J. Hansen (Dept. of Chemistry and Phys., Indiana State Univ., Terre Haute, IN 47809, uwe.hansen@indstate.edu) and Susan Kitterman (New World Youth Orchestras, Indianapolis, IN)
Musical instruments generally rely on the following elements for tone production: a power supply, an oscillator, a resonator, an amplifier, and a pitch control mechanism. The physical basis of these elements will be discussed for each instrument family, with performance illustrations by the orchestra. Wave shapes and spectra will be shown for representative instruments. A pamphlet illustrating important elements for each instrument group will be distributed to the audience. The science presentation with orchestral performance illustrations will be followed by a concert of the New World Youth Symphony Orchestra. This orchestra is one of three performing groups of the New World Youth Orchestras, an organization founded by Susan Kitterman in 1982. Members of the Symphony are chosen from the greater Indianapolis and Central Indiana area by audition.
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 7/8, 7:55 A.M. TO 12:00 NOON
Session 2aAA
Architectural Acoustics and Engineering Acoustics: Architectural Acoustics and Audio I
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Chair’s Introduction—7:55
Invited Papers
8:00
2aAA1. Excessive reverberance in an outdoor amphitheater. K. Anthony Hoover (McKay Conant Hoover, 5655 Lindero Canyon
Rd., Ste. 325, Westlake Village, CA 91362, thoover@mchinc.com)
The historic Ford Theatre in Hollywood, CA, is undergoing an overall renovation and expansion. Its centerpiece is the inexplicably
asymmetrical 1200 seat outdoor amphitheater, built of concrete in 1931 after the original 1920 wood structure was destroyed by a brush
fire in 1929, and well before the adjacent Hollywood Freeway was nearly as noisy as now. Renovation includes reorienting seating for
better symmetry while maintaining the historic concrete, and improving audio, lighting, and support spaces. Sited within an arroyo overlooking a busy highway, and in view of the Hollywood Bowl, the new design features an expanded “sound wall” that will help to mitigate highway noise while providing optimal lighting and control positions. New sound-absorptive treatments will address the Ford’s
excessive reverberation, currently more than might be anticipated for an entirely outdoor space. The remarkably uniform distribution of
ambient noise and apparent contributions by the arroyo to the reverberation will be discussed, along with assorted design challenges.
8:20
2aAA2. Room acoustics analysis, recordings of real and simulated performances, and integration of an acoustic shell mock up
with performers for evaluation of a choir shell design. David S. Woolworth (Oxford Acoust., 356 CR 102, Oxford, MS 38655,
dave@oxfordacoustics.com)
The current renovation of the 1883 Galloway Memorial Methodist Church required the repair and replacement of a number of room
finishes, as well as resolution of acoustic problems related to the choir loft. This paper will present the various approaches used to
determine the best course of action using primarily an in-situ analysis that includes construction mockups, simulated sources, and critical
listening.
8:40
2aAA3. A decade later: What we’ve learned from The Pritzker Pavilion at Millennium Park. Jonathan Laney, Greg Miller, Scott
Pfeiffer, and Carl Giegold (Threshold Acoust., 53 W Jackson Blvd., Ste. 815, Chicago, IL 60604, jlaney@thresholdacoustics.com)
Each design and construction process yields a building and systems that respond to a particular client at a particular time. We launch
these projects into the wild and all too frequently know little of their daily lives and annual cycles after that. Occasionally, though, we
have the opportunity to stay close enough to watch a project wear in, weather (sometimes literally), and respond to changing client and
patron dynamics over time. Such is the case with the reinforcement and enhancement systems at the Pritzker Pavilion in Chicago’s Millennium Park. On a fine-grained scale, each outdoor loudspeaker is individually inspected for its condition at the end of each season. Signal-processing and amplification equipment is evaluated as well, so the overall system is maintained at a high degree of readiness and
reliability. Strengths and weaknesses of these components thereby reveal themselves over time. We will discuss these technical aspects
as well as changing audience behaviors, modifications made for special events, and the ways all of these factors inform the future of
audio (and video) in the Park.
9:00
2aAA4. An electro-acoustic conundrum—Improving the listening experience at the Park Avenue Armory. Steve Barbar (E-coustic
Systems, 30 Dunbarton Rd., Belmont, MA 02478, steve@lares-lexicon.com) and Paul Scarbrough (Akustiks, South Norwalk, CT)
Larger than a hangar for a commercial airliner, the Park Avenue Armory occupies an entire city block in midtown Manhattan. Its
massive internal volume generates reverberation time in excess of three seconds. However, it functions as a true multi-purpose venue
with programming that includes dramatic performances produced by the Manchester International Festival, and musical performances
sponsored by Lincoln Center. We will discuss the unique nature of the venue as well as the tools and techniques employed in staging different productions.
9:20
2aAA5. Sound reinforcement in an acoustically challenging multipurpose space. Deb Britton (K2 Audio, 4900 Pearl East Circle,
Ste. 201E, Boulder, CO 80301, deb@k2audio.com)
Oftentimes, sound system designers are dealt less-than-ideal cards: design a sound reinforcement system that will provide great
speech intelligibility, in a highly reverberant space, without modifying any of the architectural finishes. While this is certainly a
challenge in itself, add to those prerequisites the additional complication of a multi-purpose space, where different types
of presentations must take place in different locations and with varying audience sizes. This paper presents a case study of
such a scenario, and describes the approach taken to achieve the client's goals.
9:40
2aAA6. Comparison of source stimulus input method on measured speech transmission index values of sound reinforcement
systems. Neil T. Shade (Acoust. Design Collaborative, Ltd., 7509 Lhirondelle Club Rd., Ruxton, MD 21204, nts@akustx.com)
One purpose of a sound reinforcement system is to increase the talker’s speech intelligibility. A common metric for speech intelligibility evaluation is the Speech Transmission Index (STI) defined by IEC-60268-16 Revision 4. The STI of a sound reinforcement system
can be measured by inputting a stimulus signal into the sound system, which is modified by the system electronics, and radiated by the
sound system loudspeakers to the audience seats. The stimulus signal can be input via a line level connection to the sound system or by
playing the stimulus signal through a small loudspeaker that is picked up by a sound system microphone. This latter approach factors in
the entire sound system signal chain from microphone input to loudspeaker output. STI measurements were performed on two sound
systems, one in a reverberant room and the other in a relatively non-reverberant room. Measurement results compare both signal input
techniques using omnidirectional and hypercardioid sound system microphones and three loudspeakers claimed to be designed to have
directivity characteristics similar to those of the human voice.
10:00–10:20 Break
10:20
2aAA7. Enhancements in technology for improving access to active acoustic solutions in multipurpose venues. Ronald Freiheit
(Wenger Corp., 555 Park Dr., Owatonna, MN 55060, ron.freiheit@wengercorp.com)
With advancements in digital signal processing technology and higher integration of functionality, access to active acoustics systems
for multipurpose venues has been enhanced. One of the challenges with active acoustics systems in multipurpose venues is having
enough control over the various acoustic environments within the same room (e.g., under balcony versus over balcony). Each area may
require its own signal processing and control to be effective. Increasing the signal processing capacity to address these different environments will provide a more effective integration of the system in the room. A new signal processing platform with the flexibility to meet
these needs is discussed. The new platform addresses multiple areas with concurrent processing and is integrated with a digital audio
bus and a network-based control system. The system is flexible in its ability to easily expand to meet the needs of a variety of environments. Enhanced integration and flexibility of scale make active systems attainable at an attractive financial point of
entry.
10:40
2aAA8. Sound levels and the risk of hearing damage at a large music college. Thomas J. Plsek (Brass, Berklee College of Music,
MS 1140 Brass, 1140 Boylston St., Boston, MA 02215, tplsek@berklee.edu)
For a recent sabbatical from Berklee College of Music, my project was to study hearing loss, especially among student and faculty
musicians, and to measure sound levels in various performance situations ranging from rehearsals to classes/labs to actual public performances. The National Institute for Occupational Safety and Health (NIOSH) recommendations (85 dBA criterion with a 3 dB exchange
rate) were used to determine the daily noise dose obtained in each of the situations. In about half of the situations, 100% or more of the
daily noise dose was reached. More measurement of the actual levels reached is needed, as are noise dosimetry measurements over an active 12–16
hour day.
11:00
2aAA9. Development of a tunable absorber/diffuser using micro-perforated panels. Matthew S. Hildebrand (Wenger Corp., 555
Park Dr., Owatonna, MN 55060, matt.hildebrand@wengercorp.com)
Shared rehearsal spaces are an all-too-common compromise in music education, pitting vocal and instrumental ensembles against
each other for desirable room acoustics. More than ever, adjustable acoustics are needed in music spaces. An innovative new acoustic
panel system was developed with this need for flexibility in mind. Providing variable sound absorption with a truly static aesthetic,
control of reverberation time in the mid-frequency bands is ultimately handed over to the end user. New product development test methods and critical design decisions are discussed, such as curving the micro-perforated panel to improve its scattering properties. In situ reverberation measurements are also compared against a 3D CAD model prediction using lab-tested material properties.
11:20
2aAA10. Real case measurements of inflatable membranes absorption technique. Niels W. Adelman-Larsen (Flex Acoust., Diplomvej 377, Kgs. Lyngby 2800, Denmark, nwl@flexac.com)
After some years of development of the patented technology of inflated plastic membranes for sound absorption, an actual product
became available in 2012 and was immediately implemented in a Danish music school. When active, it absorbs sound roughly linearly from 63 Hz to 1
kHz, which is advantageous for amplified music. The absorption coefficient is close to 0.0 when deactivated. 75,000 ft2 of the mobile
version of the innovation was employed at the Eurovision Song Contest, the second largest annual television event worldwide. This contributed to a lowering of T30 in the 63, 125, and 250 Hz octave bands from up to 13 s to below 4 s in the former-shipyard venue. The
permanently installed version has been incorporated in a new theater in Korea. More detailed acoustic measurements from these cases
will be presented. The technology will further be used in the new, multi-functional Dubai Opera scheduled for 2015.
11:40
2aAA11. Virtual sound images and virtual sound absorbers misinterpreted as supernatural objects. Steven J. Waller (Rock Art
Acoust., 5415 Lake Murray Blvd. #8, La Mesa, CA 91942, wallersj@yahoo.com)
Complex sound behaviors such as echoes, reverberation, and interference patterns can be mathematically modeled using the modern
concepts of virtual sound sources or virtual sound absorbers. Yet prior to the scientific wave theory of sound, these same acoustical phenomena were considered baffling, and hence led to the illusion that they were due to mysterious invisible sources. Vivid descriptions of
the physical forms of echo spirits, hoofed thunder gods, and pipers’ stones, as engendered from the sounds they either produced or
blocked, are found in ancient myths and legends from around the world. Additional pieces of evidence attesting to these beliefs are
found in archaeological remains consisting of canyon petroglyphs, cave paintings, and megalithic stone circles. Blindfolded participants
in acoustic experimental set-ups demonstrated that they attributed various virtual sound effects to real sound sources and/or attenuators.
Ways in which these types of sonic phenomena can be manipulated to give rise to ultra-realistic auditory illusions of actual objects even
today will be discussed relative to enhancing experiences of multimedia entertainment and virtual reality. Conversely, understanding
how the mind can construct psychoacoustic models inconsistent with scientific reality could serve as a lesson helping prevent the supernatural misperceptions to which our ancestors were susceptible.
TUESDAY MORNING, 28 OCTOBER 2014
LINCOLN, 8:25 A.M. TO 12:00 NOON
Session 2aAB
Animal Bioacoustics, Acoustical Oceanography, and Signal Processing in Acoustics: Mobile Autonomous
Platforms for Bioacoustic Sensing
Holger Klinck, Cochair
Cooperative Institute for Marine Resources Studies, Oregon State University, Hatfield Marine Science Center, 2030 SE
Marine Science Drive, Newport, OR 97365
David K. Mellinger, Cochair
Coop. Inst. for Marine Resources Studies, Oregon State University, 2030 SE Marine Science Dr., Newport, OR 97365
Chair’s Introduction—8:25
Invited Papers
8:30
2aAB1. Real-time passive acoustic monitoring of baleen whales from autonomous platforms. Mark F. Baumgartner (Biology Dept.,
Woods Hole Oceanographic Inst., 266 Woods Hole Rd., MS #33, Woods Hole, MA 02543, mbaumgartner@whoi.edu)
An automated low-frequency detection and classification system (LFDCS) was developed for use with the digital acoustic monitoring (DMON) instrument to detect, classify, and report in near real time the calls of several baleen whale species, including fin, humpback, sei, bowhead, and North Atlantic right whales. The DMON/LFDCS has been integrated into the Slocum glider and APEX
profiling float, and integration projects are currently underway for the Liquid Robotics wave glider and a moored buoy. In a recent
evaluation study, two gliders reported over 25,000 acoustic detections attributed to fin, humpback, sei, and North Atlantic right whales
over a 3-week period during late fall in the Gulf of Maine. The overall false detection rate for individual calls was 14%, and for right,
humpback, and fin whales, false predictions of occurrence during 15-minute reporting periods were 5% or less. Agreement between
acoustic detections and visual sightings from concurrent aerial and shipboard surveys was excellent (9 of 10 visual detections were
accompanied by real-time acoustic detections of the same species by a nearby glider). We envision that this autonomous acoustic monitoring system will be a useful tool for both marine mammal research and mitigation applications.
8:50
2aAB2. Detection, bearing estimation, and telemetry of North Atlantic right whale vocalizations using a wave glider autonomous
vehicle. Harold A. Cheyne (Lab of Ornithology, Cornell Univ., 95 Brown Rd., Rm. 201, Ithaca, NY 14850, haroldcheyne@gmail.com),
Charles R. Key, and Michael J. Satter (Leidos, Long Beach, MS)
Assessing and mitigating the effects of anthropogenic noise on marine mammals is limited by the typically employed technologies
of archival underwater acoustic recorders and towed hydrophone arrays. Data from archival recorders are analyzed months after the activity of interest, so assessment occurs long after the events and mitigation of those activities is impossible. Towed hydrophone arrays
suffer from nearby ship and seismic air gun noise, and they require substantial on-board human and computing resources. This work has
developed an acoustic data acquisition, processing, and transmission system for use on a Wave Glider, to overcome these limitations by
providing near real-time marine mammal acoustic data from a portable and persistent autonomous platform. Sea tests have demonstrated
the proof-of-concept with the system recording four channels of acoustic data and transmitting portions of those data via satellite. The
system integrates a detection-classification algorithm on-board, and a beam-forming algorithm in the shore-side user interface, to provide a user with aural and visual review tools for the detected sounds. Results from a two-week deployment in Cape Cod Bay will be
presented and future development directions will be discussed.
9:10
2aAB3. Shelf-scale mapping of fish sound production with ocean gliders. David Mann (Loggerhead Instruments Inc., 6576 Palmer
Park Circle, Sarasota, FL 34238, dmann@loggerhead.com), Carrie Wall (Univ. of Colorado at Boulder, Boulder, CO), Chad Lembke,
Michael Lindemuth (College of Marine Sci., Univ. of South Florida, St. Petersburg, FL), Ruoying He (Dept. Marine, Earth, and Atmospheric Sci., NC State Univ., Raleigh, NC), Chris Taylor, and Todd Kellison (Beaufort Lab., NOAA Fisheries, Beaufort, NC)
Ocean gliders are a powerful platform for collecting large-scale data on the distribution of sound-producing animals while also collecting environmental data that may influence their distribution. Since 2009, we have performed extensive mapping on the West Florida
Shelf with ocean gliders equipped with passive acoustic recorders. These missions have revealed the distribution of red grouper as well
as identified several unknown sounds likely produced by fishes. In March 2014, we ran a mission along the shelf edge from Cape Canaveral, FL to North Carolina to map fish sound production. The Gulf Stream and its strong currents necessitated a team effort with ocean
modeling to guide the glider successfully to two marine protected areas. This mission also revealed large distributions of unknown
sounds, especially on the shallower portions of the shelf. Gliders provide valuable spatial coverage, but because they are moving and
most fish have strong diurnal sound production patterns, data analysis on presence and absence must be made carefully. In many of these
cases, it is best to use a combination of platforms, including fixed recorders and ocean profilers to measure temporal patterns of sound
production.
9:30
2aAB4. The use of passively drifting acoustic recorders for bioacoustic sensing. Jay Barlow, Emily Griffiths, and Shannon Rankin
(Marine Mammal and Turtle Div., NOAA-SWFSC, 8901 La Jolla Shores Dr., La Jolla, CA 92037, jay.barlow@noaa.gov)
Passively drifting recording systems offer several advantages over autonomous underwater or surface vessels for mobile bioacoustic
sensing in the sea. Because they lack of any propulsion, self noise is minimized. Also, vertical hydrophone arrays are easy to implement,
which is useful in estimating the distance to specific sound sources. We have developed an inexpensive (<$5000) Drifting Acoustic
Spar Buoy Recorder (DASBR) that features up to 1 TB of stereo recording capacity and a bandwidth of 10 Hz–96 kHz. Given their low
cost, many more recorders can be deployed to achieve greater coverage. The audio and GPS recording system floats at the surface, and
the two hydrophones (at 100 m) are decoupled from wave action by a damper disk and an elastic cord. During a test deployment in the Catalina Basin (November 2013), we collected approximately 1200 hours of recordings using 5 DASBRs recording at a 192 kHz sampling rate. Each recorder was recovered (using GPS and VHF locators) and re-deployed 3–4 times. Dolphin whistles and echolocation clicks were detectable during approximately half of the total recording time. Cuvier's beaked whales were also detected on three occasions. Cetacean density estimation and ocean noise measurements are just two of many potential uses for free-drifting recorders.
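As a quick consistency check on the quoted capacity (not part of the abstract): at a 192 kHz sampling rate, two channels, and an assumed 16-bit sample depth, 1 TB corresponds to roughly 360 hours of recording, comfortably accommodating the ~240 hours per recorder implied by 1200 hours over 5 DASBRs.

```python
# Storage-budget check for a stereo recorder such as the DASBR.
# The abstract gives 192 kHz sampling, 2 channels, and 1 TB capacity;
# a 16-bit sample depth is assumed here.
fs = 192_000            # samples/s per channel
channels = 2
bytes_per_sample = 2    # 16-bit PCM (assumption)

bytes_per_hour = fs * channels * bytes_per_sample * 3600
capacity_hours = 1e12 / bytes_per_hour     # decimal terabyte

print(f"{bytes_per_hour / 1e9:.2f} GB per hour")   # → 2.76 GB per hour
print(f"{capacity_hours:.0f} hours per 1 TB")      # → 362 hours per 1 TB
```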
9:50
2aAB5. Small cetacean monitoring from surface and underwater autonomous vehicles. Douglas M. Gillespie, Mark Johnson (Sea
Mammal Res. Unit, Univ. of St. Andrews, Gatty Marine Lab., St Andrews, Fife KY16 8LB, United Kingdom, dg50@st-andrews.ac.uk),
Danielle Harris (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, St. Andrews, Fife, United Kingdom), and
Kalliopi Gkikopoulou (Sea Mammal Res. Unit, Univ. of St. Andrews, St. Andrews, United Kingdom)
We present results of passive acoustic surveys conducted from three types of autonomous marine vehicles: two submarine gliders and a surface wave-powered vehicle. Submarine vehicles have the advantage of operating at depth, which has the potential to increase detection rates for some species. However, surface vehicles equipped with solar panels have the capacity to carry a greater payload and currently allow more on-board processing, which is of particular importance for high-frequency odontocete species. Surface vehicles are also better suited to operation in shallow or coastal waters. We describe the hardware and software packages developed for each vehicle type and give examples of the types of data retrieved, both through real-time telemetry and recovered post-deployment. High-frequency echolocation clicks and whistles have been successfully detected from all vehicles. Noise levels varied considerably between vehicle types, though all were subject to a degree of mechanical noise from the vehicle itself.
2117
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2117
10:10–10:35 Break
10:35
2aAB6. A commercially available sound acquisition and processing board for autonomous passive acoustic monitoring platforms.
Haru Matsumoto, Holger Klinck, David K. Mellinger (CIMRS, Oregon State Univ., 2115 SE OSU Dr., Newport, OR 97365, haru.matsumoto@oregonstate.edu), and Chris Jones (Embedded Ocean Systems, Seattle, WA)
The U.S. Navy is required to monitor marine mammal populations in U.S. waters to comply with regulations issued by federal agencies. Oregon State University and Embedded Ocean Systems (EOS) co-developed a passive acoustic data acquisition and processing board called the Wideband Intelligent Signal Processor and Recorder (WISPR). This low-power, small-footprint system is suitable for autonomous platforms with limited battery capacity and space, including underwater gliders and profiler floats. It includes a high-performance digital signal processor (DSP) running the uClinux operating system, providing extensive flexibility for users to configure or reprogram the system's operation. With multiple WISPR-equipped mobile platforms strategically deployed in an area of interest, operators
on land or at sea can now receive information in near-real time about the presence of protected species in the survey area. In April 2014,
WISPR became commercially available via EOS. We are implementing WISPR in the Seaglider and will conduct a first evaluation test
off the coast of Oregon in September. System performance, including system noise interference, flow noise, power consumption, and
file compression rates in the data-logging system, will be discussed. [Funding from the US Navy’s Living Marine Resources Program.]
10:55
2aAB7. Glider-based passive acoustic marine mammal detection. John Hildebrand, Gerald L. D’Spain, and Sean M. Wiggins
(Scripps Inst. of Oceanogr., UCSD, Mail Code 0205, La Jolla, CA 92093, jhildebrand@ucsd.edu)
Passive acoustic detection of delphinid sounds using the Wave Glider (WG) autonomous near-surface vehicle was compared with a
fixed bottom-mounted autonomous broadband system, the High-frequency Acoustic Recording Package (HARP). A group of whistling
and clicking delphinids was tracked using an array of bottom-mounted HARPs, providing ground-truth for detections from the WG.
Whistles in the 5–20 kHz band were readily detected by the bottom HARPs as the delphinids approached, but the WG revealed only a brief period of intense detections once the animals approached within ~500 m. Refraction due to acoustic propagation in the thermocline
provides an explanation for why the WG may only detect whistling delphinids at close range relative to the long-range detection capabilities of the bottom-mounted HARPs. This work demonstrated that sound speed structure plays an important role in determining detection
range for high-frequency-calling marine mammals by autonomous gliders and bottom-mounted sensors.
Contributed Papers
11:15
2aAB8. Acoustic seagliders for monitoring marine mammal populations. Lora J. Van Uffelen (Ocean and Resources Eng., Univ. of Hawaii at Manoa, 1000 Pope Rd., MSB 205, Honolulu, HI 96815, loravu@hawaii.edu), Erin Oleson (Cetacean Res. Program, NOAA Pacific Islands Fisheries Sci. Ctr., Honolulu, HI), Bruce Howe, and Ethan Roth (Ocean and Resources Eng., Univ. of Hawaii at Manoa, Honolulu, HI)
A DMON digital acoustic monitoring device has been integrated into a Seaglider with the goal of passive, persistent acoustic monitoring of cetacean populations. The system makes acoustic recordings as it travels in a sawtooth pattern between the surface and depths of up to 1000 m. It includes three hydrophones, located in the center of the instrument and on each wing. An onboard real-time detector has been implemented to record continuously after ambient noise has risen above a signal-to-noise ratio (SNR) threshold, and the glider transmits power spectra of recorded data back to a shore-station computer via Iridium satellite after each dive. The glider pilot can set parameters that govern the amount of data recorded, thus managing data storage and therefore the length of a mission. This system was deployed in the vicinity of the Hawaiian Islands to detect marine mammals as an alternative or complement to conventional ship-based survey methods. System design and implementation will be described and preliminary results will be presented.
11:30
2aAB9. Prototype of a linear array on an autonomous surface vehicle for the register of dolphin displacement patterns within a shallow bay. Eduardo Romero-Vivas, Fernando D. Von Borstel-Luna (CIBNOR, Instituto Politecnico Nacional 195, Playa Palo de Santa Rita Sur, La Paz, BCS 23090, Mexico, evivas@cibnor.mx), Omar A. Bustamante, Sergio Beristain (Acoust. Lab, ESIME, IPN, IMA, Mexico City, Mexico), Miguel A. Porta-Gandara, Francisco Villa Medina, and Joaquín Gutiérrez-Jagüey (CIBNOR, La Paz, BCS, Mexico)
A semi-resident population of Tursiops has been reported in the south of La Paz Bay in Baja California Sur, Mexico, where specific zones for social, feeding, and resting behaviors have been detected. Nevertheless, increasing human activities and new construction are believed to have shifted the areas of their main activity. It therefore becomes important to study the displacement patterns of dolphins within the bay and their spatial relationship to maritime traffic and other sources of anthropogenic noise. A prototype of an Autonomous Surface Vehicle (ASV) designed for shallow-water bathymetry has been adapted to carry a linear array of hydrophones previously reported for the localization of dolphins from their whistles. Conventional beamforming algorithms and electrical steering are used to find the Direction of Arrival (DOA) of the sound sources. The left-right ambiguity typical of a linear array, and the front-back lobes for sound sources located at end-fire, can be resolved by the trajectory of the ASV. Georeferenced positions and bearings of the array, provided by the Inertial Measurement Unit of the ASV, along with DOAs from various positions, allow triangulating and mapping the sound sources. Results from both controlled experiments using georeferenced known sources and field trials within the bay are presented.
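The final triangulation step of 2aAB9 (intersecting georeferenced bearings taken from different vehicle positions to fix a source) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the positions, bearings, and local planar frame are hypothetical, and the left-right ambiguity is assumed to be already resolved by the vehicle trajectory.

```python
import numpy as np

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing lines to fix a sound source.
    p1, p2: (x, y) observation positions in a local metric frame.
    b1, b2: absolute bearings in radians, clockwise from +y (north),
    e.g., vehicle heading plus beamformer DOA.
    Returns the intersection, or None for near-parallel bearings."""
    d1 = np.array([np.sin(b1), np.cos(b1)])
    d2 = np.array([np.sin(b2), np.cos(b2)])
    A = np.column_stack([d1, -d2])      # solve p1 + t1*d1 = p2 + t2*d2
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Source north-east of the first position, due north of the second:
fix = triangulate((0.0, 0.0), np.radians(45.0), (100.0, 0.0), np.radians(0.0))
print(fix)  # → approximately [100. 100.]
```

In practice each bearing would carry measurement error, so fixes from many positions along the trajectory would be combined (e.g., by least squares) rather than intersected pairwise.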
11:45
2aAB10. High-frequency observations from mobile autonomous platforms. Holger Klinck, Haru Matsumoto, Selene Fregosi, and David K. Mellinger (Cooperative Inst. for Marine Resources Studies, Oregon State Univ., Hatfield Marine Sci. Ctr., 2030 SE Marine Sci. Dr., Newport, OR 97365, Holger.Klinck@oregonstate.edu)
With increased human use of US coastal waters—including use by renewable energy activities such as the deployment and operation of wind, wave, and tidal energy converters—the issue of potential negative impacts on coastal ecosystems arises. Monitoring these areas efficiently for marine mammals is challenging. Recreational and commercial activities (e.g., fishing) can hinder long-term operation of fixed moored instruments. Additionally, these shallow waters are often utilized by high-frequency cetaceans (e.g., harbor porpoises), which can only be acoustically detected over short distances of a few hundred meters. Mobile acoustic platforms are a useful tool to survey these areas of concern with increased temporal and spatial resolution compared to fixed systems and towed arrays. A commercially available acoustic recorder (type Song Meter SM2+, Wildlife Acoustics, Inc.) featuring sampling rates up to 384 kHz was modified and implemented on an autonomous underwater vehicle (AUV) as well as an unmanned surface vehicle (USV) and tested in the field. Preliminary results indicate that these systems are effective at detecting the presence of high-frequency cetaceans such as harbor porpoises. Potential applications, limitations, and future directions of this technology will be discussed. [Project partly supported by ONR and NOAA.]
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA G, 8:25 A.M. TO 12:00 NOON
Session 2aAO
Acoustical Oceanography, Underwater Acoustics, and Signal Processing in Acoustics: Parameter Estimation in Environments That Include Out-of-Plane Propagation Effects
Megan S. Ballard, Cochair
Applied Research Laboratories, The University of Texas at Austin, P.O. Box 8029, Austin, TX 78758
Timothy F. Duda, Cochair
Woods Hole Oceanographic Institution, WHOI AOPE Dept. MS 11, Woods Hole, MA 02543
Chair's Introduction—8:25
Invited Papers
8:30
2aAO1. Estimating waveguide parameters using horizontal and vertical arrays in the vicinity of horizontal Lloyd's mirror in shallow water. Mohsen Badiey (College of Earth, Ocean, and Environment, Univ. of Delaware, 261 S. College Ave., Robinson Hall, Newark, DE 19716, badiey@udel.edu)
When shallow-water internal waves approach a source-receiver track, the interference between the direct and horizontally refracted acoustic paths from a broadband acoustic source was previously shown to form a horizontal Lloyd's mirror [Badiey et al., J. Acoust. Soc. Am. 128(4), EL141–EL147 (2011)]. While the modal interference structure in the vertical plane may reveal the arrival time of the out-of-plane refracted acoustic wavefront, analysis of the moving interference pattern along the horizontal array allows measurement of the angle of horizontal refraction and the speed of the nonlinear internal wave (NIW) in the horizontal plane. In this paper we present a full account of the movement of an NIW toward a source-receiver track and of how the received acoustic signal on an L-shaped array can be used to estimate basic parameters of the waveguide and to obtain related temporal and spatial coherence functions, particularly in the vicinity of the formation of the horizontal Lloyd's mirror. Numerical results using vertical modes and horizontal rays, as well as 3D PE calculations, are carried out to explain the experimental observations. [Work supported by ONR 322OA.]
8:50
2aAO2. Slope inversion in a single-receiver context for three-dimensional wedge-like environments. Frederic Sturm (LMFA (UMR 5509 ECL-UCBL1-INSA de Lyon), Ecole Centrale de Lyon, Ctr. Acoustique, 36, Ave. Guy de Collongue, Ecully 69134, France, frederic.sturm@ec-lyon.fr) and Julien Bonnel (Lab-STICC (UMR CNRS 6285), ENSTA Bretagne, Brest Cedex 09, France)
In a single-receiver context, time-frequency (TF) analysis can be used to analyze the modal dispersion of low-frequency broadband sound pulses in shallow-water oceanic environments. In a previous work, TF analysis was used to study the propagation of low-frequency broadband pulses in three-dimensional (3-D) shallow-water wedge waveguides. Of particular interest is that TF analysis turns out to be a suitable tool to better understand, illustrate, and visualize 3-D propagation effects for such wedge-like environments. In the present work, it is shown that TF analysis can also be used at the core of an inversion scheme to estimate the slope of the seabed in a
same single-hydrophone receiving configuration and for similar 3-D wedge-shaped waveguides. The proposed inversion algorithm, based on a masking process, focuses on specific parts of the TF domain where modal energy is concentrated. The criterion used to quantify the match between the received signal and replicas generated by a fully 3-D parabolic equation code is defined as the amount of measured time-frequency energy integrated inside the masks. Its maximization is obtained using an exhaustive search. The method is first benchmarked on numerical simulations and then successfully applied to experimental small-scale data.
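The mask-based matching criterion can be sketched generically: integrate the received signal's time-frequency energy inside a mask predicted for each candidate slope, and keep the candidate that captures the most energy. In the sketch below a toy linear chirp stands in for the modally dispersed pulse, and a simple frequency-track band stands in for the 3-D PE replica masks, so only the exhaustive-search structure reflects the abstract; all names and parameter values are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram

def masked_energy(sig, fs, mask_fn):
    """Sum of spectrogram magnitude inside a boolean TF mask."""
    f, t, S = spectrogram(sig, fs=fs, nperseg=256, noverlap=192)
    return np.sum(np.abs(S) * mask_fn(f[:, None], t[None, :]))

def invert_slope(sig, fs, candidates, mask_for):
    """Exhaustive search: keep the candidate whose predicted TF mask
    captures the most energy from the received signal."""
    scores = [masked_energy(sig, fs, mask_for(c)) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy stand-in for the dispersed pulse: a downward chirp whose sweep
# rate plays the role of the unknown seabed slope.
fs = 4096
t = np.arange(0, 1.0, 1 / fs)
true_rate = 300.0  # Hz/s
sig = np.cos(2 * np.pi * (800.0 * t - 0.5 * true_rate * t**2))

def mask_for(rate, bw=25.0):
    # Band around the modeled instantaneous-frequency track 800 - rate*t
    return lambda f, tt: np.abs(f - (800.0 - rate * tt)) < bw

candidates = np.arange(100.0, 501.0, 50.0)
est = invert_slope(sig, fs, candidates, mask_for)
print(est)  # → 300.0
```

The exhaustive search is affordable here because the parameter space is one-dimensional (the slope); the abstract's criterion likewise requires only forward-model masks, not gradients.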
9:10
2aAO3. Effects of environmental uncertainty on source range estimates from horizontal multipath. Megan S. Ballard (Appl. Res.
Labs., The Univ. of Texas at Austin, P.O. Box 8029, Austin, TX 78758, meganb@arlut.utexas.edu)
A method has been developed to estimate source range in continental shelf environments that exhibit three-dimensional propagation
effects [M. S. Ballard, J. Acoust. Soc. Am. 134, EL340–EL343, 2013]. The technique exploits measurements recorded on a horizontal
line array of a direct path arrival, which results from sound propagating across the shelf to the receiver array, and a refracted path arrival,
which results from sound propagating obliquely upslope and refracting back downslope to the receiver array. A hybrid modeling
approach using vertical modes and horizontal rays provides the ranging estimate. According to this approach, rays are traced in the horizontal plane with refraction determined by the modal phase speed. Invoking reciprocity, the rays originate from the center of the array
and have launch angles equal to the estimated bearing angles of the direct and refracted paths. The location of the source in the horizontal plane is estimated from the point where the rays intersect. In this talk, the effects of unknown environmental parameters, including
the sediment properties and the water-column sound-speed profile, on the source range estimate are discussed. Error resulting from
uncertainty in the measured bathymetry and location of the receiver array will also be addressed. [Work supported by ONR.]
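The vertical-modes/horizontal-rays ranging idea above can be illustrated with a toy horizontal ray trace: a ray launched from the array center obliquely upslope refracts in a laterally varying modal phase-speed field and bends back toward the direct bearing, and by reciprocity the source estimate lies where it re-crosses that bearing line. The gradient, step size, and launch angle below are assumed for illustration only; the actual method uses measured bearings and modeled modal phase speeds.

```python
import numpy as np

def trace_ray(theta0, c_fn, grad_fn, ds=5.0, n=4000):
    """Trace a ray in the horizontal plane through a laterally varying
    modal phase-speed field c(x, y), using the 2-D ray equation
    d(theta)/ds = (sin(theta) dc/dx - cos(theta) dc/dy) / c."""
    x, y, th = 0.0, 0.0, theta0
    path = [(x, y)]
    for _ in range(n):
        c = c_fn(x, y)
        dcdx, dcdy = grad_fn(x, y)
        th += ds * (np.sin(th) * dcdx - np.cos(th) * dcdy) / c
        x += ds * np.cos(th)
        y += ds * np.sin(th)
        path.append((x, y))
    return np.array(path)

# Toy shelf: modal phase speed increases upslope (+y), so a ray
# launched obliquely upslope refracts back down toward y = 0.
c0, g = 1500.0, 0.1  # m/s and (m/s)/m -- assumed illustrative values
c_fn = lambda x, y: c0 + g * max(y, 0.0)
grad_fn = lambda x, y: (0.0, g if y > 0 else 0.0)

# Direct path lies along the x axis; the refracted path is launched
# 25 deg upslope and re-crosses the direct bearing at the source range.
path = trace_ray(np.radians(25.0), c_fn, grad_fn)
ys = path[:, 1]
k = int(np.where((ys[:-1] > 0) & (ys[1:] <= 0))[0][0])
frac = ys[k] / (ys[k] - ys[k + 1])
x_src = path[k, 0] + frac * (path[k + 1, 0] - path[k, 0])
print(f"range estimate along the direct bearing: {x_src / 1000:.1f} km")
```

For this linear speed profile the ray is a near-circular arc, so the crossing range agrees with the analytic value 2(c0/(g cos θ0)) sin θ0 ≈ 14 km, which checks the forward-Euler trace.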
Contributed Papers
9:30
2aAO4. Acoustical observation of the estuarine salt wedge at low-to-mid-frequencies. D. Benjamin Reeder (Oceanogr., Naval Postgrad. School, 73 Hanapepe Loop, Honolulu, HI 96825, dbreeder@nps.edu)
The estuarine environment often hosts a salt wedge, the stratification of which is a function of the tide's range and speed of advance, river-discharge volumetric flow rate, and river-mouth morphology. Competing effects of temperature and salinity on sound speed control the degree of acoustic refraction occurring along an acoustic path. A field experiment was carried out in the Columbia River to test the hypothesis that the estuarine salt wedge is acoustically observable in terms of low-to-mid-frequency acoustic propagation. Linear frequency-modulated (LFM) acoustic signals in the 500–2000 Hz band were collected during the advance and retreat of the salt wedge during May 27–28, 2013. Results demonstrate that the three-dimensional salt wedge front is the dominant physical feature controlling acoustic propagation in this environment: received signal energy is relatively stable under single-medium conditions before and after the passage of the salt wedge front, but suffers a 10–15 dB loss as well as increased variance during front passage due to 3D refraction and scattering. Physical parameters (i.e., temperature, salinity, current, and turbulence) and acoustic propagation modeling corroborate and inform the acoustic observations.
9:45
2aAO5. A hybrid approach for estimating range-dependent properties of shallow water environments. Michael Taroudakis and Costas Smaragdakis (Mathematics and Appl. Mathematics & IACM, Univ. of Crete and FORTH, Knossou Ave., Heraklion 71409, Greece, taroud@math.uoc.gr)
A hybrid approach based on statistical signal characterization and a linear inversion scheme for the estimation of range-dependent sound-speed profiles of compact support in shallow water is presented. The approach is appropriate for ocean acoustic tomography when only a single receiver is available, as the first stage of the method is based on the statistical characterization of a single reception: a wavelet transform associates the signal with a set of parameters describing the statistical features of its wavelet sub-band coefficients. A non-linear optimization algorithm is then applied to associate these features with a range-dependent sound-speed profile in the water column. This inversion method is restricted to cases where the range dependency is of compact support. At the second stage, a linear inversion scheme based on the identification of modal arrivals and a first-order perturbation formula relating sound-speed differences to modal travel-time perturbations is applied to fine-tune the results obtained by the optimization scheme. A restriction of this second stage is that mode identification is necessary. If this requirement is fulfilled, the whole scheme may be applied in ocean acoustic tomography for the retrieval of three-dimensional features by combining inversion results at various slices.
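The competing temperature and salinity effects described in 2aAO4 can be made concrete with a standard empirical sound-speed formula. The sketch below uses Medwin's simplified equation, chosen here for illustration (the abstract does not state which sound-speed equation was used), with hypothetical river and wedge water properties.

```python
def sound_speed(T, S, z):
    """Medwin's simplified formula for sound speed in seawater (m/s).
    T: temperature (deg C), S: salinity (psu), z: depth (m)."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Warm fresh river water vs. a cooler but salty intruding wedge
# (illustrative values, not measurements from the experiment):
river = sound_speed(T=15.0, S=0.0, z=2.0)
wedge = sound_speed(T=10.0, S=32.0, z=8.0)
print(f"river: {river:.1f} m/s, wedge: {wedge:.1f} m/s")
# → river: 1465.2 m/s, wedge: 1486.4 m/s
```

Despite being 5 °C cooler, the salt wedge here is about 21 m/s faster: over these ranges the salinity term dominates the temperature term, which is the refraction-controlling competition the abstract describes.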
10:00–10:15 Break
Invited Papers
10:15
2aAO6. Three-dimensional acoustics in basin scale propagation. Kevin D. Heaney (OASIS Inc., 11006 Clara Barton Dr., Fairfax Station, VA 22039, oceansound04@yahoo.com) and Richard L. Campbell (OASIS Inc., Seattle, WA)
Long-range, basin-scale acoustic propagation has long been considered a deep-water problem, well represented by the two-dimensional (range/depth) numerical solution of the wave equation. Ocean acoustic tomography has even recently been demonstrated to be insensitive to the three-dimensional effects of refraction and diffraction (Dushaw, JASA 2014). For frequencies below 50 Hz, where volume attenuation is negligible, the approximation that all propagation of significance is in the plane begins to break down. When examining very long-range propagation in situations where the source and receiver are not specifically selected for open-water paths, 3D effects can dominate. Seamounts and bathymetric rises cause both refraction away from the shallowing seafloor and diffraction behind sharp edges. In
this paper, a set of recent observations, many from the International Monitoring System (IMS) of the United Nations Comprehensive Test Ban Treaty Organization (CTBTO), will be presented, demonstrating observations that are not well explained by Nx2D acoustic propagation. The Peregrine PE model, a recent recoding of RAM in C, has been extended to include 3D split-step Padé propagation and will be used to demonstrate how 3D acoustic propagation effects help explain some of the observations.
10:35
2aAO7. Sensitivity analysis of three-dimensional sound pressure fields in complex underwater environments. Ying-Tsong Lin
(Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Bigelow 213, MS#11, WHOI, Woods Hole, MA 02543, ytlin@whoi.
edu)
A sensitivity kernel for sound pressure variability due to variations in the index of refraction is derived from a higher-order three-dimensional (3D) split-step parabolic-equation (PE) solution of the Helmholtz equation. In this study, the kernel is used to compute the acoustic sensitivity field between a source and a receiver in a 3D underwater environment, and to quantify how much a change in the medium can affect received acoustic signals. Using the chain rule, the dynamics of sensitivity fields can be connected to the dynamics of ocean processes. This talk will present numerical examples of sound propagation in submarine canyons and on continental slopes, where the ocean dynamics cause strong spatial and temporal variability in sound pressure. Using the sensitivity-kernel technique, we can analyze the spatial distribution and the temporal evolution of the acoustic sensitivity fields in these geologically and topographically complex environments. The paper will also discuss other applications of this sound pressure sensitivity kernel, including uncertainty quantification of transmission-loss prediction and adjoint models for 3D acoustic inversions. [Work supported by the ONR.]
10:55
2aAO8. Sensitivity analysis of the image source method to out-of-plane effects. Samuel Pinson (Laboratório de Vibrações e Acústica, Universidade Federal de Santa Catarina, LVA, Depto. de Engenharia Mecânica, UFSC, Bairro Trindade, Florianópolis, SC 88040-900, Brazil, samuelpinson@yahoo.fr) and Charles W. Holland (Penn State Univ., State College, PA)
In the context of seafloor characterization, the image source method is a technique to estimate the sediment sound-speed profile from
broadband seafloor reflection data. Recently the method has been extended to treat non-parallel layering of the sediment stack. In using
the method with measured data, the estimated sound-speed profiles are observed to exhibit fluctuations. These fluctuations may be partially due to violation of several assumptions: (1) that the layer interfaces are smooth with respect to the wavelength, and (2) that out-of-plane effects are negligible. In order to better understand the impact of these effects, the sensitivity of the image source method to roughness and out-of-plane effects is examined.
Contributed Papers
11:15
2aAO9. Results of matched-field inversion in a three-dimensional oceanic environment ignoring horizontal refraction. Frederic Sturm (LMFA (UMR 5509 ECL-UCBL1-INSA de Lyon), Ecole Centrale de Lyon, Ctr. Acoustique, 36, Ave. Guy de Collongue, Ecully 69134, France, frederic.sturm@ec-lyon.fr) and Alexios Korakas (Lab-STICC (UMR 6285), ENSTA Bretagne, Brest Cedex 09, France)
For practical reasons, inverse problems in ocean acoustics are often based on 2-D modeling of sound propagation, hence ignoring 3-D propagation effects. However, acoustic propagation in shallow-water environments, such as the continental shelf, may be strongly affected by 3-D effects, thus requiring 3-D modeling to be accounted for. In the present talk, the feasibility and the limits of an inversion in fully 3-D oceanic environments assuming 2-D propagation are investigated. A simple matched-field inversion procedure implemented in a Bayesian framework and based on exhaustive search of the parameter space is used. The study is first carried out on a well-established wedge-like synthetic test case, which exhibits well-known 3-D effects. Both synthetic data and replicas are generated using a parabolic-equation-based code. This approach highlights the relevance of using 2-D propagation models when inversions are performed at relatively short ranges from the source. On the other hand, important mismatch occurs when inverting at farther ranges, demonstrating that the use of fully 3-D forward models is required. Results of inversion on experimental small-scale data, based on a subspace approach suggested by the preliminary study on the synthetic test case, are presented.
11:30
2aAO10. Measurements of sea surface effects on the low-frequency acoustic propagation in shallow water. Altan Turgut, Marshall H. Orr (Acoust. Div., Naval Res. Lab., Code 7161, Washington, DC 20375, altan.turgut@nrl.navy.mil), and Jennifer L. Wylie (Fellowships Office, National Res. Council, Washington, DC)
In shallow water, spatial and temporal variability of the water column often restricts accurate estimation of bottom properties from low-frequency acoustic data, especially under highly active oceanographic conditions during the summer. These effects are reduced under winter conditions, which have a more uniform sound-speed profile. However, during the RAGS03 winter experiment, significant low-frequency (200–500 Hz) acoustic signal degradation was observed on the New Jersey Shelf, especially in the presence of frequently occurring winter storms. Both in-plane and out-of-plane propagation effects were observed on three moored VLAs and one bottom-moored HLA. These effects were further analyzed using 3-D PE simulations with inputs from a 3-D time-evolving surface gravity wave model. It is shown that higher-order acoustic modes are strongly scattered at high sea states, and that out-of-plane propagation effects become important when surface-wave fronts are parallel to the acoustic propagation track. In addition, 3-D propagation effects on source localization and geoacoustic inversions are investigated using the VLA data with and without the presence of winter storms. [Work supported by ONR.]
11:45
2aAO11. Effects of sea surface roughness on the mid-frequency acoustic pulse decay in shallow water. Jennifer Wylie (National Res. Council, 6141 Edsall Rd., Apt. H, Alexandria, VA 22304, jennie.wylie@gmail.com) and Altan Turgut (Acoust. Div., Naval Res. Lab., Washington, DC)
Recent and ongoing efforts to characterize seabed parameters from measured acoustic pulse decay have neglected the effects of sea surface roughness. In this paper, these effects are investigated using a rough-surface version of RAMPE, RAMSURF, and random rough-surface realizations calculated from a 2D JONSWAP sea surface spectrum with directional spreading. Azimuthal dependence is investigated for sandy bottoms, and it is found that the rate of pulse decay increases when the surface-wave fronts are perpendicular to the path of acoustic propagation, and that higher significant wave height results in higher decay rates. Additionally, the effects of sea surface roughness are found to vary with waveguide parameters including, but not limited to, the sound-speed profile, water depth, and seabed properties. Of particular interest are the combined effects of seabed properties and rough sea surfaces. It is shown that when clay-like sediments are present, higher-order modes are strongly attenuated and effects due to interaction with the rough sea surface are less pronounced. Finally, possible influences of sea state and 3D out-of-plane propagation effects on seabed characterization efforts will be discussed. [Work supported by ONR.]
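The rough-surface realizations used in 2aAO11 are drawn from a JONSWAP spectrum; a one-dimensional version (omitting the directional spreading used in the paper) can be sketched as a random-phase sum of cosines. The significant wave height, peak period, frequency band, and variance-normalization choice below are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def jonswap(omega, Tp, gamma=3.3, g=9.81):
    """Unnormalized 1-D JONSWAP spectral shape S(omega)."""
    wp = 2 * np.pi / Tp
    sigma = np.where(omega <= wp, 0.07, 0.09)
    r = np.exp(-((omega - wp) ** 2) / (2 * sigma**2 * wp**2))
    return g**2 / omega**5 * np.exp(-1.25 * (wp / omega) ** 4) * gamma**r

def surface_realization(x, Hs, Tp, n_modes=512, seed=0):
    """Random rough-surface realization as a random-phase sum of
    cosines, with total variance normalized to (Hs/4)^2."""
    rng = np.random.default_rng(seed)
    g = 9.81
    wp = 2 * np.pi / Tp
    omega = np.linspace(0.4 * wp, 6.0 * wp, n_modes)
    dw = omega[1] - omega[0]
    S = jonswap(omega, Tp)
    S *= (Hs / 4.0) ** 2 / (S.sum() * dw)   # variance normalization
    k = omega**2 / g                        # deep-water dispersion
    amp = np.sqrt(2.0 * S * dw)
    phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    return (amp[:, None] * np.cos(k[:, None] * x[None, :]
                                  + phase[:, None])).sum(axis=0)

x = np.linspace(0.0, 2000.0, 4001)  # 2-km transect at 0.5-m spacing
eta = surface_realization(x, Hs=3.0, Tp=9.0)
print(f"rms surface elevation: {eta.std():.2f} m (target Hs/4 = 0.75 m)")
```

A realization such as `eta` would then replace the flat pressure-release boundary in a rough-surface PE run; the 2-D directional case adds a spreading function over wave heading before drawing the random phases.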
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA A/B, 7:55 A.M. TO 12:10 P.M.
Session 2aBA
Biomedical Acoustics: Quantitative Ultrasound I
Michael Oelze, Cochair
UIUC, 405 N. Mathews, Urbana, IL 61801
Jonathan Mamou, Cochair
F. L. Lizzi Center for Biomedical Engineering, Riverside Research, 156 William St., 9th Floor, New York, NY 10038
Chair’s Introduction—7:55
Invited Papers
8:00
2aBA1. Myocardial tissue characterization: Myofiber-induced ultrasonic anisotropy. James G. Miller (Phys., Washington U Saint
Louis, Box 1105, 1 Brookings Dr., Saint Louis, MO 63130, james.g.miller@wustl.edu) and Mark R. Holland (Radiology and Imaging
Sci., Indiana Univ. School of Medicine, Indianapolis, IN)
One goal of this invited presentation is to illustrate the capabilities of quantitative ultrasonic imaging (tissue characterization) to determine local myofiber orientation using techniques applicable to clinical echocardiographic imaging. Investigations carried out in our laboratory in the late 1970s were perhaps the first reported studies of the impact on ultrasonic attenuation of the angle between the
incoming ultrasonic beam and the local myofiber orientation. In subsequent studies, we were able to show that the ultrasonic backscatter
exhibits a maximum and the ultrasonic attenuation exhibits a minimum when the sound beam is perpendicular to myofibers, whereas the
attenuation is maximum and the backscatter is minimum for parallel insonification. Results from our laboratory demonstrate three broad
areas of potential contribution derived from quantitative ultrasonic imaging and tissue characterization: (1) improved diagnosis and
patient management, such as monitoring alterations in regional myofiber alignment (for example, potentially in diseases such as hypertrophic cardiomyopathy), (2) improved echocardiographic imaging, such as reduced lateral wall dropout in short axis echocardiographic
images, and (3) improved understanding of myocardial physiology, such as contributing to a better understanding of myocardial twist
resulting from the layer-dependent helical configuration of cardiac myofibers. [NIH R21 HL106417.]
8:20
2aBA2. Quantitative ultrasound for diagnosing breast masses considering both diffuse and non-diffuse scatterers. James Zagzebski, Ivan Rosado-Mendez, Haidy Gerges-Naisef, and Timothy Hall (Medical Phys., Univ. of Wisconsin, 1111 Highland Ave., Rm. L1
1005, Madison, WI 53705, jazagzeb@wisc.edu)
Quantitative ultrasound augments conventional ultrasound information by providing parameters derived from scattering and attenuation properties of tissue. This presentation describes our work estimating attenuation (ATT) and backscatter coefficients (BSC), and
computing effective scatterer sizes (ESD) to differentiate benign from malignant breast masses. Radio-frequency echo data are obtained
from patients scheduled for biopsy of suspicious masses following an institutional IRB-approved protocol. A Siemens S2000 equipped with a linear array and, more recently, a volume scanner transducer is employed. Echo signal power spectra are computed from the tissue and from the same depth in a reference phantom having accurately measured acoustic properties. Ratios of the tissue-to-reference power
spectra enable tissue ATT and BSCs to be estimated. ESDs are then computed by fitting BSC-versus-frequency results to a size-dependent scattering model. A heterogeneity index (HDI) expresses variability of the ESD over the tumor area. In preliminary data from 35 patients, a Bayesian classifier incorporating ATT, ESD, and HDI successfully differentiated malignant masses from fibroadenomas. Future work focuses on analysis methods for cases in which diffuse scattering and stationary signal conditions, implicitly assumed in the power spectra calculations, are not present. This approach tests for signal coherence and generates new parameters that characterize these scattering conditions.
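The reference-phantom step described above can be sketched in a few lines: the system response cancels in the tissue-to-reference spectral ratio, and the slope of that ratio versus frequency yields the attenuation difference, assuming attenuation linear in frequency and a frequency-flat backscatter ratio. This is only an illustrative sketch; the function name and the simplified model are our assumptions, not the authors' code.

```python
def attenuation_from_spectral_ratio(freqs_mhz, s_tissue_db, s_ref_db,
                                    alpha_ref_db_cm_mhz, depth_cm):
    """Estimate tissue attenuation (dB/cm/MHz) from the tissue-to-reference
    power-spectrum ratio at one depth; the system response cancels in the
    ratio, leaving a slope set by the attenuation difference."""
    # dB ratio at each analysis frequency.
    ratio_db = [st - sr for st, sr in zip(s_tissue_db, s_ref_db)]
    # Least-squares slope of ratio vs frequency (dB/MHz).
    n = len(freqs_mhz)
    fm = sum(freqs_mhz) / n
    rm = sum(ratio_db) / n
    slope = (sum((f - fm) * (r - rm) for f, r in zip(freqs_mhz, ratio_db))
             / sum((f - fm) ** 2 for f in freqs_mhz))
    # Round-trip path is 2*depth, so slope = -2*depth*(alpha_t - alpha_ref).
    return alpha_ref_db_cm_mhz - slope / (2.0 * depth_cm)
```

With a synthetic ratio generated for a 1.0 dB/cm/MHz tissue against a 0.5 dB/cm/MHz reference at 2 cm depth, the estimator recovers 1.0 dB/cm/MHz.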
8:40
2aBA3. Quantitative ultrasound translates to human conditions. William O’Brien (Elec. and Comput. Eng., Univ. of Illinois, 405 N.
Mathews, Urbana, IL 61801, wdo@uiuc.edu)
Two QUS studies will be discussed that demonstrate significant potential for translation to human conditions. One of the studies
deals with the early detection of spontaneous preterm birth (SPTB). In a cohort of 68 adult African American women, each agreed to
undergo up to five transvaginal ultrasound examinations for cervical ultrasonic attenuation (at 5 MHz) and cervical length between 20
and 36 weeks gestation (GA). At 21 weeks GA, the women who delivered preterm had a lower mean attenuation (1.02 ± 0.16 dB/cm/MHz) than the women delivering at term (1.39 ± 0.095 dB/cm/MHz), p = 0.041. Cervical length at 21 weeks was not significantly different between groups. Attenuation risk of SPTB (1.2 dB/cm/MHz threshold at 21 weeks): specificity = 83.3%, sensitivity = 65.4%. The
other QUS study deals with the early detection of nonalcoholic fatty liver disease (NAFLD). Liver attenuation (ATN) and backscattered
coefficients (BSC) were assessed at 3 MHz and compared to the liver MR-derived fat fraction (FF) in a cohort of 106 adult subjects. At
a 5% FF cutoff (NAFLD defined as FF ≥ 5%), an ATN threshold of 0.78 dB/cm/MHz provided a sensitivity of 89% and a specificity of 84%, whereas
a BSC threshold of 0.0028/cm-sr provided a sensitivity of 92% and specificity of 96%.
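Threshold-based screening numbers like the sensitivity/specificity pairs quoted above can be reproduced from raw case data in a few lines. The sketch below is purely illustrative; the function and variable names are ours, not the study's.

```python
def sens_spec(values, labels, threshold, positive_above=True):
    """Sensitivity and specificity of a single-threshold classifier.
    labels: True for disease-positive cases. Set positive_above=False
    when low values (e.g., low cervical attenuation) indicate disease."""
    if positive_above:
        calls = [v >= threshold for v in values]
    else:
        calls = [v < threshold for v in values]
    tp = sum(c and l for c, l in zip(calls, labels))          # true positives
    tn = sum((not c) and (not l) for c, l in zip(calls, labels))  # true negatives
    p = sum(labels)
    n = len(labels) - p
    return tp / p, tn / n
```

The `positive_above` switch covers both studies above: high BSC indicating NAFLD, and low attenuation indicating preterm delivery.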
9:00
2aBA4. Quantitative-ultrasound detection of cancer in human lymph nodes based on support vector machines. Jonathan Mamou,
Daniel Rohrbach (F. L. Lizzi Ctr. for Biomedical Eng., Riverside Res., 156 William St., 9th Fl., New York, NY 10038, jmamou@rriusa.org), Alain Coron (Laboratoire d’Imagerie Biomedicale, Sorbonne Universites and UPMC Univ Paris 06 and CNRS and INSERM,
Paris, France), Emi Saegusa-Beecroft (Dept. of Surgery, Univ. of Hawaii and Kuakini Medical Ctr., Honolulu, HI), Thanh Minh Bui
(Laboratoire d’Imagerie Biomedicale, Sorbonne Universites and UPMC Univ Paris 06 and CNRS and INSERM, Paris, France), Michael
L. Oelze (BioAcoust. Res. Lab., Univ. of Illinois, Urbana-Champaign, IL), Eugene Yanagihara (Dept. of Surgery, Univ. of Hawaii and
Kuakini Medical Ctr., Honolulu, HI), Lori Bridal (Laboratoire d’Imagerie Biomedicale, Sorbonne Universites and UPMC Univ Paris 06
and CNRS and INSERM, Paris, France), Tadashi Yamaguchi (Ctr. for Frontier Medical Eng., Chiba Univ., Chiba, Japan), Junji Machi
(Dept. of Surgery, Univ. of Hawaii and Kuakini Medical Ctr., Honolulu, HI), and Ernest J. Feleppa (F. L. Lizzi Ctr. for Biomedical
Eng., Riverside Res., New York, NY)
Histological assessment of lymph nodes excised from cancer patients suffers from an unsatisfactory rate of false-negative determinations. We are evaluating high-frequency quantitative ultrasound (QUS) to detect metastatic regions in lymph nodes freshly excised from
cancer patients. Three-dimensional (3D) RF data were acquired from 289 lymph nodes of 82 colorectal-, 15 gastric-, and 70 breast-cancer patients with a custom scanner using a 26-MHz, single-element transducer. Following data acquisition, individual nodes underwent step-sectioning at 50-µm intervals to assure that no clinically significant cancer foci were missed. RF datasets were analyzed using 3D regions-of-interest that were processed to yield 13 QUS estimates, including spectral-based and envelope-statistics-based parameters. QUS estimates are associated with tissue microstructure and are hypothesized to provide contrast between non-cancerous and cancerous regions. Leave-one-out classifications, ROC curves, and areas under the ROC curve (AUC) were used to compare the performance of support vector machines (SVMs) and step-wise linear discriminant analyses (LDA). Results showed that SVM performance (AUC = 0.87) was superior to LDA performance (AUC = 0.78). These results suggest that QUS methods may provide an effective tool to guide pathologists towards suspicious regions and also indicate that classification accuracy can be improved using sophisticated and robust classification tools. [Supported in part by NIH grant CA100183.]
9:20
2aBA5. Quantitative ultrasound assessment of tumor responses to chemotherapy using a time-integrated multi-parameter
approach. Hadi Tadayyon, Ali Sadeghi-Naini, Lakshmanan Sannachi, and Gregory Czarnota (Dept. of Medical Biophys., Univ. of Toronto, 2075 Bayview Ave., Toronto, ON M4N 3M5, Canada, gregory.czarnota@sunnybrook.ca)
Radiofrequency ultrasound data were collected from 60 breast cancer patients prior to treatment and at several points during their several-month treatment, using a clinical ultrasound scanner operating with a ~7-MHz linear array probe. ACE, SAS, spectral, and BSC parameters were computed from 2 × 2 mm RF segments within the tumor region of interest (ROI) and averaged over all segments to obtain a mean value for the ROI. The results were separated into two groups—responders and non-responders—based on the ultimate clinical/pathologic response, determined from residual tumor size and tumor cellularity. Using a single-parameter approach, the best prediction of
response was achieved using the ACE parameter (76% accuracy at week 1). In general, more favorable classifications were achieved
using spectral parameter combinations (82% accuracy at week 8), compared to BSC parameter combinations (73% accuracy). Using the
multi-parameter approach, the best prediction was achieved using the set [MBF, SS, SAS, ACE] and by combining week 1 QUS data
with week 4 QUS data to predict the response at week 4, providing accuracy as high as 91%. The proposed QUS method may potentially
provide early response information and guide cancer therapies on an individual patient basis.
9:40
2aBA6. Quantitative ultrasound methods for uterine-cervical assessment. Timothy J. Hall (Medical Phys., Univ. of Wisconsin,
1005 WIMR, 1111 Highland Ave., Madison, WI 53705, tjhall@wisc.edu), Helen Feltovich (Medical Phys., Univ. of Wisconsin, Park
City, Utah), Lindsey C. Carlson, Quinton Guerrero, Ivan M. Rosado-Mendez, and Bin Huang (Medical Phys., Univ. of Wisconsin, Madison, WI)
The cervix is a remarkable organ. One of its tasks is to remain firm and “closed” (5 mm diameter cervical canal) prior to pregnancy.
Shortly after conception the cervix begins to soften through collagen remodeling and increased hydration. As the fetus reaches full-term
there is a profound breakdown in the collagen structure. At the end of this process, the cervix is as soft as warm butter and the cervical
canal has dilated to about 100 mm in diameter. Errors in the timing of this process are a cause of preterm birth, which has a cascade of life-threatening consequences. Quantitative ultrasound is well suited to monitoring these changes. We have demonstrated the ability to accurately assess the elastic properties and acoustic scattering properties (anisotropy in backscatter and attenuation) of the cervix in nonpregnant hysterectomy specimens and in third-trimester pregnancy. We have shown that acoustic and mechanical properties vary along the length of the cervix. When anisotropy and spatial variability are accounted for, there are clear differences in parameter values with subtle differences in softening. We are corroborating acoustic observations with nonlinear optical microscopy imaging for a reality
check on underlying tissue structure. This presentation will provide an overview of this effort.
10:00–10:10 Break
10:10
2aBA7. Characterization of anisotropic media with shear waves. Matthew W. Urban, Sara Aristizabal, Bo Qiang, Carolina Amador
(Dept. of Physiol. and Biomedical Eng., Mayo Clinic College of Medicine, 200 First St. SW, Rochester, MN 55905, urban.matthew@
mayo.edu), John C. Brigham (Dept. of Civil and Environ. Eng., Dept. of BioEng., Univ. of Pittsburgh, Pittsburgh, PA), Randall R. Kinnick, Xiaoming Zhang, and James F. Greenleaf (Dept. of Physiol. and Biomedical Eng., Mayo Clinic College of Medicine, Rochester,
MN)
In conventional shear wave elastography, materials are assumed to be linear, elastic, homogeneous, and isotropic. These assumptions are not always appropriate, however, and it is important to account for their violation in certain tissues. Many tissues, such as skeletal muscle, the kidney,
and the myocardium are anisotropic. Shear waves can be used to investigate the directionally dependent mechanical properties of anisotropic media. To study these tissues in a systematic way and to account for the effects of the anisotropic architecture, laboratory-based
phantoms are desirable. We will report on several phantom-based approaches for studying shear wave anisotropy, assuming that these
materials are transversely isotropic. Phantoms with embedded fibers were used to mimic anisotropic tissues. Homogeneous phantoms
were compressed to induce transverse isotropy according to the acoustoelastic phenomenon, which is related to nonlinear behavior of
the materials. The fractional anisotropy of these phantoms was quantified to compare with measurements made in soft tissues. In addition, soft tissues are also viscoelastic, and we have developed a method to model viscoelastic transversely isotropic materials with the finite element method (FEM). The viscoelastic property estimation from phantom experiments and FEM simulations will also be
discussed.
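For a transversely isotropic medium of the kind these phantoms mimic, the angle dependence of the shear-wave speed for the SH mode is commonly written as rho·c² = mu_along·cos²θ + mu_across·sin²θ, with θ measured from the fiber axis. The sketch below illustrates that relation; the function name, moduli values, and density are illustrative assumptions, not the authors' phantom parameters.

```python
import math

def sh_speed(theta_deg, mu_along_kpa, mu_across_kpa, rho=1000.0):
    """Shear-wave speed (m/s) at angle theta from the fiber axis in a
    transversely isotropic solid, using the SH-mode relation
    rho*c^2 = mu_along*cos^2(theta) + mu_across*sin^2(theta)."""
    th = math.radians(theta_deg)
    mu_pa = 1e3 * (mu_along_kpa * math.cos(th) ** 2
                   + mu_across_kpa * math.sin(th) ** 2)   # kPa -> Pa
    return math.sqrt(mu_pa / rho)
```

The two principal speeds (along and across the fibers) are what a fractional-anisotropy index is formed from; for 16 kPa along and 4 kPa across the fibers, the speeds are 4 m/s and 2 m/s.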
10:30
2aBA8. Applications of acoustic radiation force for quantitative elasticity evaluation of bladder, thyroid, and breast. Mostafa
Fatemi (Physiol. and Biomedical Eng., Mayo Clinic College of Medicine, 200 1st St. SW, Rochester, MN 55905, fatemi@mayo.edu)
Acoustic radiation force (ARF) provides a simple yet non-invasive mechanism to induce a localized stress inside the human body. The response to this excitation is used to estimate the mechanical properties of the targeted tissue in vivo. This talk presents an overview of three studies that use ARF for estimation of the elastic properties of the thyroid, breast, and bladder in patients. The studies on thyroid
and breast were aimed at differentiating between malignant and benign nodules. The study on the bladder was aimed at indirect evaluation of bladder compliance; hence, only a global measurement was needed. The study on breast showed that 16 out of 18 benign masses
and 21 out of 25 malignant masses were correctly identified. The study on 9 thyroid patients with 7 benign and 2 malignant nodules
showed all malignant nodules were correctly classified and only 2 of the 7 benign nodules were misclassified. The bladder compliance
study revealed a high correlation between our method and independent clinical measurements of compliance (R-squared of 0.8–0.9). Further investigations on larger groups of patients are needed to fully evaluate the performance of these methods.
10:50
2aBA9. Multiband center-frequency estimation for robust speckle tracking applications. Emad S. Ebbini and Dalong Liu (Elec.
and Comput. Eng., Univ. of Minnesota, 200 Union St. SE, Minneapolis, MN 55455, ebbin001@umn.edu)
Speckle tracking is widely used for the detection and estimation of minute tissue motion and deformation with applications in elastography, shear-wave imaging, thermography, etc. The center frequency of the echo data within the tracking window is an important parameter in the estimation of the tissue displacement. Local variations in this quantity due to echo mixtures (specular and speckle
components) may produce a bias in the estimation of tissue displacement using correlation-based speckle tracking methods. We present
a new algorithm for estimation and tracking of the center frequency variation in pulse-echo ultrasound as a quantitative tissue property
and for robust speckle tracking applications. The algorithm employs multiband analysis in the determination of echo mixtures as a preprocessing step before the estimation of the center frequency map. This estimate, in turn, is used to improve the robustness of the displacement map produced by the correlation-based speckle tracking. The performance of the algorithm is demonstrated in two speckle
tracking applications of interest in medical ultrasound: (1) ultrasound thermography and (2) vascular wall imaging.
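One simple estimator of the center frequency within a tracking window is the spectral centroid of the windowed echo's power spectrum. The sketch below (a naive O(N²) DFT, adequate for short tracking windows) only illustrates the quantity being tracked; it is not the authors' multiband algorithm, and the names are ours.

```python
import math

def center_frequency(x, fs):
    """Center frequency (spectral centroid, Hz) of a real echo segment x
    sampled at fs, via a naive DFT over the positive-frequency bins."""
    n = len(x)
    power, freqs = [], []
    for k in range(1, n // 2):            # skip DC and Nyquist
        re = sum(x[i] * math.cos(-2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(-2 * math.pi * k * i / n) for i in range(n))
        power.append(re * re + im * im)
        freqs.append(k * fs / n)
    total = sum(power)
    # Power-weighted mean frequency.
    return sum(f * p for f, p in zip(freqs, power)) / total
```

A pure tone lands its centroid on its own frequency; specular components mixed into the window pull the centroid away, which is exactly the bias the abstract's algorithm is designed to detect and compensate.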
11:10
2aBA10. Echo decorrelation imaging for quantification of tissue structural changes during ultrasound ablation. T. Douglas Mast,
Tyler R. Fosnight, Fong Ming Hooi, Ryan D. Keil, Swetha Subramanian, Anna S. Nagle (Biomedical Eng., Univ. of Cincinnati, 3938
Cardiovascular Res. Ctr., 231 Albert Sabin Way, Cincinnati, OH 45267-0586, doug.mast@uc.edu), Marepalli B. Rao (Environ. Health,
Univ. of Cincinnati, Cincinnati, OH), Yang Wang, Xiaoping Ren (Internal Medicine, Univ. of Cincinnati, Cincinnati, OH), Syed A.
Ahmad (Surgery, Univ. of Cincinnati, Cincinnati, OH), and Peter G. Barthe (Guided Therapy Systems/Ardent Sound, Mesa, AZ)
Echo decorrelation imaging is a pulse-echo method that maps millisecond-scale changes in backscattered ultrasound signals, potentially providing real-time feedback during thermal ablation treatments. Decorrelation between echo signals from sequential image
frames is spatially mapped and temporally averaged, resulting in images of cumulative, heat-induced tissue changes. Theoretical analysis indicates that the mapped echo decorrelation parameter is equivalent to a spatial decoherence spectrum of the tissue reflectivity, and
also provides a method to compensate decorrelation artifacts caused by tissue motion and electronic noise. Results are presented from
experiments employing 64-element linear arrays that perform bulk thermal ablation, focal ablation, and pulse-echo imaging using the
same piezoelectric elements, ensuring co-registration of ablation and image planes. Decorrelation maps are shown to correlate with
ablated tissue histology, including vital staining to map heat-induced cell death, for both ex vivo ablation of bovine liver tissue and in
vivo ablation of rabbit liver with VX2 carcinoma. Receiver operating characteristic curve analysis shows that echo decorrelation predicts
local ablation with greater success than integrated backscatter imaging. Using artifact-compensated echo decorrelation maps, heating-induced decoherence of tissue scattering media is assessed for ex vivo and in vivo ultrasound ablation by unfocused and focused beams.
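The frame-to-frame quantity being mapped can be illustrated with a zero-lag normalized correlation: decorrelation near 0 where consecutive echo frames match, rising toward 2 where the echoes invert. This minimal sketch ignores the spatial mapping, temporal averaging, and motion/noise compensation described above, and the function name is ours.

```python
def decorrelation(frame_a, frame_b):
    """1 minus the zero-lag normalized cross-correlation between two echo
    frames: ~0 for unchanged tissue, larger where scattering changed."""
    n = len(frame_a)
    ma = sum(frame_a) / n
    mb = sum(frame_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(frame_a, frame_b))
    da = sum((a - ma) ** 2 for a in frame_a)
    db = sum((b - mb) ** 2 for b in frame_b)
    return 1.0 - num / (da * db) ** 0.5
```

In practice this is evaluated in small windows across the image and accumulated over frames, so heat-induced changes integrate while random fluctuations average out.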
11:30
2aBA11. Quantitative ultrasound imaging to monitor in vivo high-intensity ultrasound treatment. Goutam Ghoshal (Res. and Development, Acoust. MedSystems Inc., 208 Burwash Ave., Savoy, IL 61874, ghoshal2@gmail.com), Jeremy P. Kemmerer, Chandra Karunakaran, Rami Abuhabshah, Rita J. Miller, and Michael L. Oelze (Elec. and Comput. Eng., Univ. of Illinois at Urbana-Champaign,
Urbana, IL)
The success of any minimally invasive treatment procedure can be enhanced significantly if combined with a robust noninvasive
quantitative imaging modality. Quantitative ultrasound (QUS) imaging has been widely investigated for monitoring various treatment
responses such as chemotherapy and thermal therapy. Previously we have shown the feasibility of using spectral based quantitative ultrasound parameters to monitor high-intensity focused ultrasound (HIFU) treatment of in situ tumors [Ultrasonic Imaging, 2014]. In the
present study, we examined the use of various QUS parameters to monitor HIFU treatment of an in vivo mouse mammary adenocarcinoma model. Spectral parameters in terms of the backscatter coefficient, integrated backscattered energy, attenuation coefficient, and effective scatterer size and concentration were estimated from radiofrequency signals during the treatment. The behavior of each parameter was compared to the temperature profile recorded by a needle thermocouple inserted into the tumor a few millimeters away from the focal zone of the intersecting HIFU and imaging transducer beams. The changes in the QUS parameters during the HIFU treatment followed trends similar to those observed in the temperature readings recorded from the thermocouple. These results suggest that QUS
techniques have the potential to be used for non-invasive monitoring of HIFU exposure.
11:50
2aBA12. Rapid simulations of diagnostic ultrasound with multiple-zone receive beamforming. Pedro Nariyoshi and Robert
McGough (Dept. of Elec. and Comput. Eng., Michigan State Univ., 2120 Eng. Bldg., East Lansing, MI 48824, mcgough@egr.msu.edu)
Routines are under development in FOCUS, the “Fast Object-oriented C++ Ultrasound Simulator” (http://www.egr.msu.edu/~fultras-web), to accelerate B-mode image simulations by combining the fast nearfield method with time-space decomposition. The most
recent addition to the FOCUS simulation model implements receive beamforming in multiple zones. To demonstrate the rapid convergence of these simulations in the nearfield region, simulations of a 192 element linear array with an electronically translated 64 element
sub-aperture are evaluated for a transient excitation pulse with a center frequency of 3 MHz. The transducers in this simulated array are
5 mm high and 0.5133 mm wide with a 0.1 mm center-to-center spacing. The simulation is evaluated for a computer phantom with
100,000 scatterers. The same configuration is simulated in Field II (http://field-ii.dk), and the impulse response approach with a temporal
sampling rate of 1 GHz is used as reference. Simulations are evaluated for the entire B-mode image simulated with each approach. The
results show that, with sampling frequencies of 15 MHz and higher, FOCUS eliminates all of the numerical artifacts that appear in the
nearfield region of the B-mode image, whereas Field II requires much higher temporal sampling frequencies to obtain similar results.
[Supported in part by NIH Grant R01 EB012079.]
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 6, 9:00 A.M. TO 11:00 A.M.
Session 2aED
Education in Acoustics: Undergraduate Research Exposition (Poster Session)
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
All posters will be on display from 9:00 a.m. to 11:00 a.m. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 9:00 a.m. to 10:00 a.m. and contributors of even-numbered papers will be at their
posters from 10:00 a.m. to 11:00 a.m.
Contributed Papers
2aED1. Prediction of pressure distribution between the vocal folds using
Bernoulli’s equation. Alexandra Maddox, Liran Oren, Sid Khosla, and
Ephraim Gutmark (Univ. of Cincinnati, 3317 Bishop St., Apt. 312, Cincinnati, OH 45219, maddoxat@mail.uc.edu)
Determining the mechanisms of self-sustained oscillation of the vocal
folds requires characterization of intraglottal aerodynamics. Since most of
the intraglottal aerodynamic forces cannot be measured experimentally, most of the current understanding of vocal fold vibration mechanisms is derived from analytical and computational models. Several such studies have used Bernoulli's equation to calculate the pressure distribution between the vibrating folds. In the current study, intraglottal pressure measurements are taken in a hemilarynx model and are compared with pressure values computed from Bernoulli's equation. The hemilarynx model was made by removing one fold and having the remaining fold
vibrating against a metal plate. The plate was equipped with two pressure
ports located near the superior and inferior aspects of the fold. The results
show that pressure calculated using Bernoulli’s equation matched well with
the measured pressure waveform during the glottal opening phase and dissociated during the closing phase.
2aED2. Effects of room acoustics on subjective workload assessment
while performing dual tasks. Brenna N. Boyd, Zhao Peng, and Lily Wang
(Eng., Univ. of Nebraska at Lincoln, 11708 s 28th St., Bellevue, NE 68123,
bnboyd@unomaha.edu)
This investigation examines the subjective workload assessments of
individuals using the NASA Task Load Index (TLX), as they performed
speech comprehension tests under assorted room acoustic conditions. This
study was motivated by the increasing diversity in US classrooms. Both
native and non-native English listeners participated, using speech comprehension test materials produced by native English speakers in the first phase
and by native Mandarin Chinese speakers in the second phase. The speech
materials were presented in an immersive listening environment to each
listener under 15 acoustic conditions, from combinations of background
noise level (three levels from RC-30, 40, and 50) and reverberation time
(five levels from 0.4 to 1.2 seconds). During each condition, participants
completed assorted speech comprehension tasks while also tracing a moving
dot for an adaptive rotor pursuit task. At the end of each acoustic condition,
listeners were asked to assess the perceived workload by completing the six-item NASA TLX survey, covering, e.g., mental demand, perceived performance,
effort, and frustration. Results indicate that (1) listeners’ workload assessments degraded as the acoustic conditions became more adverse, and (2) the
decrement in subjective assessment was greater for non-native listeners.
2aED3. Analysis and virtual modification of the acoustics in the
Nebraska Wesleyan University campus theatre auditorium. Laura C.
Brill (Dept. of Phys., Nebraska Wesleyan Univ., 5000 St. Paul Ave, Lincoln,
NE 68504, lbrill@nebrwesleyan.edu), Matthew G. Blevins, and Lily M.
Wang (Durham School of Architectural Eng. and Construction, Univ. of
Nebraska-Lincoln, Omaha, NE)
NWU’s McDonald Theatre Auditorium is used for both musical and
non-musical performances. The acoustics of the space were analyzed in
order to determine whether the space could be modified to better fit its uses.
The acoustic characteristics of the room were obtained from impulse
responses using the methods established in ISO 3382-1 for measuring the
acoustic parameters of a performance space. A total of 22 source/receiver
pairs were used. The results indicate a need for increased reverberation in
the mid to high frequency ranges of 500–8000 Hz. The experimental results
were used to calibrate a virtual model of the space in ODEON acoustics
software. Materials in the model were then successfully modified to increase
reverberation time and eliminate unwanted flutter echoes to optimize the
acoustics to better suit the intended purposes of the space.
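The ISO 3382-1 style evaluation of reverberation time from a measured impulse response can be sketched with Schroeder backward integration and a line fit over a fixed decay range. This is a T20-style estimate extrapolated to the 60 dB convention, offered as an illustrative sketch, not the software used in the study.

```python
import math

def reverberation_time(ir, fs, db_hi=-5.0, db_lo=-25.0):
    """Reverberation time (s) from an impulse response via Schroeder
    backward integration, fitting the decay between db_hi and db_lo
    and extrapolating to the 60 dB convention."""
    energy = [s * s for s in ir]
    total = sum(energy)
    # Backward-integrated energy decay curve, in dB re total energy.
    edc_db = []
    running = total
    for e in energy:
        edc_db.append(10.0 * math.log10(running / total))
        running -= e
    # Least-squares line over the evaluation range.
    pts = [(i / fs, d) for i, d in enumerate(edc_db) if db_lo <= d <= db_hi]
    n = len(pts)
    tm = sum(t for t, _ in pts) / n
    dm = sum(d for _, d in pts) / n
    slope = (sum((t - tm) * (d - dm) for t, d in pts)
             / sum((t - tm) ** 2 for t, _ in pts))
    return -60.0 / slope
```

With the 22 source/receiver pairs mentioned above, one such estimate per pair and octave band is what feeds the calibration of the ODEON model.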
2aED4. The diffraction pattern associated with the transverse cusp
caustic. Carl Frederickson and Nicholas L. Frederickson (Phys. and Astronomy, Univ. of Central Arkansas, LSC 171, 201 Donaghey Ave., Conway,
AR 72035, nicholaslfrederickson@gmail.com)
New software has been developed to evaluate the Pearcey function P±(w1, w2) = ∫ exp[±i(s^4/4 + w2 s^2/2 + w1 s)] ds, integrated over −∞ < s < ∞. This describes the diffraction pattern of a transverse cusp caustic. Run-time comparisons between different coding environments will be presented. The caustic surface produced by the reflection of a spherical wavefront from the surface given by h(x,y) = h21 x^2 + h2 xy + h23 y^2 will also be displayed.
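A standard way to evaluate the Pearcey integral numerically is to rotate the integration contour, s = u·exp(iπ/8), which turns the oscillatory quartic term into a decaying factor exp(−u⁴/4) so that simple quadrature converges. The sketch below (our function name and discretization, not the abstract's software) uses trapezoidal quadrature on the rotated contour:

```python
import cmath

def pearcey(w1, w2, lim=6.0, n=2000):
    """Pearcey integral P(w1, w2) = ∫ exp[i(s^4/4 + w2 s^2/2 + w1 s)] ds,
    evaluated on the rotated contour s = u*exp(i*pi/8), where the quartic
    phase term becomes a decaying exp(-u^4/4) factor."""
    rot = cmath.exp(1j * cmath.pi / 8)
    h = 2.0 * lim / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        u = -lim + k * h
        s = rot * u
        f = cmath.exp(1j * (s ** 4 / 4 + w2 * s ** 2 / 2 + w1 * s))
        w = 0.5 if k in (0, n) else 1.0   # trapezoid end weights
        total += w * f
    return rot * h * total                # ds = rot * du
```

As a check, P(0, 0) has the closed form 2√2·Γ(5/4)·exp(iπ/8) ≈ 2.3685 + 0.9811i, which the quadrature reproduces.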
2aED5. Architectural acoustical oddities. Zev C. Woodstock and Caroline
P. Lubert (Mathematics & Statistics, James Madison Univ., 301 Dixie Ave.,
Harrisonburg, VA 22801, lubertcp@jmu.edu)
The quad at James Madison University (Virginia, USA) exhibits an
uncommon, but not unique, acoustical oddity called Repetition Pitch. When
someone stands at certain places on the quad and makes a punctuated white noise (claps, for example), a most unusual squeak is returned. This phenomenon only occurs at these specific places. A similar effect has been observed in other locations, most notably Ursinus College (Pennsylvania, USA) and the pyramid at Chichen Itza (Mexico). This talk will discuss Repetition Pitch, as well as other interesting architectural acoustic phenomena, including the noisy animals in the caves at Arcy-sur-Cure (France), the early warning system at Golkonda Fort (Southern India), and the singing angels at
Wells Cathedral in the United Kingdom.
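Repetition Pitch is conventionally explained by a periodic structure (stair treads or a colonnade) returning a train of equally spaced echoes; the ear hears the reciprocal of the echo spacing as a pitch. A minimal sketch of that textbook relation (the step-depth and sound-speed values below are illustrative, not measurements from the quad):

```python
def repetition_pitch(step_depth_m, c=343.0):
    """Perceived pitch (Hz) of the squeak returned by a periodic reflector:
    successive echoes arrive every 2*d/c seconds, giving a pitch of c/(2*d)."""
    return c / (2.0 * step_depth_m)
```

For example, 0.3 m steps would place the pitch near 572 Hz; deeper steps lower it.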
and size of the oral cavity in the vicinity of the sibilant constriction. Realtime three-dimensional ultrasound, palate impressions, acoustic recordings,
and electroglottography are brought to bear on these issues.
An impedance tube has been used to make measurements of the acoustic
impedance of porous samples. Porous with designed porosities and tortuosities have been produced using 3D printing. Measured impedances are compared to calculated values.
2aED11. Teaching acoustical interaction: An exploration of how teaching architectural acoustics to students spawns project-based learning.
Daniel Butko, Haven Hardage, and Michelle Oliphant (Architecture, The
Univ. of Oklahoma, 830 Van Vleet Oval, Norman, OK 73019, Haven.B.
Hardage-1@ou.edu)
2aED7. Stick bombs: A study of the speed at which a woven stick construction self-destructs. Scotty McKay and William Slaton (Phys. & Astronomy, The Univ. of Central Arkansas, 201 Donaghey Ave., Conway, AR
72034, SMCKAY2@uca.edu)
The language and methods of architecture typically evaluated through
small-scale models and drawings can be complemented by full-scale interactive constructs, augmenting learning through participatory, experiential,
and sometimes experimental means. Congruent with Constantin Brancusi’s
proclamation, “architecture is inhabitable sculpture,” opportunities to build
full-scale constructs introduce students to a series of challenges predicated
by structure, connections, safety, and a spirit of inquisition to learn from
human interaction. To educate and entertain through sensory design, undergraduate students designed and built an interactive intervention allowing
visual translation of acoustical impulses. The installation was developed and
calibrated upon the lively acoustics and outward campus display of the college’s gallery, employing excessive reverberation and resonance as a
method of visually demonstrating sound waves. People physically inhabiting the space were the participants and critics by real-time reaction to personal interaction. The learning process complemented studio-based instruction
through hands-on interaction with physical materials and elevated architectural education to a series of interactions with people. This paper documents
and celebrates the Interactive Synchronicity project as a teaching tool outside common studio project representation while enticing classmates, faculty, and complete strangers to interact with inhabitable space.
A stick bomb is created by weaving sticks together in a particular pattern. By changing the way the sticks are woven together, different types of
stick bombs are created. After the stick bomb is woven to the desired length,
one side of the stick bomb can be released causing it to rapidly begin tearing
itself apart in the form of a pulse that propagates down the weave. This
occurs due to a large amount of potential energy stored within the multitude
of bent sticks; however, the physics of this phenomena has not been studied
to the authors knowledge. The linear mass density of the stick bomb can be
changed by varying the tightness of the weave. Data on these stick bombs,
including video analysis to determine the pulse speed, will be presented.
2aED8. Three-dimensional printed acoustic mufflers and aeroacoustic
resonators. John Ferrier and William Slaton (Phys. & Astronomy, The
Univ. of Central Arkansas, 201 Donaghey Ave., Conway, AR 72034, jpferrierjr@gmail.com)
We explore and present the use of 3D printing technology to design,
construct, and test acoustic elements that could be used as a low-frequency
Helmholtz-resonator style muffler in a ventilation duct system. Acoustic elements such as these could be quickly prototyped, printed, and tested for any
noisy duct environment. These acoustic elements are tested with and without mean flow to characterize their sound absorption (and sound generation)
properties. It is found that at particular ranges of air flow speeds the simply
designed acoustic muffler acts as a site for aeroacoustic sound generation.
Measurement data and 3D model files with Python-scripting will be presented for several muffler designs. This work is supported by the Arkansas
Space Grant Consortium in collaboration with NASA’s Acoustics Office at
the Johnson Space Center.
2aED9. Determining elastic moduli of concrete using resonance. Gerard
Munyazikwiye and William Slaton (Phys. & Astronomy, The Univ. of Central
Arkansas, 201 Donaghey Ave., Conway, AR 72034, GMUNYAZIKWIYE1@
uca.edu)
The elastic moduli of rods of material can be determined by resonance
techniques. The torsional, longitudinal, and transverse resonance modes for
a rod of known mass and length can be measured experimentally. These resonance frequencies are related to the elastic properties of the material,
hence, by measuring these quantities the strength of the material can be
determined. Preliminary tests as proof of principle are conducted with metallic rods. Data and experimental techniques for determining the elastic
moduli for concrete using this procedure will be presented.
2aED10. Articulation of sibilant fricatives in Colombian Spanish. Alexandra Abell and Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., 404
West Kirkwood Ave., Bloomington, IN 47404, alabell@indiana.edu)
Colombians constitute the largest South American population in the
United States at 909,000 (or 24% of the total South American population in
the U.S.), and Bogota, Colombia is the most populated area within the
Andean Highland region, yet relatively little is known about Colombian
Spanish speech production. The majority of previous studies of Colombian
phonetics have relied on perception and acoustic analysis. The present study
contributes to Colombian Spanish phonetics by investigating the articulation
of sibilant fricatives. In particular, the shape of the palate and tongue during
the production of sibilants is investigated in an attempt to quantify the shape
2127
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
2aED12. Palate shape and the central tongue groove. Coretta M. Talbert
(Speech and Hearing Sci., Univ. of Southern MS, 211 Glen Court, Jackson,
MS 39212, coretta.talbert@eagles.usm.edu) and Steven M. Lulich (Speech
and Hearing Sci., Indiana Univ., Bloomington, IN)
It is well known that the center of the tongue can be grooved so that it is
lower in the mouth than the lateral parts of the tongue, or it can bulge higher
than the lateral parts of the tongue. It has never been shown whether or how
this groove or bulge is related to the shape of the palate. In this study, we
investigated the shape and size of the palate for several speakers using digitized 3D laser-scans of palate impressions and measurements on the impression plasters themselves. The groove or bulge in the center of the tongue
was measured using real-time three-dimensional ultrasound. Pertinent findings will be presented concerning the relationship of the central groove/
bulge shape and size to the shape and size of the palate.
2aED13. Signal processing for velocity and range measurement using a
micromachined ultrasound transducer. Dominic Guri and Robert D.
White (Mech. Eng., Tufts Univ., 200 College Ave., Anderson 204, Medford,
MA 02155, dominic.guri@tufts.edu)
Signal processing techniques are under investigation for determination
of range and velocity information from MEMS based ultrasound transducers. The ideal technique will be real-time, result in high resolution and
accurate measurements, and operate successfully in noise. Doppler velocity
measurements were previously demonstrated using a MEMS cMUT array
(Shin et al., ASA Fall Meeting 2011, JASA 2013, Sens. Actuators A 2014).
The MEMS array has 168 nickel-on-glass capacitive ultrasound transducers
on a 1 cm die, and operates at 180 kHz in air. Post processing of the
received ultrasound demonstrated the ability to sense velocity using continuous-wave (CW) Doppler at a range of up to 1.5 m. The first attempt at real-time processing using a frequency-modulated continuous-wave (FM/CW)
scheme was noise limited by the analog demodulation circuit. Further noise
analysis is ongoing to determine whether this scheme may be viable. Other
schemes under consideration include cross-correlation chirp and single- and multi-frequency burst waveforms. Preliminary results from a single-frequency burst showed that cross-correlation-based signal processing may
achieve acceptable range. The system is targeted at short range small robot
navigation tasks. Determination of surface roughness from scattering of the
reflected waves may also be possible.
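The cross-correlation range estimation named in the abstract can be illustrated with a minimal sketch. This is an editorial illustration, not the authors' code: the function name, burst length, sampling rate, and target distance are hypothetical, with only the 180 kHz operating frequency taken from the abstract.

```python
import numpy as np

def estimate_range(tx, rx, fs, c=343.0):
    """Estimate target range from a transmitted burst and its echo.

    Cross-correlate the received signal with the transmitted burst,
    take the lag of the correlation peak as the round-trip delay,
    and convert to one-way distance at sound speed c (m/s in air).
    """
    corr = np.correlate(rx, tx, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(tx) - 1)  # delay in samples
    return c * (lag / fs) / 2.0

# Synthetic echo: a 180 kHz burst delayed by the round trip for a
# hypothetical 0.5 m target, sampled at 1 MHz, with additive noise.
fs = 1_000_000
t = np.arange(200) / fs
burst = np.sin(2 * np.pi * 180e3 * t)
delay = int(round((2 * 0.5 / 343.0) * fs))  # round-trip delay in samples
rx = np.zeros(8000)
rx[delay:delay + len(burst)] = burst
rx += 0.05 * np.random.default_rng(0).standard_normal(len(rx))

r = estimate_range(burst, rx, fs)  # close to 0.5 m
```

The correlation peak is robust to moderate noise, which is why such schemes are candidates for the low-SNR conditions the abstract describes.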
168th Meeting: Acoustical Society of America
2a TUE. AM
2aED6. Impedance tube measurements of printed porous materials.
Carl Frederickson and Forrest McDougal (Phys. and Astronomy, Univ. of
Central Arkansas, LSC 171, 201 Donaghey Ave., Conway, AR 72035,
FMCDOUGAL1@CUB.UCA.EDU)
2aED14. Investigation of a tongue-internal coordinate system for two-dimensional ultrasound. Rebecca Pedro, Elizabeth Mazzocco (Speech and Hearing Sci., Indiana Univ., 200 South Jordan Ave., Bloomington, IN 47405, rebpedro@indiana.edu), Tamás G. Csapó (Dept. of Telecommunications and Media Informatics, Budapest Univ. of Technol. and Economics, Budapest, Hungary), and Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., Bloomington, IN)
In order to compare ultrasound recordings of tongue motion across utterances or across speakers, it is necessary to register the ultrasound images
with respect to a common frame of reference. Methods for doing this typically rely either (1) on fixing the position of the ultrasound transducer relative to the skull by means of a helmet or a similar device, or (2) re-aligning
the images by various means, such as optical tracking of head and transducer motion. These methods require sophisticated laboratory setups, and
are less conducive to fieldwork or other studies in which such methods are
impractical. In this study, we investigated the possibility of defining a rough
coordinate system for image registration based on anatomical properties of
the tongue itself. This coordinate system is anchored to the lower jaw rather
than the skull, but may potentially be transformed into an approximately
skull-relative coordinate system by integrating video recordings of jaw
motion.
2aED15. The effect of finite impedance ground reflections on horizontal
full-scale rocket motor firings. Samuel Hord, Tracianne B. Neilsen, and
Kent L. Gee (Dept. of Phys. and Astronomy, Brigham Young Univ., 737 N
600 E #103, Provo, UT 84606, samuel.hord@gmail.com)
Ground reflections have a significant impact on the propagation of sound
from a horizontal rocket firing. The impedance of the ground depends strongly
on effective flow resistivity of the surface and determines the frequencies at
which interference nulls occur. For a given location, a softer ground, with
lower effective flow resistivity, shifts the location of interference nulls to
lower frequencies than expected for a harder ground. The difference in the
spectral shapes from two horizontal firings of GEM-60 rocket motors, over
snowy ground, clearly shows this effect and has been modeled. Because of
the extended nature of high energy launch vehicles, the exhaust plume is
modeled as a partially correlated line source, with distribution parameters
chosen to match the recorded data sets as best as possible. Different flow resistivity values yield reasonable comparisons to the results of horizontal
GEM-60 test firings.
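As a rough editorial sketch (not the authors' model), the interference-null frequencies for a perfectly rigid ground follow from the two-ray path-length difference; a finite-impedance ground replaces the +1 reflection coefficient with a complex, frequency-dependent one, shifting the nulls to lower frequencies as the abstract describes. The geometry values below are hypothetical.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def path_difference(h_src, h_rec, dist):
    """Path-length difference between the ground-reflected and direct
    rays, using the image-source construction over a flat ground."""
    direct = np.hypot(dist, h_rec - h_src)
    reflected = np.hypot(dist, h_rec + h_src)  # image source at -h_src
    return reflected - direct

def rigid_ground_nulls(h_src, h_rec, dist, count=3):
    """First null frequencies for a rigid (R = +1) ground: nulls occur
    where the path difference is an odd number of half wavelengths."""
    dd = path_difference(h_src, h_rec, dist)
    return [(2 * n - 1) * C / (2 * dd) for n in range(1, count + 1)]

# Hypothetical source and receiver 2 m high, 30 m apart.
nulls = rigid_ground_nulls(h_src=2.0, h_rec=2.0, dist=30.0)
```

For this geometry the first null lands near 650 Hz; lowering the effective flow resistivity adds reflection phase and pushes the measured nulls below the rigid-ground prediction.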
2aED16. Palate-related constraints on sibilant production in three
dimensions. Sarah Janssen and Steven M. Lulich (Speech and Hearing Sci.,
Indiana Univ., 200 South Jordan Ave., Bloomington, IN 47405,
sejansse14@gmail.com)
Most studies of speech articulation are limited to a single plane, typically the midsagittal plane, although coronal planes are also used. Single-plane data have been undeniably useful in improving our understanding of
speech production, but for many acoustic and aerodynamic processes, a
knowledge of 3D vocal tract shapes is essential. In this study, we used palate
impressions to investigate variations in the 3D structure of the palates of
several individuals, and we used real-time 3D ultrasound to image the
tongue surface during sibilant production by the same individuals. Our analysis focused on the degree to which tongue shapes during sibilant productions are substantially similar or different between individuals with different
palate shapes and sizes.
2aED17. The evaluation of impulse response testing in low signal-to-noise ratio environments. Hannah D. Knorr (Audio Arts and Acoust., Columbia College Chicago, 134 San Carlos Rd, Minooka, IL 60447, hknorr13@gmail.com), Jay Bleifnick (Audio Arts and Acoust., Columbia College Chicago, Schiller Park, IL), Andrew M. Hulva, and Dominique J. Cheenne (Audio Arts and Acoust., Columbia College Chicago,
Chicago, IL)
Impulse testing is used by industry professionals to test many parameters
of room acoustics, including the energy decay, frequency response, time
response, etc. Current testing software makes this process as streamlined as
possible, but generally must be utilized in quiet environments to yield high signal-to-noise ratios and more precise results. However, many real-world situations cannot conform to the standards needed for reliable data.
This study tests various methods of impulse responses in background noise
environments in an attempt to find the most reliable procedure for spaces
with high ambient noise levels. Additionally, extreme situations will be
evaluated and a method will be derived to correct for the systematic error
attributed to high background noise levels.
2aED18. Comparison of palate impressions and palate casts from three-dimensional laser-scanned digital models. Michelle Tebout and Steven M.
Lulich (Speech and Hearing Sci., Indiana Univ., 200 South Jordan Ave.,
Bloomington, IN 47405, mtebout@imail.iu.edu)
Palate impressions and their casts in plaster are negatives of each other.
While plaster casts are the standard for palate measurements and data preservation, making such casts can be time-consuming and messy. We
hypothesized that measurements from 3D laser-scanned palate impressions
are negligibly different from equivalent measurements from 3D laser-scanned palate casts. If true, this would allow the step of setting impressions
in plaster to be skipped in future research. This poster presents the results of
our study.
2aED19. The analysis of sound wave scattering using a firefighter’s Personal Alert Safety System signal propagating through a localized region
of fire. Andrew L. Broda (Phys. Dept., U.S. Naval Acad., 572 C Holloway
Rd., Chauvenet Hall Rm. 295, Annapolis, MD 21402), Chase J. Rudisill
(Phys. Dept., U.S. Naval Acad., Harwood, MD), Nathan D. Smith (Phys.
Dept., U.S. Naval Acad., Davidsonville, MD), Matthew K. Schrader, and
Murray S. Korman (Phys. Dept., U.S. Naval Acad., Annapolis, MD, korman@usna.edu)
Firefighting is quite clearly a dangerous and risk-filled job. To combat these dangers and risks, firefighters wear a (National Fire Protection Association, NFPA, 2007 edition of the 1982 standard) Personal Alert Safety System (PASS) that will sound a loud alarm if it detects (for example) the lack of movement of a firefighter. However, firefighters have experienced
difficulty locating the source of these alarm chirps (95 dBA around 3 kHz)
in a burning building. The project goal is to determine the effect of pockets
of varying temperatures of air in a burning building on the sound waves produced by a PASS device. Sound scattering experiments performed with a
vertical heated air circular jet plume (anechoic chamber) and with a wood
fire plume from burning cylindrical containers (Anne Arundel Fire Department’s Training Facility) suggest that from Snell’s Law, sound rays refract
around such pockets of warmer air surrounded by cooler ambient air
due to changes in the sound speed with temperature through the medium.
Real-time and spectral measurements of 2.7 kHz CW sound scattering
(using a microphone) exhibit some attenuation and considerable amplitude
and frequency modulation. This research may suggest future experiments
and effective modifications of the current PASS system.
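The temperature dependence invoked above can be sketched numerically. This is an editorial illustration only: the temperatures are hypothetical, and the square-root sound-speed relation is a standard ideal-gas approximation, not taken from the abstract.

```python
import math

def sound_speed(temp_c):
    """Approximate speed of sound in air (m/s) from the ideal-gas
    relation c = 331.3 * sqrt(T / 273.15), T in kelvin."""
    return 331.3 * math.sqrt((temp_c + 273.15) / 273.15)

def refracted_angle_deg(angle_in_deg, c1, c2):
    """Snell's law for sound: sin(t2)/sin(t1) = c2/c1.
    Returns None past the critical angle (total reflection)."""
    s = math.sin(math.radians(angle_in_deg)) * c2 / c1
    return math.degrees(math.asin(s)) if s <= 1.0 else None

c_cool = sound_speed(20.0)   # ambient air, about 343 m/s
c_hot = sound_speed(200.0)   # hypothetical hot pocket, about 436 m/s

theta_out = refracted_angle_deg(30.0, c_cool, c_hot)  # bends away from normal
critical = math.degrees(math.asin(c_cool / c_hot))
# Rays striking the hot pocket beyond the critical angle are totally
# reflected, i.e., they refract around the pocket as described above.
```

The higher sound speed in the hot pocket bends rays away from it, which is consistent with the difficulty of localizing a PASS alarm through regions of heated air.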
2aED20. New phased array models for fast nearfield pressure simulations. Kenneth Stewart and Robert McGough (Dept. of Elec. and Comput.
Eng., Michigan State Univ., East Lansing, MI, stewa584@msu.edu)
FOCUS, the “Fast Object-oriented C++ Ultrasound Simulator,” is free MATLAB-based software that rapidly and accurately models therapeutic and
diagnostic ultrasound with the fast nearfield method, time-space decomposition, and the angular-spectrum approach. FOCUS presently supports arrays
of circular, rectangular, and spherically focused transducers arranged in flat
planar, spherically focused, and cylindrically focused geometries. Excellent
results are obtained with all of these array geometries in FOCUS for simulations of continuous-wave and transient excitations, and new array geometries are needed for B-mode simulations that are presently under
development. These new array geometries also require new data structures
that describe the electrical connectivity of the arrays. Efforts to develop
these new features in FOCUS are underway, and results obtained with these
new array geometries will be presented. Other new features for FOCUS will
also be demonstrated. [Supported in part by NIH Grant R01 EB012079.]
2aED21. Nonlinear scattering of crossed focused ultrasonic beams in
the presence of turbulence generated behind a model deep vein thrombosis using an orifice plate set in a thin tube. Daniel Fisher and Murray S.
Korman (Phys. Dept., U.S. Naval Acad., 572 C Holloway Rd., Chauvenet
Hall Rm. 295, Annapolis, MD 21402, korman@usna.edu)
An orifice plate (modeling a “blockage” in a deep vein thrombosis
DVT) creates turbulent flow in a downstream region of a submerged polyethylene tube (1.6 mm thick, diameter 4 cm and overall length 40 cm). In
the absence of the orifice plate, the water flow is laminar. The orifice plate is
mechanically secured between two 20 cm tube sections connected by a
union. The union allows an orifice plate to be slid in, providing a concentric obstruction of the flow that causes vorticity and turbulence downstream. A set of orifice plates (3 mm thick), each with a different radius relative to the inner tube-wall radius, is used (one at a time) to obstruct the tube flow. The nonlinear scattering at the sum frequency (f+ = 3.8 MHz) from mutually perpendicular spherically focused beams (f1 = 1.8 MHz and f2 = 2.0 MHz) is used to measure the Doppler shift, spectral content, and intensity as a function of orifice plate size, in an effort to correlate the blockage with the amount of nonlinear scattering. In the absence of turbulence in the overlap region, there is virtually no scattering.
Therefore, a slight blockage is detectable.
2aED22. Analysis of acoustic data acquisition instrumentation for underwater blast dredging. Brenton Wallin, Alex Stott, James Hill, Timothy Nohara, Ryan Fullan, Jon Morasutti, Brad Clark, Alexander Binder, and Michael Gardner (Ocean Eng., Univ. of Rhode Island, 30 Summit Ave., Narragansett, RI 02882, brentwallin@my.uri.edu)

A team of seniors from the University of Rhode Island was tasked with analyzing the acoustic data and evaluating the data acquisition systems used in Pacific Northwest National Laboratory's (PNNL) study of blast dredging in the Columbia River. Throughout the semester, the students learned about the unique acoustic signatures of confined underwater blasts and the necessary specifications of systems used to record them. PNNL used two data acquisition systems. One was a tourmaline underwater blast sensor system created by PCB Piezotronics. The second was a hydrophone system using a Teledyne TC 4040 hydrophone, a Dytran inline charge amplifier, and a signal conditioner built for the blast sensor system. The students concluded that the data from the blast sensor system were reliable because the system was built by the company for this specific application and calibration sheets showed the system worked properly. The hydrophone data were deemed unreliable because components were oriented in an unusual manner that led to improper data acquisition. A class of URI graduate students built a new hydrophone system that accurately recorded underwater dredge blasts performed in New York Harbor. This system is a fraction of the price of the blast sensor system.

2aED23. Effects of sustainable and traditional building systems on indoor environmental quality and occupant perceptions. Joshua J. Roberts and Lauren M. Ronsse (Audio Arts and Acoust., Columbia College Chicago, 4363 N. Kenmore Ave., Apt. #205, Chicago, IL 60613, joshua.roberts@loop.colum.edu)

This study examines the effects of both sustainable and traditional building systems on the indoor environmental quality (IEQ) and occupant perceptions in an open-plan office floor of a high-rise building located in Chicago, IL. The office evaluated has sustainable daylighting features as well as a more traditional variable air volume mechanical system. Different measurement locations and techniques are investigated to quantify the indoor environmental conditions (i.e., acoustics, lighting, and thermal conditions) experienced by the building occupants. The occupant perceptions of the indoor environmental conditions are assessed via survey questionnaires administered to the building occupants. The relationships between the IEQ measured in the office and the occupant perceptions are assessed.

TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 9/10, 8:00 A.M. TO 12:15 P.M.
Session 2aID
Archives and History and Engineering Acoustics: Historical Transducers
Steven L. Garrett, Chair
Grad. Prog. in Acoustics, Penn State, Applied Research Lab, P. O. Box 30, State College, PA 16804-0030
Chair's Introduction—8:00
Invited Papers
8:05
2aID1. 75th Anniversary of the Shure Unidyne microphone. Michael S. Pettersen (Applications Eng., Shure Inc., 5800 W. Touhy Ave., Niles, IL 60714, pettersen_michael@shure.com)

2014 marks the 75th anniversary of the Shure Model 55 microphone. Introduced in 1939 and still manufactured today, the Shure Unidyne was the first unidirectional microphone using a single dynamic element. The presentation provides an overview of the Unidyne's unique position in the history of 20th-century broadcast, politics, and entertainment, plus the remarkable story of Benjamin Bauer, a 24-year-old immigrant from Ukraine who invented the Unidyne and earned the first of his more than 100 patents for audio technology. Rare Unidyne artifacts from the Shure Archive will be on display after the presentation, including prototypes fabricated by Ben Bauer.
8:25
2aID2. Ribbon microphones. Wesley L. Dooley (Eng., Audio Eng. Assoc., 1029 North Allen Ave, Pasadena, CA 91104, wes@ribbonmics.com)
The ribbon microphone was invented by Dr. Walter Schottky who described it in German Patent 434855C, issued December 21, 1924 to
Siemens & Halske (S&H) in Berlin. An earlier “Electro-Dynamic Loudspeaker” Patent which Schottky had written with Dr. Erwin Gerlach
described a compliant, lightweight, and ribbed aluminum membrane whose thinnest dimension was at right angles to a strong magnetic field.
Passing an audio frequency current through this membrane causes it to move and create sound vibrations. The December Patent describes
how this design functions either as a loudspeaker or a microphone. A 1930 S&H patent for ribbon microphone improvements describes how
they use internal resonant and ported chambers to extend frequency response past 4 kHz. RCA dramatically advanced ribbon microphone performance in 1931. They opened the ribbon to free air to create a consistent, air-damped, low-distortion, figure-eight with smooth 30–10,000
Hz response. RCA ribbon microphones became the performance leader for cinema, broadcast, live sound and recording. Their 20–30,000 Hz
RCA 44B and BX were manufactured from 1936 to 1955. It is the oldest design still used every day at major studios. Ribbon microphones are
increasingly used for contemporary recordings. Come hear why ribbon microphones, like phonograph records, are relevant to quality sound.
8:45
2aID3. Iconic microphonic moments in historic vocal recordings. Alexander U. Case (Sound Recording Technol., Univ. of Massachusetts Lowell, 35 Wilder St., Ste. 3, Lowell, MA 01854, alex@fermata.biz)
Microphone selection—the strategic pairing of microphone make and model with each sound to be recorded—is one of the most important decisions a sound engineer must make. The technical specifications of the microphone identify which transducers are capable of
functioning properly for any given recording task, but the ultimate decision is a creative one. The goal is for the performance capabilities
of the microphone to not only address any practical recording session challenges, but also flatter the sound of the instrument, whether in
pursuit of palpable realism or a fictionalized new timbre. The creative decision is informed, in part, by demonstrated success in prior
recordings, the most important of which are described for that essential pop music instrument: the voice.
9:05
2aID4. The WE 640AA condenser microphone. Gary W. Elko (mh Acoust., 25A Summit Ave., Summit, NJ 07901, gwe@mhacoustics.com)
In 1916 Edward Wente, working for Western Electric, an AT&T subsidiary, invented a microphone that was the foundation of the
modern condenser microphone. Wente’s early condenser microphone designs continued to be developed until Western Electric produced
the WE 361 in 1924 followed by the Model 394 condenser microphone in 1926. Western Electric used the WE 394 microphone as part
of the “Master Reference System” to rate audio transmission quality of the telephone network. The WE 394 was too large for some measurement purposes so in 1932 Bell Labs engineers H. Harrison and P. Flanders designed a smaller version. The diaphragm had a diameter of 0.6 in. However, this design proved too difficult to manufacture and F. Romanow, also at Bell Labs, designed the 640A “1 in.”
microphone in 1932. Years later it was discovered that the 640A sensitivity varied by almost 6 dB from −65 °C to 25 °C. To reduce the
thermal sensitivity, Bell Labs engineers M. Hawley and P. Olmstead carefully changed some of the 640A materials. The modified microphone was designated as the 640AA, which became the worldwide standard microphone for measuring sound pressure. This talk will
describe some more details of the history of the 640AA microphone.
9:25
2aID5. Reciprocity calibration of condenser microphones. Leo L. Beranek (Retired, 10 Longwood Dr., Westwood, MA 02090, beranekleo@ieee.org)
The theory of reciprocity began with Lord Rayleigh and was first well stated by S. Ballantine (1929). The first detailed use of the reciprocity theory for the calibration of microphones was by R. K. Cook (1940). At the wartime Electro-Acoustic Laboratory, at Harvard
University, the need arose to calibrate a large number of Western Electric 640-AA condenser microphones. A reciprocity apparatus was
developed that connected the two microphones with an optimum shaped cavity that included a means for introducing hydrogen or helium to extend the frequency range. The apparatus was published by A. L. Dimattia and F. M. Wiener (1946). A number of things
resulted. The Harvard group, in 1941, found that the international standard of sound pressure was off by 1.2 dB—that standard was
maintained by the French Telephone Company and the Bell Telephone Laboratories and was based on measurements made with thermophones. This difference was brought to the attention of those organizations, and the reciprocity method of calibration was subsequently adopted by them, resulting in the proper standard of sound pressure being adopted around 1942. The one-inch condenser microphone
has subsequently become the worldwide standard for precision measurement of sound field pressures.
9:45–10:00 Break
10:00
2aID6. Electret microphones. James E. West (ECE, Johns Hopkins Univ., 3400 N. Charles St., Barton Hall 105, Baltimore, MD
21218, jimwest@jhu.edu)
For nearly 40 years, condenser electret microphones have been the transducer of choice in almost every area of acoustics, including telephony, professional applications, hearing aids, and toys. More than 2 billion electret microphones are produced annually, primarily for
the communications and entertainment markets. E. C. Wente invented the condenser microphone in 1917 at Bell Labs while searching
for a replacement for the carbon microphone used in telephones; however, the necessary few hundred volt bias rendered the condenser
microphone unusable in telephony, but its acoustical characteristics were welcomed in professional and measurement applications. Permanently charged polymers (electrets) provided the necessary few hundred-volt bias, thus simplifying the mechanical and electrical
requirements for the condenser microphone and making it suitable for integration into the modern telephone. The introduction of inexpensive condenser microphones with matching frequency, phase, and impedance characteristics opened research opportunities for multiple microphone arrays. Array technology developed at Bell Labs will be presented in this talk.
10:20
2aID7. Meucci's telephone transducers. Angelo J. Campanella (Acculab, Campanella Assoc., 3201 Ridgewood Dr., Hilliard,
OH 43026, a.campanella@att.net)
Antonio Meucci (1809–1889) developed variable reluctance transducers from 1854 to 1876 and beyond, after noticing around 1844, while in Havana, Cuba, that he could hear voice sounds from paddle electrodes as he participated in the electrotherapy of a migraine patient. He immigrated to Staten Island, NY, in 1850 and continued experimenting to develop a telephone. He found better success from
electromagnetics using materials evolved from the telegraph developed by Morse, as well as a non-metal diaphragm with an iron tab,
iron bars, and a horseshoe shape. Artifacts from his residence are presently on display at the museum of his life on Staten Island, where he lived from 1850 until his death. Those artifacts, thought until now to be only models, were found to be wired and still operative. Tests
were performed in July, 2011. Their electrical resistance is that expected for wire wound variable reluctance transducers. Voice signals
were produced without any externally supplied operating current. At least one transducer was found to be also operable as a receiver and
was driven to produce voice sounds to the ear. Meucci’s life and works will be discussed and these test results will be demonstrated
including recordings from voice tests.
10:40
2aID8. The Fessenden Oscillator: The first sonar transducer. Thomas R. Howarth and Geoffrey R. Moss (U.S. Navy, 1176 Howell
St, B1346 R404A, Newport, RI 02841, thomas.howarth@navy.mil)
When the RMS Titanic sank in 1912, ship owners put forth a call for inventors to offer solutions for ship collision avoidance. Canadian-born inventor Reginald A. Fessenden answered this call while working at the former Boston Submarine
Signal Company with the invention and development of the first modern transducer used in a sonar. The Fessenden oscillator was an edge-clamped circular metal plate with a radiating head facing the water on one side, while the interior side had a copper tube attached that
moved in and out of a fixed magnetic coil. The coil consisted of a direct-current (DC) winding to provide a magnetic field polarization
and an alternating-current (AC) coil winding to induce the current into the copper tube and thus translate the magnetic field polarization
to the radiating plate with vibrations that translated from the radiating head to the water medium. The prototype and early model versions operated at 540 Hz. Later developments included adaptation of this same transducer for underwater communications and, with WW I retrofits onto British submarines, for obstacle avoidance in both transmitting and receiving applications, including mine detection. This presentation will discuss design details, including a modern numerical modeling effort.
11:00
2aID9. Historical review of underwater acoustic cylindrical transducer development in Russia for sonar arrays. Boris Aronov
(ATMC/ECE, Univ. of Massachusetts Dartmouth, Needham, MA) and David A. Brown (ATMC/ECE, Univ. of Massachusetts Dartmouth, 151 Martine St., Fall River, MA 02723, dbAcoustics@cox.net)
Beginning with the introduction of piezoelectric ceramics in the 1950’s, underwater acoustics transducer development for active sonar arrays proceeded in different directions in Russia (formerly USSR) than in the United States (US). The main sonar arrays in Russia
were equipped with cylindrical transducers, whereas in the US, the implementation was most often made with extensional bar transducers of the classic Tonpilz design. The presentation focuses on the underlying objectives and human factors that shaped the preference towards the widespread application of baffled cylindrical transducers for arrays in Russia, the history of their development, and
contributions to theory of the transducers made by the pioneering developers.
11:20
2aID10. The phonodeik: Measuring sound pressure before electroacoustic transducers. Stephen C. Thompson (Graduate Program
in Acoust., Penn State Univ., N-249 Millennium Sci. Complex, University Park, PA 16802, sct12@psu.edu)
The modern ability to visualize sound pressure waveforms using electroacoustic transducers began with the development of the vacuum tube amplifier, and has steadily improved as better electrical amplification devices have become available. Before electrical amplification was available, however, a significant body of acoustic pressure measurements had been made using the phonodeik, a device
developed by Dayton C. Miller in the first decade of the twentieth century. The phonodeik employs acoustomechanical transduction to
rotate a small mirror that reflects an optical beam to visualize the pressure waveform. This presentation will review the device and some
of the discoveries made with it.
11:40
2aID11. A transducer not to be ignored: The siren. Julian D. Maynard (Phys., Penn State Univ., 104 Davey Lab, Box 231, University
Park, PA 16802, maynard@phys.psu.edu)
An historic transducer to which one should pay attention is the siren. While its early application was as a source for a musical instrument, the siren soon became the transducer of choice for long-range audible warning because of its high intensity and recognizable tone.
The components defining the siren include a solid stator and rotor, each with periodic apertures, and a compressed fluid (usually air but
could be other fluids). With the rotor rotating in close proximity to the stator, the resulting opening and closing of passageways through the apertures for the compressed fluid produces periodic sound waves in the surrounding fluid; usually a horn is used to enhance
the radiation efficiency. The high potential energy of the compressed fluid permits high intensity sound. Some sirens which received scientific study include that of R. Clark Jones (1946), a 50 horsepower siren with an efficiency of about 70%, and that of C. H. Allen and I.
Rudnick (1947), capable of ultrasonic frequencies and described as a “supersonic death ray” in the news media. Some design considerations, performance results, and applications for these sirens will be presented.
12:00–12:15 Panel Discussion
TUESDAY MORNING, 28 OCTOBER 2014
SANTA FE, 9:00 A.M. TO 11:40 A.M.
Session 2aMU
Musical Acoustics: Piano Acoustics
Nicholas Giordano, Chair
Physics, College of Sciences and Mathematics, Auburn University, Auburn, AL 36849
Invited Papers
9:00
2aMU1. The slippery path from piano key to string. Stephen Birkett (Systems Design Eng., Univ. of Waterloo, 250 University Ave.,
Waterloo, ON N2L 3G1, Canada, sbirkett@uwaterloo.ca)
Everything that contributes to the excitation of a piano string, from key input to hammer–string interaction, is both deterministic and
consistently repeatable. Sequences of identical experimental trials give results that are indistinguishable. The simplicity of this behavior
contrasts with the elusive goal of predicting input–output response and the extreme difficulty of accurate physical characterization. The
nature and complexity of the mechanisms and material properties involved, as well as the sensitivity of their parameterization, place serious obstacles in the way of the usual investigative tools. This paper discusses and illustrates the limitations of modeling and simulation
as applied to this problem, and the special considerations required for meaningful experimentation.
9:25
2aMU2. Coupling between transverse and longitudinal waves in piano strings. Nikki Etchenique, Samantha Collin, and Thomas R.
Moore (Dept. of Phys., Rollins College, 1000 Holt Ave., Winter Park, FL 32789, netchenique@rollins.edu)
It is known that longitudinal waves in piano strings noticeably contribute to the characteristic sound of the instrument. These waves
can be induced by directly exciting the motion with a longitudinal component of the piano hammer, or by the stretching of the string
associated with the transverse displacement. Longitudinal waves that are induced by the transverse motion of the string can occur at frequencies other than the longitudinal resonance frequencies, and the amplitude of the waves produced in this way are believed to vary
quadratically with the amplitude of the transverse motion. We present the results of an experimental investigation that demonstrates the
quadratic relationship between the magnitude of the longitudinal waves and the magnitude of the transverse displacement for steadystate, low-amplitude excitation. However, this relationship is only approximately correct under normal playing conditions.
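A quick numerical check of why this scaling is quadratic (an editorial sketch, not the authors' analysis): the longitudinal drive comes from the stretching of the string, and the extra arc length of a transversely deflected string grows as the square of the deflection amplitude for small slopes. The string shape and amplitudes below are hypothetical.

```python
import numpy as np

def elongation(amp, length=1.0, n=20001):
    """Extra arc length of a string deflected into one transverse
    half-sine lobe y = amp * sin(pi x / length), relative to length."""
    x = np.linspace(0.0, length, n)
    y = amp * np.sin(np.pi * x / length)
    return np.sum(np.hypot(np.diff(x), np.diff(y))) - length

# Doubling the transverse amplitude should roughly quadruple the
# stretch, and hence the stretch-induced longitudinal excitation.
ratio = elongation(2e-3) / elongation(1e-3)
```

In the small-slope limit the stretch is (1/2) times the integral of the squared slope, which is exactly quadratic in the amplitude; the numerical ratio comes out very close to 4.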
9:50
2aMU3. Microphone array measurements, high-speed camera recordings, and geometrical finite-differences physical modeling
of the grand piano. Rolf Bader, Florian Pfeifle, and Niko Plath (Inst. of Musicology, Univ. of Hamburg, Neue Rabenstr. 13, Hamburg
20354, Germany, R_Bader@t-online.de)
Microphone array measurements of a grand piano soundboard show similarities and differences between eigenmodes and forced oscillation patterns when playing notes on the instrument. During transients, the driving point of the string shows enhanced energy radiation, though not as prominent as with the harpsichord. Lower frequencies are radiated more strongly on the larger side of the soundboard wing shape, while higher frequencies are radiated more strongly on the smaller side. A separate region at the larger part of the wing shape, caused by geometrical boundary conditions, has a distinctly different radiation behavior. High-speed camera recordings of the strings show energy transfer between strings of the same note. In physical models including hammer, strings, bridge, and soundboard, the hammer movement is essential to produce a typical piano sound. Different bridge designs and bridge models are compared, which enhance inharmonic sound components due to longitudinal-transversal coupling of the strings at the bridge.
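The string component of a finite-difference physical model like the one described above can be sketched in a few lines (a minimal illustration of an explicit leapfrog scheme for transverse motion only; the full model in the talk also couples hammer, bridge, and soundboard, and all parameter values here are assumptions).

```python
# Minimal explicit finite-difference scheme for the 1-D wave equation on a
# string with fixed ends; parameters are illustrative, not the authors' values.
N = 100                       # spatial grid points
c = 1.0                       # normalized wave speed
dx = 1.0 / N
dt = 0.5 * dx / c             # satisfies the CFL stability condition c*dt/dx <= 1
r2 = (c * dt / dx) ** 2

y_prev = [0.0] * (N + 1)
y_now = [0.0] * (N + 1)
y_now[N // 4] = 0.001         # crude localized strike displacement

for _ in range(200):          # leapfrog time stepping
    y_next = [0.0] * (N + 1)  # endpoints stay zero (fixed string ends)
    for i in range(1, N):
        y_next[i] = (2 * y_now[i] - y_prev[i]
                     + r2 * (y_now[i + 1] - 2 * y_now[i] + y_now[i - 1]))
    y_prev, y_now = y_now, y_next

print(max(abs(v) for v in y_now))  # displacement stays bounded for a stable scheme
```

Choosing the time step from the CFL condition keeps the explicit scheme stable; real piano-string models add stiffness, damping, and coupling terms on top of this skeleton.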
10:15–10:35 Break
10:35
2aMU4. Adjusting the soundboard’s modal parameters without mechanical change: A modal active control approach. Adrien
Mamou-Mani (IRCAM, 1 Pl. Stravinsky, Paris 75004, France, adrien.mamou-mani@ircam.fr)
How do modes of soundboards affect the playability and the sound of string instruments? This talk will investigate this question experimentally, using modal active control. After identifying the modal parameters of a structure, modal active control allows adjustment of modal frequencies and damping through a feedback loop, without any mechanical changes. The potential of this approach for
musical acoustics research will be presented for three different instruments: a simplified piano, a guitar, and a cello. The effects of modal
active control of soundboards will be illustrated on attack, amplitude of sound partials, sound duration, playability, and “wolf tone”
production.
2132
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2132
11:00
2aMU5. Modeling the influence of the piano hammer shank flexibility on the sound. Juliette Chabassier (Magique 3D, Inria, 200
Ave. de la vieille tour, Talence 33400, France, juliette.chabassier@inria.fr)
A nonlinear model for a vibrating Timoshenko beam in non-forced unknown rotation is derived from the virtual work principle applied to a beam with a mass at its end. The system represents a flexible piano hammer shank coupled to a hammer head. A novel energy-based numerical scheme is then provided and coupled to a global energy-preserving numerical solution for the whole piano (strings, soundboard, and sound propagation in the air). The obtained numerical simulations show that the pianistic touch clearly influences the spectrum of the piano sound of equally loud isolated notes. These differences do not come from a possible shock excitation of the structure, nor from a changing impact point, nor from a "longitudinal rubbing motion" on the string, since none of these features is modeled in our study.
Contributed Paper
11:25
2aMU6. Real-time tonal self-adaptive tuning for electronic instruments. Yijie Wang and Timothy Y. Hsu (School of Music, Georgia Inst. of Technol., 950 Marietta St. NW Apt 7303, Atlanta, GA 30318, yijiewang@gatech.edu)
A fixed tuning system cannot achieve just intonation on all intervals. A better approximation of just intonation is possible if the frequencies of notes are allowed to vary. Adaptive tuning is a class of methods that adjusts the frequencies of notes dynamically in order to maximize musical consonance. However, finding the optimal frequencies of notes directly from some definition of consonance has proven difficult and computationally expensive. Instead, this paper proposes that the current key of the music is both a good summary of past notes and a good prediction of future notes, which can facilitate adaptive tuning. A method is proposed that uses a hidden Markov model to detect the current key of the music and compute optimal frequencies of notes based on the current key. In addition, a specialized online machine learning method that enforces symmetry among diatonic keys is presented, which can potentially adapt the model for different genres of music. The algorithm can operate in real time, is responsive to the notes played, and is applicable to various electronic instruments, such as MIDI pianos. This paper also presents comparisons between this proposed tuning system and conventional tuning systems.
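The key-based retuning idea can be sketched numerically (an illustration only, not the authors' algorithm: the just-intonation ratios and the assumed detected key are stand-ins). Once the current key is known, diatonic notes can be assigned frequencies from just ratios relative to the tonic instead of fixed 12-tone equal temperament (12-TET).

```python
# 5-limit just-intonation ratios for the major-scale degrees, indexed by
# semitones above the tonic (an illustrative choice of ratios).
JUST_RATIOS = {0: 1/1, 2: 9/8, 4: 5/4, 5: 4/3, 7: 3/2, 9: 5/3, 11: 15/8}

def just_freq(tonic_hz, semitones_above_tonic):
    """Frequency of a diatonic degree, given the tonic of the detected key."""
    octave, step = divmod(semitones_above_tonic, 12)
    return tonic_hz * JUST_RATIOS[step] * 2 ** octave

def equal_freq(tonic_hz, semitones_above_tonic):
    """Reference 12-TET frequency for the same degree."""
    return tonic_hz * 2 ** (semitones_above_tonic / 12)

tonic = 261.63      # C4, assuming the detected key is C major
for st in (4, 7):   # major third and perfect fifth above the tonic
    print(st, round(just_freq(tonic, st), 2), round(equal_freq(tonic, st), 2))
```

The just major third (ratio 5/4) comes out flatter than its 12-TET counterpart, which is exactly the kind of per-key adjustment an adaptive tuner would apply as the detected key changes.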
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 3/4, 9:25 A.M. TO 11:35 A.M.
Session 2aNSa
Noise and Psychological and Physiological Acoustics: New Frontiers in Hearing Protection I
William J. Murphy, Cochair
Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Institute for Occupational Safety and
Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998
Elliott H. Berger, Cochair
Occupational Health & Environmental Safety Division, 3M, 7911, Zionsville Rd., Indianapolis, IN 46268-1650
Chair’s Introduction—9:25
Invited Papers
9:30
2aNSa1. How long are inexperienced subjects "naïve" for ANSI S12.6? Hilary Gallagher, Richard L. McKinley (Battlespace Acoust.
Branch, Air Force Res. Lab., 2610 Seventh St., Bldg. 441, Wright-Patterson AFB, OH 45433, richard.mckinley.1@us.af.mil), and Melissa A. Theis (ORISE, Air Force Res. Lab., Wright-Patterson AFB, OH)
ANSI S12.6-2008 describes the methods for measuring the real-ear attenuation of hearing protectors. Method A, trained-subject fit,
was intended to describe the capabilities of the devices fitted by thoroughly trained users while Method B, inexperienced-subject fit, was
intended to approximate the protection that can be attained by groups of informed users in workplace hearing conservation programs.
Inexperienced subjects are no longer considered "naïve" according to ANSI S12.6 after 12 or more sessions measuring the attenuation of earplugs or semi-insert devices. However, an inexperienced subject who has received high-quality video instructions may no longer be considered "naïve" or "inexperienced" even after just one session. AFRL conducted an ANSI S12.6-2008 Method B study to determine what effect, if any, high-quality instructions had on the performance of naïve or inexperienced subjects, and the number of trials for which a subject could still be considered naïve or inexperienced. This experiment used ten subjects who completed three ANSI S12.6
measurements using the A-B-A training order and another ten subjects who completed the study using the B-A-B training order (A = high quality video instructions, B = short "earplug pillow-pack" written instructions). The attenuation results and the implications for ANSI S12.6 will be discussed.
9:50
2aNSa2. Evaluation of variability in real-ear attenuation testing using a unique database—35 years of data from a single laboratory. Elliott H. Berger and Ronald W. Kieper (Personal Safety Div., 3M, 7911 Zionsville Rd., Indianapolis, IN 46268, elliott.berger@
mmm.com)
The gold standard in measuring hearing protector attenuation since the late 1950s has been real-ear attenuation at threshold (REAT).
Though well understood and standardized both in the U. S. (ANSI S3.19-1974 and ANSI S12.6-2008) and internationally (ISO 4869-1:1990), and known to provide valid and reliable estimates of protection for the test panel being evaluated, an area that is not clearly
defined is the variability of the test measurements within a given laboratory. The test standards do provide estimates of uncertainty, both
within and between laboratories, based on limited test data and interlaboratory studies, but thus far no published within-laboratory data
over numerous tests and years have been available to provide empirical support for variability statements. This paper provides information from a one-of-a-kind database from a single laboratory that has conducted nearly 2500 studies over a period of 35 years in a single
facility, managed by the same director (the lead author). Repeat test data on a controlled set of samples of a foam earplug, a premolded
earplug, and two different earmuffs, with one of the data sets comprising 25 repeat tests over that 35-year period, will be used to demonstrate the inherent variability of this type of human-subject testing.
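The REAT bookkeeping behind such a variability study can be sketched as follows (a hedged illustration: the threshold values are invented, not data from the 35-year database). Attenuation is the occluded threshold minus the open threshold, averaged over the panel; repeat tests of the same protector then expose within-laboratory spread.

```python
def reat_attenuation(open_thresholds, occluded_thresholds):
    """Mean real-ear attenuation at threshold across a test panel, in dB."""
    diffs = [occ - opn for opn, occ in zip(open_thresholds, occluded_thresholds)]
    return sum(diffs) / len(diffs)

# One invented "repeat test" per row: (open, occluded) panel thresholds in dB
# for the same earplug sample, mimicking repeat tests over the years.
repeats = [
    ([5.0, 7.0, 6.0], [35.0, 39.0, 34.0]),
    ([4.0, 6.0, 8.0], [33.0, 37.0, 36.0]),
    ([6.0, 5.0, 7.0], [36.0, 34.0, 38.0]),
]
means = [reat_attenuation(o, c) for o, c in repeats]
grand = sum(means) / len(means)       # long-term mean attenuation
spread = max(means) - min(means)      # within-laboratory repeat-test spread
print(round(grand, 2), round(spread, 2))
```

The spread of the repeat-test means is the quantity of interest here: it is an empirical handle on the within-laboratory measurement variability the abstract discusses.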
10:10
2aNSa3. Sound field uncertainty budget for real-ear attenuation at threshold measurement per ANSI S12.6 standards. Jérémie Voix and Céline Lapotre (École de technologie supérieure, Université du Québec, 1100 Notre-Dame Ouest, Montreal, QC H3C 1K3, Canada, jeremie.voix@etsmtl.ca)
In many national and international standards, the attenuation of Hearing Protection Devices is rated according to a psychophysical method called Real-Ear Attenuation at Threshold (REAT), which averages, over a group of test subjects, the difference between the open and occluded auditory thresholds. In the ANSI S12.6 standard, these REAT tests are conducted in a diffuse sound field in which sound uniformity and directionality are assessed by two objective microphone measurements. While the ANSI S12.6 standard defines these two criteria, it does not link the microphone measurements to the actual variation of sound pressure level at the eardrum that may originate from natural head movements during testing. This presentation examines this issue with detailed measurements conducted in an ANSI S12.6-compliant audiometric booth using an Artificial Test Fixture (ATF). The sound pressure level variations were recorded for movements of the ATF along the three main spatial axes and in two rotation planes. From these measured variations and hypothetical head-movement scenarios, various sound field uncertainty budgets were computed. These findings will be discussed with a view to including them in the uncertainty budget of a revised version of the ANSI S12.6 standard.
10:30
2aNSa4. Estimating effective noise dose when using hearing protection: Differences between ANSI S12.68 calculations and the
auditory response measured with temporary threshold shifts. Hilary L. Gallagher, Richard L. McKinley (Battlespace Acoust., Air
Force Res. Lab., AFRL/711HPW/RHCB, 2610 Seventh St, Wright-Patterson AFB, OH 45433-7901, hilary.gallagher.1@us.af.mil), Elizabeth A. McKenna (Ball Aerosp. and Technologies, Air Force Res. Lab., Wright-Patterson AFB, OH), and Melissa A. Theis (ORISE,
Air Force Res. Lab., Wright-Patterson AFB, OH)
ANSI S12.6 describes the methods for measuring the real-ear attenuation at threshold of hearing protectors. ANSI S12.68 describes
the methods of estimating the effective A-weighted sound pressure levels when hearing protectors are worn. In theory, an unoccluded-ear noise exposure and an equivalent occluded-ear noise exposure should produce similar auditory responses, as measured by temporary threshold shift (TTS). In a series of studies conducted at the Air Force Research Laboratory, human subjects were exposed to continuous noise with and without hearing protection. Ambient noise levels during the occluded-ear exposures were determined using ANSI S12.6 and ANSI S12.68. These equivalent noise exposures, as determined by the ANSI S12.68 "gold standard" octave-band method, produced significantly different auditory responses as measured with TTS. The methods and results from this study will be presented.
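The octave-band calculation referenced above can be sketched in the spirit of ANSI S12.68 (a hedged illustration: the band levels, attenuation values, and band count are invented, and real S12.68 calculations include additional terms). The protector's attenuation is subtracted from each A-weighted octave-band level, and the protected bands are then summed on an energy basis.

```python
import math

def effective_a_level(band_levels_dba, attenuation_db):
    """Combine protected A-weighted octave-band levels into one overall SPL."""
    total = sum(10 ** ((l - a) / 10)
                for l, a in zip(band_levels_dba, attenuation_db))
    return 10 * math.log10(total)

# Invented example: seven octave bands (125 Hz ... 8 kHz) and an invented
# protector attenuation spectrum.
bands_dba = [85.0, 90.0, 95.0, 97.0, 94.0, 88.0, 80.0]
attenuation = [10.0, 12.0, 15.0, 20.0, 25.0, 30.0, 35.0]
print(round(effective_a_level(bands_dba, attenuation), 1))
```

The abstract's point is that exposures made "equivalent" by this kind of calculation nevertheless produced different TTS responses, so the calculation itself is the object under scrutiny.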
Contributed Papers
10:50
2aNSa5. Fit-testing, training, and timing—How long does it take to fit-test hearing protectors? Taichi Murata (Environ. Health Sci., Univ. of
Michigan, School of Public Health, 1090 Tusculum Ave., Mailstop C-27,
Cincinnati, OH 45226, ygo7@cdc.gov), Christa L. Themann, David C.
Byrne, and William J. Murphy (Hearing Loss Prevention Team, Centers for
Disease Control and Prevention, National Inst. for Occupational Safety and
Health, Cincinnati, OH)
Hearing protector fit-testing is a Best Practice for hearing loss prevention programs and is gaining acceptance among US employers. Fit-testing
quantifies hearing protector attenuation achieved by individual workers and
ensures that workers properly fit their protectors and receive adequate protection. Employers may be reluctant to conduct fit-testing because of
expenses associated with worker time away from the job, personnel to
administer the testing, and acquisition of a fit-test system. During field and
laboratory studies conducted by the National Institute for Occupational
Safety and Health (NIOSH), timing data for the fit-test process with the
NIOSH HPD Well-Fit™ system were analyzed. For workers completely naïve to fit-testing, the tests were completed within 15–20 minutes. Unoccluded test times were less than 4 minutes and occluded tests required less
than 3 minutes. A significant learning effect was seen for the psychoacoustic
method of adjustment used by HPD Well-Fit, explaining the shorter test
times as subjects progressed through the unoccluded and occluded conditions. Most of the workers required about 5 minutes of training time. Test
times and attenuations were tester-dependent, indicating the need to provide
training to staff administering fit-tests in the workplace.
11:05
2aNSa6. Intra-subject fit variability using field microphone-in-real-ear attenuation measurement for foam, pre-molded, and custom molded earplugs. Jérémie Voix (École de technologie supérieure, Université du Québec, 1100 Notre-Dame Ouest, Montreal, QC H3C 1K3, Canada, jeremie.voix@etsmtl.ca), Cécile Le Cocq (École de technologie supérieure, Université du Québec, Montreal, QC, Canada), and Elliott H. Berger (E•A•RCAL Lab, 3M Personal Safety Div., Indianapolis, IN)
In recent years, the arrival of several field attenuation estimation systems (FAES) on the industrial marketplace has enabled better assessment of hearing protection in real-life noise environments. FAES measure the individual attenuation of a given hearing protection device (HPD) as fitted by the end-user, but their predictions are based only on measurements taken over a few minutes and do not account for what may occur later in the field over months or years, as the earplug may be fitted slightly differently over time. This paper will use the field microphone-in-real-ear (F-MIRE) measurement technique to study in the laboratory how consistently a subject can fit and refit an HPD. A new metric, the intra-subject fit variability, will be introduced and quantified for three different earplugs (roll-down foam, premolded, and custom molded), as fitted by two types of test subjects (experienced and inexperienced). This paper will present the experimental process used and the statistical calculations performed to quantify intra-subject fit variability. In addition, data collected from two different laboratories will be contrasted and reviewed as to the impact of trained versus untrained test subjects.
11:20
2aNSa7. A new perceptive method to measure active insertion loss of active noise canceling headsets or hearing protectors by matching the timbre of two audio signals. Remi Poncot and Pierre Guiu (Parrot S.A. France, 15 rue de montreuil, Paris 75011, France, poncotremi@gmail.com)
Attenuation of passive hearing protectors is assessed either by the Real Ear Attenuation at Threshold subjective method or by objective Measurements In the Real Ear. For Active Noise Cancelling headsets, neither method is practical. Alternative subjective methods based on loudness balance and masked hearing threshold techniques have been proposed. However, they led to results that did not match objective measurements at low frequency, diverging in either direction. Additionally, they are relatively long, as the frequency points of interest are measured one after another. This paper presents a novel subjective method based on timbre matching, which has the originality of involving other perceptive mechanisms than the previous ones did. The attenuation performance of ANC headsets is rated by the change in pressure level of eight harmonics when the active noise reduction functionality is switched on. All harmonics are played at once, and their levels are adjusted by the test subject until the same timbre is perceived in both passive and active modes. A test was carried out by a panel of people in diffuse noise field conditions to assess the performance of personal consumer headphones. Early results show that the method is as repeatable as MIRE and leads to comparable results.
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA E, 8:15 A.M. TO 11:20 A.M.
Session 2aNSb
Noise and Structural Acoustics and Vibration: Launch Vehicle Acoustics I
Kent L. Gee, Cochair
Brigham Young University, N243 ESC, Provo, UT 84602
Seiji Tsutsumi, Cochair
JEDI Center, JAXA, 3-1-1 Yoshinodai, Chuuou, Sagamihara 252-5210, Japan
Chair’s Introduction—8:15
Invited Papers
8:20
2aNSb1. Inclusion of source extent and coherence in a finite-impedance ground reflection model with atmospheric turbulence.
Kent L. Gee and Tracianne B. Neilsen (Dept. of Phys. and Astronomy, Brigham Young Univ., N243 ESC, Provo, UT 84602, kentgee@
byu.edu)
Acoustic data collected in static rocket tests are typically influenced by ground reflections. Furthermore, the partial coherence of the
ground interaction due to atmospheric turbulence can play a significant role for larger propagation distances. Because the rocket plume
is an extended radiator whose directionality is the result of significant source correlation, assessment of the impact of ground reflections
in the data must include these effects. In this paper, a finite impedance-ground, single-source interference approach [G. A. Daigle, J.
Acoust. Soc. Am. 65, 45–49 (1979)] that incorporates both amplitude and phase variations due to turbulence is extended to distributions
of correlated monopoles. The theory for obtaining the mean-square pressure from multiple correlated sources in the presence of atmospheric turbulence is described. The effects of source correlation and extent, ground effective flow resistivity, and turbulence parameters
are examined in terms of differences in relative sound pressure level across a realistic parameter space. Finally, the model predictions are compared favorably against data from horizontal firings of large solid rocket motors. [Work supported by NASA MSFC and Blue Ridge Research and Consulting, LLC.]
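The single-source interference pattern that the paper generalizes can be sketched as follows (a heavily simplified illustration: a constant plane-wave-style reflection coefficient stands in for the spherical wave reflection coefficient, turbulence terms, and source distribution of the actual model, and all geometry values are assumptions).

```python
import cmath, math

def relative_spl(f, src_h, rec_h, rng, refl=0.9):
    """Level re free field for a point source above a partially reflecting ground."""
    c = 343.0                                   # speed of sound, m/s
    k = 2 * math.pi * f / c
    r_direct = math.hypot(rng, rec_h - src_h)   # direct path
    r_image = math.hypot(rng, rec_h + src_h)    # path via the ground reflection
    p = (cmath.exp(1j * k * r_direct) / r_direct
         + refl * cmath.exp(1j * k * r_image) / r_image)
    p_free = 1.0 / r_direct                     # free-field magnitude
    return 20 * math.log10(abs(p) / p_free)

for f in (50, 100, 200, 400):
    print(f, round(relative_spl(f, src_h=2.0, rec_h=1.5, rng=100.0), 2))
```

The comb-like lobes and nulls this produces are the ground-reflection contamination that must be separated from the rocket's own directivity; the paper's extension replaces the single source with distributions of correlated monopoles and adds turbulence-induced decorrelation.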
8:40
2aNSb2. Estimation of acoustic loads on a launch vehicle fairing. Mir Md M. Morshed (Dept. of Mech. Eng., Jubail Univ. College,
Jubail Industrial City, Jubail 10074, Saudi Arabia, morshedm@ucj.edu.sa), Colin H. Hansen, and Anthony C. Zander (School of Mech.
Eng., The Univ. of Adelaide, Adelaide, SA, Australia)
During the launch of space vehicles, there is a large external excitation generated by acoustic and structural vibration. This is due to
acoustic pressure fluctuations on the vehicle fairing caused by the engine exhaust gases. This external excitation drives the fairing structure and produces large acoustic pressure fluctuations inside the fairing cavity. The acoustic pressure fluctuations not only produce high
noise levels inside the cavity but also cause damage such as structural fatigue, and damage to, or destruction of, the payload inside the
fairing. This is an important problem because one trend in the aerospace industry is to use composite materials for the construction of launch vehicle fairings, which has resulted in large weight reductions of launch vehicles but has increased the noise transmission into the fairing. This work investigates the nature of the external acoustic pressure distribution on a representative small launch vehicle fairing during liftoff. The acoustic pressure acting on a representative small launch vehicle fairing was estimated from the complex acoustic field
generated by the rocket exhaust during liftoff using a non-unique source allocation technique which considered acoustic sources along
the rocket engine exhaust flow. Numerical and analytical results for the acoustic loads on the fairing agree well.
9:00
2aNSb3. Prediction of acoustic environments from horizontal rocket firings. Clothilde Giacomoni and Janice Houston (NASA/
MSFC, NASA Marshall Space Flight Ctr., Bldg 4203, Cube 3128, Msfc, AL 35812, clothilde.b.giacomoni@nasa.gov)
In recent years, advances in research and engineering have led to more powerful launch vehicles which yield acoustic environments
potentially destructive to the vehicle or surrounding structures. Therefore, it has become increasingly important to be able to predict the
acoustic environments created by these vehicles in order to avoid structural and/or component failure. The current industry standard technique for predicting launch-induced acoustic environments was developed by Eldred in the early 1970s. Recent work has shown Eldred’s
technique to be inaccurate for current state-of-the-art launch vehicles. Due to the high cost of full-scale and even sub-scale rocket experiments, very little rocket noise data is available. Much of the work thought to be applicable to rocket noise has been done with heated jets.
A model to predict the acoustic environment due to a launch vehicle in the far field was created using five sets of horizontally fired rocket data obtained between 2008 and 2012. Through scaling analysis, it is shown that liquid and solid rocket motors exhibit similar spectra at similar amplitudes. The model is accurate for these five data sets to within 5 dB of the measured data.
9:20
2aNSb4. Acoustics research of propulsion systems. Ximing Gao (NASA Marshall Space Flight Ctr., Atlanta, Georgia) and Janice
Houston (NASA Marshall Space Flight Ctr., 650 S. 43rd St., Boulder, Colorado 80305, janice.d.houston@nasa.gov)
The liftoff phase induces high acoustic loading over a broad frequency range for a launch vehicle. These external acoustic environments are used in the prediction of the internal vibration responses of the vehicle and components. Present liftoff vehicle acoustic environment prediction methods utilize stationary data from previously conducted hold-down tests to generate 1/3 octave band Sound
Pressure Level (SPL) spectra. In an effort to update the accuracy and quality of liftoff acoustic loading predictions, non-stationary flight
data from the Ares I-X were processed in PC-Signal in two flight phases: simulated hold-down and liftoff. In conjunction, the Prediction
of Acoustic Vehicle Environments (PAVE) program was developed in MATLAB to allow for efficient predictions of sound pressure levels
(SPLs) as a function of station number along the vehicle using semi-empirical methods. This consisted of generating the Dimensionless
Spectrum Function (DSF) and Dimensionless Source Location (DSL) curves from the Ares I-X flight data. These are then used in the
MATLAB program to generate the 1/3 octave band SPL spectra. Concluding results show major differences in SPLs between the hold-down test data and the processed Ares I-X flight data, making the Ares I-X flight data more practical for future vehicle acoustic environment predictions.
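The 1/3-octave band reduction used throughout liftoff acoustics can be sketched as follows (a hedged illustration, not the PAVE code: the narrowband values are invented, and standard band tables rather than computed edges would be used in practice). Narrowband mean-square pressures are folded into bands whose edges lie a factor of 2^(1/6) on either side of each center frequency.

```python
import math

P_REF = 20e-6  # Pa, reference pressure for SPL in air

def third_octave_spl(narrowband, centers):
    """Sum narrowband (freq, mean-square pressure) pairs into 1/3-octave SPLs."""
    spectra = []
    for fc in centers:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)   # band edges
        msp = sum(p2 for f, p2 in narrowband if lo <= f < hi)
        spl = 10 * math.log10(msp / P_REF ** 2) if msp > 0 else float("-inf")
        spectra.append((fc, spl))
    return spectra

centers = [100.0, 125.0, 160.0]                         # band center frequencies, Hz
narrowband = [(95.0, 4e-4), (105.0, 4e-4), (130.0, 8e-4), (150.0, 2e-4)]
for fc, spl in third_octave_spl(narrowband, centers):
    print(fc, round(spl, 1))
```

Dimensionless spectrum and source-location curves such as the DSF and DSL described above are then fit in this band domain rather than to raw narrowband spectra.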
9:40
2aNSb5. Acoustics associated with liquid rocket propulsion testing. Daniel C. Allgood (NASA SSC, Bldg. 3225, Stennis Space Ctr.,
MS 39529, Daniel.C.Allgood@nasa.gov)
Ground testing of liquid rocket engines is a necessary step towards building reliable launch vehicles. NASA Stennis Space Center
has a long history of performing both developmental and certification testing of liquid propulsion systems. During these test programs,
the propulsion test article, test stand infrastructure and the surrounding community can all be exposed to significant levels of acoustic
energy for extended periods of time. In order to ensure the safety of both personnel and equipment, predictions of these acoustic environments are conducted on a routine basis. This presentation will provide an overview of some recent examples in which acoustic analysis
has been performed. Validation of these predictions will be shown by comparing the predictions to acoustic data acquired during small- and full-scale engine hot-fire testing. Applications of semi-empirical and advanced computational techniques will be reviewed for both
sea-level and altitude test facilities.
10:00–10:20 Break
10:20
2aNSb6. Post-flight acoustic analysis of Epsilon launch vehicle at lift-off. Seiji Tsutsumi (JAXA’s Eng. Digital Innovation Ctr.,
JAXA, 3-1-1 Yoshinodai, Chuuou, Sagamihara, Kanagawa 252-5210, Japan, tsutsumi.seiji@jaxa.jp), Kyoichi Ui (Space Transportation
Mission Directorate, JAXA, Tsukuba, Japan), Tatsuya Ishii (Inst. of Aeronautical Technol., JAXA, Chofu, Japan), Shinichiro Tokudome
(Inst. of Space and Aeronautical Sci., JAXA, Sagamihara, Japan), and Kei Wada (Tokyo Office, Sci. Service Inc., Chuuou-ku, Japan)
Acoustic levels both inside and outside the fairing were measured during the flight of the first Epsilon Launch Vehicle (Epsilon-1). The obtained data show time-varying fluctuations due to the ascent of the vehicle. An equivalent stationary duration for such non-stationary flight data is determined based on the procedure described in NASA HDBK-7005. The launch pad used by the former M-V launcher was modified for the Epsilon based on Computational Fluid Dynamics (CFD) and 1/42-scale model tests. Although the launch pad is compact and no water injection system is installed, a 10 dB reduction in overall sound pressure level (OASPL) compared with the M-V is achieved due to the modification for the Epsilon. The acoustic level inside the fairing satisfies the design requirement, and the acoustic design of the launch pad developed here is shown to be effective. Prediction of the acoustic levels based on CFD and subscale testing is also investigated by comparison with the flight measurements.
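The OASPL bookkeeping behind a statement like "10 dB reduction in overall sound pressure level" can be sketched in a few lines (the band levels below are invented, not Epsilon or M-V data): the overall level is the energy sum of the band levels, so a uniform 10 dB drop in every band lowers the OASPL by exactly 10 dB.

```python
import math

def oaspl(band_spls):
    """Overall sound pressure level from band SPLs, in dB (energy summation)."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in band_spls))

before = [120.0, 124.0, 126.0, 123.0, 118.0]   # hypothetical pre-modification bands
after = [l - 10.0 for l in before]             # hypothetical modified-pad bands
print(round(oaspl(before), 1), round(oaspl(after), 1))
```

In practice the reduction is rarely uniform across bands, so the OASPL change is a weighted summary dominated by the loudest bands.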
10:40
2aNSb7. Jet noise-based diagnosis of combustion instability in solid rocket motors. Hunki Lee, Taeyoung Park, Won-Suk Ohm
(Yonsei Univ., 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, South Korea, ohm@yonsei.ac.kr), and Dohyung Lee (Agency for Defense
Development, Daejeon, South Korea)
Diagnosis of combustion instability in a solid rocket motor usually involves in-situ measurements of pressure in the combustor, a
harsh environment that poses challenges in instrumentation and measurement. This paper explores the possibility of remote diagnosis of
combustion instability based on far-field measurements of rocket jet noise. Because of the large pressure oscillations associated with
combustion instability, the wave process in the combustor has many characteristic features of nonlinear acoustics such as shocks and
limit cycles. Thus the remote detection and characterization of instability can be performed by listening for the tell-tale signs of the combustor nonlinear acoustics, buried in the jet noise. Of particular interest is the choice of nonlinear acoustic measure (e.g., among skewness, bispectra, and Howell-Morfey Q/S) that best brings out the acoustic signature of instability from the jet noise data. Efficacy of
each measure is judged against the static test data of two tactical motors (one stable, the other unstable).
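One of the candidate measures named above, skewness, can be illustrated on synthetic waveforms (a hedged sketch: the signals are toy stand-ins, not motor data). Shock-like, sawtooth-shaped waveforms concentrate large negative jumps in the time derivative of the pressure, driving its skewness strongly away from zero, while a smooth waveform keeps it near zero.

```python
import math

def skewness(x):
    """Third standardized moment of a sequence (population form)."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    if var == 0:
        return 0.0
    return (sum((v - m) ** 3 for v in x) / n) / var ** 1.5

t = [i / 1000 for i in range(1000)]
smooth = [math.sin(2 * math.pi * 5 * ti) for ti in t]   # stable-like signal
sawtooth = [2 * ((5 * ti) % 1.0) - 1 for ti in t]       # shock-like signal

# First differences stand in for the pressure time derivative.
dsmooth = [b - a for a, b in zip(smooth, smooth[1:])]
dsaw = [b - a for a, b in zip(sawtooth, sawtooth[1:])]
print(round(skewness(dsmooth), 3), round(skewness(dsaw), 3))
```

A far-field detector in the spirit of the abstract would track such a statistic in the jet noise and flag excursions as possible signatures of combustion instability.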
11:00
2aNSb8. Some recent experimental results concerning turbulent coanda wall jets. Caroline P. Lubert (Mathematics & Statistics,
James Madison Univ., 301 Dixie Ave., Harrisonburg, VA 22801, lubertcp@jmu.edu)
The Coanda effect is the tendency of a stream of fluid to stay attached to a convex surface, rather than follow a straight line in its
original direction. As a result, in such jets mixing takes place between the jet and the ambient air as soon as the jet issues from its exit
nozzle, causing air to be entrained. This air-jet mixture adheres to the nearby surface. Whilst devices employing the Coanda effect usually offer substantial flow deflection, and enhanced turbulence levels and entrainment compared with conventional jet flows, these prospective advantages are generally accompanied by significant disadvantages including a considerable increase in associated noise levels
and jet breakaway. Generally, the reasons for these issues are not well understood and thus the full potential offered by the Coanda effect
is yet to be realized. The development of a model for predicting the noise emitted by three-dimensional flows over Coanda surfaces
would suggest ways in which the noise could be reduced or attenuated. In this paper, the results of recent experiments on a 3-D turbulent
Coanda wall jet are presented. They include the relationship of SPL, shock cell distribution and breakaway to various flow parameters,
and predictions of the jet boundary.
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA C/D, 8:30 A.M. TO 11:30 A.M.
Session 2aPA
Physical Acoustics: Outdoor Sound Propagation
Kai Ming Li, Cochair
Mechanical Engineering, Purdue University, 140 South Martin Jischke, West Lafayette, IN 47907-2031
Shahram Taherzadeh, Cochair
Engineering & Innovation, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom
Contributed Papers
8:30
9:00
2aPA1. On the inversion of sound fields above a locally reacting ground
for direct impedance deduction. Kai Ming Li and Bao N. Tong (Mech.
Eng., Purdue Univ., 177 South Russel St., West Lafayette, IN 47907-2099,
mmkmli@purdue.edu)
2aPA3. Wavelet-like models for random media in wave propagation
simulations. D. Keith Wilson (Cold Regions Res. and Eng. Lab., U.S. Army
Engineer Res. and Dev. Ctr., 72 Lyme Rd., Hanover, NH 03755-1290,
D.Keith.Wilson@usace.army.mil), Chris L. Pettit (Aerosp. Eng. Dept., U.S.
Naval Acad., Annapolis, MD), and Sergey N. Vecherin (Cold Regions Res.
and Eng. Lab., U.S. Army Engineer Res. and Dev. Ctr., Hanover, NH)
A complex root-finding algorithm is typically used to deduce the acoustic impedance of a locally reacting ground by inverting the measured sound
fields. However, there is an issue of uniquely determining the impedance
from a measurement of an acoustic transfer function. The boundary loss factor F, which is a complex function, is the source of this ambiguity. It is associated with the spherical wave reflection coefficient Q for the reflected
sound field. These two functions are dependent on a complex parameter
known as the numerical distance w. The inversion of F leading to the multiple solutions of w can be identified as the root cause of the problem. To
resolve this ambiguity, the zeroes and saddle points of F are determined for
a given source/receiver geometry and a known acoustic impedance. They
are used to establish the basins containing all plausible solutions. The topography of Q is further examined in the complex w-plane. A method for identifying the family of solutions and selecting the physically meaningful
branch is proposed. Validation is provided by using numerical simulations
as well as the experimentally data. The error and uncertainties in the
deduced impedance are quantified.
8:45
2aPA2. An improved method for direct impedance deduction of a
locally reacting ground. Bao N. Tong and Kai Ming Li (Mech. Eng., Purdue Univ., 177 South Russel St., West Lafayette, IN 47907-2099, bntong@
purdue.edu)
An accurate deduction of the acoustic impedance of a locally reacting ground depends on a precise measurement of sound fields at short ranges. However, measurement uncertainties exist in both the magnitude and the phase of the acoustic transfer function. With the standard method, accurate determination of the acoustic impedance can be difficult because the measured phases become unreliable in many outdoor conditions. An improved technique, which relies only on the magnitude information, has been developed. A minimum of two measurements at two source/receiver configurations is needed to determine the acoustic impedance. Even in the absence of measurement uncertainties, a more careful analysis suggests that a third independent measurement is often needed to give an accurate solution. Because experimental errors are inevitably introduced, a selection of optimal geometry becomes necessary to reduce the sensitivity of the deduced impedance to small variations in the data. A graphical method is provided which offers greater insight into the deduction of impedance, and a downhill simplex algorithm has been developed to automate the procedure. Physical constraints are applied to limit the search region and to eliminate rogue solutions. Several case studies using indoor and outdoor data are presented to validate the proposed technique.
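The magnitude-only deduction described above can be sketched numerically. In the toy example below (not the authors' implementation), the plane-wave reflection coefficient of a locally reacting surface stands in for the full spherical-wave model, three synthetic geometries play the role of the measurements, and a downhill simplex (Nelder-Mead) search with simple physical constraints recovers a hypothetical normalized impedance:

```python
import numpy as np
from scipy.optimize import minimize

def refl_mag(z, cos_theta):
    """|R| for a locally reacting surface with normalized impedance z,
    using the plane-wave reflection coefficient as a simple stand-in."""
    r = (z * cos_theta - 1.0) / (z * cos_theta + 1.0)
    return np.abs(r)

# Synthetic "measurements" at three source/receiver geometries (three
# grazing angles); z_true plays the role of the unknown ground impedance.
z_true = 2.5 + 1.8j
cos_t = np.cos(np.deg2rad([30.0, 45.0, 60.0]))
meas = refl_mag(z_true, cos_t)

def cost(x):
    zr, zi = x
    # Physical constraints: passive surface (Re z >= 0) and one sign of
    # the reactance, which removes the z -> conj(z) magnitude ambiguity.
    if zr < 0.0 or zi < 0.0:
        return 1e6
    return np.sum((refl_mag(zr + 1j * zi, cos_t) - meas) ** 2)

res = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
```

The constraint on the sign of the reactance is essential here: magnitude-only data cannot distinguish an impedance from its complex conjugate, echoing the role of the physical constraints mentioned in the abstract.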
2138
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Simulations of wave propagation and scattering in random media are
often performed by synthesizing the media from Fourier modes, in which
the phases are randomized and the amplitudes tailored to provide a prescribed spectrum. Although this approach is computationally efficient, it
cannot capture organization and intermittency in random media, which
impacts higher-order statistical properties. As an alternative, we formulate a
cascade model involving distributions of wavelet-like objects (quasi-wavelets or QWs). The QW model is constructed in a self-similar fashion, with
the sizes, amplitudes, and numbers of offspring objects occurring at constant ratios between generations. The objects are randomly distributed in
space according to a Poisson process. The QW model is formulated in static
(time-invariant), steady-state, and non-steady versions. Many diverse natural and man-made environments can be synthesized, including turbulence,
porous media, rock distributions, urban buildings, and vegetation. The synthesized media can then be used in simulations of wave propagation and
scattering.
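A minimal one-dimensional sketch of such a quasi-wavelet cascade (illustrative parameters, not the authors' model) can be built by superposing wavelet-like parents whose size, amplitude, and expected number change by fixed ratios between generations, with positions drawn from a Poisson process; the amplitude ratio 2^(-1/3) is one Kolmogorov-like choice for a size ratio of 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)

def qw_field(x, n_gen=6, size0=1.0, amp0=1.0,
             size_ratio=0.5, amp_ratio=2.0 ** (-1.0 / 3.0), rate0=2.0):
    """Synthesize a 1-D random field from self-similar quasi-wavelets:
    each generation halves the eddy size, scales the amplitude by a
    fixed ratio, and doubles the mean number of eddies, with positions
    drawn from a (homogeneous) Poisson process."""
    field = np.zeros_like(x)
    for g in range(n_gen):
        size = size0 * size_ratio ** g
        amp = amp0 * amp_ratio ** g
        n = rng.poisson(rate0 * 2 ** g)          # offspring number ratio = 2
        centers = rng.uniform(x[0], x[-1], n)    # Poisson => uniform positions
        for c in centers:
            u = (x - c) / size
            field += amp * u * np.exp(-u ** 2)   # wavelet-like parent function
    return field

x = np.linspace(0.0, 10.0, 2000)
f = qw_field(x)
```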
9:15
2aPA4. Space-time correlation of acoustic signals in a turbulent atmosphere. Vladimir E. Ostashev, D. Keith Wilson (U.S. Army Engineer Res.
and Development Ctr., 72 Lyme Rd., Hanover, NH 03755, vladimir.ostashev@colorado.edu), Sandra Collier (U.S. Army Res. Lab., Adelphi, MD),
and Sylvain Cheinet (French-German Res. Inst. of Saint-Louis, Saint-Louis,
France)
Scattering by atmospheric turbulence diminishes the correlation, in both
space and time, of acoustic signals. This decorrelation subsequently impacts
beamforming, averaging, and other techniques for enhancing signal-to-noise
ratio. Space-time correlation can be measured directly with a phased microphone array. In this paper, a general theory for the space-time correlation
function is presented. The atmospheric turbulence is modeled using the von
Karman spatial spectra of temperature and wind velocity fluctuations and
locally frozen turbulence (i.e., Taylor's frozen turbulence hypothesis
with convection velocity fluctuations). The theory developed is employed to
calculate and analyze the spatial and temporal correlation of acoustic signals
for typical regimes of an unstable atmospheric boundary layer, such as
mostly cloudy or sunny conditions with light, moderate, or strong wind. The
results obtained are compared with available experimental data.
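For reference, the von Kármán energy spectrum used in such turbulence models rises as k^4 at large scales and rolls off as k^(-5/3) in the inertial range beyond the outer-scale wavenumber; a quick numerical check (the outer-scale wavenumber and overall constant are arbitrary here):

```python
import numpy as np

def von_karman_E(k, k0=1.0, c=1.0):
    """von Karman energy spectrum: ~k^4 at large scales, ~k^(-5/3)
    in the inertial range beyond the outer-scale wavenumber k0."""
    return c * k ** 4 / (k ** 2 + k0 ** 2) ** (17.0 / 6.0)

k = np.logspace(-2, 3, 5001)
E = von_karman_E(k)

# Log-log slope well inside the inertial range should approach -5/3.
slope = (np.log(von_karman_E(500.0)) - np.log(von_karman_E(100.0))) \
        / (np.log(500.0) - np.log(100.0))
```

Setting dE/dk = 0 shows the spectral peak sits at k = k0*sqrt(12/5), which the grid evaluation reproduces.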
168th Meeting: Acoustical Society of America
2138
9:30
2aPA5. Characterization of wind noise by the boundary layer meteorology. Gregory W. Lyons and Nathan E. Murray (National Ctr. for Physical Acoust., The Univ. of MS, 1 Coliseum Dr., University, MS 38677, gwlyons@go.olemiss.edu)
The fluctuations in pressure generated by turbulent motions of the atmospheric boundary layer are a principal noise source in outdoor acoustic measurements. The mechanics of wind noise involve not only stagnation pressure fluctuations at the sensor, but also shearing and self-interaction of turbulence throughout the flow, particularly at low frequencies. The contributions of these mechanisms can be described by the boundary-layer meteorology. An experiment was conducted at the National Wind Institute's 200-meter meteorological tower, located outside Lubbock, Texas, in the Llano Estacado region. For two days, a 44-element, 400-meter-diameter array of unscreened NCPA-UMX infrasound sensors recorded wind noise continuously, while the tower and a Doppler SODAR measured vertical profiles of the boundary layer. Analysis of the fluctuating pressure with the meteorological data shows that the statistical structure of wind noise depends on both the mean velocity distribution and buoyant stability. The root-mean-square pressure exhibits distinct scalings for stable and unstable stratification. Normalization of the pressure power spectral density depends on the outer scales. In stable conditions, the kurtosis of the wind noise increases with Reynolds number. Measures of noise intermittency are explored with respect to the meteorology.
9:45
2aPA6. Statistical moments for wideband acoustic signal propagation through a turbulent atmosphere. Jericho E. Cain (US Army Res. Lab., 1200 East West Hwy, Apt. 422, Silver Spring, MD 20910, jericho.cain@gmail.com), Sandra L. Collier (US Army Res. Lab., Adelphi, MD), Vladimir E. Ostashev, and David K. Wilson (U.S. Army Engineer Res. and Development Ctr., Hanover, NH)
Developing methods for managing noise propagation, localizing and classifying sounds, and designing novel acoustic remote sensing of the atmosphere requires a detailed understanding of the impact that atmospheric turbulence has on acoustic propagation. In particular, knowledge of the statistical moments of the sound field is needed. The first statistical moment corresponds to the coherent part of the sound field and is needed in beamforming applications. The second moment enables analysis of the mean intensity of a pulse in a turbulent atmosphere. Numerical solutions to a set of recently derived closed-form equations for the first- and second-order statistical moments of a wideband acoustic signal propagating in a turbulent atmosphere with spatial fluctuations in the wind and temperature fields are presented for typical regimes of the atmospheric boundary layer.
10:00–10:15 Break
10:15
2aPA7. Analysis of wind noise reduction by semi-porous fabric domes. Sandra L. Collier (U.S. Army Res. Lab., 2800 Powder Mill Rd., RDRL-CIE-S, Adelphi, MD 20783-1197, sandra.l.collier4.civ@mail.mil), Richard Raspet (National Ctr. for Physical Acoust., Univ. of MS, University, MS), John M. Noble, W. C. Kirkpatrick Alberts (U.S. Army Res. Lab., Adelphi, MD), and Jeremy Webster (National Ctr. for Physical Acoust., Univ. of MS, University, MS)
For low-frequency acoustics, the wind noise contributions due to turbulence may be divided into turbulence–sensor, turbulence–turbulence, and turbulence–mean shear interactions. Here, we investigate the use of a semi-porous fabric dome for wind noise reduction in the infrasound region. Comparisons are made between experimental data and theoretical predictions from a wind noise model [Raspet, Webster, and Naderyan, J. Acoust. Soc. Am. 135, 2381 (2014)] that accounts for contributions from the three turbulence interactions.
10:30
2aPA8. An investigation of wind-induced and acoustic-induced ground motions. Vahid Naderyan, Craig J. Hickey, and Richard Raspet (National Ctr. for Physical Acoust. and Dept. of Phys. and Astronomy, Univ. of MS, NCPA, 1 Coliseum Dr., University, MS 38677, vnaderya@go.olemiss.edu)
Low-frequency wind noise is a problem in seismic surveys because it reduces seismic image clarity. In order to find a solution to this problem, we investigated the driving pressure perturbations on the ground surface associated with wind-induced ground motions. The ground surface pressure and shear stress at the air–ground interface were used to predict the displacement amplitudes of the horizontal and vertical ground motions as a function of depth. The measurements were acquired at a site having flat terrain and low seismic ambient noise under windy conditions. Multiple triaxial geophones were deployed at different depths to study the induced ground velocity as a function of depth. The measurements show that the wind excites the horizontal components more than the vertical component on the above-ground geophone due to direct interaction with the geophone. For geophones buried flush with the ground surface and at various depths below the ground, the vertical components of the velocity are greater than the horizontal components. There is a very small decrease in velocity with depth. The results are compared to the acoustic-ground coupling case. [This work is supported by USDA under award 58-6408-1-608.]
10:45
2aPA9. Using an electro-magnetic analog to study acoustic scattering in
a forest. Michelle E. Swearingen (US Army ERDC, Construction Eng. Res.
Lab., P.O. Box 9005, Champaign, IL 61826, michelle.e.swearingen@usace.
army.mil) and Donald G. Albert (US Army ERDC, Hanover, NH)
Using scale models can be a convenient method for investigating multiple scattering in complex environments, such as a forest. However, the
increased attenuation with increasing frequency limits the propagation distances available for such models. An electromagnetic analog is an alternative way to study multiple scattering from rigid objects, such as tree trunks.
This analog does not suffer from the intrinsic attenuation and allows for
investigation of a larger effective area. In this presentation, the results from
a 1:50 scale electromagnetic analog are compared to full-scale data collected in a forest. Further tests investigate propagation along multiple paths
through a random configuration of aluminum cylinders representing trees.
Special considerations and anticipated range of applicability of this analog
method are discussed.
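The scaling behind such an analog is simple similitude: scattering from rigid cylinders is governed by the size parameter ka (and by geometry measured in wavelengths), so a 1:50 model must be probed at wavelengths 50 times shorter than full scale. A sketch of the frequency mapping, assuming nominal wave speeds in air:

```python
# Similitude for a 1:50 electromagnetic analog of acoustic scattering:
# rigid-cylinder scattering depends on ka = 2*pi*a/lambda, so the model
# must be probed at a wavelength 50x shorter than the full-scale one.
C_SOUND = 343.0      # m/s, speed of sound in air (full scale)
C_LIGHT = 2.998e8    # m/s, EM wave speed in air (scale model)
SCALE = 50.0         # 1:50 geometric scale

def model_frequency(f_acoustic):
    """EM frequency giving the model the same ka as the full-scale
    acoustic problem."""
    lam_full = C_SOUND / f_acoustic
    lam_model = lam_full / SCALE
    return C_LIGHT / lam_model

f_em = model_frequency(500.0)   # 500 Hz full scale -> ~21.9 GHz in the model
```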
11:00
2aPA10. Modeling of sound scattering by an obstacle located below a
hardbacked rigid porous medium. Yiming Wang and Kai Ming Li (Mech.
Eng., Purdue Univ., 177 South Russell St., West Lafayette, IN 47907-2031,
mmkmli@purdue.edu)
The boundary integral equation (BIE) formulation takes advantage of
the well-known Green’s function for the sound fields above a plane interface. It can then lead to a simplified numerical solution known as the boundary element method (BEM) that enables an accurate computation of sound
fields above the plane interface in the presence of obstacles of complex shape. The current study is motivated by the need to explore the acoustical
characteristics of a layer of sound absorption materials embedded with
equally spaced rigid inserts. In principle, this problem may be solved by a
standard finite element program but it is found more efficient to use the BIE
approach by discretizing only the boundary surfaces of the obstacles within
the medium. The formulation is facilitated by using accurate Green’s functions for computing the sound fields above and within a layer of rigid porous
medium. This paper reports a preliminary study to model the scattering of
sound by an obstacle placed within the layered rigid porous medium. The
two-dimensional Green’s functions will be derived and used for the development of a BEM model for computing the sound field above and within the
rigid porous medium due to the presence of an arbitrarily shaped obstacle.
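As a point of reference for that derivation, the free-field two-dimensional Green's function that the layered-medium functions extend is (i/4)H0^(1)(kr); a quick far-field sanity check against its known asymptotic amplitude:

```python
import numpy as np
from scipy.special import hankel1

def green2d(k, r):
    """Free-field 2-D Helmholtz Green's function, (i/4) H0^(1)(kr).
    The layered-medium Green's functions in the abstract add image and
    branch-integral terms to this free-field kernel."""
    return 0.25j * hankel1(0, k * r)

k, r = 2.0 * np.pi, 100.0        # kr ~ 628, comfortably in the far field
g = green2d(k, r)
farfield = np.sqrt(1.0 / (8.0 * np.pi * k * r))   # |G| as kr -> infinity
```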
11:15
2aPA11. Analysis of the Green's function for a duct and cavity using geometric image sources. Ambika Bhatta, Charles Thompson, and Kavitha Chandra (Univ. of Massachusetts Lowell, 1 University Ave., Lowell, MA 01854, ambika_bhatta@student.uml.edu)
The presented work investigates the solution for the pressure response of a point source in a two-dimensional waveguide. The methodology is based on the one-dimensional analytical and numerical solution of a finite channel response between two semi-infinite planes. The branch integrals representing the reflection coefficient are implemented to evaluate the pressure amplitude of the boundary effect. The approach addresses the validity of applying geometric image sources to finite boundaries. Consequently, the 3D extension of the problem to a closed cavity is also investigated.
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 1/2, 8:00 A.M. TO 10:00 A.M.
Session 2aSAa
Structural Acoustics and Vibration and Noise: Computational Methods in Structural Acoustics and
Vibration
Robert M. Koch, Cochair
Chief Technology Office, Naval Undersea Warfare Center, Code 1176 Howell Street, Bldg. 1346/4, Code 01CTO,
Newport, RI 02841-1708
Matthew Kamrath, Cochair
Acoustics, Pennsylvania State University, 717 Shady Ridge Road, Hutchinson, MN 55350
Invited Papers
8:00
2aSAa1. A radical technology for modeling target scattering. David Burnett (Naval Surface Warfare Ctr., Code CD10, 110 Vernon
Ave., Panama City, FL 32407, david.s.burnett@navy.mil)
NSWC PCD has developed a high-fidelity 3-D finite-element (FE) modeling system that computes acoustic color templates (target
strength vs. frequency and aspect angle) of single or multiple realistic objects (e.g., target + clutter) in littoral environments. High-fidelity means that 3-D physics is used in all solids and fluids, including even thin shells, so that solutions include not only all propagating
waves but also all evanescent waves, the latter critically affecting the former. Although novel modeling techniques have accelerated the
code by several orders of magnitude, NSWC PCD is now implementing a radically different FE technology, e.g., one thin-shell element
spanning 90° of a cylindrical shell. It preserves all the 3-D physics but promises to accelerate the code by another two to three orders of
magnitude. The talk will briefly review the existing system and then describe the new technology.
8:20
2aSAa2. Faster frequency sweep methods for structural vibration and acoustics analyses. Kuangcheng Wu (Ship Survivability,
Newport News ShipBldg., 202 Schembri Dr., Yorktown, VA 23693, kc.wu@hii-nns.com) and Vincent Nguyen (Ship Survivability,
Newport News ShipBldg., Newport News, VA)
The design of large, complex structures typically requires knowledge of the mode shape and forced response near major resonances
to ensure deflection, vibration, and the resulting stress are kept below acceptable levels, and to guide design changes where necessary.
Finite element analysis (FEA) is commonly used to predict Frequency Response Functions (FRF) of the structure. However, as the complexity and detail of the structure grows, the system matrices, and the computational resources needed to solve them, get large. Furthermore, the need to use small frequency steps to accurately capture the resonant response peaks can drive up the number of FRF
calculations required. Thus, the FRF calculation can be computationally expensive for large structural systems. Several approaches have
been proposed that can significantly accelerate the overall process by approximating the frequency dependent response. Approximation
approaches based on Krylov Galerkin Projection (KGP) and Padé approximants calculate the forced response at only a few frequencies, then use the
response and its derivatives to reconstruct the FRF in-between the selected direct calculation points. This paper first validates the two
approaches with analytic solutions for a simply supported plate, and then benchmarks several numerical examples to demonstrate the accuracy and efficiency of the new approximate methods.
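The idea of reconstructing an FRF from the response and its frequency derivatives at a few points can be illustrated on a single-degree-of-freedom system (a toy stand-in for the large FE systems discussed, using a second-order Taylor series rather than the full KGP/Padé machinery; all parameter values are illustrative):

```python
import numpy as np

m, c, k = 1.0, 0.5, 100.0            # mass, damping, stiffness (illustrative)

def exact_frf(w):
    """Exact receptance of the SDOF oscillator."""
    return 1.0 / (k - m * w ** 2 + 1j * c * w)

def taylor_frf(w0, dw):
    """Second-order reconstruction of H(w0 + dw) from derivatives at w0."""
    D = k - m * w0 ** 2 + 1j * c * w0        # dynamic stiffness
    Dp = -2.0 * m * w0 + 1j * c              # dD/dw
    H = 1.0 / D
    Hp = -Dp * H ** 2                        # dH/dw
    Hpp = 2.0 * m * H ** 2 + 2.0 * Dp ** 2 * H ** 3   # d2H/dw2
    return H + Hp * dw + 0.5 * Hpp * dw ** 2

w0, dw = 8.0, 0.3
approx = taylor_frf(w0, dw)
exact = exact_frf(w0 + dw)
rel_err = abs(approx - exact) / abs(exact)
```

Even this crude expansion beats simply holding the response constant between calculation points, which is the effect the KGP/Padé methods exploit at scale.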
8:40
2aSAa3. Waves in continua with extreme microstructures. Paul E. Barbone (Mech. Eng., Boston Univ., 110 Cummington St., Boston, MA 02215, barbone@bu.edu)
The effective properties of a material may be generally defined as those that describe the limiting case where the wavelength of propagation is infinite compared to the characteristic scale of the microstructure. Generally, the limit of vanishingly small microstructural
scale in a heterogeneous elastic medium results in an effective homogeneous medium that is again elastic. We show that for materials
with extreme microstructures, the limiting effective medium can be quite exotic, including polar materials or multiphase continua.
These continuum models naturally give rise to unusual effective properties including negative or anisotropic mass. Though unusual,
these properties have straightforward interpretations in terms of the laws of classical mechanics. Finally, we discuss wave propagation
in these structures and find dispersion curves with multiple branches.
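The classic one-dimensional diatomic chain is the simplest microstructured medium whose dispersion relation has multiple branches; a short numerical sketch with illustrative masses and stiffness:

```python
import numpy as np

m1, m2, K, a = 1.0, 2.0, 1.0, 1.0   # masses, spring stiffness, lattice constant

def branches(q):
    """Acoustic (-) and optical (+) branch frequencies of the 1-D
    diatomic mass-spring chain at wavenumber q."""
    s = K * (1.0 / m1 + 1.0 / m2)
    disc = np.sqrt(s ** 2 - 4.0 * K ** 2 * np.sin(q * a / 2.0) ** 2 / (m1 * m2))
    return np.sqrt(s - disc), np.sqrt(s + disc)

q = np.linspace(0.0, np.pi / a, 200)   # first Brillouin zone
w_ac, w_op = branches(q)
```

The acoustic branch passes through the origin (the effective-medium limit), while the optical branch starts at sqrt(K(1/m1 + 1/m2) * 2) and a band gap opens between them at the zone edge.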
Contributed Papers
9:00
2aSAa4. A comparison of perfectly matched layers and infinite elements for exterior Helmholtz problems. Gregory Bunting (Computational Solid Mech. and Structural Dynam., Sandia National Labs., 709 Palomas Dr. NE, Albuquerque, NM 87108, bunting.gregory@gmail.com), Arun Prakash (Lyles School of Civil Eng., Purdue Univ., West Lafayette, IN), and Timothy Walsh (Computational Solid Mech. and Structural Dynam., Sandia National Labs., West Lafayette, IN)
Perfectly matched layers and infinite elements are commonly used for finite element simulations of acoustic waves on unbounded domains. Both involve a volumetric discretization around the periphery of an acoustic mesh, which itself surrounds a structure or domain of interest. Infinite elements have been a popular choice for these problems since the 1970s. Perfectly matched layers are a more recent technology that is gaining popularity due to ease of implementation and effectiveness as an absorbing boundary condition. In this study, we present massively parallel implementations of these two techniques, and compare their performance on a set of representative structural–acoustic problems on exterior domains. We examine the conditioning of the linear systems generated by the two techniques by examining the number of Krylov iterations needed for convergence to a fixed solver tolerance. We also examine the effects of PML parameters, exterior boundary conditions, and quadrature rules on the accuracy of the solution. [Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.]
9:15
2aSAa5. Improved model for coupled structural–acoustic modes of tires. Rui Cao, Nicholas Sakamoto, and J. S. Bolton (Ray W. Herrick Labs., School of Mech. Eng., Purdue Univ., 177 S. Russell St., West Lafayette, IN 47907-2099, cao101@purdue.edu)
Experimental measurements of tire tread band vibration have provided direct evidence that higher order structural-acoustic modes exist in tires, not just the well-known fundamental mode. These modes display both circumferential and radial pressure variations. The theory governing these modes has thus been investigated. A brief recapitulation of the previously presented coupled tire-acoustical model based on a tensioned membrane approach will be given, and then an improved tire-acoustical model with a ring-like shape will be introduced. In the latter model, the effects of flexural and circumferential stiffness are considered, as is the role of curvature in coupling the various wave types. This improved model accounts for propagating in-plane vibration in addition to the essentially structure-borne flexural wave and the essentially airborne longitudinal wave accounted for in the previous model. The longitudinal structure-borne wave "cuts on" at the tire's circumferential ring frequency. Explicit solutions for the structural and acoustical modes will be given in the form of dispersion relations. The latter results will be compared with measured dispersion relations, and the features associated primarily with the higher order acoustic modes will be highlighted. Finally, the effect of tire rotational speed on the natural frequencies of these various mode types will also be discussed.
9:30
2aSAa6. Simulating sound absorption in porous material with the lattice Boltzmann method. Andrey R. da Silva (Ctr. for Mobility Eng., Federal Univ. of Santa Catarina, Rua Monsenhor Topp, 173, Florianópolis, Santa Catarina 88020-500, Brazil, andrey.rs@ufsc.br), Paulo Mareze, and Eric Brandão (Structure and Civil Eng., Federal Univ. of Santa Maria, Santa Maria, RS, Brazil)
The development of porous materials that are able to absorb sound in specific frequency bands has been an important challenge in acoustics research. Thus, the development of new numerical techniques that allow one to correctly capture the mechanisms of sound absorption can be seen as an important step toward developing new materials. In this work, the lattice Boltzmann method is used to predict the sound absorption coefficient in porous material with a straight porous structure. Six configurations of porous material were investigated, involving different thickness and porosity values. Very good agreement was found between the numerical results and those obtained by the analytical model provided in the literature. The results suggest that the lattice Boltzmann model can be a powerful alternative for simulating viscous sound absorption, particularly due to its reduced computational effort when compared to traditional numerical methods.
9:45
2aSAa7. Energy flow models for the out-of-plane vibration of horizontally curved beams. Hyun-Gwon Kil (Dept. of Mech. Eng., Univ. of Suwon, 17, Wauan-gil, Bongdam-eup, Hwaseong-si, Gyeonggi-do 445-743, South Korea, hgkil@suwon.ac.kr), Seonghoon Seo (Noise & Vib. CAE Team, Hyundai Motor Co., Hwaseong-si, Gyeonggi-do, South Korea), Suk-Yoon Hong (Dept. of Naval Architecture and Ocean Eng., Seoul National Univ., Seoul, South Korea), and Chan Lee (Dept. of Mech. Eng., Univ. of Suwon, Hwaseong-si, Gyeonggi-do, South Korea)
The purpose of this work is to develop energy flow models to predict the out-of-plane vibration of horizontally curved beams in the mid- and high-frequency range. The dispersion relations of waves are approximately separated into relations for the propagation of the flexural waves and torsional waves generating the out-of-plane vibration of the horizontally curved beams, under the Kirchhoff-Love hypotheses. The energy flow models are based on the energy governing equations for the flexural waves and the torsional waves propagating in the curved beams. Those equations are derived to predict the time- and locally space-averaged energy density and intensity in the curved beams. Total values for the energy density and the intensity, as well as the contributions of each wave type to those values, are predicted. A verification of the energy flow models for the out-of-plane vibration of the horizontally curved beams is performed by comparing the energy flow solutions for the energy density and the intensity with analytical solutions evaluated using the wave propagation approach. The comparison shows that the energy flow models can be effectively used to predict the out-of-plane vibration of the horizontally curved beams in the mid- and high-frequency range.
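For a single wave type, the steady-state one-dimensional form of the energy governing equation underlying such energy flow models reduces to e'' = (ηω/c_g)² e, with exponentially decaying solutions; a quick numerical check with illustrative values (not the authors' beam parameters):

```python
import numpy as np

# Steady-state 1-D energy governing equation used in energy flow analysis:
# (cg^2/(eta*omega)) e'' = eta*omega * e, whose decaying solution is
# e(x) = e0 * exp(-eta*omega*x/cg).  Parameter values are illustrative.
eta, omega, cg = 0.01, 2.0 * np.pi * 1000.0, 300.0  # loss factor, rad/s, m/s
alpha = eta * omega / cg                            # spatial decay rate, 1/m

x = np.linspace(0.0, 5.0, 501)
e = np.exp(-alpha * x)                              # normalized energy density

# Verify the ODE residual e'' - alpha^2 * e ~ 0 with central differences.
h = x[1] - x[0]
res = (e[2:] - 2.0 * e[1:-1] + e[:-2]) / h ** 2 - alpha ** 2 * e[1:-1]
```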
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 1/2, 10:30 A.M. TO 11:40 A.M.
Session 2aSAb
Structural Acoustics and Vibration and Noise: Vehicle Interior Noise
Sean F. Wu, Chair
Mechanical Engineering, Wayne State University, 5050 Anthony Wayne Drive, College of Engineering Building, Rm. 2133,
Detroit, MI 48202
Chair’s Introduction—10:30
Invited Papers
10:35
2aSAb1. Structural–acoustic optimization of a pressurized, ribbed aircraft panel. Micah R. Shepherd and Stephen A. Hambric
(Appl. Res. Lab, Penn State Univ., PO Box 30, Mailstop 3220B, State College, PA 16801, mrs30@psu.edu)
A method to reduce the noise radiated by a ribbed aircraft panel excited by turbulent boundary layer flow is presented. To compute the structural-acoustic response, a modal approach based on finite element/boundary element analysis was coupled to a turbulent boundary layer forcing function. A static pressure load was also applied to the panel to simulate cabin pressurization during flight. The radiated
sound power was then minimized by optimizing the horizontal and vertical rib location and rib cross section using an evolutionary
search algorithm. Nearly 10 dB of reduction was achieved by pushing the ribs to the edge of the panel, thus lowering the modal amplitudes excited by the forcing function. A static constraint was then included in the procedure using a low-frequency dynamic calculation
to approximate the static response. The constraint limited the amount of reduction that was achieved by the optimizer.
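The evolutionary search step can be sketched with a minimal (1+1) evolution strategy; the objective below is a hypothetical smooth proxy for radiated power versus rib positions on a unit panel, not a structural-acoustic model:

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve(objective, x0, sigma=0.2, iters=400):
    """Minimal (1+1) evolution strategy: mutate the current design and
    keep the child only if it improves the objective, with a simple
    annealing of the mutation step.  A stand-in for the evolutionary
    search mentioned in the abstract."""
    x = np.asarray(x0, float)
    fx = objective(x)
    for _ in range(iters):
        child = x + sigma * rng.normal(size=x.size)
        fc = objective(child)
        if fc < fx:              # greedy selection
            x, fx = child, fc
        sigma *= 0.99            # anneal the mutation step size
    return x, fx

# Hypothetical proxy: "radiated power" minimized when the two ribs sit at
# the panel edges (positions 0 and 1), echoing the optimum found above.
proxy = lambda x: (x[0] - 0.0) ** 2 + (x[1] - 1.0) ** 2
best, fbest = evolve(proxy, [0.5, 0.5])
```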
11:00
2aSAb2. Extending interior near-field acoustic holography to visualize three-dimensional objective parameters of sound quality.
Huancai Lu (Mech. Eng., Zhejiang Univ. of Technol., 3649 Glenwood Ave., Windsor, ON N9E 2Y6, Canada, huancailu@zjut.edu.cn)
It is essential to understand that the ultimate goal of interior noise control is to improve the sound quality inside the vehicle, rather than simply to suppress the sound pressure level. Therefore, vehicle interior sound source localization and identification should be based on the contributions of sound sources to the subjective and/or objective parameters of sound quality at targeted points, such as the driver's ear positions. This talk introduces the visualization of three-dimensional objective parameters of sound quality based on interior near-field acoustic holography (NAH). The methodology of mapping the three-dimensional sound pressure distribution, reconstructed using interior NAH, to three-dimensional loudness is presented. The mathematical model of loudness defined by the ANSI standard is discussed. A numerical interior sound field, generated by a vibrating enclosure with known boundary conditions, is employed to validate the methodology. In addition, the accuracy of the reconstructed loudness distribution is examined with the ANSI standard and a digital head. It is shown that the results of sound source localization based on the three-dimensional loudness distribution are different from the ones based on interior NAH.
Contributed Paper
11:25
2aSAb3. A comparative analysis of the Chicago Transit Authority’s
Red Line railcars. Chris S. Nottoli (Riverbank Acoust. Labs., 1145 Walter,
Lemont, IL 60439, cnottoli18@gmail.com)
A noise study was conducted on the Chicago Transit Authority's Red Line railcars to assess the differences in interior sound pressure level between the 5000-series railcars and their predecessors, the 2400 series. The study took into account potential variability associated with a rider's location in the railcars and with above-ground and subway segments (between stations), and surveyed the opinions of everyday Red Line riders regarding perceived noise. The test data demonstrated a 3–6 dB noise reduction, consistent with ongoing CTA renovations, between the new rapid transit cars and their predecessors. Location on the train influenced Leq(A) measurements, as reflections from adjacent railcars induced higher noise levels. The new railcars also proved effective in noise reduction throughout the subway segments, as the averaged Leq(A) deviated by only 1 dB from that at above-ground rail stations. Additionally, this study
included an online survey that revealed a possible disconnect between
traditional methods of objective noise measurement and subjective noise
ratings.
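When averaged Leq(A) values from repeated runs are compared, as in the segment comparisons above, the averaging is done on an energy basis rather than arithmetically on the decibel values; a short sketch with illustrative levels:

```python
import numpy as np

def energy_average_db(levels_db):
    """Average sound levels on an energy basis, the standard way Leq(A)
    values from repeated runs are combined (not an arithmetic mean of
    the decibel values)."""
    levels = np.asarray(levels_db, float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

runs = [72.0, 75.0, 78.0]        # illustrative Leq(A) values, dB
avg = energy_average_db(runs)    # ~75.7 dB, pulled toward the loudest run
```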
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 12:00 NOON
Session 2aSC
Speech Communication: Speech Production and Articulation (Poster Session)
Sam Tilsen, Chair
Cornell University, 203 Morrill Hall, Ithaca, NY 14853
Contributed Papers
2aSC1. Tongue motion characteristics during vowel production in older
children and adults. Jennell Vick, Michelle Foye (Psychol. Sci., Case
Western Reserve Univ., 11635 Euclid Ave., Cleveland, OH 44106, jennell@case.edu), Nolan Schreiber, Greg Lee (Elec. Eng. Comput. Sci., Case
Western Reserve Univ., Cleveland, OH), and Rebecca Mental (Psychol.
Sci., Case Western Reserve Univ., Cleveland, OH)
This study examined tongue movements in consonant-vowel-consonant
sequences drawn from real words in phrases as produced by 36 older children (three male and three female talkers at each age from 10 to 15 years)
and 36 adults. Movements of four points on the tongue were tracked at 400
Hz using the Wave Electromagnetic Speech Research System (NDI, Waterloo, ON, CA). The four points were tongue tip (TT; 1 cm from tip on midline), tongue body (TB; 3 cm from tip on midline), tongue right (TR; 2 cm from tip on right lateral edge), and tongue left (TL; 2 cm from tip on left lateral edge). The phrases produced included the vowels /i/, /I/, /ae/, and /u/ in words (i.e., "see," "sit," "cat," and "zoo"). Movement measures included 3D distance, peak and average speed, and duration of vowel opening and closing strokes. The horizontal curvature of the tongue was calculated at the trajectory speed minimum associated with the vowel production using a least-squares quadratic fit of the TR, TB, and TL positional coordinates. Symmetry of TR and TL vertical position was also calculated. Within-group comparisons were made between vowels and between-group comparisons were
made between children and adults.
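The least-squares quadratic fit of the three lateral marker positions can be sketched directly; the marker layout below is hypothetical, and the sign of the quadratic coefficient distinguishes a grooved coronal shape from a domed one:

```python
import numpy as np

def tongue_shape_fit(y, z):
    """Least-squares quadratic fit of coronal-plane marker positions
    (e.g., the TR, TB, and TL coordinates in the abstract).  Returns
    (a, b, c) of z = a*y**2 + b*y + c; a > 0 indicates a midline groove
    (edges high), a < 0 a dome."""
    return np.polyfit(y, z, 2)

# Hypothetical layout: lateral markers 20 mm off midline, raised 2 mm
# above the tongue-body marker -> a grooved posture.
y = np.array([-20.0, 0.0, 20.0])   # TR, TB, TL lateral positions (mm)
z = np.array([12.0, 10.0, 12.0])   # vertical positions (mm)
a, b, c = tongue_shape_fit(y, z)
```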
2aSC2. Experimental evaluation of the constant tongue volume hypothesis. Zisis Iason Skordilis, Vikram Ramanarayanan (Signal Anal. and Interpretation Lab., Dept. of Elec. Eng., Univ. of Southern California, 3710
McClintock Ave., RTH 320, Los Angeles, CA 90089, skordili@usc.edu),
Louis Goldstein (Dept. of Linguist, Univ. of Southern California, Los
Angeles, CA), and Shrikanth S. Narayanan (Signal Anal. and Interpretation
Lab., Dept. of Elec. Eng., Univ. of Southern California, Los Angeles, CA)
The human tongue is considered to be a muscular hydrostat (Kier and
Smith, 1985). As such, it is considered to be incompressible. This constant
volume hypothesis has been incorporated in various mathematical models
of the tongue, which attempt to provide insights into its dynamics (e.g., Levine et al., 2005). However, to the best of our knowledge, this hypothesis has
not been experimentally validated for the human tongue during actual
speech production. In this work, we attempt an experimental evaluation of
the constant tongue volume hypothesis. To this end, volumetric structural
Magnetic Resonance Imaging (MRI) was used. A database consisting of 3D
MRI images of subjects articulating continuants was considered. The subjects sustained contextualized vowels and fricatives (e.g., IY in “beet,” F in
“afa”) for 8 seconds in order for the 3D geometry to be collected. To segment the tongue and estimate its volume, we explored watershed (Meyer
and Beucher, 1990) and region growing (Adams and Bischof, 1994) techniques. Tongue volume was estimated for each lingual posture for each
subject. Intra-subject tongue volume variation was examined to determine if
there is sufficient statistical evidence for the validity of the constant volume
hypothesis. [Work supported by NIH and a USC Viterbi Graduate School
Ph.D. fellowship.]
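A two-dimensional toy version of the region-growing step can be sketched as a seeded flood fill with an intensity tolerance (the synthetic image below is illustrative, not the MRI data):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Region growing in the spirit of Adams and Bischof (1994): starting
    from a seed pixel, absorb 4-connected neighbors whose intensity lies
    within tol of the seed intensity.  The mask's pixel count plays the
    role of the segmented volume."""
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros_like(img, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj] \
                    and abs(img[ni, nj] - ref) <= tol:
                mask[ni, nj] = True
                queue.append((ni, nj))
    return mask

# Synthetic "slice": a bright 4x4 square (the tissue) on a dark background.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
mask = region_grow(img, (4, 4), tol=0.5)   # segments exactly the 16 pixels
```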
2aSC3. A physical figure model of tongue muscles. Makoto J. Hirayama
(Faculty of Information Sci. and Technol., Osaka Inst. of Technol., 1-79-1
Kitayama, Hirakata 573-0196, Japan, mako@is.oit.ac.jp)
To help in understanding tongue shape and motion, a physical figure model of the tongue muscles, made of a viscoelastic urethane rubber gel, was constructed by improving previous models. Compared to the tongue-shape models made and presented previously, the new model is constructed from the tongue body (consisting of Transversus linguae, Verticalis linguae, Longitudinalis linguae superior, and Longitudinalis linguae inferior) and the individual extrinsic tongue muscles (consisting of Genioglossus anterior, Genioglossus posterior, Hyoglossus, Styloglossus, and Palatoglossus). Therefore, each muscle's shape, its starting and ending points, and its relation to the other muscles and organs inside the mouth are easier to understand than in previous models. As the model is made from a viscoelastic material similar to human skin, the tongue can be reshaped and moved by pulling or pushing parts of the tongue muscles by hand; that is, tongue shape and motion can be simulated by hand. The proposed model is useful for speech science education or for a future speaking robot using a realistic speech mechanism.
2aSC4. Tongue width at rest versus tongue width during speech: A comparison of native and non-native speakers. Sunao Kanada and Ian Wilson
(CLR Phonet. Lab, Univ. of Aizu, Tsuruga, Ikki machi, Aizuwakamatsu,
Fukushima 965-8580, Japan, m5181137@u-aizu.ac.jp)
Most pronunciation researchers do not focus on the coronal view. However, it is also important to observe, because the tongue is a muscular hydrostat. We
believe that some pronunciation differences between native speakers and second-language (L2) speakers could be due to differences in the coronal plane.
Understanding these differences could be a key to L2 learning and modeling.
It may be beneficial for pedagogical purposes and the results of this research
may contribute to the improvement of pronunciation of L2 English speakers.
An interesting way to look at native and L2 articulation differences is through
the pre-speech posture and inter-speech posture (ISP—rest position between
sentences). In this research, we compare native speakers to L2 speakers. We
measure how different those postures are from the median position of the
tongue during speech. We focus on movement of a side tongue marker in the
coronal plane, and we normalize for speaker size. We found that the mean
distance from pre-speech posture to speech posture is shorter for native English speakers (0.95 mm) than for non-native English speakers (1.62 mm). So,
native speakers are more efficient in their pre-speech posture. Results will
also be shown for distances from ISP to speech posture.
168th Meeting: Acoustical Society of America
2143
2a TUE. AM
All posters will be on display from 8:00 a.m. to 12:00 noon. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 8:00 a.m. to 10:00 a.m. and authors of even-numbered papers will be at their posters
from 10:00 a.m. to 12:00 noon.
2aSC5. Intraglottal velocity and pressure measurements in a hemilarynx model. Liran Oren, Sid Khosla (Otolaryngol., Univ. of Cincinnati, PO
Box 670528, Cincinnati, OH 45267, orenl@ucmail.uc.edu), and Ephraim
Gutmark (Aerosp. Eng., Univ. of Cincinnati, Cincinnati, OH)
Determining the mechanisms of self-sustained oscillation of the vocal
folds requires characterization of intraglottal aerodynamics. Since most of
the intraglottal aerodynamics forces cannot be measured in a tissue model
of the larynx, most of the current understanding of vocal fold vibration
mechanism is derived from mechanical, analytical, and computational models. In the current study, intraglottal pressure measurements are taken in a
hemilarynx model and are compared with pressure values that are computed
from simultaneous velocity measurements. The results show that significant
negative pressure is formed near the superior aspect of the folds during closing, which is in agreement with previous measurements in a hemilarynx
model. Intraglottal velocity measurements show that the flow near the superior aspect separates from the glottal wall during closing and may develop
into a vortex, which further augments the magnitude of the negative pressure. The intraglottal pressure distributions are computed by solving the
pressure Poisson equation using the velocity field measurements and show
good agreement with the pressure measurements. The match between the
pressure computations and the pressure measurements validates the technique, which was also used in a previous study to estimate the intraglottal
pressure distribution in a full larynx model.
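The pressure-from-velocity computation described above can be sketched numerically. This is not the study's implementation: the grid, the synthetic divergence-free velocity field, and the homogeneous boundary condition are assumptions for demonstration; measured PIV velocity fields would replace the synthetic one.

```python
# Minimal sketch: recover a pressure field from a 2D incompressible velocity
# field by iterating the pressure Poisson equation
#   laplacian(p) = -rho * (u_x^2 + 2*u_y*v_x + v_y^2)
# with Jacobi sweeps and p = 0 on the boundary (assumed BC).
import numpy as np

n = 41
x = np.linspace(0.0, np.pi, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
rho = 1.0

# Synthetic divergence-free velocity field (Taylor-Green-like cell).
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)

ux, uy = np.gradient(u, h)          # derivatives along x (axis 0), y (axis 1)
vx, vy = np.gradient(v, h)
rhs = -rho * (ux**2 + 2.0 * uy * vx + vy**2)

def residual(p):
    """Max interior residual |laplacian(p) - rhs|."""
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / h**2
    return np.max(np.abs((lap - rhs)[1:-1, 1:-1]))

p = np.zeros((n, n))                 # homogeneous Dirichlet boundary
res0 = residual(p)
for _ in range(2000):                # Jacobi sweeps
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                            p[1:-1, 2:] + p[1:-1, :-2]
                            - h**2 * rhs[1:-1, 1:-1])
res = residual(p)
```

The iteration drives the interior residual down by orders of magnitude; with experimental velocity data, the boundary values would instead come from the pressure transducer measurements.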
2aSC6. Ultrasound study of diaphragm motion during tidal breathing
and speaking. Steven M. Lulich, Marguerite Bonadies (Speech and Hearing
Sci., Indiana Univ., 4789 N White River Dr., Bloomington, IN 47404, slulich@indiana.edu), Meredith D. Lulich (Southern Indiana Physicians, Indiana Univ. Health, Bloomington, IN), and Robert H. Withnell (Speech and
Hearing Sci., Indiana Univ., Bloomington, IN)
Studies of speech breathing by Ladefoged and colleagues (in the 1950s
and 1960s), and by Hixon and colleagues (in the 1970s, 1980s, and 1990s)
have substantially contributed to our understanding of respiratory mechanics
during speech. Even so, speech breathing is not well understood when contrasted with phonation, articulation, and acoustics. In particular, diaphragm
involvement in speech breathing has previously been inferred from inductive plethysmography and EMG, but it has never been directly investigated.
In this case study, we investigated diaphragm motion in a healthy adult
male during tidal breathing and conversational speech using real-time 3D
ultrasound. Calibrated inductive plethysmographic data were recorded
simultaneously for comparison with previous studies and in order to relate
lung volumes directly to diaphragm motion.
2aSC7. A gestural account of Mandarin tone sandhi. Hao Yi and Sam
Tilsen (Dept. of Linguist, Cornell Univ., 315-7 Summerhill Ln., Ithaca, NY
14850, hy433@cornell.edu)
Recently, tones have been analyzed as articulatory gestures, which may be coordinated with segmental gestures. Our data from electromagnetic articulometry (EMA) show that a purportedly neutralized phonological contrast can nonetheless exhibit coordinative differences. We develop a model based on gestural coupling to account for the observed patterns. Mandarin Third Tone Sandhi (Tone3 → T3S / _ Tone3) is perceptually neutralizing in that the sandhi output (T3S) is highly similar to Tone2. Although both tones have rising pitch contours, subtle acoustic differences exist between them; however, the difference in underlying representation between T3S and Tone2 remains unclear. By presenting evidence from the alignment pattern between tones and segments, we show that the acoustic differences between Tone2 and T3S arise from a difference in gestural organization: the temporal lag between the initiation of the Vowel gesture and that of the Tone gesture is shorter in T3S than in Tone2. We further argue that an underlying Tone3 is the source of the incomplete neutralization between Tone2 and T3S. That is, despite the surface similarity, T3S is stored in the mental lexicon as Tone3.
2aSC8. A real-time MRI investigation of anticipatory posturing in prepared responses. Sam Tilsen (Linguist, Cornell Univ., 203 Morrill Hall,
Ithaca, NY 14853, tilsen@cornell.edu), Pascal Spincemaille (Radiology,
Cornell Weill Medical College, New York, NY), Bo Xu (Biomedical Eng.,
Cornell Univ., New York, NY), Peter Doerschuk (Biomedical Eng., Cornell
Univ., Ithaca, NY), Wenming Luh (Human Ecology, Cornell Univ., Ithaca,
NY), Robin Karlin, Hao Yi (Linguist, Cornell Univ., Ithaca, NY), and Yi
Wang (Biomedical Eng., Cornell Univ., Ithaca, NY)
Speakers can anticipatorily configure their vocal tracts to facilitate the production of an upcoming vocal response. We find that this anticipatory articulation results in decoherence of articulatory movements that are otherwise coordinated; moreover, speakers differ in the strategies they employ for response anticipation. Real-time MRI images were acquired from eight native English speakers performing a consonant-vowel response task; the task was embedded in a 2 × 2 design, which manipulated preparation (whether speakers were informed of the target response prior to a go-signal) and postural constraint (whether the response was preceded by a prolonged vowel). Analyses of pre-response articulatory postures show that all speakers exhibited anticipatory posturing of the tongue root in unconstrained responses. Some exhibited interactions between preparation and constraint, such that anticipatory posturing was more extensive in prepared- vs. unprepared-unconstrained responses. Cross-speaker variation was also observed in anticipatory posturing of the velum: some speakers raised the velum in anticipation of non-nasal responses, while others failed to do so. The results show that models of speech production must be flexible enough to allow for gestures to be executed individually, and that speakers differ in the strategies they employ for response initiation.
2aSC9. An airflow examination of the Czech trills. Ekaterina Komova
(East Asian Lang. and Cultures, Columbia Univ., New York, NY) and Phil
Howson (The Univ. of Toronto, 644B-60 Harbord St., Toronto, ON
M5S3L1, Canada, phil.howson@mail.utoronto.ca)
Previous studies have suggested that the Czech trills /r/ and /ř/ differ in the airflow required to produce each trill. This study examines that question using an airflow meter. Five speakers of Czech produced /ř/ and /r/ in the real words řád "order," pařát "talon," tvář "face," rád "like," paráda "great," and tvar "shape." Airflow data were recorded using Macquirer. The data indicate a higher airflow during the production of /ř/ than of /r/: /ř/ was produced with approximately 3 l/s more airflow than /r/. The increased airflow is necessary to cross the boundary from laminar into turbulent flow, and it supports previous findings that /ř/ is produced with breathy voice, which facilitates trilling during frication. The data also suggest that one of the factors that makes the plain trill /r/ difficult to produce is that the airflow required for a sonorous trill is tightly constrained. The boundaries between trill production and the production of frication are only a few l/s apart and thus require careful management of the laryngeal mechanisms that control airflow.
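The laminar-turbulent boundary invoked above can be sketched with a back-of-the-envelope Reynolds-number estimate. All dimensions and flow values below are assumptions for illustration, not measurements from the study.

```python
# Back-of-the-envelope sketch: why a small change in airflow can cross the
# laminar/turbulent boundary at a lingual constriction.
# Re = U*d/nu, with mean velocity U = Q/A through a constriction of area A.
nu = 1.5e-5            # kinematic viscosity of air, m^2/s
d = 2e-3               # assumed constriction height, m
w = 10e-3              # assumed constriction width, m
A = d * w              # cross-sectional area, m^2

def reynolds(Q_lps):
    """Reynolds number for a volume flow Q given in litres per second."""
    Q = Q_lps * 1e-3   # L/s -> m^3/s
    U = Q / A          # mean velocity, m/s
    return U * d / nu

re_low, re_high = reynolds(0.2), reynolds(0.5)
# Commonly cited transition values for channel-like flows are around
# Re ~ 2000, so a few tenths of a litre per second can straddle the boundary.
```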
2aSC10. Comparison of tidal breathing and reiterant speech breathing
using whole body plethysmography. Marguerite Bonadies, Robert H. Withnell, and Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., 505 W
Lava Way, Apt. C, Bloomington, IN 47404, mcbonadi@umail.iu.edu)
Classic research in the field of speech breathing has found differences in
the characteristics of breathing patterns between speech respiration and tidal
breathing. Though much research has been done on speech breathing mechanisms, relatively little research has been done using the whole body plethysmograph. In this study, we sought to examine differences and similarities
between tidal respiration and breathing in reiterant speech using measures
obtained through whole-body plethysmography. We hypothesize that there
are not significant differences between pulmonary measures in tidal respiration and in speech breathing. This study involves tidal breathing on a spirometer attached to the whole-body plethysmograph followed by reiterant speech
using the syllable /da/ while reading the first part of The Rainbow Passage.
Experimental measures include compression volumes during both breathing
tasks, and absolute lung volumes as determined from the spirometer and calibrated whole-body plethysmograph. These are compared with the pulmonary
subdivisions obtained from pulmonary function tests, including vital capacity,
functional residual capacity, and total lung volume.
2aSC11. An electroglottography examination of fricative and sonorous segments. Phil Howson (The Univ. of Toronto, 644B-60 Harbord St., Toronto, ON M5S3L1, Canada, phil.howson@mail.utoronto.ca)
It has previously been suggested that fricative production is marked by a longer glottal opening than that of sonorous segments. The present study uses electroglottography (EGG) and acoustic measurements to test this hypothesis by examining the activity of the vocal cords during the articulation of fricative and sonorant segments of English and Sorbian. Extended individual productions of the phonemes /s, z, S, Z, m, n, r, l, a/, and of each phoneme in the context #Ca, by an English and a Sorbian speaker were recorded. The open quotient was calculated using MATLAB. H1-H2 measures were taken at 5% and at 50% into the vowel following each C. The results indicate that the glottis is open longer during the production of fricatives than of sonorous segments. Furthermore, the glottis is slightly more open for the production of nasals and liquids than for vowels. These results suggest that a longer glottal opening facilitates the increased airflow required to produce frication. This contrasts with previous analyses, which suggested that frication is primarily achieved through a tightened constriction. While a tighter constriction may be necessary, the increased airflow velocity produced by a longer glottal opening is critical for the production of frication.
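The open-quotient computation mentioned above can be sketched as follows. This is not the authors' MATLAB code: the synthetic EGG-like waveform and the 35% amplitude threshold are assumptions for demonstration; real EGG recordings would replace the synthetic signal.

```python
# Minimal sketch: estimate the open quotient (OQ) from an EGG-like signal
# with a simple amplitude-threshold criterion.
import numpy as np

fs = 10_000                     # sample rate (Hz)
f0 = 100                        # fundamental frequency (Hz)
t = np.arange(fs) / fs          # one second of signal

# Synthetic EGG-like waveform: larger values = more vocal-fold contact.
egg = np.maximum(np.sin(2 * np.pi * f0 * t), 0.0) ** 2

# Threshold criterion: samples above 35% of the peak-to-peak range count
# as the "closed" (contacting) phase of the cycle.
level = egg.min() + 0.35 * (egg.max() - egg.min())
closed_fraction = np.mean(egg > level)    # contact quotient (CQ)
open_quotient = 1.0 - closed_fraction     # OQ = 1 - CQ
```

For the abstract's comparison, the same computation run over fricative vs. sonorant tokens would yield larger OQ values for the fricatives.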
2aSC12. SIPMI: Superimposing palatal profile from maxillary impression onto midsagittal articulographic data. Wei-rong Chen and Yuehchin Chang (Graduate Inst. of Linguist, National Tsing Hua Univ., 2F-5,
No. 62, Ln. 408, Zhong-hua Rd., Zhubei City, Hsinchu County-302,
Taiwan, waitlong75@gmail.com)
Palatal traces reconstructed by current technologies for real-time mid-sagittal articulatory tracking (e.g., EMA, ultrasound, rtMRI) are mostly low-resolution and lack concrete anatomical/orthodontic reference points to serve as firm articulatory landmarks for determining places of articulation. The present study proposes a method of superimposing a physical palatal profile, extracted from a maxillary impression, onto mid-sagittal articulatory data. The whole palatal/dental profile is first obtained by performing an alginate maxillary impression, and a plaster maxillary mold is made from the impression. The mold is then either (1) cut into halves for hand-tracing or (2) 3D-scanned, to extract a high-resolution mid-sagittal palatal line. This palatal line is further subdivided into articulatory zones, following definitions of articulatory landmarks in the literature (e.g., Catford 1988) and referring to anatomical/orthodontic landmarks imprinted on the mold. Lastly, the high-resolution, articulatorily divided palatal line is superimposed, using a modified Iterative Closest Point (ICP) algorithm, onto the reconstructed low-resolution palatal traces in the real-time mid-sagittal articulatory data, so that clearly divided places of articulation on the palate can be visualized along with articulatory movements. Evaluation results show that both hand-traced and 3D-scanned palatal profiles yield accurate superimpositions and satisfactory visualizations of place of articulation in our EMA data.
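The superimposition step can be illustrated with a minimal 2D iterative closest point (ICP) alignment. This sketch does not reproduce the paper's modified ICP; the point sets (a parabolic "palate" trace and a rotated, translated copy) are invented for demonstration.

```python
# Minimal 2D ICP sketch: align a displaced palate trace onto a reference
# trace via nearest-neighbour matching + Procrustes (SVD) rigid fits.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Reference palate trace (a parabolic arch) and a rotated/translated copy.
xs = np.linspace(-1.0, 1.0, 60)
ref = np.column_stack([xs, 1.0 - xs**2])
ang = np.deg2rad(5.0)
Rot = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
moved = ref @ Rot.T + np.array([0.2, -0.1])

aligned = icp(moved, ref)
rms = np.sqrt(((aligned - ref) ** 2).sum(axis=1).mean())
```

The paper's version additionally constrains the fit using the orthodontic landmarks on the mold; that modification is not shown here.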
2aSC13. Waveform morphology of pre-speech brain electrical potentials. Silas Smith and Al Yonovitz (Dept. of Commun. Sci. and Disord., The
Univ. of Montana, The Univ of Montana, Missoula, MT 59812, silas.
smith@umconnect.umt.edu)
The inter- and intra-subject variations of cortical responses before the initiation of speech were recorded. These evoked potentials were obtained at a sample rate sufficient to capture both slow negative waves and faster neurogenic signals. The marking point for determining the pre-event time epoch has typically been an EMG source, with data acquired off-line and averaged later. This research instead uses a vocal signal as the marking point and displays the event-related potential in real time. Subjects were 12 males and females. Responses were recorded with silver-silver chloride electrodes positioned at Cz, using the earlobes as reference and ground. A biological preamplifier amplified the weak bioelectric signals 100,000 times. Each time epoch was sampled at 20,000 samples/s. The frequency response of the amplifiers had a high-pass cutoff of 0.1 Hz and a low-pass cutoff of 3 kHz. One second of signal was averaged over 100 trials, each ending just prior to the subject's initiation of the word "pool." Electrical brain potentials have proven extremely useful for diagnosis, treatment, and research in the auditory system, and are expected to be of equal importance for the speech system.
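The benefit of averaging 100 time-locked epochs, as in the study, can be sketched numerically: averaging N trials of a fixed response buried in independent noise improves amplitude SNR by roughly sqrt(N). The signal shape and noise level below are invented for illustration.

```python
# Sketch: time-locked averaging of event-related potentials.
import numpy as np

rng = np.random.default_rng(0)
fs = 20_000                         # samples/s, as in the abstract
t = np.arange(fs) / fs              # one-second epoch
signal = 5e-6 * np.exp(-((t - 0.6) / 0.05) ** 2)   # assumed 5 uV slow wave
noise_rms = 20e-6                   # assumed 20 uV background activity

trials = signal + rng.normal(0.0, noise_rms, size=(100, fs))
average = trials.mean(axis=0)       # average over 100 trials

residual_rms = np.sqrt(np.mean((average - signal) ** 2))
improvement = noise_rms / residual_rms   # expect ~ sqrt(100) = 10
```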
2aSC14. Acoustic correlates of bilingualism: Relating phonetic production to language experience and attitudes. Wai Ling Law (Linguist, Purdue Univ., Beering Hall, 00 North University St., West Lafayette, IN 47907,
wlaw@purdue.edu) and Alexander L. Francis (Speech, Lang. & Hearing
Sci., Purdue Univ., West Lafayette, IN)
Researchers tend to quantify degree of bilingualism according to age-related factors such as age of acquisition (Flege et al., 1999; Yeni-Komshian et al., 2000). However, previous research suggests that bilinguals may
also show different degrees of accent and patterns of phonetic interaction
between their first language (L1) and second language (L2) as a result of
factors such as the quantity and quality of L2 input (Flege & Liu, 2001),
amount of L1 vs. L2 use (Flege, et al. 1999), and attitude toward each
language (Moyer, 2007). The goal of this study is to identify gradient properties of speech production that can be related to gradient language experience and attitudes in a bilingual population that is relatively homogeneous
in terms of age-related factors. Native Cantonese-English bilinguals living
in Hong Kong produced near homophones in both languages under conditions emphasizing one language or the other on different days. Acoustic
phonetic variables related to phonological inventory differences between
the two languages, including lexical tone/stress, syllable length, nasality, fricative manner and voicing, release of stop, voice onset time, and vowel
quality and length, will be quantified and compared to results from a
detailed survey of individual speakers’ experience and attitudes toward the
two languages.
2aSC15. Dialectal variation in affricate place of articulation in Korean.
Yoonjung Kang (Ctr. for French and Linguist, Univ. of Toronto Scarborough, 1265 Military Trail, HW314, Toronto, ON M1C 1A4, Canada, yoonjung.kang@utoronto.ca), Sungwoo Han (Dept. of Korean Lang. and Lit.,
Inha Univ., Incheon, South Korea), Alexei Kochetov (Dept. of Linguist,
Univ. of Toronto, Toronto, ON, Canada), and Eunjong Kong (Dept. of English, Korea Aerosp. Univ., Goyang, South Korea)
The place of articulation (POA) of Korean affricates has been a topic of
much discussion in Korean linguistics. The traditional view is that the affricates were dental in the 15th century and then changed to a posterior coronal
place in most dialects of Korean, but the anterior articulation is retained in
major dialects of North Korea, most notably Phyengan and Yukjin. However, recent instrumental studies on Seoul Korean and some impressionistic
descriptions of North Korean dialects cast doubt on the validity of this traditional view. Our study examines the POA of /c/ (lenis affricate) and /s/ (anterior fricative) before /a/ in Seoul Korean (26 younger and 32 older
speakers) and in two North Korean varieties, as spoken by ethnic Koreans in
China (14 Phyengan and 21 Yukjin speakers). The centre of gravity of the
frication noise of /c/ and /s/ was examined. The results show that in both
North Korean varieties, both sibilants are produced as anterior coronal and
comparable in their POA. In Seoul Korean, while the POA contrast shows a
significant interaction with age and gender, the affricate is consistently and
substantially more posterior than the anterior fricative across all speaker
groups. The results support the traditional description.
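The centre-of-gravity measure used above is the amplitude-weighted mean frequency of the frication noise spectrum. A minimal sketch, with a synthetic band-limited "fricative" standing in for real tokens (the 6-10 kHz band is an arbitrary choice, not a value from the study):

```python
# Sketch: spectral centre of gravity (COG) of a frication noise signal.
import numpy as np

rng = np.random.default_rng(1)
fs = 22_050
noise = rng.normal(size=fs)                    # one second of white noise

# Crude band-pass by zeroing FFT bins outside an assumed sibilant-like band.
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(fs, 1.0 / fs)
spec[(freqs < 6000) | (freqs > 10000)] = 0.0
fricative = np.fft.irfft(spec, n=fs)

def spectral_cog(x, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return float((f * mag).sum() / mag.sum())

cog = spectral_cog(fricative, fs)   # near the 8 kHz band centre here
```

An anterior (dental/alveolar) sibilant yields a higher COG than a posterior one, which is what distinguishes the dialects in the study.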
2aSC16. An articulatory study of high vowels in Mandarin produced by
native and non-native speakers. Chenhuei Wu (Dept. of Chinese Lang.
and Lit., National Hsinchu Univ. of Education, No. 521, Nanda Rd, Hsinchu
300, Taiwan, chenhueiwu@gmail.com), Weirong Chen (Graduate Inst. of
Linguist, National Tsing-hua Univ., Hsinchu, Taiwan), and Chilin Shih
(Dept. of Linguist, Univ. of Illinois at Urbana-Champaign, Urbana, IL)
This paper examined the articulatory properties of the high vowels [i], [y], and [u] in Mandarin produced by four native Taiwanese Mandarin speakers and four English-speaking learners of Chinese (L2 learners), using an Electromagnetic Articulograph AG500. The articulatory positions of the tongue tip (TT), tongue body (TB), tongue dorsum (TD), and lips were investigated. The TT, TB, and TD of [y] produced by the L2 learners were further back than those of the native speakers. In addition, the TD of [y] was higher for the L2 learners than for the native speakers. Further comparison found that the tongue position of [y] was similar to that of [u] in the L2 productions. Regarding the lip positions, [y] and [u] were more protruded than [i] in the native productions, while there was no difference among the three vowels in the L2 productions.
The findings suggested that most of the L2 learners were not aware that the lingual target for [y] should be very similar to that of [i], with the lips more protruded for [y] than for [i]. Some L2 learners pronounced [y] more like a diphthong [iu] than a monophthong.
2aSC17. Production and perception training of /r l/ with native Japanese speakers. Anna M. Schmidt (School of Speech Path. & Aud., Kent
State Univ., A104 MSP, Kent, OH 44242, aschmidt@kent.edu)
Visual feedback with electropalatometry was used to teach accurate /r/
and /l/ to a native Japanese speaker. Perceptual differentiation of the phonemes did not improve. A new perceptual training protocol was developed
and tested.
2aSC18. Production of a non-phonemic variant in a second language:
Acoustic analysis of Japanese speakers' production of American English flap. Mafuyu Kitahara (School of Law, Waseda Univ., 1-6-1 Nishiwaseda, Shinjuku-ku, Tokyo 169-8050, Japan, kitahara@waseda.jp), Keiichi
Tajima (Dept. of Psych., Hosei Univ., Tokyo, Japan), and Kiyoko
Yoneyama (Dept. of English Lang., Daito Bunka Univ., Tokyo, Japan)
Second-language (L2) learners need to learn the sound system of an L2
so that they can distinguish L2 words. However, it is also instructive to learn
non-phonemic, allophonic variations, particularly if learners want to sound
native-like. The production of intervocalic /t d/ as an alveolar flap is a prime
example of a non-phonemic variation that is salient in American English
and presumably noticeable to many L2 learners. Yet, how well such nonphonemic variants are learned by L2 learners is a relatively under-explored
subject. In the present study, Japanese learners’ production of alveolar flaps
was investigated, to clarify how well learners can learn the phonetic environments in which flapping tends to occur, and how L2 experience affects
their performance. Native Japanese speakers who had lived in North
America for various lengths of time read a list of words and phrases that
contained a potentially flappable stop, embedded in a carrier sentence.
Preliminary results indicated that the rate of flapping varied considerably
across different words and phrases and across speakers. Furthermore, acoustic parameters such as flap closure duration produced by some speakers
showed intermediate values between native-like flaps and regular stops, suggesting that flapping is a gradient phenomenon. [Work supported by JSPS.]
2aSC19. A comparison of speaking rate consistency in native and nonnative speakers of English. Melissa M. Baese-Berk (Linguist, Univ. of Oregon, 1290 University of Oregon, Eugene, OR 97403, mbaesebe@uoregon.
edu) and Tuuli Morrill (Linguist, George Mason Univ., Fairfax, VA)
Non-native speech differs from native speech in many ways, including
overall longer durations and slower speech rates (Guion et al., 2000). Speaking rate also influences how listeners perceive speech, including perceived
fluency of non-native speakers (Munro & Derwing, 1998). However, it is
unclear what aspects of non-native speech and speaking rate might influence
perceived fluency. It is possible that in addition to differences in mean
speaking rate, there may be differences in the consistency of speaking rate
within and across utterances. In the current study, we use production data to
examine speaking rate in native and non-native speakers of English, and ask
whether native and non-native speakers differ in the consistency of their
speaking rate across and within utterances. We examined a corpus of read
speech, including isolated sentences and longer narrative passages. Specifically, we test whether the overall slower speech rate of non-native speakers
is coupled with an inconsistent speech rate that may result in less predictability in the produced speech signal.
2aSC20. Relative distances among English front vowels produced by
Korean and American speakers. Byunggon Yang (English Education,
Pusan National Univ., 30 Changjundong Keumjunggu, Pusan 609-735,
South Korea, bgyang@pusan.ac.kr)
This study examined the relative distances among English front vowels
in a message produced by 47 Korean and American speakers from an internet speech archive, with the aim of helping Korean learners of English improve their pronunciation. The Euclidean distances in the F1-F2 vowel space
were measured among the front vowel pairs. The first vowel pair [i-E] was
set as the reference from which the relative distances of the other two vowel
pairs were measured in percent in order to compare the vowel sounds among
speakers of different vocal tract lengths. Results show that F1 values of the
front vowels produced by the Korean and American speakers increased
from the high front vowel to the low front vowel with differences among the
groups. The Korean speakers generally produced the front vowels with
smaller jaw openings than the American speakers did. Second, the relative
distance of the high front vowel pair [i-I] showed a significant difference
between the Korean and American speakers while that of the low front
vowel pair [E-æ] showed a non-significant difference. Finally, the Korean
speakers in the higher proficiency level produced the front vowels with
higher F1 values than those in the lower proficiency level.
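The relative-distance normalization described above can be sketched directly. The formant values below are invented placeholders, not the study's measurements.

```python
# Sketch: relative Euclidean distances between front-vowel pairs in the
# F1-F2 plane, expressed as a percentage of the reference pair [i]-[E],
# which normalizes across vocal tract lengths.
import numpy as np

# Hypothetical mean (F1, F2) values in Hz for one speaker group.
formants = {
    "i": (280, 2250),
    "I": (400, 1990),
    "E": (550, 1850),
    "ae": (690, 1660),
}

def dist(v1, v2):
    """Euclidean distance between two vowels in the F1-F2 plane."""
    return float(np.hypot(formants[v1][0] - formants[v2][0],
                          formants[v1][1] - formants[v2][1]))

ref = dist("i", "E")                       # reference pair [i]-[E]
rel_iI = 100.0 * dist("i", "I") / ref      # [i]-[I] as % of reference
rel_Eae = 100.0 * dist("E", "ae") / ref    # [E]-[ae] as % of reference
```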
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA F, 8:00 A.M. TO 11:45 A.M.
Session 2aUW
Underwater Acoustics: Signal Processing and Ambient Noise
Jorge E. Quijano, Chair
University of Victoria, 3800 Finnerty Road, A405, Victoria, BC V8P 5C2, Canada
Contributed Papers
8:00
2aUW1. Moving source localization and tracking based on data. Tsih C. Yang (Inst. of Undersea Technol., National Sun Yat-sen Univ., 70 Lien Hai Rd., Kaohsiung 80404, Taiwan, tsihyang@gmail.com)
Matched field processing (MFP) was introduced some time ago for source localization, based on the replica field for a hypothesized source location that best matches the acoustic data received on a vertical line array (VLA). A data-based matched-mode source localization method is introduced in this paper for a moving source, using mode wavenumbers and depth functions estimated directly from the data, without requiring any environmental acoustic information or assuming any propagation model to calculate the replica field. The method is in theory free of the environmental mismatch problem, since the mode replicas are estimated from the same data used to localize the source. Besides the estimation error due to the approximations made in deriving the data-based algorithms, the method has some inherent drawbacks: (1) it uses a smaller number of modes than theoretically possible, since some modes are not resolved in the measurements, and (2) the depth search is limited to the depths covered by the receivers. Using simulated data, it is found that the performance degradation due to these approximations and limitations is marginal compared with the original matched-mode source localization method. Certain aspects of the proposed method have previously been tested against data. The key issues are discussed in this paper.
8:30
2aUW3. Test for eigenspace stationarity applied to multi-rate adaptive beamformer. Jorge E. Quijano (School of Earth and Ocean Sci., Univ. of Victoria, Bob Wright Ctr. A405, 3800 Finnerty Rd., Victoria, BC V8P 5C2, Canada, jorgess39@hotmail.com) and Lisa M. Zurk (Elec. and Comput. Eng. Dept., Portland State Univ., Portland, OR)
Array processing in the presence of moving targets is challenging, since the number of stationary data snapshots available for estimation of the data covariance is limited. For experimental scenarios that include a combination of fast-maneuvering loud interferers and quiet targets, the multi-rate adaptive beamformer (MRABF) can mitigate the effect of non-stationarity. In MRABF, the eigenspace associated with loud interferers is first estimated and removed, followed by application of adaptive beamforming techniques to the remaining, less variable, "target" subspace. Selection of the number of snapshots used for estimation of the interferer eigenspace is crucial to the operation of MRABF, since too few snapshots result in poor eigenspace estimation, while too many snapshots result in leakage of non-stationary interferer effects into the target subspace. In this work, an eigenvector-based test for data stationarity, recently developed in the context of very large arrays with snapshot deficiency, is used as a quantitative method to select the optimal number of snapshots for estimation of the non-stationary eigenspace. The approach is demonstrated with simulated and experimental data from the Shallow Water Array Performance (SWAP) experiment.
8:15
8:45
2aUW2. Simultaneous localization of multiple vocalizing humpback
whale calls in an ocean waveguide with a single horizontal array using
the array invariant. Zheng Gong, Sunwoong Lee (Mech. Eng., Massachusetts Inst. of Technol., 5-435, 77 Massachusetts Ave., Cambridge, MA
02139, zgong@mit.edu), Purnima Ratilal (Elec. and Comput. Eng., Northeastern Univ., Boston, MA), and Nicholas C. Makris (Mech. Eng., Massachusetts Inst. of Technol., Cambridge, MA)
2aUW4. Design of a coprime array for the North Elba sea trial. Vaibhav
Chavali, Kathleen E. Wage (Elec. and Comput. Eng., George Mason Univ.,
4307 Ramona Dr., Apt. # H, Fairfax, VA 22030, vchavali@gmu.edu), and
John R. Buck (Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth,
Dartmouth, MA)
The array invariant method, previously derived for instantaneous range
and bearing estimation of a broadband impulsive source in a horizontally
stratified ocean waveguide [Lee and Makris, J. Acoust. Soc. Am. 119, 336–
351 (2006)], is generalized to instantaneously and simultaneously localize
multiple uncorrelated broadband noise sources that not necessarily impulsive in the time domain. In an ideal Pekeris waveguide, we theoretically
show that source range and bearing can be instantaneously obtained from
beam-time migration lines measured with a horizontal array through range
and bearing dependent differences that arise between modal group speeds
along the array. We also show that this theory is approximately valid in a
horizontally stratified ocean waveguide. A transform, similar to the Radon
transform, is employed to enable simultaneous localization of multiple
uncorrelated broadband noise sources without ambiguity using the array
invariant method. The method is now applied to humpback whale vocalization data from the Gulf of Maine 2006 Experiment for humpback whale
ranges up to tens of kilometers, where it is shown that accurate bearing and
range estimation of multiple vocalizing humpback whales can be simultaneously made with little computational effort.
2147
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Vaidyanathan and Pal [IEEE Trans. Signal Process. 2011] proposed the
use of Coprime Sensor Arrays (CSAs) to sample spatial fields using fewer
elements than a Uniform Line Array (ULA) spanning the same aperture. A
CSA consists of two interleaved uniform subarrays that are undersampled
by coprime factors M and N. The subarrays are processed independently
and then their scanned responses are multiplied to obtain a unaliased output.
Although the CSA achieves resolution comparable to that of a fully populated ULA, the CSA beampattern has higher sidelobes. Adhikari et al.
[Proc. ICASSP, 2013] showed that extending the subarrays and applying
spatial tapers could reduce CSA sidelobes. This paper considers the problem
of designing a CSA for the North Elba Sea Trial described by Gingras
[SACLANT Tech. Report, 1994]. The experimental dataset consists of
receptions recorded by a 48-element vertical ULA in a shallow water environment for two different source frequencies: 170 Hz and 335 Hz. This paper considers all possible coprime subsamplings for this array and selects
the configuration that provides the best tradeoff between number of sensors
and performance. Results are shown for both simulated and experimental
data. [Work supported by ONR Basic Research Challenge Program.]
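The multiplicative coprime processing described above can be sketched numerically; the coprime factors, element counts, spacing, and source direction below are invented for illustration and are not taken from the paper or the sea-trial configuration:

```python
import numpy as np

def beampattern(element_positions, wavelength, look_sines, source_sine):
    # Conventional (delay-and-sum) response of a line array to a unit plane wave.
    k = 2 * np.pi / wavelength
    sig = np.exp(1j * k * element_positions * source_sine)          # received field
    steer = np.exp(1j * k * np.outer(look_sines, element_positions))  # steering vectors
    return np.abs(steer.conj() @ sig) / len(element_positions)

M, N = 3, 4              # coprime undersampling factors (illustrative)
d = 0.5                  # base spacing in wavelengths (half-wavelength)
wavelength = 1.0
look = np.linspace(-1, 1, 2001)   # scan in sin(bearing)
src = 0.3                         # sin(theta) of the source

# Two interleaved uniform subarrays, undersampled by factors M and N
sub1 = np.arange(N) * M * d       # N elements at spacing M*d
sub2 = np.arange(M) * N * d       # M elements at spacing N*d

b1 = beampattern(sub1, wavelength, look, src)   # aliased (grating lobes)
b2 = beampattern(sub2, wavelength, look, src)   # aliased (grating lobes)
csa = b1 * b2                     # multiply scanned responses

est = look[np.argmax(csa)]
```

Because M and N are coprime, the grating lobes of the two subarrays never coincide, so the product retains a single mainlobe at the true bearing even though each subarray alone is spatially undersampled.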
168th Meeting: Acoustical Society of America
Contributed Papers
9:00
2aUW5. Localization of a high frequency source in a shallow ocean sound channel using frequency-difference matched field processing. Brian Worthmann (Appl. Phys., Univ. of Michigan, 3385 Oakwood St., Ann Arbor, MI 48104, bworthma@umich.edu), H. C. Song (Marine Physical Lab., Scripps Inst. for Oceanogr., Univ. of California - San Diego, La Jolla, CA), and David R. Dowling (Mech. Eng., Univ. of Michigan, Ann Arbor, MI)
Matched field processing (MFP) is an established technique for locating remote acoustic sources in known environments. Unfortunately, environment-to-propagation-model mismatch prevents successful application of MFP in many circumstances, especially those involving high-frequency signals. For beamforming applications, this problem was found to be mitigated through the use of a nonlinear array-signal-processing technique called frequency-difference beamforming (Abadi et al. 2012). Building on that work, this nonlinear technique was extended to MFP, where Bartlett ambiguity surfaces were calculated at frequencies two orders of magnitude lower than the propagated signal, where the detrimental effects of environmental mismatch are much reduced. In the Kauai Acomms MURI 2011 (KAM11) experiment, underwater signals of frequency 11.2 kHz to 32.8 kHz were broadcast 3 km through a 106-m-deep shallow-ocean sound channel and were recorded by a sparse 16-element vertical array. Using the ray-tracing code Bellhop as the propagation model, frequency-difference MFP was performed, and some degree of success was found in localizing the high-frequency source. In this presentation, the frequency-difference MFP technique is explained, and comparisons of this nonlinear MFP technique with conventional Bartlett MFP using both simulations and KAM11 experimental data are provided. [Sponsored by the Office of Naval Research.]
9:15
2aUW6. Transarctic acoustic telemetry. Hee-Chun Song (SIO, UCSD, 9500 Gilman Dr., La Jolla, CA 92093-0238, hcsong@mpl.ucsd.edu), Peter Mikhalvesky (Leidos Holdings, Inc., Arlington, VA), and Arthur Baggeroer (Mech. Eng., MIT, Cambridge, MA)
On April 9 and 13, 1999, two Arctic Climate Observation using Underwater Sound (ACOUS) tomography signals were transmitted from a 20.5-Hz acoustic source moored at the Franz Victoria Strait to an 8-element, 525-m vertical array at ice camp APLIS in the Chukchi Sea at a distance of approximately 2720 km. The transmitted signal was a 20-min-long, 255-digit m-sequence that can be treated as a binary phase-shift-keying communication signal with a data rate of 2 bits/s. The almost error-free performance using either spatial diversity (three elements) for a single transmission or temporal diversity (two transmissions) with a single element demonstrates the feasibility of ice-covered trans-Arctic acoustic communications.
9:30
2aUW7. Performance of adaptive multichannel decision-feedback equalization in the simulated underwater acoustic channel. Xueli Sheng, Lina Fan (Sci. and Technol. on Underwater Acoust. Lab., Harbin Eng. Univ., Harbin Eng. University Shuisheng Bldg. 803, Nantong St. 145, Harbin, Heilongjiang 150001, China, shengxueli@aliyun.com), Aijun Song, and Mohsen Badiey (College of Earth, Ocean, and Environment, Univ. of Delaware, Newark, DE)
Adaptive multichannel decision-feedback equalization [M. Stojanovic, J. Catipovic, and J. G. Proakis, J. Acoust. Soc. Am. 94, 1621–1631 (1993)] is widely adopted to address the severe inter-symbol interference encountered in the underwater acoustic communication channel. In this presentation, its performance will be reported in a simulated communication channel provided by a ray-based acoustic model, for different ocean conditions and source-receiver geometries. The ray model uses the Rayleigh parameter to prescribe the sea-surface effects on the acoustic signal. It also supports different types of sediment. The ray model output has been compared with experimental data and shows comparable results in transmission loss. We will also compare against the performance of multichannel decision-feedback equalization supported by existing ray models, for example, BELLHOP.
9:45
2aUW8. Space-time block code with equalization technology for underwater acoustic channels. Chunhui Wang, Xueli Sheng, Lina Fan, Jia Lu, and Weijia Dong (Sci. and Technol. on Underwater Acoust. Lab., College of Underwater Acoust. Engineering, Harbin Eng. Univ., Harbin 150001, China, 740443619@qq.com)
In order to combat the effects of multipath interference and fading in underwater acoustic (UWA) channels, this paper investigates a scheme combining space-time block coding (STBC) and equalization technology. STBC is used in this scheme to reduce the effects of fading in the UWA channels; equalization technology is then used to mitigate intersymbol interference. The performance of the scheme is analyzed with Alamouti space-time coding for the UWA channels. Our simulations indicate that the combination of STBC and equalization technology provides lower bit error rates.
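A minimal sketch of the Alamouti scheme mentioned in 2aUW8, for a two-transmitter, one-receiver flat-fading channel. The abstract gives no simulation details, so the constellation, block-fading assumption, and noise-free channel below are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(0)

def alamouti_2x1(symbols, h1, h2):
    # Encode pairs (s1, s2): antenna 1 sends [s1, -s2*], antenna 2 sends [s2, s1*].
    s1, s2 = symbols[0::2], symbols[1::2]
    r1 = h1 * s1 + h2 * s2                         # received in slot 1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)      # received in slot 2
    # Linear combining recovers both symbols with full transmit diversity.
    g = np.abs(h1) ** 2 + np.abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    out = np.empty_like(symbols)
    out[0::2], out[1::2] = s1_hat, s2_hat
    return out

# QPSK symbols through a random block-fading channel (noise-free for clarity)
bits = rng.integers(0, 4, 100)
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
recovered = alamouti_2x1(syms, h1, h2)
```

In a real UWA channel the combining is followed by equalization, since the multipath spread breaks the flat-fading assumption used here.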
10:00–10:15 Break
10:15
2aUW9. Robust focusing in time-reversal mirror with a virtual source
array. Gi Hoon Byun and Jea Soo Kim (Ocean Eng., Korea Maritime and
Ocean Univ., Dongsam 2-dong, Yeongdo-gu, Busan, South Korea, knitpia77@gmail.com)
The effectiveness of Time-Reversal (TR) focusing has been demonstrated in various fields of ocean acoustics. In TR focusing, a probe source is required to produce a coherent acoustic focus at the original probe source location. Recently, the need for a probe source has been partially relaxed by the introduction of the concept of a Virtual Source Array (VSA) [S. C. Walker, Philippe Roux, and W. A. Kuperman, J. Acoust. Soc. Am. 125(6), 3828–3834 (2009)]. In this study, an Adaptive Time-Reversal Mirror (ATRM) based on the multiple constraint method [J. S. Kim, H. C. Song, and W. A. Kuperman, J. Acoust. Soc. Am. 109(5), 1817–1825 (2001)] and the Singular Value Decomposition (SVD) method are applied to a VSA for robust focusing. Numerical simulation results are presented and discussed.
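The basic time-reversal focusing exploited in 2aUW9 can be illustrated with free-space Green's functions: retransmitting the phase conjugate of the field received from a probe source refocuses energy at the source. The geometry below is a toy example, not the paper's waveguide model:

```python
import numpy as np

# Single-frequency time-reversal focusing sketch (free space, illustrative values).
wavelength = 1.0
k = 2 * np.pi / wavelength

array_z = np.linspace(0.0, 10.0, 16)               # vertical array element depths
array_pts = np.column_stack([np.zeros(16), array_z])
src = np.array([30.0, 4.0])                        # probe source (range, depth)

def green(a, b):
    # Free-space Green's function magnitude/phase between point sets a and b.
    r = np.linalg.norm(a - b, axis=-1)
    return np.exp(1j * k * r) / r

# Field received on the array from the probe source
recv = green(array_pts, src)

# Retransmit the phase conjugate; evaluate the field along depth at the source range
grid_z = np.linspace(0.0, 8.0, 81)
grid = np.column_stack([np.full_like(grid_z, src[0]), grid_z])
field = np.array([np.sum(np.conj(recv) * green(array_pts, g)) for g in grid])

focus_depth = grid_z[np.argmax(np.abs(field))]
```

At the probe-source depth all phase terms cancel and the contributions add coherently, which is why the field magnitude peaks there.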
10:30
2aUW10. Wind generated ocean noise in deep sea. Fenghua Li and
Jingyan Wang (State Key Lab. of Acoust., Inst. of Acoust., CAS, No. 21
Beisihuanxi Rd., Beijing 100190, China, lfh@mail.ioa.ac.cn)
Ocean noise is an important topic in underwater acoustics and has received much attention in recent decades. Ocean noise sources include wind, biological sources, ships, earthquakes, and so on. This paper discusses measurements of ocean noise intensity in the deep sea during strong wind periods. During the experiment, the shipping density was low enough that wind-generated noise is believed to be the dominant contribution in the observed frequency range. Analyses of the recorded noise data reveal that the wind-generated noise source has a strong dependence on wind speed and frequency. Based on the data, a wind-generated noise source model is presented. [Work supported by National Natural Science Foundation of China, Grant No. 11125420.]
10:45
2aUW11. Ocean ambient noise in the North Atlantic during 1966 and
2013–2014. Ana Sirovic, Sean M. Wiggins, John A. Hildebrand (Scripps
Inst. of Oceanogr., UCSD, 9500 Gilman Dr. MC 0205, La Jolla, CA 92093-0205, asirovic@ucsd.edu), and Mark A. McDonald (Whale Acoust., Bellvue, CO)
Low-frequency ocean ambient noise has been increasing in many parts of the world's oceans as a result of increased shipping. Calibrated passive acoustic recordings were collected from June 2013 to March 2014 on the south side of Bermuda in the North Atlantic, at a location where ambient noise data were collected in 1966. Monthly and hourly mean power spectra (15–1000 Hz) were calculated, in addition to skewness, kurtosis, and percentile distributions. Average spectrum levels at 40 Hz, representing shipping noise, ranged from 78 to 80 dB re 1 μPa²/Hz, with a peak in March and a minimum in July and August. Values recorded during this recent period were similar to those recorded during 1966. This is different from trends
observed in the Northern Pacific, where ocean ambient noise has been increasing; however, this monitoring site was exposed only to shipping lanes to the south of Bermuda. At frequencies dominated by wind and waves (500 Hz), noise levels ranged from 55 to 66 dB re 1 μPa²/Hz, indicating that low sea states (2–3) prevailed during the summer and higher sea states (4–5) during the winter. Seasonally important contributions to ambient sound also came from marine mammals, such as blue and fin whales.
11:00
2aUW12. Adaptive passive fathometer processing of surface-generated noise received by a nested array. Junghun Kim and Jee W. Choi (Marine Sci. and Convergent Technol., Hanyang Univ., 1271 Sa-3-dong, Ansan 426-791, South Korea, Kimjh0927@hanyang.ac.kr)
Recently, a passive fathometer technique using surface-generated ambient noise has been applied to estimating the bottom profile. This technique performs beamforming of the ambient noise received by a vertical line array to estimate the sub-bottom layer structure as well as the water depth. In previous work, the surface noise signal processing was performed with equally spaced line arrays, and the main topic of the research was the comparison of results estimated using several beamforming techniques. In this talk, results estimated from ambient noise received by a nested vertical line array (called POEMS), which consists of 24 elements in total and four sub-bands, are presented. The measurements were made on the eastern coast (East Sea) of Korea. Four kinds of beamforming algorithms are applied to each sub-band, and nested array processing combining the sub-band signals was also performed to obtain the best result. The results are compared to the bottom profiles from a chirp sonar. [This research was supported by the Agency for Defense Development, Korea.]
11:15
2aUW13. Feasibility of low-frequency acoustic thermometry using deep ocean ambient noise in the Atlantic, Pacific, and Indian Oceans. Katherine F. Woolfe and Karim G. Sabra (Mech. Eng., Georgia Inst. of Technol., 672 Brookline St SW, Atlanta, GA 30310, katherine.woolfe@gmail.com)
Previous work has demonstrated the feasibility of passive acoustic thermometry using coherent processing of low-frequency ambient noise (1–40 Hz) recorded on triangular hydrophone arrays spaced ~130 km apart and located in the deep sound channel. These triangular arrays are part of the hydroacoustic stations of the International Monitoring System operated by the Comprehensive Nuclear-Test-Ban Treaty Organization (Woolfe et al., J. Acoust. Soc. Am. 134, 3983). To understand how passive thermometry could potentially be extended to ocean-basin scales, we present a comprehensive study of the coherent components of low-frequency ambient noise recorded on five hydroacoustic stations located in the Atlantic, Pacific, and Indian Oceans. The frequency dependence and seasonal variability of the spatial coherence and directionality of the low-frequency ambient noise were systematically examined at each of the tested site locations. Overall, a dominant coherent component of the low-frequency noise was found to be caused by seasonal ice-breaking events at the poles for test sites that have line-of-sight paths to polar ice. These findings could be used to guide the placement of hydrophone arrays over the globe for future long-range passive acoustic thermometry experiments.
11:30
2aUW14. Ambient noise in the Arctic Ocean measured with a drifting vertical line array. Peter F. Worcester, Matthew A. Dzieciuch (Scripps Inst. of Oceanogr., Univ. of California, San Diego, 9500 Gilman Dr., 0225, La Jolla, CA 92093-0225, pworcester@ucsd.edu), John A. Colosi (Dept. of Oceanogr., Naval Postgrad. School, Monterey, CA), and John N. Kemp (Woods Hole Oceanographic Inst., Woods Hole, MA)
In mid-April 2013, a Distributed Vertical Line Array (DVLA) with 22 hydrophone modules over a 600-m aperture immediately below the subsurface float was moored near the North Pole. The top ten hydrophones were spaced 14.5 m apart. The distances between the remaining hydrophones increased geometrically with depth. Temperature and salinity were measured by thermistors in the hydrophone modules and ten Sea-Bird MicroCATs. The mooring parted just above the anchor shortly after deployment and subsequently drifted slowly south toward Fram Strait until it was recovered in mid-September 2013. The DVLA recorded low-frequency ambient noise (1953.125 samples per second) for 108 minutes, six days per week. Previously reported noise levels in the Arctic are highly variable, with periods of low noise when the wind is low and the ice is stable and periods of high noise associated with pressure ridging. The Arctic is currently undergoing dramatic changes, including reductions in the extent and thickness of the ice cover, the amount of multiyear ice, and the size of the ice keels. The ambient noise data collected as the DVLA drifted will test the hypothesis that these changes result in longer and more frequent periods of low-noise conditions than experienced in the past.
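The passive acoustic thermometry in 2aUW13 rests on extracting inter-station travel times from noise cross-correlations. A toy sketch with a synthetic common noise component (sample rate, delay, and noise levels are all invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                       # sample rate, Hz
n = 2 ** 14
delay_samples = 37               # true inter-receiver travel-time difference

noise = rng.normal(size=n)       # coherent noise-field component
a = noise + 0.5 * rng.normal(size=n)                           # receiver A
b = np.roll(noise, delay_samples) + 0.5 * rng.normal(size=n)   # receiver B, delayed

# Cross-correlate; the peak lag estimates the travel time between receivers
xcorr = np.correlate(b, a, mode="full")
lag = np.argmax(xcorr) - (n - 1)
travel_time = lag / fs
```

In practice months of averaging are needed for the coherent arrival to emerge from the incoherent background; changes in the recovered travel time then track changes in the sound speed, and hence the temperature, along the path.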
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 7/8, 1:00 P.M. TO 4:30 P.M.
Session 2pAA
Architectural Acoustics and Engineering Acoustics: Architectural Acoustics and Audio II
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Invited Papers
1:00
2pAA1. Defining home recording spaces. Sebastian Otero (Acustic-O, Laurel 14, San Pedro Martir, Tlalpan, Mexico, D.F. 14650,
Mexico, sebastian@acustic-o.com)
The idea of home recording has been widely used throughout the audio and acoustics community for some time. The effort and investment put into these projects fluctuate across such a wide spectrum that there is no clear way to unify the concept of “home studio,” making it difficult for acoustical consultants and clients to reach an understanding of each other's project goals. This paper analyzes different spaces which vary in terms of privacy, comfort, size, audio quality, budget, type of materials, acoustic treatments, types of projects developed, and equipment, but which can all be called “home recording spaces,” in order to develop a more specific classification of these environments.
1:20
2pAA2. Vibrato parameterization. James W. Beauchamp (School of Music and Elec. & Comput. Eng., Univ. of Illinois at Urbana-Champaign, 1002 Eliot Dr., Urbana, IL 61801-6824, jwbeauch@illinois.edu)
In an effort to improve the quality of synthetic vibrato, many musical instrument tones with vibrato have been analyzed, and frequency-vs-time curves have been parameterized in terms of a time-varying offset and a time-varying vibrato depth. Results for variable mean F0 and instrument are presented. Whereas vocal vibrato appears to sweep out the resonance characteristic of the vocal tract,
as shown by amplitude-vs-frequency curves for the superposition of a range of harmonics, amplitude-vs-frequency curves for instruments are dominated by hysteresis effects that obscure their interpretation in terms of resonance characteristics. Nevertheless, there is a
strong correlation between harmonic amplitude and frequency modulations. An effort is being made to parameterize this effect in order
to provide efficient and expressive synthesis of vibrato tones with independent control of vibrato rate and tone duration.
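The offset/depth decomposition described in 2pAA2 can be sketched on a synthetic F0 track. The smoothing-window choice and all numerical values here are invented for illustration, not the authors' method:

```python
import numpy as np

# Synthetic frequency-vs-time track: slowly drifting offset plus sinusoidal vibrato
fs = 100.0                        # frame rate of the F0 track, Hz
t = np.arange(0, 2, 1 / fs)
offset = 440.0 + 5.0 * t          # time-varying offset (drifting mean F0)
depth, rate = 8.0, 6.0            # vibrato depth (Hz) and rate (Hz)
f0 = offset + depth * np.sin(2 * np.pi * rate * t)

# Offset estimate: moving average over roughly one vibrato period
win = int(fs / rate)
offset_est = np.convolve(f0, np.ones(win) / win, mode="same")

# Depth estimate: RMS of the residual, scaled assuming a sinusoidal modulation
resid = f0 - offset_est
depth_est = np.sqrt(2) * resid[win:-win].std()
```

Averaging over one vibrato period largely cancels the modulation while preserving the slow offset drift, so the residual carries the depth information.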
1:40
2pAA3. Get real: Improving acoustic environments in video games. Yuri Lysoivanov (Recording Arts, Tribeca Flashpoint Media
Arts Acad., 28 N. Clark St. Ste. 500, Chicago, IL 60602, yuri.lysoivanov@tfa.edu)
As processing power grows, the push for realism in video games continues to expand. However, techniques for generating realistic acoustic environments in games have often been limited. Using examples from major releases, this presentation will take a historical perspective on interactive environment design, discuss current methods for modeling acoustic environments in games, and suggest specific cases where acoustic expertise can provide an added layer to the interactive experience.
2:00
2pAA4. Applications of telematic mixing consoles in networked audio for musical performance, spatial audio research, and
sound installations. David J. Samson and Jonas Braasch (Rensselaer Polytechnic Inst., 1521 6th Ave., Apt. 303, Troy, NY 12180, samsod2@rpi.edu)
In today’s technologically driven world, the ability to connect across great distances via Internet Protocol is more important than ever. As the technology evolves, so does the art and science that relies upon it for collaboration and growth. Developing a state-of-the-art system for flexible and efficient routing of networked audio provides a platform for experimental musicians, researchers, and artists
to create freely without the restrictions imposed by traditional telepresence. Building on previous development and testing of a telematic
mixing console while addressing critical issues with the current platform and current practice, the console allows for the integration of
high-quality networked audio into computer assisted virtual environments (CAVE systems), sound and art installations, and other audio
driven research projects. Through user study, beta testing, and integration into virtual audio environments, the console has evolved to
meet the demand for power and flexibility critical to multi-site collaboration with high-quality networked audio. Areas of concern
addressed in development are computational efficiency, system latency, routing architecture, and results of continued user study.
2:20
2pAA5. Twenty years of electronic architecture in the Hilbert Circle Theatre. Paul Scarbrough (Akustiks, 93 North Main St., South
Norwalk, CT 06854, pscarbrough@akustiks.com) and Steve Barbar (E-coustic Systems, Belmont, MA)
In 1984, the Circle Theatre underwent a major renovation, transforming the original 3000+ seat venue into a 1780-seat hall with reclaimed internal volume dedicated to a new lobby and an orchestra rehearsal space. In 1996, a LARES acoustic enhancement system
replaced the original electronic architecture system, and has been used in every performance since that time. We will discuss details of
the renovation, the incorporation of the electronic architecture with other acoustical treatments, system performance over time, and plans
for the future.
2:40
2pAA6. Equalization and compression—Friends or foes? Alexander U. Case (Sound Recording Technol., Univ. of Massachusetts
Lowell, 35 Wilder St., Ste. 3, Lowell, MA 01854, alex@fermata.biz)
These two essential signal processors have overlapping capabilities. Tuning a sound system for any function requires complementary
interaction between equalization and compression. The timbral impact of compression is indirect, and can be counterintuitive. A deeper
understanding of compression parameters, particularly attack and release, clarifies the connection between compression and tone and
makes coordination with equalization more productive.
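The attack/release behavior discussed in 2pAA6 can be made concrete with a minimal feed-forward compressor gain computer. This is one common topology with arbitrary parameter values, not a description of any particular product:

```python
import numpy as np

def compressor_gain(x, fs, threshold_db=-20.0, ratio=4.0,
                    attack_ms=5.0, release_ms=50.0):
    # Feed-forward gain computer with one-pole attack/release smoothing.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    target = -over * (1.0 - 1.0 / ratio)     # static gain reduction, dB
    g = np.zeros_like(x)
    prev = 0.0
    for i, tgt in enumerate(target):
        a = a_att if tgt < prev else a_rel    # attack when reducing gain
        prev = a * prev + (1 - a) * tgt
        g[i] = prev
    return 10 ** (g / 20.0)

fs = 48000
t = np.arange(fs // 10) / fs
# Quiet 440-Hz tone followed by a loud one: gain should dip only in the loud half
x = np.concatenate([0.05 * np.sin(2 * np.pi * 440 * t),
                    0.9 * np.sin(2 * np.pi * 440 * t)])
gain = compressor_gain(x, fs)
```

The attack constant governs how quickly gain reduction engages on the loud segment; the release constant governs how quickly it recovers, which is where most of the timbral (and sometimes counterintuitive) effects arise.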
3:00–3:15 Break
3:15
2pAA7. Analysis of room acoustical characteristics by plane wave decomposition using spherical microphone arrays. Jin Yong Jeon, Muhammad Imran, and Hansol Lim (Dept. of Architectural Eng., Hanyang Univ., 17 Haengdang-dong, Seongdong-gu, Seoul, 133791, South Korea, jyjeon@hanyang.ac.kr)
Room acoustical characteristics have been investigated through the temporal and spatial structure of room impulse responses (IRs) at different audience positions in real halls. A 32-channel spherical microphone array is used for the measurement process. Specular and diffusive reflections in the IRs have been visualized in the temporal domain with sound-field decomposition analysis. For plane wave decomposition, spherical harmonics are used. A beamforming technique is also employed to make directional measurements and for the spatio-temporal characterization of the sound field. The directional measurements by beamforming are performed to produce impulse responses for different directions to characterize the sound. From the estimated spatial characterization, the reflective surfaces of the hall responsible for specular and diffusive reflections are identified.
3:30
2pAA8. Comparing the acoustical nature of a compressed earth block residence to a traditional wood-framed residence. Daniel Butko (Architecture, The Univ. of Oklahoma, 830 Van Vleet Oval, Norman, OK 73019, butko@ou.edu)
Various lost, misunderstood, or abandoned materials and methods throughout history can serve as viable options in today's impasse of nature and mankind. Similar to the 19th-century resurgence of concrete, there is a developing interest in earth as an architectural material capable of dealing with unexpected fluctuations and rising climate changes. Studying the acoustical nature of earthen construction can also serve as a method of application beyond aesthetics and thermal comfort. Innovations using Compressed Earth Block (CEB) have been developed and researched over the past few decades and have recently been the focus of a collaborative team of faculty and students at a NAAB-accredited College of Architecture, an ABET-accredited College of Engineering, and a local chapter of Habitat for Humanity. The multidisciplinary research project resulted in the design and simultaneous construction of both a CEB residence and a conventionally wood-framed version of equal layout, area, volume, apertures, and roof structure on adjacent sites to prove the structural, thermal, economical, and acoustical value of CEB as a viable residential building material. This paper reports acoustical measurements of both residences, such as STC, OITC, TL, NC, FFT, frequency responses, and background noise levels prior to occupancy.
3:45
2pAA9. A case study of a high end residential condominium building acoustical design and field performance testing. Erik J. Ryerson and Tom Rafferty (Acoust., Shen Milsom & Wilke, LLC, 2 North Riverside Plaza, Ste. 1460, Chicago, IL 60606, eryerson@smwllc.com)
A high-end multi-owner condominium building complex consists of 314 units configured in a central tower with a 39-story central core, as well as 21- and 30-story side towers. A series of project-specific acoustical design considerations related to condominium unit horizontal and vertical acoustical separation, as well as background noise control for building HVAC systems, were developed for the project construction documents and later field tested to confirm conformance with the acoustical design criteria. This paper presents the results of these building-wide field tests as well as a discussion of pitfalls encountered during design, construction, and post-construction.
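A much-simplified stand-in for the directional analysis in 2pAA7: steered-response power on an open 32-element spherical array under a free-field plane-wave model. This ignores scattering from a rigid sphere and uses an invented geometry and frequency; it only illustrates how a spherical array resolves arrival direction:

```python
import numpy as np

rng = np.random.default_rng(4)

# Quasi-uniform 32 points on a sphere (Fibonacci-style grid), radius 5 cm
n_mic = 32
idx = np.arange(n_mic) + 0.5
polar = np.arccos(1 - 2 * idx / n_mic)
azim = np.pi * (1 + 5 ** 0.5) * idx
r = 0.05
mics = r * np.column_stack([np.sin(polar) * np.cos(azim),
                            np.sin(polar) * np.sin(azim),
                            np.cos(polar)])

c, f = 343.0, 2000.0
k = 2 * np.pi * f / c

def unit(az, el):
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

# Plane wave from (az0, el0) plus a little measurement noise
az0, el0 = 1.0, 0.3
p = np.exp(1j * k * mics @ unit(az0, el0)) + 0.001 * rng.normal(size=n_mic)

# Steered-response power over a direction grid
azs = np.linspace(-np.pi, np.pi, 181)
els = np.linspace(-np.pi / 2, np.pi / 2, 91)
power = np.array([[np.abs(np.conj(np.exp(1j * k * mics @ unit(a, e))) @ p)
                   for a in azs] for e in els])
ie, ia = np.unravel_index(power.argmax(), power.shape)
est_az, est_el = azs[ia], els[ie]
```

Applying this direction scan within short time windows of a measured impulse response is, in spirit, how specular reflections are attributed to particular hall surfaces.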
4:00
2pAA10. Innovative ways to make cross laminated timber panels
sound-absorptive. Banda Logawa and Murray Hodgson (Mech. Eng., Univ.
of Br. Columbia, 2160-2260 West Mall, Vancouver, BC, Canada, logawa_
b@yahoo.com)
Cross Laminated Timber (CLT) panels typically consist of several glued
layers of wooden boards with orthogonally alternating directions. This
cross-laminating process allows CLT panels to be used as load-bearing plate
elements similar to concrete slabs. However, they are very sound-reflective,
which can lead to concerns about acoustics. Growing interest in applications
of CLT panels as building materials in North America has initiated much
current research on their acoustical properties. This project is aimed at
investigating ways to improve the sound-absorption characteristics of the
panels by integrating arrays of Helmholtz-resonator (HR) absorbers into the
panels and establishing design guidelines for CLT-HR absorber panels for
various room-acoustical applications. To design the new prototype panels,
several efforts have been made to measure and analyze the sound-absorption
characteristics of the exposed CLT surfaces in multiple buildings in British
Columbia, investigate suitable methods and locations to measure both normal and random incidence sound absorption characteristics, study the current manufacturing method of CLT panels, create acoustic models of CLT-HR absorber panels with various shapes and dimensions, and evaluate the
sound absorption performance of prototype panels. This paper will report
progress on this work.
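The CLT-HR panels in 2pAA10 build on the classical Helmholtz resonator, whose resonance frequency follows from the neck and cavity geometry. A quick calculation with illustrative dimensions (the 1.7r end correction is one common choice, not a value from the paper):

```python
import numpy as np

def helmholtz_resonance(c, neck_area, cavity_volume, neck_length, neck_radius):
    # Classical lumped-element Helmholtz frequency with an end-corrected neck.
    l_eff = neck_length + 1.7 * neck_radius   # total end correction, both neck ends
    return (c / (2 * np.pi)) * np.sqrt(neck_area / (cavity_volume * l_eff))

c = 343.0          # speed of sound, m/s
radius = 0.005     # neck radius, m
f_res = helmholtz_resonance(c, np.pi * radius ** 2,
                            cavity_volume=1e-3,    # 1 liter
                            neck_length=0.02,
                            neck_radius=radius)
```

Arrays of such resonators tuned to different frequencies are what would give a CLT panel broadband absorption rather than a single narrow dip.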
Contributed Papers
4:15
2pAA11. Investigating the persistence of sound frequencies in indoor television decors. Mohsen Karami (Dept. of Media Eng., IRIB Univ., No. 8, Dahmetry 4th Alley, Bahar Ave., Kermanshah, Kermanshah 6718839497, Iran, mohsenkarami.ir@gmail.com)
Adding decor to a television studio appears to create half-closed spaces that reduce the absorption of sound waves in the studio, so that frequency-dependent changes in sound energy absorption occur. To address this issue, reverberation time was measured for twelve decors used in IRIB channel programs in various studios, using pink-noise playback with a B&K 2260 analyzer according to the ISO 3382 standard. The survey shows that the values obtained in all of the decors exhibit a persistence of high frequencies, and that this effect occurred regardless of the shape of the decor and the studio.
TUESDAY AFTERNOON, 28 OCTOBER 2014
LINCOLN, 1:25 P.M. TO 5:00 P.M.
Session 2pAB
Animal Bioacoustics: Topics in Animal Bioacoustics II
Cynthia F. Moss, Chair
Psychological and Brain Sci., Johns Hopkins Univ., 3400 N. Charles St., Ames Hall 200B, Baltimore, MD 21218
Chair’s Introduction—1:25
Contributed Papers
1:30
2pAB1. Amplitude shifts in the cochlear microphonics of Mongolian
gerbils created by noise exposure. Shin Kawai and Hiroshi Riquimaroux
(Life and Medical Sci., Doshisha Univ., 1-3 Miyakotani, Tatara, Kyotanabe
610-0321, Japan, hrikimar@mail.doshisha.ac.jp)
The Mongolian gerbil (Meriones unguiculatus) was used to evaluate the effects of intense noise exposure on the function of the hair cells. Cochlear microphonics (CM) served as an index of hair cell function. The purpose of this study was to verify which frequency was most damaged by noise exposure and to examine relationships between that frequency and the animal's behaviors. We measured the growth and recovery of temporal shifts in CM amplitude. The CM was recorded from the round window. Test stimuli were tone bursts (1–45 kHz in half-octave steps) with durations of 50 ms (5-ms rise/fall times). The subject was exposed to broadband noise (0.5 to 60 kHz) at 90 dB SPL for 5 minutes. Threshold shifts were measured for the test tone bursts from immediately after the exposure up to 120 minutes after the exposure. Findings showed that a reduction in CM amplitude was observed after the noise exposure. In particular, a large reduction was produced in a frequency range around 22.4 kHz, whereas little reduction was observed around 4 kHz.
1:45
2pAB2. Detection of fish calls by using the small underwater sound recorder. Ikuo Matsuo (Tohoku Gakuin Univ., Tenjinzawa 2-1-1, Izumi-ku,
Sendai 9813193, Japan, matsuo@cs.tohoku-gakuin.ac.jp), Tomohito Imaizumi, and Tomonari Akamatsu (National Res. Inst. of Fisheries Eng., Fisheries Res. Agency, Kamisu, Japan)
Passive acoustic monitoring has been widely used for the survey of marine mammals. This method can be applied for any sounding creatures in
the ocean. Many fish, including croaker, grunt, and snapper, produce species-specific low-frequency sounds associated with courtship and spawning
behavior in chorus groups. In this paper, the acoustic data accumulated by
an autonomous small underwater recorder were used for the sound detection
analysis. The recorder was set on the sea floor off the coast of Chosi in
Japan (35°40′55″N, 140°49′14″E). The observed signals include not only target fish calls (white croaker) but also calls of other marine life and noises of vessels. We tried to extract the target fish calls from these sounds. First, recordings were processed by a bandpass filter (400–2400 Hz) to eliminate low-frequency noise contamination. Second, a low-frequency filter was applied to extract the envelope of the waveform and identify high-intensity sound units, which are possibly fish calls. Third, parameter tuning was conducted to fit the detection of target fish calls using absolute received intensity and duration. With this method, 28,614 fish calls could be detected from the observed signals during 130 hours. Compared with manually identified fish calls, the correct-detection and false-alarm rates were 0.88 and 0.03, respectively.
[This work was supported by CREST, JST.]
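The three-stage pipeline in 2pAB2 (band-pass filtering, envelope extraction, intensity thresholding) can be sketched on a synthetic record. Only the 400–2400-Hz band is taken from the abstract; the synthetic call, sample rate, window length, and threshold are invented for the demo:

```python
import numpy as np

fs = 8000.0
t = np.arange(int(2 * fs)) / fs
rng = np.random.default_rng(2)

# Synthetic record: low-frequency hum plus noise, with a 0.2-s "call" at 1 kHz
x = 0.3 * np.sin(2 * np.pi * 60 * t) + 0.05 * rng.normal(size=t.size)
call = (t >= 1.0) & (t < 1.2)
x[call] += np.sin(2 * np.pi * 1000 * t[call])

# 1) Band-pass 400-2400 Hz by masking the FFT
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, 1 / fs)
X[(freqs < 400) | (freqs > 2400)] = 0
xb = np.fft.irfft(X, n=x.size)

# 2) Envelope: rectify and smooth with a 10-ms moving average
env = np.convolve(np.abs(xb), np.ones(80) / 80, mode="same")

# 3) Threshold to flag high-intensity units (candidate calls)
detected = env > 0.25
onset = t[np.argmax(detected)]
```

The real detector additionally screens candidates by duration and absolute received level, which is what the abstract's "parameter tuning" refers to.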
2:00
2pAB3. Changes in note order stereotypy during learning in two species
of songbird, measured with automatic note classification. Benjamin N.
Taft (Landmark Acoust. LLC, 1301 Cleveland Ave., Racine, WI 53405,
ben.taft@landmarkacoustics.com)
In addition to mastering the task of performing the individual notes of a
song, many songbirds must also learn to produce each note in a stereotyped
order. As a bird practices its new song, it may perform thousands of bouts,
providing a rich source of information about how note phonology and note
type order change during learning. A combination of acoustic landmark
descriptions, neural network and hierarchical clustering classifiers, and Markov models of note order made it possible to measure note order stereotypy
in two species of songbird. Captive swamp sparrows (Melospiza melodia, 11 birds, 92,063 notes/bird) and wild tree swallows (Tachycineta bicolor, 18 birds, 448 syllables/bird) were recorded during song development. The predictability of swamp sparrow note order showed a significant increase during the month-long recording period (F1,162 = 9977, p < 0.001). Note order stereotypy in tree swallows also increased by a significant amount over a month-long field season (Mann-Whitney V = 12, p < 0.001). Understanding
changes in song stereotypy can improve our knowledge of vocal learning,
performance, and cultural transmission.
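A toy version of the Markov note-order approach in 2pAB3: estimate a first-order transition matrix from a note sequence and score how predictable each observed transition is. The specific predictability statistic and smoothing below are generic illustrative choices, not necessarily the authors' measure:

```python
import numpy as np

def note_order_predictability(sequence, n_types):
    # First-order Markov transition matrix with Laplace smoothing;
    # predictability = mean probability assigned to the observed next note.
    counts = np.ones((n_types, n_types))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[a, b] += 1
    trans = counts / counts.sum(axis=1, keepdims=True)
    probs = [trans[a, b] for a, b in zip(sequence[:-1], sequence[1:])]
    return float(np.mean(probs))

rng = np.random.default_rng(3)
stereotyped = [0, 1, 2, 3] * 50                 # fixed, song-like note order
variable = list(rng.integers(0, 4, 200))        # random note order

p_fixed = note_order_predictability(stereotyped, 4)
p_rand = note_order_predictability(variable, 4)
```

Tracking such a statistic across practice bouts is one way to quantify how note-order stereotypy rises during song learning.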
2pAB4. Plugin architecture for creating algorithms for bioacoustic signal processing software. Christopher A. Marsh, Marie A. Roch (Dept. of Comput. Sci., San Diego State Univ., 5500 Campanile Dr., San Diego, CA 92182-7720, cmarsh@rohan.sdsu.edu), and David K. Mellinger (Cooperative Inst. for Marine Resources Studies, Oregon State Univ., Newport, OR)
There are several acoustic monitoring software packages that allow for the creation and execution of algorithms that automate detection, classification, and localization (DCL). Algorithms written for one program are generally not portable to other programs, and usually must be written in a specific programming language. We have developed an application programming interface (API) that seeks to resolve these issues by providing a plugin framework for creating algorithms for two acoustic monitoring packages: Ishmael and PAMGuard. This API will allow new detection, classification, and localization algorithms to be written for these programs without requiring knowledge of the monitoring software's source code or inner workings, and lets a single implementation run on either platform. The API also allows users to write DCL algorithms in a wide variety of languages. We hope that this will promote the sharing and reuse of algorithm code. [Funding from ONR.]
3:15
2pAB7. Temporal patterns in detections of sperm whales (Physeter macrocephalus) in the North Pacific Ocean based on long-term passive acoustic monitoring. Karlina Merkens (Protected Species Div., NOAA Pacific Islands Fisheries Sci. Ctr., NMFS/PIFSC/PSD/Karlina Merkens, 1845 Wasp Blvd., Bldg. 176, Honolulu, HI 96818, karlina.merkens@noaa.gov), Anne Simonis (Scripps Inst. of Oceanogr., Univ. of California San Diego, La Jolla, CA), and Erin Oleson (Protected Species Div., NOAA Pacific Islands Fisheries Sci. Ctr., Honolulu, HI)
2:30
2pAB5. Acoustic detection of migrating gray, humpback, and blue
whales in the coastal, northeast Pacific. Regina A. Guazzo, John A. Hildebrand, and Sean M. Wiggins (Scripps Inst. of Oceanogr., Univ. of California, San Diego, 9450 Gilman Dr., #80237, La Jolla, CA 92092, rguazzo@
ucsd.edu)
Many large cetaceans of suborder Mysticeti make long annual migrations along the California coast. A bottom-mounted hydrophone was
deployed in shallow water off the coast of central California and recorded
during November 2012 to September 2013. The recording was used to
determine the presence of blue whales, humpback whales, and gray whales.
Gray whale calls were further analyzed and the number of calls per day and
per hour were calculated. It was found that gray whales make their migratory M3 calls at a higher rate than previously observed. There were also
more M3 calls recorded at night than during the day. This work will be continued to study the patterns and interactions between species and compared
with shore-based survey data.
2:45
2pAB6. Importing acoustic metadata into the Tethys scientific workbench/database. Sean T. Herbert (Marine Physical Lab., Scripps Inst. of
Oceanogr., 8237 Lapiz Dr., San Diego, CA 92126, sth.email@gmail.com)
and Marie A. Roch (Comput. Sci., San Diego State Univ., San Diego,
CA)
Tethys is a temporal-spatial scientific workbench/database created to
enable the aggregation and analysis of acoustic metadata from recordings
such as animal detections and localizations. Tethys stores data in a specific
format and structure, but researchers produce and store data in various formats. Examples of storage formats include spreadsheets, relational databases, or comma-separated value (CSV) text files. Thus, one aspect of the
Tethys project has been to focus on providing options to allow data import
regardless of the format in which it is stored. Data import can be accomplished in one of two ways. The first is translation, which transforms source
data from other formats into the format Tethys uses. Translation does not
require any programming, but rather the specification of an import map
which associates the researcher’s data with Tethys fields. The second
method is a framework called Nilus that enables detection and localization
algorithms to create Tethys formatted documents directly. Programs can either be designed around Nilus, or be modified to make use of it, which does
require some programming. These two methods have been used to successfully import over 4.5 million records into Tethys. [Work funded by NOPP/
ONR/BOEM.]
3:00–3:15 Break
2153
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Sperm whales (Physeter macrocephalus), a long-lived, cosmopolitan
species, are well suited for long-term studies, and their high amplitude echolocation signals make them ideal for passive acoustic monitoring. NOAA’s
Pacific Islands Fisheries Science Center has deployed High-frequency
Acoustic Recording Packages (200 kHz sampling rate) at 13 deep-water
locations across the central and western North Pacific Ocean since 2005.
Recordings from all sites were manually analyzed for sperm whale signals,
and temporal patterns were examined on multiple scales. There were sperm
whale detections at all sites, although the rate of detection varied by location, with the highest rate at Wake Island (15% of samples), and the fewest
detections at sites close to the equator (<1%). Only two locations (Saipan
and Pearl and Hermes Reef) showed significant seasonal patterns, with more
detections in the early spring and summer than in later summer or fall. There
were no significant patterns relating to lunar cycles. Analysis of diel variation revealed that sperm whales were detected more during the day and
night compared to dawn and dusk at most sites. The variability shown in
these results emphasizes the importance of assessing basic biological patterns and variations in the probability of detection before progressing to further analysis, such as density estimation, where the effects of uneven
sampling effort could significantly influence results.
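The diel comparison in 2pAB7 above (day/night versus dawn/dusk) requires binning detection times by light regime. A minimal sketch, with sunrise/sunset hours and twilight width as stand-in assumptions (a real analysis would take these from an ephemeris for each site and date):

```python
def diel_bin(hour, sunrise=6.5, sunset=18.75, twilight=1.0):
    """Assign an hour-of-day (0-24, local time) to a diel category.

    'dawn'/'dusk' are +/- `twilight` hours around sunrise/sunset; the
    sunrise/sunset values here are arbitrary placeholders.
    """
    if abs(hour - sunrise) <= twilight:
        return "dawn"
    if abs(hour - sunset) <= twilight:
        return "dusk"
    if sunrise < hour < sunset:
        return "day"
    return "night"

# Tally hypothetical detection times into diel categories.
counts = {}
for h in [2.0, 6.0, 9.0, 13.5, 18.2, 23.1]:
    counts[diel_bin(h)] = counts.get(diel_bin(h), 0) + 1
print(counts)
```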
3:30
2pAB8. Automatic detection of tropical fish calls recorded on moored
acoustic recording platforms. Maxwell B. Kaplan, T. A. Mooney (Biology, Woods Hole Oceanographic Inst., 266 Woods Hole Rd., MS50, Woods
Hole, MA 02543, mkaplan@whoi.edu), and Jim Partan (Appl. Ocean Phys.
and Eng., Woods Hole Oceanographic Inst., Woods Hole, MA)
Passive acoustic recording of biological sound production on coral reefs
can help identify spatial and temporal differences among reefs; however,
the contributions of individual fish calls to overall trends are often overlooked. Given that the diversity of fish call types may be indicative of fish
species diversity on a reef, quantifying these call types could be used as a
proxy measure for biodiversity. Accordingly, automatic fish call detectors
are needed because long acoustic recorder deployments can generate large
volumes of data. In this investigation, we report the development and performance of two detectors—an entropy detector, which identifies troughs in
entropy (i.e., uneven distribution of entropy across the frequency band of interest, 100–1000 Hz), and an energy detector, which identifies peaks in root
mean square sound pressure level. Performance of these algorithms is
assessed against a human identification of fish sounds recorded on a coral
reef in the US Virgin Islands in 2013. Results indicate that the entropy and
energy detectors, respectively, have false positive rates of 9.9% and 9.9%
with false negative rates of 28.8% and 31.3%. These detections can be used
to cluster calls into types, in order to assess call type diversity at different
reefs.
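The two detector ideas in 2pAB8 can be sketched in a few lines: a spectral-entropy trough detector and an RMS energy peak detector over the 100–1000 Hz band. Frame length and thresholds below are arbitrary stand-ins, not the authors' settings:

```python
import numpy as np

def band_entropy(frame, fs, lo=100.0, hi=1000.0):
    """Normalized spectral entropy (0..1) of one frame within [lo, hi] Hz.

    Tonal fish calls concentrate energy in few bins, giving low entropy;
    broadband noise spreads energy evenly, giving entropy near 1.
    """
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    band = spec[(freqs >= lo) & (freqs <= hi)]
    p = band / band.sum()
    return float(-(p * np.log2(p + 1e-12)).sum() / np.log2(len(p)))

def detect(x, fs, frame_len=1024, ent_thresh=0.6, rms_factor=3.0):
    """Flag frames by an entropy trough OR an RMS energy peak
    (illustrative thresholds, chosen for this synthetic example)."""
    frames = x[: len(x) // frame_len * frame_len].reshape(-1, frame_len)
    ent = np.array([band_entropy(f, fs) for f in frames])
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return (ent < ent_thresh) | (rms > rms_factor * np.median(rms))

fs = 8000
t = np.arange(fs) / fs
x = 0.01 * np.random.default_rng(0).standard_normal(fs)
x[2000:3000] += np.sin(2 * np.pi * 400 * t[2000:3000])  # a synthetic "call"
hits = detect(x, fs)
print(hits)
```

On this synthetic record, the frame containing the 400-Hz tone trips both criteria while pure-noise frames trip neither.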
3:45
2pAB9. Social calling behavior in Southeast Alaskan humpback whales
(Megaptera novaeangliae): Communication and context. Michelle Fournet (Dept. of Fisheries and Wildlife, Oregon State Univ., 425 SE Bridgeway
Ave., Corvallis, OR 97333, mbellalady@gmail.com), Andrew R. Szabo
(Alaska Whale Foundation, Petersburg, AK), and David K. Mellinger (Cooperative Inst. for Marine Resources Studies, Oregon State Univ., Newport,
OR)
Across their range humpback whales (Megaptera novaeangliae) produce
a wide array of vocalizations including song, foraging vocalizations, and a
variety of non-song vocalizations known as social calls. This study investigates the social calling behavior of Southeast Alaskan humpback whales
from a sample of 299 vocalizations paired with 365 visual surveys collected
over a three-month period on a foraging ground in Frederick Sound in
Southeast Alaska. Vocalizations were classified using visual-aural analysis,
statistical cluster analyses, and discriminant function analysis. The relationship between vocal behavior and spatial-behavioral context was analyzed
using a Poisson log-linear regression (PLL). Preliminary results indicate
that some call types were commonly produced while others were rare, and
that the greatest variety of calling occurred when whales were clustered.
Moreover, calling rates in one vocal class, the pulsed (P) vocal class, were
negatively correlated with mean nearest-neighbor distance, indicating that P
calling rates increased as animals clustered, suggesting that the use of P
calls may be spatially mediated. The data further suggest that vocal behavior
varies based on social context, and that vocal behavior trends toward complexity as the potential for social interactions increases. [Work funded by
Alaska Whale Foundation and ONR.]
4:00
2pAB10. First measurements of humpback whale song received sound
levels recorded from a tagged calf. Jessica Chen, Whitlow W. L. Au
(Hawaii Inst. of Marine Biology, Univ. of Hawaii at Manoa, 46-007 Lilipuna Rd., Kaneohe, HI 96744, jchen2@hawaii.edu), and Adam A. Pack
(Departments of Psych. and Biology, Univ. of Hawaii at Hilo, Hilo, HI)
There is increasing concern over the potential ecological effects from
high levels of oceanographic anthropogenic noise on marine mammals. Current US NOAA regulations on received noise levels as well as the Draft
Guidance for Assessing the Effect of Anthropogenic Sound on Marine
Mammals are based on limited studies conducted on few species. For the
regulations to be effective, it is important to first understand what whales
hear and their received levels of natural sounds. This novel study presents
the measurement of sound pressure levels of humpback whale song received
at a humpback whale calf in the wintering area of Maui, Hawaii. This individual was tagged with an Acousonde acoustic and data recording tag and
captured vocalizations from a singing male escort associated with the calf
and its mother. Although differences in behavioral reaction to anthropogenic
versus natural sounds have yet to be quantified, this represents the first
known measurement of sound levels that a calf may be exposed to naturally
from conspecifics. These levels can also be compared to calculated humpback song source levels. Despite its recovering population, the humpback whale is an endangered species, and understanding its acoustic environment is important for continued regulation and protection.
4:15
2pAB11. Seismic airgun surveys and vessel traffic in the Fram Strait
and their contribution to the polar soundscape. Sharon L. Nieukirk,
Holger Klinck, David K. Mellinger, Karolin Klinck, and Robert P. Dziak
(Cooperative Inst. for Marine Resources Studies, Oregon State Univ., 2030
SE Marine Sci. Dr., Newport, OR 97365, sharon.nieukirk@oregonstate.
edu)
Low-frequency (<1 kHz) noise associated with human offshore activities has increased dramatically over the last 50 years. Of special interest are
areas such as the Arctic where anthropogenic noise levels are relatively low
but could change dramatically, as sea ice continues to shrink and trans-polar
shipping routes open. In 2009, we began an annual deployment of two calibrated autonomous hydrophones in the Fram Strait to record underwater ambient sound continuously for one year at a sampling rate of 2 kHz. Ambient
noise levels were summarized via long-term spectral average plots and
reviewed for anthropogenic sources. Vessel traffic data were acquired from
the Automatic Identification System (AIS) archive and ship density was
estimated by weighting vessel tracklines by vessel length. Background noise
levels were dominated by sounds from seismic airguns during spring,
summer and fall months; during summer these sounds were recorded in all
hours of the day and all days of a month. Ship density in the Fram Strait
peaked in late summer and increased every year. Future increases in ship
traffic and seismic surveys coincident with melting sea ice will increase ambient noise levels, potentially affecting the numerous species of acoustically
active whales using this region.
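The long-term spectral averages used in 2pAB11 to summarize a year of recordings can be sketched as block-averaged power spectra; window and averaging sizes below are illustrative, not the study's settings:

```python
import numpy as np

def ltsa(x, fs, nfft=512, avg_secs=5.0):
    """Long-term spectral average: mean power spectrum per time bin.

    Returns an array of shape (n_bins, nfft//2 + 1) in dB re an
    arbitrary reference -- a sketch of the LTSA plots used for
    reviewing soundscapes for anthropogenic sources.
    """
    hop = nfft
    frames_per_bin = max(1, int(avg_secs * fs / hop))
    n_frames = len(x) // hop
    spec = np.abs(np.fft.rfft(
        x[: n_frames * hop].reshape(n_frames, hop), axis=1)) ** 2
    n_bins = n_frames // frames_per_bin
    spec = spec[: n_bins * frames_per_bin]
    binned = spec.reshape(n_bins, frames_per_bin, -1).mean(axis=1)
    return 10 * np.log10(binned + 1e-20)

fs = 2000  # Hz, matching the 2-kHz sampling rate mentioned above
x = np.random.default_rng(1).standard_normal(20 * fs)  # 20 s of noise
L = ltsa(x, fs)
print(L.shape)
```

In practice each time bin would span minutes to hours, compressing a year of data into one browsable image.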
4:30
2pAB12. Using the dynamics of mouth opening in echolocating bats to
predict pulse parameters among individual Eptesicus fuscus. Laura N.
Kloepper, James A. Simmons (Dept. of Neurosci., Brown Univ., 185 Meeting St. Box GL-N, Brown University Providence, RI 02912, laura_kloepper@brown.edu), and John R. Buck (Elec. and Comput. Eng., Univ. of
Massachusetts Dartmouth, Dartmouth, MA)
The big brown bat (Eptesicus fuscus) produces echolocation sounds in
its larynx and emits them through its open mouth. Individual mouth-opening
cycles last for about 50 ms, with the sound produced in the middle, when
the mouth is approaching or reaching maximum gape angle. In previous
work, the mouth gape-angle at pulse emission only weakly predicted pulse
duration and the terminal frequency of the first-harmonic FM downsweep.
In the present study, we investigated whether the dynamics of mouth opening around the time of pulse emission predict additional pulse waveform
characteristics. Mouth angle openings for 24 ms before and 24 ms after
pulse emission were compared to pulse waveform parameters for three big
brown bats performing a target detection task. In general, coupling to the air
through the mouth seems less important than laryngeal factors for determining acoustic parameters of the broadcasts. Differences in mouth opening dynamics and pulse parameters among individual bats highlight this relation.
[Supported by NSF and ONR.]
4:45
2pAB13. Investigating whistle characteristics of three overlapping populations of false killer whales (Pseudorca crassidens) in the Hawaiian
Islands. Yvonne M. Barkley, Erin Oleson (NOAA Pacific Islands Fisheries
Sci. Ctr., 1845 Wasp Blvd., Bldg. 176, Honolulu, HI 96818, yvonne.barkley@noaa.gov), and Julie N. Oswald (Bio-Waves, Inc., Encinitas, CA)
Three genetically distinct populations of false killer whales (Pseudorca
crassidens) reside in the Hawaiian Archipelago: two insular populations
(one within the main Hawaiian Islands [MHI] and the other within the
Northwestern Hawaiian Islands [NWHI]), and a wide-ranging pelagic population with a distribution overlapping the two insular populations. The
mechanisms that created and maintain the separation among these populations are unknown. To investigate the distinctiveness of whistles produced
by each population, we adapted the Real-time Odontocete Call Classification Algorithm (ROCCA) whistle classifier to classify false killer whale
whistles to population based on 54 whistle measurements. A total of 911 whistles
from the three populations were included in the analysis. Results show that
the MHI population is vocally distinct, with up to 80% of individual whistles correctly classified. The NWHI and pelagic populations achieved
between 48 and 52% correct classification for individual whistles. We evaluated the sensitivity of the classifier to the input whistle measurements to
determine which variables are driving the classification results. Understanding how these three populations differ acoustically may improve the efficacy
of the classifier and create new acoustic monitoring approaches for a difficult-to-study species.
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA G, 1:45 P.M. TO 4:15 P.M.
Session 2pAO
Acoustical Oceanography: General Topics in Acoustical Oceanography
John A. Colosi, Chair
Department of Oceanography, Naval Postgraduate School, 833 Dyer Road, Monterey, CA 93943

Contributed Papers

1:45
2pAO1. Analysis of sound speed fluctuations in the Fram Strait near
Greenland during summer 2013. Kaustubha Raghukumar, John A. Colosi
(Oceanogr., Naval Postgrad. School, 315B Spanagel Hall, Monterey, CA
93943, kraghuku@nps.edu), and Peter F. Worcester (Scripps Inst. of Oceanogr., Univ. of California San Diego, San Diego, CA)
We analyze sound speed fluctuations in roughly 600 m deep polar waters
from a recent experiment. The Thin-ice Arctic Acoustics Window
(THAAW) experiment was conducted in the waters of Fram Strait, east of
Greenland, during the summer of 2013. A drifting acoustic mooring that
incorporated environmental sensors measured temperature and salinity over
a period of four months, along a 500 km north-south transect. We examine
the relative contributions of salinity-driven polar internal wave activity, and
temperature/salinity variability along isopycnal surfaces (spice) on sound
speed perturbations in the Arctic. Both internal-wave and spice effects are
compared against the more general deep water PhilSea09 measurements.
Additionally, internal wave spectra, energies, and modal bandwidth are
compared against the well-known Garrett-Munk spectrum. Given the resurgence of interest in polar acoustics, we believe that this analysis will help
parameterize sound speed fluctuations in future acoustic propagation
models.
2:00
2pAO2. Sound intensity fluctuations due to mode coupling in the presence of nonlinear internal waves in shallow water. Boris Katsnelson (Marine geoSci., Univ. of Haifa, Mt Carmel, Haifa 31905, Israel, katz@phys.
vsu.ru), Valery Grogirev (Phys., Voronezh Univ., Voronezh, Russian Federation), and James Lynch (WHOI, Woods Hole, MA)
Intensity fluctuations of low-frequency LFM signals (band 270–330 Hz) were observed in the Shallow Water 2006 experiment in the presence of a moving train of about seven separate nonlinear internal waves crossing the acoustic track at an angle of about 80 degrees. It is shown that the spectrum of the sound intensity fluctuations, calculated for the time period of radiation (about 7.5 minutes), contains a few peaks corresponding to a predominant frequency of ~6.5 cph (and its harmonics) and a small peak corresponding to a comparatively high frequency, about 30 cph, which the authors interpret as a manifestation of horizontal refraction. The values of these frequencies are in accordance with the theory of mode coupling and horizontal refraction on moving nonlinear internal waves developed earlier by the authors. [Work was supported by BSF.]
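The spectral peak-picking described in 2pAO2 can be illustrated on a synthetic intensity record containing two lines near the reported frequencies (~6.5 and ~30 cph); this is an illustration of the analysis idea, not the authors' processing:

```python
import numpy as np

# Two hours of simulated received intensity, one sample per second,
# with fluctuation lines at 6.5 and 30 cycles per hour (cph).
t = np.arange(0.0, 7200.0, 1.0)          # seconds
cph = 1.0 / 3600.0                       # 1 cph expressed in Hz
I = (1.0 + 0.5 * np.sin(2 * np.pi * 6.5 * cph * t)
         + 0.1 * np.sin(2 * np.pi * 30.0 * cph * t))

# Spectrum of the mean-removed intensity; frequency axis converted to cph.
spec = np.abs(np.fft.rfft(I - I.mean())) ** 2
freqs_cph = np.fft.rfftfreq(len(t), d=1.0) * 3600.0
peak = freqs_cph[np.argmax(spec)]
print(peak)  # dominant line near 6.5 cph
```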
2:15
2pAO3. A comparison of measured and forecast acoustic propagation
in a virtual denied area characterized by a heterogeneous data collection asset-network. Yong-Min Jiang and Alberto Alvarez (Res. Dept.,
NATO-STO-Ctr. for Maritime Res. and Experimentation, Viale San Bartolomeo 400, La Spezia 19126, Italy, jiang@cmre.nato.int)
The fidelity of sonar performance predictions depends on the model
used and the quantity and quality of the environmental information that is
available. To investigate the impact of the oceanographic information collected by a heterogeneous and near-real time adaptive network of robots in a
simulated access-denied area, a field experiment (REP13-MED) was conducted by CMRE during August 2013 in an area (70 × 81 km) located offshore of La Spezia (Italy), in the Ligurian Sea. The sonar performance assessment makes use of acoustic data recorded by a vertical line array at
source—receiver ranges from 0.5 to 30 km. Continuous wave pulses at multiple frequencies (300–600 Hz) were transmitted at two source depths, 25
and 60 meters, at each range. At least 60 pings were collected for each
source depth to build up the statistics of the acoustic received level and
quantify the measurement uncertainty. A comparison of the acoustic transmission loss measured and predicted using an ocean prediction model
(ROMS) assimilating the observed oceanographic data is presented, and the
performance of the observational network is evaluated. [Work funded by
NATO–Allied Command Transformation]
2:30
2pAO4. Performance assessment of a short hydrophone array for
seabed characterization using natural-made ambient noise. Peter L.
Nielsen (Res. Dept., STO-CMRE, V.S. Bartolomeo 400, La Spezia 19126,
Italy, nielsen@cmre.nato.int), Martin Siderius, and Lanfranco Muzi (Dept.
of Elec. and Comput. Eng., Portland State Univ., Portland, OR)
The passive acoustic estimate of seabed properties using natural-made
ambient noise received on a glider equipped hydrophone array provides the
capability to perform long duration seabed characterization surveys on
demand in denied areas. However, short and compact arrays associated with
gliders are limited to a few hydrophones and small aperture. Consequently,
these arrays exhibit lower resolution of the estimated seabed properties, and
the reliability of the environmental estimates may be questionable. The
objective of the NATO-STO CMRE sea trial REP14-MED (conducted west
of Sardinia, Mediterranean Sea) is to evaluate the performance of a prototype glider array with eight hydrophones in a line and variable hydrophone
spacing for seabed characterization using natural-made ambient noise. This
prototype array is deployed vertically above the seabed together with a 32-element reference vertical line array. The arrays are moored at different sites
with varying sediment properties and stratification. The seabed reflection
properties and layering structure at these sites are estimated from ambient
noise using both arrays and the results are compared to assess the glider
array performance. Synthetic extension of the glider array is performed to
enhance the resolution of the bottom properties, and the results are compared with those from the longer reference array.
2:45
2pAO5. Species classification of individual fish using the support vector
machine. Atsushi Kinjo, Masanori Ito, Ikuo Matsuo (Tohoku Gakuin Univ.,
Tenjinzawa 2-1-1, Izumi-ku, Sendai, Miyagi 981-3193, Japan, atsushi.
kinjo@gmail.com), Tomohito Imaizumi, and Tomonari Akamatsu (Fisheries Res. Agency, National Res. Inst. of Fisheries Eng., Hasaki, Ibaraki,
Japan)
Fish species classification using echo-sounders is important for fisheries. In the case of a fish school of mixed species, it is necessary to classify
individual fish species by isolating echoes from multiple fish. A broadband
signal, which offered the advantage of high range resolution, was applied to
detect individual fish for this purpose. The positions of fish were estimated
168th Meeting: Acoustical Society of America
2155
2p TUE. PM
Contributed Papers
from the time difference of arrivals by using the split-beam system. The target strength (TS) spectrum of individual fish echo was computed from the
isolated echo and the estimated position. In this paper, the Support Vector
Machine was introduced to classify fish species by using these TS spectra.
In addition, it is well known that the TS spectra are dependent on not only
fish species but also fish size. Therefore, it is necessary to classify both fish
species and size by using these features. We tried to classify two species
and two sizes of schools. Subject species were chub mackerel (Scomber
japonicus) and Japanese jack mackerel (Trachurus japonicus). We calculated the classification rates while limiting the training data, frequency bandwidth, and tilt angles. The best classification rate obtained was 71.6%.
[This research was supported by JST, CREST.]
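The SVM classification of target-strength spectra in 2pAO5 can be sketched with a minimal linear SVM trained by the Pegasos sub-gradient method on toy "spectra." The data here are synthetic stand-ins; the study used measured broadband TS spectra and, presumably, a full SVM library:

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM (hinge loss) with the Pegasos sub-gradient
    method. Labels y must be +/-1. Returns weights w (bias folded in)."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (Xb[i] @ w) < 1:        # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Toy "TS spectra": species A peaks in low bins, species B in high bins.
rng = np.random.default_rng(1)
A = rng.normal(0, 0.3, (40, 8)); A[:, :4] += 1.0
B = rng.normal(0, 0.3, (40, 8)); B[:, 4:] += 1.0
X = np.vstack([A, B]); y = np.array([1] * 40 + [-1] * 40)
w = pegasos_svm(X, y)
pred = np.sign(np.hstack([X, np.ones((80, 1))]) @ w)
acc = (pred == y).mean()
print(acc)
```

Extending this to joint species-and-size classes, as in the abstract, would mean one such classifier per class pair or a multiclass scheme.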
3:00–3:15 Break

3:15

2pAO6. Waveform inversion of ambient noise cross-correlation functions in a coastal ocean environment. Xiaoqin Zang, Michael G. Brown, Neil J. Williams (RSMAS, Univ. of Miami, 4600 Rickenbacker Cswy., Miami, FL 33149, xzang@rsmas.miami.edu), Oleg A. Godin (ESRL, NOAA, Boulder, CO), Nikolay A. Zabotin, and Liudmila Zabotina (CIRES, Univ. of Colorado, Boulder, CO)

Approximations to Green's functions have been obtained by cross-correlating concurrent records of ambient noise measured on near-bottom instruments at 5 km range in a 100 m deep coastal ocean environment. Inversion of the measured cross-correlation functions presents a challenge, as neither ray nor modal arrivals are temporally resolved. We exploit both ray and modal expansions of the wavefield to address the inverse problem using two different parameterizations of the seafloor structure. The inverse problem is investigated by performing an exhaustive search over the relevant parameter space to minimize the integrated squared difference between computed and measured correlation-function waveforms. To perform the waveform-based analysis described, it is important that subtle differences between correlation functions and Green's functions are accounted for. [Work supported by NSF and ONR.]

3:30

2pAO7. Application of time reversal to acoustic noise interferometry in shallow water. Boris Katsnelson (Marine GeoSci., Univ. of Haifa, 1, Universitetskaya sq, Voronezh 394006, Russian Federation, katz@phys.vsu.ru), Oleg Godin (Univ. of Colorado, Boulder, CO), Jixing Qin (State Key Lab, Inst. of Acoust., Beijing, China), Nikolai Zabotin, Liudmila Zabotina (Univ. of Colorado, Boulder, CO), Michael Brown, and Neil Williams (Univ. of Miami, Miami, FL)

The two-point cross-correlation function (CCF) of diffuse acoustic noise approximates the Green's function, which describes deterministic sound propagation between the two measurement points. The similarity between CCFs and Green's functions motivates applying to acoustic noise interferometry techniques that were originally developed for remote sensing using broadband, coherent compact sources. Here, time reversal is applied to CCFs of the ambient and shipping noise measured in 100-meter-deep water in the Straits of Florida. Noise was recorded continuously for about six days at three points near the seafloor by pairs of hydrophones separated by 5.0, 9.8, and 14.8 km. In numerical simulations, a strong focusing occurs in the vicinity of one hydrophone when the Green's function is back-propagated from the other hydrophone, with the position and strength of the focus being sensitive to the density, sound speed, and attenuation coefficient in the bottom. Values of these parameters in the experiment are estimated by optimizing the focusing of the back-propagated CCFs. The results are consistent with the values of the seafloor parameters evaluated independently by other means.

3:45

2pAO8. Shear wave inversion in a shallow coastal environment. Gopu R. Potty (Dept. of Ocean Eng., Univ. of Rhode Island, 115 Middleton Bldg., Narragansett, RI 02882, potty@egr.uri.edu), Jennifer L. Giard (Marine Acoust., Inc., Middletown, RI), James H. Miller, Christopher D. P. Baxter (Dept. of Ocean Eng., Univ. of Rhode Island, Narragansett, RI), Marcia J. Isakson, and Benjamin M. Goldsberry (Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX)

Estimation of the shear properties of seafloor sediments in littoral waters is important in modeling acoustic propagation and predicting the strength of sediments for geotechnical applications. One of the promising approaches to estimating shear speed is using the dispersion of seismo-acoustic interface (Scholte) waves that travel along the water-sediment boundary. The propagation speed of the Scholte waves is closely related to the shear wave speed over a depth of 1–2 wavelengths into the seabed. A geophone system for the measurement of these interface waves, along with an inversion scheme that inverts the Scholte wave dispersion data for sediment shear speed profiles, has been developed. The components of this inversion scheme are a genetic algorithm and a forward model based on a dynamic stiffness matrix approach. The effects of the assumptions of the forward model on the inversion, particularly the shear wave depth profile, will be explored using a finite element model. The results obtained from a field test conducted in very shallow waters in Davisville, RI, will be presented. These results are compared to historic estimates of shear speed and recently acquired vibracore data. [Work sponsored by ONR, Ocean Acoustics.]
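The noise-interferometry idea running through 2pAO6 and 2pAO7 — that the cross-correlation of diffuse noise at two receivers approximates the Green's function, peaking at the inter-receiver travel time — can be illustrated on synthetic data (the delay and noise levels here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000                      # Hz
n = 200_000                    # 200 s of synthetic "noise"
delay = 333                    # propagation delay A -> B, in samples

src = rng.standard_normal(n)             # shared diffuse-noise surrogate
a = src + 0.2 * rng.standard_normal(n)   # receiver A record
b = np.roll(src, delay) + 0.2 * rng.standard_normal(n)  # receiver B record

# Circular cross-correlation via FFT; the peak lag approximates the
# travel time between the receivers, as in the empirical Green's function.
A = np.fft.rfft(a)
B = np.fft.rfft(b)
ccf = np.fft.irfft(np.conj(A) * B)
lag = int(np.argmax(ccf))
print(lag / fs)  # recovered travel time in seconds
```

Real ocean-noise CCFs converge far more slowly and differ from the true Green's function in subtle ways, which is exactly the complication 2pAO6 addresses.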
4:00
2pAO9. The effects of pH on acoustic transmission loss in an estuary.
James H. Miller (Ocean Eng., Univ. of Rhode Island, URI Bay Campus, 215
South Ferry Rd., Narragansett, RI 02882, miller@egr.uri.edu), Laura Kloepper (Neurosci., Brown Univ., Providence, RI), Gopu R. Potty (Ocean Eng.,
Univ. of Rhode Island, Narragansett, RI), Arthur J. Spivack, Steven
D’Hondt, and Cathleen Turner (Graduate School of Oceanogr., Univ. of
Rhode Island, Narragansett, RI)
Increasing atmospheric CO2 will cause the ocean to become more acidic
with pH values predicted to be more than 0.3 units lower over the next 100
years. These lower pH values have the potential to reduce the absorption
component of transmission loss associated with dissolved boron. Transmission loss effects have been well studied for deep water where pH is relatively stable over time-scales of many years. However, estuarine and coastal
pH can vary daily or seasonally by about 1 pH unit and cause fluctuations in
one-way acoustic transmission loss of 2 dB over a range of 10 km at frequencies of 1 kHz or higher. These absorption changes can affect the sound
pressure levels received by animals due to identifiable sources such as
impact pile driving. In addition, passive and active sonar performance in
these estuarine and coastal waters can be affected by these pH fluctuations.
Absorption changes in these shallow water environments offer a potential
laboratory to study their effect on ambient noise due to distributed sources
such as shipping and wind. We introduce an inversion technique based on
perturbation methods to estimate the depth-dependent pH profile from measurements of normal mode attenuation. [Miller and Potty supported by ONR
322OA.]
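The pH dependence of low-frequency absorption discussed in 2pAO9 enters through the boric-acid relaxation. A sketch using the boric-acid term of the Francois–Garrison (1982) formulation (coefficients quoted from that model and possibly imprecise; temperature, salinity, and depth values are assumptions, so treat the numbers as illustrative):

```python
import math

def boric_absorption_db_per_km(f_khz, T=15.0, S=35.0, pH=8.0, depth_m=50.0):
    """Boric-acid component of seawater absorption (dB/km), after the
    Francois-Garrison (1982) formulation. All inputs are assumed values."""
    theta = 273.0 + T
    c = 1412.0 + 3.21 * T + 1.19 * S + 0.0167 * depth_m  # sound speed, m/s
    f1 = 2.8 * math.sqrt(S / 35.0) * 10 ** (4.0 - 1245.0 / theta)  # kHz
    A1 = (8.86 / c) * 10 ** (0.78 * pH - 5.0)  # dB/km/kHz, pH-dependent
    return A1 * f1 * f_khz ** 2 / (f1 ** 2 + f_khz ** 2)

# One-way absorption difference over 10 km at 1 kHz for a 1-unit pH swing:
d_db = 10.0 * (boric_absorption_db_per_km(1.0, pH=8.1)
               - boric_absorption_db_per_km(1.0, pH=7.1))
print(round(d_db, 2))  # fraction-of-a-dB scale at 1 kHz; larger at higher f
```

Because the pH sensitivity scales as 10^(0.78 pH), a 1-unit estuarine pH swing changes the boric-acid absorption by roughly a factor of six, consistent with the dB-scale transmission-loss fluctuations the abstract describes at kilohertz frequencies.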
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA A/B, 1:30 P.M. TO 5:45 P.M.
Session 2pBA
Biomedical Acoustics: Quantitative Ultrasound II
Michael Oelze, Chair
UIUC, 405 N Mathews, Urbana, IL 61801
Contributed Papers
2pBA1. Receiver operating characteristic analysis for the detectability
of malignant breast lesions in acousto-optic transmission ultrasound
breast imaging. Jonathan R. Rosenfield (Dept. of Radiology, The Univ. of
Chicago, 5316 South Dorchester Ave., Apt. 423, Chicago, IL 60615, jrosenfield@uchicago.edu), Jaswinder S. Sandhu (Santec Systems Inc., Arlington
Heights, IL), and Patrick J. La Rivière (Dept. of Radiology, The Univ. of
Chicago, Chicago, IL)
Conventional B-mode ultrasound imaging has proven to be a valuable
supplement to x-ray mammography for the detection of malignant breast
lesions in premenopausal women with high breast density. We have developed a high-resolution transmission ultrasound breast imaging system
employing a novel acousto-optic (AO) liquid crystal detector to enable rapid
acquisition of full-field breast ultrasound images during routine cancer
screening. In this study, a receiver operating characteristic (ROC) analysis
was performed to assess the diagnostic utility of our prototype system.
Using a comprehensive system model, we simulated the AO transmission
ultrasound images expected for a 1-mm malignant lesion contained within a
dense breast consisting of 75% normal breast parenchyma and 25% fat tissue. A Gaussian noise model was assumed with SNRs ranging from 0 to 30.
For each unique SNR, an ROC curve was constructed and the area under the
curve (AUC) was computed to assess the lesion detectability of our system.
For SNRs in excess of 10, the analysis revealed AUCs greater than 0.8983,
thus demonstrating strong detectability. Our results indicate the potential for
using an imaging system of this kind to improve breast cancer screening
efforts by reducing the high false negative rate of mammography in premenopausal women.
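The ROC analysis in 2pBA1 can be sketched with an equal-variance Gaussian observer: simulate detector scores with and without a lesion, then estimate the AUC via the rank (Mann–Whitney) identity. The score model here is our idealization, not the study's comprehensive system model:

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(positive score > negative score), ties counted half."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

rng = np.random.default_rng(0)
aucs = []
for snr in [0.0, 1.0, 2.0]:
    # Unit-variance Gaussian scores whose means differ by `snr`.
    neg = rng.standard_normal(5000)           # lesion absent
    pos = rng.standard_normal(5000) + snr     # lesion present
    aucs.append(empirical_auc(pos, neg))
print([round(a, 3) for a in aucs])
```

For this observer the AUC rises from 0.5 (chance) toward 1 as SNR grows, mirroring the abstract's finding of AUC > 0.8983 once SNR exceeds 10 under its own model.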
1:45
2pBA2. 250-MHz quantitative acoustic microscopy for assessing human
lymph-node microstructure. Daniel Rohrbach (Lizzi Ctr. for Biomedical
Eng., Riverside Res., 156 William St., 9th Fl., New York City, NY 10038,
drohrbach@RiversideResearch.org), Emi Saegusa-Beecroft (Dept. of General Surgery, Univ. of Hawaii and Kuakini Medical Ctr., Honolulu, HI),
Eugene Yanagihara (Kuakini Medical Ctr., Dept. of Pathol., Honolulu, HI),
Junji Machi (Dept. of General Surgery, Univ. of Hawaii and Kuakini Medical Ctr., Honolulu, HI), Ernest J. Feleppa, and Jonathan Mamou (Lizzi Ctr.
for Biomedical Eng., Riverside Res., New York, NY)
We employed quantitative acoustic microscopy (QAM) to measure
acoustic properties of tissue microstructure. 32 QAM datasets were acquired
from 2 fresh and 11 deparaffinized, 12-µm-thick lymph-node samples
obtained from cancer patients. Our custom-built acoustic microscope was
equipped with an F-1.16, 250-MHz transducer having a 160-MHz bandwidth to acquire reflected signals from the tissue and a substrate that intimately contacted the tissue. QAM images with a spatial resolution of 7 µm
were generated of attenuation (A), speed of sound (SOS), and acoustic impedance (Z). Samples then were stained using hematoxylin and eosin,
imaged by light microscopy, and co-registered to QAM images. The spatial
resolution and contrast of QAM images were sufficient to distinguish tissue
regions consisting of lymphocytes, fat cells and fibrous tissue. Average
properties for lymphocyte-dominated tissue were 1552.6 ± 30 m/s for SOS,
9.53 ± 3.6 dB/MHz/cm for A, and 1.58 ± 0.08 Mrayl for Z. Values for Z
obtained from fresh samples agreed well with those obtained from 12-µm
sections from the same node. Such 2D images provide a basis for developing improved ultrasound-scattering models underlying quantitative ultrasound methods currently used to detect cancerous regions within lymph nodes. [NIH Grant R21EB016117.]
2157
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
2:00
2pBA3. Detection of sub-micron lipid droplets using transmission-mode
attenuation measurements in emulsion phantoms and liver. Wayne
Kreider, Ameen Tabatabai (CIMU, Appl. Phys. Lab., Univ. of Washington,
1013 NE 40th St., Seattle, WA 98105, wkreider@uw.edu), Adam D. Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA), and Yak-Nam
Wang (CIMU, Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
In liver transplantation, donor liver viability is assessed by both the
amount and type of fat present in the organ. General guidelines dictate that
livers with more than 30% fat should not be transplanted; however, a lack of
available donor organs has led to the consideration of livers with more fat.
As a part of this process, it is desirable to distinguish micro-vesicular fat
(<1 µm droplets) from macro-vesicular fat (~10 µm droplets). A method of
evaluating the relative amounts of micro- and macro-fat is proposed based
on transmission-mode ultrasound attenuation measurements. For an emulsion of one liquid in another, attenuation comprises both intrinsic losses in
each medium and excess attenuation associated with interactions between
media. Using an established coupled-phase model, the excess attenuation
associated with a monodisperse population of lipid droplets was calculated
with physical properties representative of both liver tissue and dairy products. Calculations predict that excess attenuation can exceed intrinsic attenuation and that a well-defined peak in excess attenuation at 1 MHz should
occur for droplets around 0.8 µm in diameter. Such predictions are consistent with preliminary transmission-mode measurements in dairy products.
[Work supported by NIH grants EB017857, EB007643, EB016118, and T32
DK007779.]
2:15
2pBA4. Using speckle statistics to improve attenuation estimates for
cervical assessment. Viksit Kumar and Timothy Bigelow (Mech. Eng.,
Iowa State Univ., 4112 Lincoln Swing St., Unit 113, Ames, IA 50014, vkumar@iastate.edu)
Quantitative ultrasound parameters like attenuation can be used to
observe microchanges in the cervix. To give a better estimate of attenuation
we can use speckle properties to classify which attenuation estimates are
valid and conform to theory. For fully developed speckle from a single scatterer type, the Rayleigh distribution models the signal envelope. In tissue, however, as the number of scatterer types increases and the speckle becomes unresolved, the Rayleigh model fails. The gamma distribution has been shown empirically to provide the best fit among candidate distributions. Because more than one scatterer type is present in our application, we used a mixture of gamma distributions. An expectation-maximization (EM) algorithm was used to estimate the mixture parameters, and on that basis tissue regions with different scattering properties were segmented from one another. Attenuation estimates were then calculated only for tissues of the same scattering type. Sixty-seven women underwent transvaginal scans, and attenuation estimates were calculated for them after segregating tissues on a scattering basis. Attenuation was observed to decrease as the time of delivery approached.
168th Meeting: Acoustical Society of America
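The segmentation step described in this abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes a two-component gamma mixture, uses a moment-matching M-step in place of full maximum likelihood, and all parameter values are invented for the demonstration.

```python
# Hypothetical sketch: fit a two-component gamma mixture to echo-envelope
# samples with EM, then label each sample by posterior responsibility.
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(x, a, s):
    """Gamma density with shape a and scale s (valid for x > 0)."""
    return x ** (a - 1.0) * np.exp(-x / s) / (gamma_fn(a) * s ** a)

def em_gamma_mixture(x, n_iter=100):
    """EM for a 2-component gamma mixture; returns weights, shapes, scales."""
    parts = [x[x <= np.median(x)], x[x > np.median(x)]]
    w = np.array([0.5, 0.5])
    shape, scale = np.empty(2), np.empty(2)
    for k, p in enumerate(parts):               # moment-based initialization
        m, v = p.mean(), p.var()
        shape[k], scale[k] = m * m / v, v / m
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        pdf = np.stack([w[k] * gamma_pdf(x, shape[k], scale[k])
                        for k in range(2)])
        resp = pdf / pdf.sum(axis=0, keepdims=True)
        # M-step: weighted moment matching (approximate, not exact ML)
        w = resp.mean(axis=1)
        for k in range(2):
            m = np.average(x, weights=resp[k])
            v = np.average((x - m) ** 2, weights=resp[k])
            shape[k], scale[k] = m * m / v, v / m
    return w, shape, scale

rng = np.random.default_rng(1)
x = np.concatenate([rng.gamma(2.0, 1.0, 4000),   # scatterer type A envelope
                    rng.gamma(9.0, 1.5, 4000)])  # scatterer type B envelope
w, shape, scale = em_gamma_mixture(x)
means = np.sort(shape * scale)                   # per-component mean envelope
```

Each sample is then assigned to the component with the larger responsibility, and attenuation is estimated within each class separately, mirroring the segregation step the abstract describes.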
2:30
2pBA5. Using two-dimensional impedance maps to study weak scattering in isotropic random media. Adam Luchies and Michael Oelze (Elec.
and Comput. Eng., Univ. of Illinois at Urbana-Champaign, 405 N Matthews
Ave, Urbana, IL 61801, luchies1@illinois.edu)
An impedance map (ZM) is a computational tool for studying weak scattering in soft tissues. Currently, three-dimensional (3D) ZMs are created
from a series of adjacent histological slides that have been stained to emphasize acoustic scattering structures. The 3D power spectrum of the 3DZM
may be related to quantitative ultrasound parameters such as the backscatter
coefficient. However, constructing 3DZMs is expensive, both in terms of
computational time and financial cost. Therefore, the objective of this study
was to investigate using two-dimensional (2D) ZMs to estimate 3D power
spectra. To estimate the 3D power spectrum using 2DZMs, the autocorrelations of 2DZMs extracted from a volume were estimated and averaged. This
autocorrelation estimate was substituted into the 3D Fourier transform that
assumes radial symmetry to estimate the 3D power spectrum. Simulations
were conducted on sparse collections of spheres and ellipsoids to validate
the proposed method. Using a single slice that intersected approximately 75 particles, a mean absolute error of 1.1 dB and 1.5 dB was achieved for spherical and ellipsoidal particles, respectively. The results from the simulations suggest that 2DZMs can provide accurate estimates of the power spectrum and are a feasible alternative to the 3DZM approach.
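The key step above, turning an averaged radial autocorrelation into a 3D power spectrum through a radially symmetric Fourier transform, can be sketched as below. The Gaussian correlation function and the trapezoid quadrature are illustrative assumptions, not the study's actual choices; the sketch is checked against the known closed-form transform.

```python
# Illustrative sketch: convert a radial autocorrelation estimate rho(r) into
# a 3D power spectrum via the radially symmetric Fourier transform
#   S(k) = 4*pi * Integral_0^inf r^2 * rho(r) * sin(k r)/(k r) dr.
import numpy as np

def spherical_spectrum(rho, dr, k):
    """Trapezoid-rule evaluation of the spherical FT of a radial ACF."""
    r = np.arange(len(rho)) * dr
    kr = np.outer(k, r)
    # sin(kr)/(kr) with the k*r = 0 limit handled explicitly
    kernel = np.where(kr == 0.0, 1.0,
                      np.sin(kr) / np.where(kr == 0.0, 1.0, kr))
    integrand = rho * r ** 2 * kernel
    return 4.0 * np.pi * np.sum(
        (integrand[:, :-1] + integrand[:, 1:]) / 2.0, axis=1) * dr

# Validate against a Gaussian correlation of length a, whose exact
# transform is S(k) = (2*pi)**1.5 * a**3 * exp(-(a*k)**2 / 2).
a, dr = 1.0, 0.02
r = np.arange(0, 1000) * dr             # r out to 20 correlation lengths
rho = np.exp(-r ** 2 / (2 * a ** 2))
k = np.array([0.0, 0.5, 1.0, 2.0])
S_num = spherical_spectrum(rho, dr, k)
S_exact = (2 * np.pi) ** 1.5 * a ** 3 * np.exp(-(a * k) ** 2 / 2)
```

In the 2DZM workflow, rho would instead come from averaging autocorrelations of many 2D impedance-map slices, under the same isotropy assumption that justifies the radially symmetric transform.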
2:45
2pBA6. Backscatter coefficient estimation using tapers with gaps. Adam
Luchies and Michael Oelze (Elec. and Comput. Eng., Univ. of Illinois at
Urbana-Champaign, 405 N Matthews Ave., Urbana, IL 61801, luchies1@
illinois.edu)
When using the backscatter coefficient (BSC) to estimate quantitative
ultrasound (QUS) parameters such as the effective scatterer diameter (ESD)
and the effective acoustic concentration (EAC), it is necessary to assume
that the interrogated medium contains diffuse scatterers. Structures that invalidate this assumption can significantly affect the estimated BSC parameters in terms of increased bias and variance and decrease performance when
classifying disease. In this work, a method was developed to mitigate the
effects of non-diffuse echoes, while preserving as much signal as possible
for obtaining diffuse scatterer property estimates. Specially designed tapers
with gaps that avoid signal truncation were utilized for this purpose. Experiments from physical phantoms were used to evaluate the effectiveness of
the proposed BSC estimation methods. The mean squared error (MSE) between measured and theoretical BSCs had an average value of approximately 1.0 and 0.2 when using a Hanning taper and a PR taper, respectively,
with six gaps. The BSC error due to amplitude bias was smallest for PR
tapers with time-bandwidth product Nx = 1. The BSC error due to shape
bias was smallest for PR tapers with Nx = 4. These results suggest using
different taper types for estimating ESD versus EAC.
3:00
2pBA7. Application of the polydisperse structure function to the characterization of solid tumors in mice. Aiguo Han and William D. O’Brien
(Elec. and Comput. Eng., Univ. of Illinois at Urbana-Champaign, 405 N.
Mathews Ave., Urbana, IL 61801, han51@uiuc.edu)
A polydisperse structure function model has been developed for modeling ultrasonic scattering from dense scattering media. The polydisperse
structure function is incorporated into a fluid-filled sphere scattering model to
model the backscattering coefficient (BSC) of solid tumors in mice. Two
types of tumors were studied: a mouse sarcoma (Engelbreth-Holm-Swarm [EHS]) and a mouse carcinoma (4T1). The two kinds of tumors had significantly different microstructures. The carcinoma had a uniform distribution of carcinoma cells. The sarcoma had cells arranged in groups usually containing fewer than 20 cells, causing an increased scatterer size and
size distribution. Excised tumors (13 EHS samples and 15 4T1 samples)
were scanned using single-element transducers covering the frequency
range 11–105 MHz. The BSC was estimated using a planar reference technique. The model was fit to the experimental BSC using a least-squares fit.
The mean scatterer radius and the Schulz width factor (which characterizes
the width of the scatterer size distribution) were estimated. The results
showed significantly higher scatterer size estimates and wider scatterer size
distribution estimates for EHS than for 4T1, consistent with the observed
difference in microstructure of the two types of tumors. [Work supported by
NIH CA111289.]
3:15–3:30 Break
3:30
2pBA8. Experimental comparison of methods for measuring backscatter coefficient using single element transducers. Timothy Stiles and
Andrew Selep (Phys., Monmouth College, 700 E Broadway Ave., Monmouth, IL 61462, tstiles@monmouthcollege.edu)
The backscatter coefficient (BSC) has promise as a diagnostic aid. However, measurements of the BSC of soft-tissue-mimicking materials have proven difficult; results on the same samples from various laboratories differ by up to two orders of magnitude. This study compares methods of data
analysis using data acquired from the same samples using single element
transducers, with a frequency range of 1 to 20 MHz and pressure focusing
gains between 5 and 60. The samples consist of various concentrations of
milk in agar with scattering from glass microspheres. Each method utilizes a
reference spectrum from a planar reflector but differs in the diffraction and
attenuation correction algorithms. Results from four methods of diffraction
correction and three methods of attenuation correction are compared to each
other and to theoretical predictions. Diffraction correction varies from no
correction to numerical integration of the beam throughout the data acquisition region. Attenuation correction varies from limited correction for the
attenuation up to the start of the echo acquisition window, to correcting for
attenuation within a numerical integration of the beam profile. Results indicate that the best agreement with theory is achieved by the methods that utilize numerical integration of the beam profile.
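The planar-reflector normalization and the two extremes of attenuation correction discussed above can be sketched schematically. The functional forms below are generic textbook choices (a point correction at the start of the gated window versus an average over the window), not the specific algorithms compared in this talk, and all parameter values are illustrative.

```python
# Schematic of planar-reference BSC normalization with two simple
# attenuation-correction choices (illustrative; constant factors omitted).
import numpy as np

def bsc_ratio(S_sample, S_ref, refl_coeff=0.5):
    """Sample power spectrum over planar-reflector power spectrum,
    scaled by the reflector's pressure reflection coefficient."""
    return refl_coeff ** 2 * np.abs(S_sample) ** 2 / np.abs(S_ref) ** 2

def atten_corr_point(f_mhz, alpha, z0_cm):
    """Two-way compensation for loss up to the start of the echo window;
    alpha in Np/(cm MHz), attenuation assumed linear with frequency."""
    return np.exp(4.0 * alpha * f_mhz * z0_cm)

def atten_corr_window(f_mhz, alpha, z0_cm, L_cm):
    """Compensation averaged over a gated window of length L_cm (f > 0)."""
    mu = 4.0 * alpha * f_mhz                 # two-way loss rate [Np/cm]
    return np.exp(mu * z0_cm) * mu * L_cm / (1.0 - np.exp(-mu * L_cm))

f = np.linspace(1.0, 20.0, 39)               # analysis band [MHz]
corr_point = atten_corr_point(f, alpha=0.05, z0_cm=2.0)
corr_window = atten_corr_window(f, alpha=0.05, z0_cm=2.0, L_cm=1.0)
```

The window-averaged form always compensates more than the point form (their ratio is mu*L/(1 - exp(-mu*L)) > 1), one concrete way the choice of correction algorithm shifts BSC estimates between laboratories.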
3:45
2pBA9. Numerical simulations of ultrasound-pulmonary capillary
interaction. Brandon Patterson (Mech. Eng., Univ. of Michigan, 626 Spring
St., Apt. #1, Ann Arbor, MI 48103-3200, awesome@umich.edu), Douglas
L. Miller (Radiology, Univ. of Michigan, Ann Arbor, MI), David R. Dowling, and Eric Johnsen (Mech. Eng., Univ. of Michigan, Ann Arbor, MI)
Although lung hemorrhage (LH) remains the only bioeffect of non-contrast diagnostic ultrasound (DUS) proven to occur in mammals, a fundamental understanding of DUS-induced LH remains lacking. We hypothesize that the fragile capillary beds near the lung's surface may rupture as a result
of ultrasound-induced strains and viscous stresses. We perform simulations
of DUS waves propagating in tissue (modeled as water) and impinging on a
planar lung surface (modeled as air) with hemispherical divots representing
individual capillaries (modeled as water). Experimental ultrasound pulse
waveforms of frequencies 1.5–7.5 MHz are used for the simulation. A high-order accurate discontinuity-capturing scheme solves the two-dimensional,
compressible Navier-Stokes equations to obtain velocities, pressures,
stresses, strains, and displacements in the entire domain. The mechanics of
the capillaries are studied for a range of US frequencies and amplitudes.
Preliminary results indicate a strong dependence of the total strain on the
capillary size relative to the wavelength.
4:00
2pBA10. Acoustic radiation force due to nonaxisymmetric sound beams
incident on spherical viscoelastic scatterers in tissue. Benjamin C. Treweek, Yurii A. Ilinskii, Evgenia A. Zabolotskaya, and Mark F. Hamilton
(Appl. Res. Labs., Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX
78758, btreweek@utexas.edu)
The theory for acoustic radiation force on a viscoelastic sphere of arbitrary size in tissue was extended recently to account for nonaxisymmetric
incident fields [Ilinskii et al., POMA 19, 045004 (2013)]. A spherical harmonic expansion was used to describe the incident field. This work was specialized at the spring 2014 ASA meeting to focused axisymmetric sound
beams with various focal spot sizes and a scatterer located at the focus. The emphasis of the present contribution is nonaxisymmetric fields, either through moving the scatterer off the axis of an axisymmetric beam or through explicitly defining a nonaxisymmetric beam. This is accomplished via angular spectrum decomposition of the incident field, spherical wave expansions of the resulting plane waves about the center of the scatterer, Wigner D-matrix transformations to express these spherical waves in a coordinate system with the polar axis aligned with the desired radiation force component, and finally integration over solid angle to obtain spherical wave amplitudes as required in the theory. Various scatterer sizes and positions relative to the focus are considered, and the effects of changing properties of both the scatterer and the surrounding tissue are examined. [Work supported by the ARL:UT McKinney Fellowship in Acoustics.]
4:15
2pBA11. Convergence of Green's function-based shear wave simulations in models of elastic and viscoelastic soft tissue. Yiqun Yang (Dept. of Elec. and Comput. Eng., Michigan State Univ., East Lansing, MI, yiqunyang.nju@gmail.com), Matthew Urban (Dept. of Physiol. and Biomedical Eng., Mayo Clinic College of Medicine, Rochester, MN), and Robert McGough (Dept. of Elec. and Comput. Eng., Michigan State Univ., East Lansing, MI)
Green's functions effectively simulate shear waves produced by an applied acoustic radiation force in elastic and viscoelastic soft tissue. In an effort to determine the optimal parameters for these simulations, the convergence of Green's function-based calculations is evaluated for realistic spatial distributions of the initial radiation force "push." The input to these calculations is generated by FOCUS, the "Fast Object-oriented C++ Ultrasound Simulator," which computes the approximate intensity fields generated by a Philips L7-4 ultrasound transducer array for both focused and unfocused beams. The radiation force in the simulation model, which is proportional to the simulated intensity, is applied for 200 µs, and the resulting displacements are calculated with the Green's function model. Simulation results indicate that, for elastic media, convergence is achieved when the intensity field is sampled at roughly one-tenth of the wavelength of the compressional component that delivers the radiation force "push." Aliasing and oscillation artifacts are observed in the model for an elastic medium at lower sampling rates. For viscoelastic media, spatial sampling rates as low as two samples per compressional wavelength are sufficient due to the low-pass filtering effects of the viscoelastic medium. [Supported in part by NIH Grants R01EB012079 and R01DK092255.]
4:30
2pBA12. Quantifying mechanical heterogeneity of breast tumors using quantitative ultrasound elastography. Tengxiao Liu (Dept. of Mech., Aerosp. and Nuclear Eng., Rensselaer Polytechnic Inst., Troy, NY), Olalekan A. Babaniyi (Mech. Eng., Boston Univ., Boston, MA), Timothy J. Hall (Medical Phys., Univ. of Wisconsin, Madison, WI), Paul E. Barbone (Mech. Eng., Boston Univ., 110 Cummington St., Boston, MA 02215, barbone@bu.edu), and Assad A. Oberai (Dept. of Mech., Aerosp. and Nuclear Eng., Rensselaer Polytechnic Inst., Troy, NY)
Heterogeneity is a hallmark of cancer whether one considers the genotype of cancerous cells, the composition of their microenvironment, the distribution of blood and lymphatic microvasculature, or the spatial distribution of the desmoplastic reaction. It is logical to expect that this heterogeneity in the tumor microenvironment will lead to spatial heterogeneity in its mechanical properties. In this study we seek to quantify the mechanical heterogeneity within malignant and benign tumors using ultrasound-based elasticity imaging. By creating in-vivo elastic modulus images for ten human subjects with breast tumors, we show that the Young's modulus distribution in cancerous breast tumors is more heterogeneous when compared with tumors that are not malignant, and that this signature may be used to distinguish malignant breast tumors. Our results complement the view of cancer as a heterogeneous disease by demonstrating that mechanical properties within cancerous tumors are also spatially heterogeneous. [Work supported by NIH, NSF.]
4:45
2pBA13. Convergent field elastography. Michael D. Gray, James S. Martin, and Peter H. Rogers (School of Mech. Eng., Georgia Inst. of Technol., 771 Ferst Dr. NW, Atlanta, GA 30332-0405, michael.gray@me.gatech.edu)
An ultrasound-based system for non-invasive estimation of soft tissue shear modulus will be presented. The system uses a nested pair of transducers to provide force generation and motion measurement capabilities. The outer annular element produces a ring-like ultrasonic pressure field distribution. This in turn produces a ring-like force distribution in soft tissue, whose response is primarily observable as a shear wave field. A second ultrasonic transducer nested inside the annular element monitors the portion of the shear field that converges to the center of the force distribution pattern. Propagation speed is estimated from shear displacement phase changes resulting from dilation of the forcing radius. Forcing beams are modulated in order to establish shear speed frequency dependence. Prototype system data will be presented for depths of 10–14 cm in a tissue phantom, using drive parameters within diagnostic ultrasound safety limits. [Work supported by ONR and the Neely Chair in Mechanical Engineering, Georgia Institute of Technology.]
5:00
2pBA14. Differentiation of benign and malignant breast lesions using Comb-Push Ultrasound Shear Elastography. Max Denis, Mohammad Mehrmohammadi (Physiol. and Biomedical Eng., Mayo Clinic, 200 First St. SW, Rochester, MN 55905, denis.max@mayo.edu), Duane Meixner, Robert Fazzio (Radiology-Diagnostic, Mayo Clinic, Rochester, MN), Shigao Chen, Mostafa Fatemi (Physiol. and Biomedical Eng., Mayo Clinic, Rochester, MN), and Azra Alizad (Physiol. and Biomedical Eng., Mayo Clinic, Rochester, MN)
In this work, the results from our Comb-Push Ultrasound Shear Elastography (CUSE) assessment of suspicious breast lesions are presented. The elasticity values of the breast lesions are correlated with histopathological findings to evaluate their diagnostic value in differentiating between malignant and benign breast lesions. A total of 44 patients diagnosed with suspicious breast lesions were evaluated using CUSE prior to biopsy. All patient study procedures were conducted according to the protocol approved by the Mayo Clinic Institutional Review Board (IRB). Our cohort consisted of 27 malignant and 17 benign breast lesions. The results indicate an increase in shear wave velocity in both benign and malignant lesions compared to normal breast tissue. Furthermore, the Young's modulus is significantly higher in malignant lesions. An optimal cut-off value of 80 kPa for the Young's modulus was obtained from the receiver operating characteristic (ROC) curve. This is concordant with the published cut-off values of elasticity for suspicious breast lesions. [This work is supported in part by grants 3R01CA148994-04S1 and 5R01CA148994-04 from NIH.]
5:15
2pBA15. Comparison between diffuse infrared and acoustic transmission over the human skull. Qi Wang, Namratha Reganti, Yutoku Yoshioka, Mark Howell, and Gregory T. Clement (BME, LRI, Cleveland Clinic, 9500 Euclid Ave., Cleveland, OH 44195, qiqiwang83@gmail.com)
Skull-induced distortion and attenuation present a challenge to both transcranial imaging and therapy. Whereas therapeutic procedures have been successful in offsetting aberration using prior CTs, this approach is impractical for imaging. In an effort to provide a simplified means for aberration correction, we have been investigating the use of diffuse infrared light as an indicator of acoustic properties. Infrared wavelengths were specifically selected for tissue penetration; however, this preliminary study was performed through bone alone via a transmission mode to facilitate comparison with acoustic measurements. The inner surface of a half human skull, cut along the sagittal midline, was illuminated using an infrared heat lamp, and images of the outer surface were acquired with an IR-sensitive camera. A range of source angles was acquired and averaged to eliminate source bias. Acoustic measurements were likewise obtained over the surface with a source (1 MHz, 12.7-mm diam) oriented parallel to the skull surface and a hydrophone receiver (1-mm PVDF). Preliminary results reveal a positive correlation between sound speed and optical intensity, whereas poor correlation is observed between acoustic amplitude and optical intensity. [Work funded under NIH R01EB014296.]
5:30
2pBA16. A computerized tomography system for transcranial ultrasound imaging. Sai Chun Tang (Dept. of Radiology, Harvard Med. School,
221 Longwood Ave., Rm. 521, Boston, MA 02115, sct@bwh.harvard.edu)
and Gregory T. Clement (Dept. of Biomedical Eng., Cleveland Clinic,
Cleveland, OH)
Hardware for tomographic imaging presents both challenge and opportunity for simplification when compared with traditional pulse-echo imaging
systems. Specifically, point diffraction tomography does not require simultaneous powering of elements, in theory allowing just a single transmit
channel and a single receive channel to be coupled with a switching or multiplexing network. In our ongoing work on transcranial imaging, we have
developed a 512-channel system designed to transmit and/or receive a high
voltage signal from/to arbitrary elements of an imaging array. The overall
design follows a hierarchy of modules including a software interface, microcontroller, pulse generator, pulse amplifier, high-voltage power converter,
switching mother board, switching daughter board, receiver amplifier, analog-to-digital converter, peak detector, memory, and USB communication.
Two pulse amplifiers are included, each capable of producing up to 400 Vpp
via power MOSFETS. Switching is based around mechanical relays that
allow passage of 200 V, while still achieving switching times of under 2 ms,
with an operating frequency ranging from below 100 kHz to 10 MHz. The
system is demonstrated through ex vivo human skulls using 1 MHz transducers. The overall system design is applicable to planned human studies in
transcranial image acquisition, and may have additional tomographic applications for other materials necessitating a high signal output. [Work was
supported by NIH R01 EB014296.]
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA C/D, 2:45 P.M. TO 3:30 P.M.
Session 2pEDa
Education in Acoustics: General Topics in Education in Acoustics
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
Contributed Papers
2:45
2pEDa1. @acousticsorg: The launch of the Acoustics Today twitter feed.
Laura N. Kloepper (Dept. of Neurosci., Brown Univ., 185 Meeting St. Box
GL-N, Providence, RI 02912, laura_kloepper@brown.edu) and Daniel Farrell
(Web Development office, Acoust. Society of America, Melville, NY)
Acoustics Today has recently launched our twitter feed, @acousticsorg.
Come learn how we plan to spread the mission of Acoustics Today, promote
the science of acoustics, and connect with acousticians worldwide! We will
also discuss proposed upcoming social media initiatives and how you, an
ASA member, can help contribute. This presentation will include an
extended question period in order to gather feedback on how Acoustics
Today can become more involved with social media.
3:00
2pEDa2. Using Twitter for teaching. William Slaton (Phys. & Astronomy,
The Univ. of Central Arkansas, 201 Donaghey Ave., Conway, AR 72034,
wvslaton@uca.edu)
The social media microblogging platform, Twitter, is an ideal avenue to learn about new science in the field of acoustics as well as to share that new-found information with students. As a user discovers a network of science bloggers and journalists to follow, the amount of science uncovered grows. Conversations between science writers and the scientists themselves enhance this learning opportunity. Several examples of using Twitter for teaching will be presented.
3:15
2pEDa3. Unconventional opportunities to recruit future science, technology, engineering, and math scholars. Roger M. Logan (Teledyne, 12338 Westella, Houston, TX 77077, rogermlogan@sbcglobal.net)
Pop culture conventions provide interesting and unique opportunities to inspire the next generation of STEM contributors. Literary, comic, and anime conventions are a few examples of this type of event. This presentation will provide insights into these venues as well as how to get involved and help communicate that careers in STEM can be fun and rewarding.
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA C/D, 3:30 P.M. TO 4:00 P.M.
Session 2pEDb
Education in Acoustics: Take 5’s
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
For a Take-Five session no abstract is required. We invite you to bring your favorite acoustics teaching ideas. Choose from the following: short demonstrations, teaching devices, or videos. The intent is to share teaching ideas with your colleagues. If possible, bring a brief, descriptive handout with enough copies for distribution. Spontaneous inspirations are also welcome. Sign up at the door for a five-minute slot before the session starts. If you have more than one demo, sign up for two consecutive slots.
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA E, 1:55 P.M. TO 5:00 P.M.
Session 2pID
Interdisciplinary: Centennial Tribute to Leo Beranek's Contributions in Acoustics
William J. Cavanaugh, Cochair
Cavanaugh Tocci Assoc. Inc., 3 Merifield Ln., Natick, MA 01760-5520
Carl Rosenberg, Cochair
Acentech Incorporated, 33 Moulton Street, Cambridge, MA 02138
Chair's Introduction—1:55
Invited Papers
2:00
2pID1. Leo Beranek's role in the Acoustical Society of America. Charles E. Schmid (10677 Manitou Pk. Blvd., Bainbridge Island, WA 98110, cechmid@att.net)
Leo Beranek received the first 75th anniversary certificate issued by the Acoustical Society of America, commemorating his longtime association with the Society, at the joint ASA/ICA meeting in Montreal in 2013. Both the Society and Leo have derived mutual benefits from this long and fruitful association. Leo has held many important roles as a leader in the ASA. He served as vice president (1949–1950), president (1954–1955), Chair of the Z24 Standards Committee (1950–1953), and meeting organizer (he was an integral part of the Society's 25th, 50th, and 75th Anniversary meetings); he was an associate editor (1950–1959), authored three books sold via the ASA, published 75 peer-reviewed JASA papers, and presented countless papers at ASA meetings. Much of his work has been recognized by the Society, which presented him with the R. Bruce Lindsay Award (1944), the Wallace Clement Sabine Award (1961), the Gold Medal (1975), and an Honorary Fellowship (1994). He has participated in the Acoustical Society Foundation and donated generously to it. He has been an inspiration for younger Society members (which include all of us on this occasion celebrating his 100th birthday).
2:15
2pID2. Leo Beranek's contributions to noise control. George C. Maling (INCE FOUNDATION, 60 High Head Rd., Harpswell, ME 04079, INCEUSA@aol.com) and William W. Lang (INCE FOUNDATION, Poughkeepsie, New York)
Leo Beranek has made contributions to noise control for many years, beginning with projects during World War II when he was at Harvard University. Later, at MIT, he taught a course (6.35) which included noise control, and ran MIT summer courses on the subject. His book, Noise Reduction, was published during that time. Additional books followed. Noise control became an important part of the consulting work at Bolt Beranek and Newman. Two projects are of particular interest: the efforts to silence a wind tunnel in Cleveland, Ohio, and the differences in noise emissions and perception as the country entered the jet age. Leo was one of the founders of the Institute of Noise Control Engineering, and served as its charter president. Much of the success of the Institute is due to his early leadership.
He has also played an important role in noise policy, beginning in the late 1960s and, in particular, with the passage of the Noise Control
Act of 1972. This work continued into the 1990s with the formation of the “Peabody Group,” and cooperation with the National Academy of Engineering in the formation of noise policy.
2:30
2pID3. Beranek’s porous material model: Inspiration for advanced material analysis and design. Cameron J. Fackler and Ning
Xiang (Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., 110 8th St., Greene Bldg., Troy, NY 12180, facklc@
rpi.edu)
In 1942, Leo Beranek presented a model for predicting the acoustic properties of porous materials [J. Acoust. Soc. Am. 13, 248
(1942)]. Since then, research into many types of porous materials has grown into a broad field. In addition to Beranek’s model, many
other models for predicting the acoustic properties of porous materials in terms of key physical material parameters have been developed. Following a brief historical review, this work concentrates on studying porous materials and microperforated panels—pioneered
by one of Beranek’s early friends and fellow students, Dah-You Maa. Utilizing equivalent fluid models, porous material and microperforated panel theories have recently been unified. In this work, the Bayesian inference framework is applied to single- and multilayered porous and microperforated materials. Bayesian model selection and parameter estimation are used to guide the analysis and design of
innovative multilayer acoustic absorbers.
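To make the equivalent-fluid idea concrete, the normal-incidence absorption of a rigid-backed porous layer can be sketched with the empirical Delany-Bazley coefficients. This is an illustrative stand-in only (neither Beranek's 1942 model nor the Bayesian machinery described above), and the flow resistivity and thickness are arbitrary example values.

```python
# Equivalent-fluid sketch: normal-incidence absorption of a rigid-backed
# porous layer via the empirical Delany-Bazley model (e^{jwt} convention).
import numpy as np

RHO0, C0 = 1.21, 343.0                     # air density [kg/m^3], speed [m/s]

def delany_bazley_absorption(f, sigma, d):
    """Absorption coefficient of a layer of thickness d [m] with flow
    resistivity sigma [Pa*s/m^2] backed by a rigid wall."""
    X = RHO0 * f / sigma                   # dimensionless frequency
    Zc = RHO0 * C0 * (1 + 0.0571 * X ** -0.754 - 1j * 0.087 * X ** -0.732)
    k = (2 * np.pi * f / C0) * (1 + 0.0978 * X ** -0.700
                                - 1j * 0.189 * X ** -0.595)
    Zs = -1j * Zc / np.tan(k * d)          # surface impedance of backed layer
    R = (Zs - RHO0 * C0) / (Zs + RHO0 * C0)
    return 1.0 - np.abs(R) ** 2

f = np.arange(100.0, 4001.0, 50.0)         # frequency grid [Hz]
alpha = delany_bazley_absorption(f, sigma=10000.0, d=0.05)
```

Layered and microperforated designs stack such equivalent-fluid elements via transfer matrices, which is the forward model over which Bayesian model selection and parameter estimation can then operate.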
2:45
2pID4. Technology, business, and civic visionary. David Walden (retired from BBN, 12 Linden Rd., East Sandwich, MA 02537, dave@
walden-family.com)
In high school and college, Leo Beranek was already developing the traits of an entrepreneur. At Bolt Beranek and Newman he built
a culture of innovation. He and his co-founders also pursued a policy of looking for financial returns, via diversification and exploitation
of intellectual property, beyond their initial acoustics-based professional services business. In particular, in 1956–1957 Leo recruited
J.C.R. Licklider to help BBN move into the domain of computers. In time, information sciences and computing became as significant a
business for BBN as acoustics. While BBN did innovative work in many areas of computing, perhaps the most visible area was with the
technology that became the Internet. In 1969, Leo left day-to-day management of BBN, although he remained associated with the company for many more years. Beyond BBN, Leo worked, often in a leadership role, with a variety of institutions to improve civic life and culture
around Boston.
3:00
2pID5. Leo Beranek and concert hall acoustics. Benjamin Markham (Acentech Inc, 33 Moulton St., Cambridge, MA 02138, bmarkham@acentech.com)
Dr. Leo Beranek’s pioneering concert hall research and project work has left an indelible impression on the study and practice of
concert hall design. Working as both scientist and practitioner simultaneously for most of his 60+ years in the industry, he amassed accomplishments that include dozens of published papers on concert hall acoustics, several seminal books on the subject, and consulting credit for
numerous important performance spaces. This paper will briefly outline a few of his key contributions to the field of concert hall acoustics (including his work regarding audience absorption, the loudness parameter G, the system of concert hall metrics and ratings that he
developed, and other contributions), his project work (including the Tanglewood shed, Philharmonic Hall, Tokyo Opera City concert
hall, and others), and his role as an inspiration for other leaders in the field. His work serves as the basis, the framework, the inspiration,
or the jumping-off point for a great deal of current concert hall research, as evidenced by the extraordinarily high frequency with which
his work is cited; this paper will conclude with some brief remarks on the future of concert hall research that will build on Dr. Beranek’s
extraordinary career.
3:15
2pID6. Concert hall acoustics: Recent research. Leo L. Beranek (Retired, 10 Longwood Dr., Westwood, MA 02090, beranekleo@
ieee.org)
Recent research on concert hall acoustics is reviewed. Discussed are (1) the ten acoustically top-quality halls; (2) listeners' acoustical preferences; (3) how musical dynamics are enhanced by hall shape; (4) the effect of seat upholstering on sound strength and hall dimensions; and (5) recommended minimum and maximum hall dimensions and audience capacities in shoebox, surround, and fan-shaped halls.
3:30
2pID7. Themes of thoughts and thoughtfulness. Carl Rosenberg (Acentech Inc., 33 Moulton St., Cambridge, MA 02138, crosenberg@acentech.com) and William J. Cavanaugh (Cavanaugh/Tocci, Sudbury, MA)
In preparing and compiling the background for the issue of Acoustics Today commemorating Leo Beranek's 100th birthday, the authors found consistent themes in Leo's work and in his contributions to the colleagues and scholars with whom he worked. These themes were particularly evident in the many “side-bars” solicited from over three dozen friends and colleagues. The authors discuss these patterns and share insights on the manner in which Leo was most influential. There will be opportunities for audience participants to share their thoughts and birthday greetings with Leo.
3:45–4:15 Panel Discussion
4:15–5:00 Celebration
2162
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2162
TUESDAY AFTERNOON, 28 OCTOBER 2014
SANTA FE, 1:00 P.M. TO 3:50 P.M.
Session 2pMU
Musical Acoustics: Synchronization Models in Musical Acoustics and Psychology
Rolf Bader, Chair
Institute of Musicology, University of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany
Invited Papers
1:00
2pMU1. Models and findings of synchronization in musical acoustics and music psychology. Rolf Bader (Inst. of Musicology, Univ.
of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany, R_Bader@t-online.de)
Synchronization is a crucial mechanism in musical tone production and perception. With wind instruments, the overtone series of notes synchronize to nearly perfect harmonic relations due to nonlinear effects and turbulence at the driving mechanism, although the overblown pitches of flutes or horns may differ considerably from such a simple harmonic relation. Organ pipes close to each other synchronize in pitch through the interaction of their sound pressures. With violins, the sawtooth motion appears because of a synchronization of the stick/slip interaction with the string length. All these models are complex systems that also show bifurcations in terms of multiphonics, biphonation, or subharmonics. On the perception and music-production side, models of synchronization, such as the free-energy principle (modeling perception by minimizing surprise and adapting to the physical parameters of sound production), neural nets of timbre, tone, or rhythm perception, and synergetic models of rhythm production, are generally much better suited to modeling music perception than simplified linear models.
1:20
2pMU2. One glottal airflow—Two vocal folds. Ingo R. Titze (National Ctr. for Voice and Speech, Univ. of Utah, 156 South Main St.,
Ste. 320, Salt Lake City, UT 84101-3306, ingo.titze@utah.edu) and Ingo R. Titze (Dept. of Commun. Sci. and Disord., Univ. of Iowa,
Iowa City, IA)
Vocalization for speech and singing involves self-sustained oscillation between a stream of air and a pair of vocal folds. Each vocal
fold has its own set of natural frequencies (modes of vibration) governed by the viscoelastic properties of tissue layers and their boundary conditions. Due to asymmetry, the modes of left and right vocal folds are not always synchronized. The common airflow between
them can entrain the modes, but not always in a 1:1 ratio. Examples of bifurcations are given for human and animal vocalization, as well
as from computer simulation. Vocal artists may use desynchronization for special vocal effects. Surgeons who repair vocal folds make
decisions about the probability of regaining synchronization when one vocal fold is injured. Complete desynchronization, allowing only
one vocal fold to oscillate, may be a better strategy in some cases than attempting to achieve symmetry.
1:40
2pMU3. Synchronization of organ pipes—Experimental facts and theory. Markus W. Abel and Jost L. Fischer (Inst. for Phys. and
AstroPhys., Potsdam Univ., Karl/Liebknecht Str. 24-25, Potsdam 14469, Germany, markus.abel@physik.uni-potsdam.de)
Synchronization of musical instruments has attracted attention because of its important implications for sound production in musical instruments and for technological applications. In this contribution, we show new results on the interaction of two coupled organ pipes: we present a new experiment in which the pipes were positioned in a plane with varying distance; we briefly refer to a corresponding description in terms of a reduced model; and we show numerical simulations that are in full agreement with the measurements. Experimentally, the 2D setup allows for the observation of a new phenomenon: a synchronization/desynchronization transition at regular distances between the pipes. The developed model basically consists of a self-sustained oscillator with nonlinear, delayed coupling. The nonlinearity reflects the complicated interaction of emitted acoustic waves with the jet exiting at the organ pipe mouth, and the delay term accounts for the wave propagation. Synchronization is clear evidence of the importance of nonlinearities in music and continues to be a source of astonishing results.
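The reduced model described in this abstract, a self-sustained oscillator with nonlinear, delayed coupling, can be illustrated numerically. The sketch below is not the authors' model: it couples two van der Pol oscillators (a generic self-sustained oscillator) with a delayed linear term, and all parameter values are illustrative. Slightly detuned oscillators pull into frequency lock once the coupling is switched on, the basic signature of organ-pipe synchronization.

```python
import numpy as np

def vdp_pair(k=0.0, delay=0.0, eps=1.0, w1=1.0, w2=1.05,
             dt=1e-3, n=150_000):
    """Two van der Pol oscillators with delayed linear coupling: a toy
    stand-in for the reduced organ-pipe model (illustrative parameters)."""
    d = int(round(delay / dt))            # delay in samples
    x1 = np.zeros(n); x2 = np.zeros(n)
    v1 = v2 = 0.0
    x1[0], x2[0] = 1.0, -0.5              # arbitrary initial conditions
    for i in range(1, n):
        x1d = x1[max(i - 1 - d, 0)]       # delayed state of each partner
        x2d = x2[max(i - 1 - d, 0)]
        a1 = eps*(1 - x1[i-1]**2)*v1 - w1**2*x1[i-1] + k*(x2d - x1[i-1])
        a2 = eps*(1 - x2[i-1]**2)*v2 - w2**2*x2[i-1] + k*(x1d - x2[i-1])
        v1 += a1*dt; v2 += a2*dt          # semi-implicit Euler step
        x1[i] = x1[i-1] + v1*dt
        x2[i] = x2[i-1] + v2*dt
    return x1, x2

def mean_freq(x, dt=1e-3):
    """Mean oscillation frequency from upward zero crossings."""
    z = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    return (len(z) - 1) / ((z[-1] - z[0]) * dt)

h = 75_000                                # discard the transient half
x1, x2 = vdp_pair(k=0.0)
df_free = abs(mean_freq(x1[h:]) - mean_freq(x2[h:]))   # detuned, uncoupled
x1, x2 = vdp_pair(k=0.4, delay=0.1)
df_lock = abs(mean_freq(x1[h:]) - mean_freq(x2[h:]))   # pulled into lock
```

Sweeping the detuning and coupling strength in such a sketch traces out an Arnold tongue: the wedge-shaped region of parameter space in which the residual frequency difference collapses to zero.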
2:00
2pMU4. Nonlinear coupling mechanisms in acoustic oscillator systems which can lead to synchronization. Jost Fischer (Dept. for
Phys. and Astronomy, Univ. of Potsdam, Karl-Liebknecht-Str 24/25, Potsdam, Brandenburg 14476, Germany, jost.fischer@uni-potsdam.de) and Markus Abel (Ambrosys GmbH, Potsdam, Germany)
We present results on the coupling mechanisms in wind-driven, self-sustained acoustic oscillators. Such systems are found in engineering applications, such as gas burners, and, more beautifully, in musical instruments. We find that both the coupling and the oscillators are nonlinear in character, which can lead to synchronization. We demonstrate our ideas using one of the oldest and most complex musical devices: organ pipes. Building on the questions of preceding works, the elements of the sound generation are identified using detailed
experimental and theoretical studies, as well as numerical simulations. From these results, we derive the nonlinear coupling mechanisms of the mutual interaction of organ pipes. This leads to a nonlinear coupled acoustic oscillator model based on aeroacoustic and fluid-dynamical first principles. The model calculations are compared with the experimental results from preceding works. It appears that the sound generation and the coupling mechanisms are properly described by the developed nonlinear coupled model of self-sustained oscillators. In particular, we can explain the unusually nonlinear shape of the Arnold tongues of the coupled two-pipe system. Finally, we show the power of modern CFD simulations with a 2D simulation of two mutually interacting organ pipes, i.e., a numerical solution of the compressible Navier-Stokes equations.
2:20–2:40 Break
2:40
2pMU5. Auditory-inspired pitch extraction using a synchrony capture filterbank. Kumaresan Ramdas, Vijay Kumar Peddinti
(Dept. of Elec., Comput. Eng., Univ. of Rhode Island, Kelley A216 4 East Alumni Ave., Kingston, RI 02881, kumar@ele.uri.edu), and
Peter Cariani (Hearing Res. Ctr. & Dept. of Biomedical Eng., Boston Univ., Boston, MA)
The question of how harmonic sounds in speech and music produce strong, low pitches at their fundamental frequencies (F0s) has been of theoretical and practical interest to scientists and engineers for many decades. Currently, the best auditory models for F0 pitch (e.g., Meddis & Hewitt, 1991) are based on bandpass filtering (cochlear mechanics), half-wave rectification and low-pass filtering (hair cell transduction, synaptic transmission), and channel autocorrelations (all-order interspike interval distributions) aggregated into a summary autocorrelation, followed by an analysis that determines the most prevalent interspike intervals. As a possible alternative to explicit autocorrelation computations, we propose a model that uses an adaptive Synchrony Capture Filterbank (SCFB) in which channels in a filterbank neighborhood are driven exclusively (captured) by the dominant frequency components closest to them. Channel outputs are then adaptively phase-aligned with respect to a common time reference to compute a Summary Phase-Aligned Function (SPAF), aggregated across all channels, from which F0 can then easily be extracted. Possible relations to brain rhythms and phase-locked loops are discussed. [Work supported by AFOSR FA9550-09-1-0119. Invited to the special session on Synchronization in Musical Acoustics and Music Psychology.]
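For contrast, the summary-autocorrelation baseline that the abstract describes can be caricatured in a few lines. This is a single-channel toy on a synthetic harmonic complex, not the SCFB model and not a cochlear filterbank; it only illustrates the rectify-and-autocorrelate idea that the proposed model seeks to replace.

```python
import numpy as np

fs = 16_000
t = np.arange(0, 0.2, 1/fs)
# Harmonic complex with a *missing* fundamental: harmonics 3-8 of 200 Hz.
x = sum(np.sin(2*np.pi*200*h*t) for h in range(3, 9))

# Crude single-channel stand-in for the auditory front end named in the
# abstract: half-wave rectification, then autocorrelation.
r = np.maximum(x, 0.0)
ac = np.correlate(r, r, mode="full")[len(r)-1:]   # lags 0..N-1

# The "most prevalent interspike interval" is the autocorrelation peak,
# searched over a plausible pitch range of 80-400 Hz.
lo, hi = int(fs/400), int(fs/80)
lag = lo + np.argmax(ac[lo:hi])
f0 = fs / lag   # recovers the 200-Hz pitch despite the absent fundamental
```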
3:00
2pMU6. Identification of sound sources in soundscape using acoustic, psychoacoustic, and music parameters. Ming Yang and Jian
Kang (School of Architecture, Univ. of Sheffield, Western Bank, Sheffield S10 2TN, United Kingdom, arp08my@sheffield.ac.uk)
This paper explores the possibility of automatic identification/classification of environmental sounds by analyzing sound with a number of acoustic, psychoacoustic, and music parameters, including loudness, pitch, timbre, and rhythm. Recordings of single sound sources labeled in four categories, i.e., water, wind, birdsongs, and urban sounds (including street music, mechanical sounds, and traffic noise), are automatically identified with machine learning and mathematical models, including artificial neural networks and discriminant functions, based on the results of the psychoacoustic/music measures. The identification accuracies are above 90% for all four categories. Moreover, based on these techniques, identification of construction noise sources against general urban background noise is explored, using the large construction project of the London Bridge Station redevelopment as a case study site.
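As a schematic of the classification stage, the sketch below runs a nearest-centroid discriminant, a simplified stand-in for the discriminant functions and neural networks named in the abstract, on synthetic four-class feature vectors. The feature dimensions and all data are invented for illustration; no real psychoacoustic measures are used.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for psychoacoustic feature vectors (e.g., loudness,
# pitch strength, sharpness, rhythmic regularity) for four source classes.
centers = np.array([[0., 0, 0, 0], [3, 0, 0, 0], [0, 3, 0, 0], [0, 0, 3, 0]])
X = np.vstack([c + 0.5*rng.standard_normal((50, 4)) for c in centers])
y = np.repeat(np.arange(4), 50)

# Minimal discriminant: assign each test vector to the nearest class
# centroid in feature space (even rows train, odd rows test).
train = np.arange(200) % 2 == 0
cents = np.array([X[train & (y == k)].mean(axis=0) for k in range(4)])
d = ((X[~train, None, :] - cents[None, :, :])**2).sum(axis=2)
pred = d.argmin(axis=1)
acc = (pred == y[~train]).mean()   # high accuracy on this well-separated toy
```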
Contributed Papers
3:20
2pMU7. Neuronal synchronization of musical large-scale form. Lenz Hartmann (Institute for Systematic Musicology, Universität Hamburg, Feldstrasse 59, Hamburg 20357, Germany, lenz.hartmann@gmx.de)
Musical form in this study is taken as the structural aspects of music ranging over several bars, a combination of all elements that constitute a piece of music, such as pitch, rhythm, or timbre. In an EEG study, 25 participants each listened three times to approximately the first four minutes of a piece of electronic dance music, and ERP grand averages were calculated. Correlations over one-second time windows between the ERPs of all electrodes, and therefore of different brain regions, are used as a measure of synchronization between these areas. Local maxima corresponding to strong synchronization show up at expectancy points of boundaries in the musical form. A modified FFT analysis of the ERPs, averaged over all trials and channels and taking only the ten highest peaks into consideration, shows the strongest brain activity at frequencies in the gamma band (about 40–60 Hz) and the beta band (about 20–30 Hz).
3:35
2pMU8. Nonlinearities and self-organization in the sound production of the Rhodes piano. Malte Muenster, Florian Pfeifle, Till Weinrich, and Martin Keil (Systematic Musicology, Univ. of Hamburg, Pilatuspool 19, Hamburg 20355, Germany, m.muenster@arcor.de)
Over the last five decades, the Rhodes piano has become a common keyboard instrument, played in such diverse musical genres as jazz, funk, fusion, and pop. Its sound production has not been studied in detail before. The sound is produced by a mechanically driven tuning-fork-like system causing a change in the magnetic flux of an electromagnetic pickup system. The mechanical part of the tone production consists of a small-diameter tine made of stiff spring steel and a tone bar made of brass, which is strongly coupled to the tine and acts as a resonator. The system is an example of strong generator-resonator coupling: the tine acts as a generator, forcing the tone bar to vibrate at the tine's fundamental frequency. Despite extremely different and much lower eigenfrequencies, the tone bar is enslaved by the tine. The tine is of lower spatial dimension, is less damped, and behaves nearly linearly; the geometry of the tone bar is much more complex, therefore of higher dimension, and more strongly damped. The vibrations of these two parts are perfectly in phase or in anti-phase, pointing to quasi-synchronization behavior. Moreover, the tone bar is responsible for the timbre of the initial transient: it adds the glockenspiel sound to the transient and extends the sustain. The sound production is discussed as a synergetic, self-organizing system, leading to a very precise harmonic overtone structure and characteristic initial transients that enhance the variety of musical performance.
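The one-second-window correlation measure used in 2pMU7 can be sketched as follows. The two channels here are synthetic, with a shared oscillatory component injected during seconds 4–6 to mimic a transient synchronization at a formal boundary; the sampling rate and all signal parameters are assumptions, not values from the study.

```python
import numpy as np

fs = 250                       # assumed EEG sampling rate (Hz); illustrative
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1/fs)     # 10 s of two synthetic "ERP" channels

# Two channels that share a 6-Hz component only during seconds 4-6.
shared = np.sin(2*np.pi*6*t) * ((t >= 4) & (t < 6))
ch_a = shared + 0.5*rng.standard_normal(len(t))
ch_b = shared + 0.5*rng.standard_normal(len(t))

def windowed_corr(a, b, win):
    """Pearson correlation in consecutive non-overlapping windows,
    as in the abstract's one-second-window synchronization measure."""
    out = []
    for s in range(0, len(a) - win + 1, win):
        wa, wb = a[s:s+win], b[s:s+win]
        out.append(np.corrcoef(wa, wb)[0, 1])
    return np.array(out)

corr = windowed_corr(ch_a, ch_b, fs)   # one value per second
peak_second = int(np.argmax(corr))     # falls inside the shared interval
```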
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 3/4, 1:25 P.M. TO 3:45 P.M.
Session 2pNSa
Noise and Psychological and Physiological Acoustics: New Frontiers in Hearing Protection II
Elliott H. Berger, Cochair
Occupational Health & Environmental Safety Division, 3M, 7911, Zionsville Rd., Indianapolis, IN 46268-1650
William J. Murphy, Cochair
Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Institute for Occupational Safety and
Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998
Chair’s Introduction—1:25
Invited Paper
1:30
2pNSa1. Comparison of impulse peak insertion loss measured with gunshot and shock tube noise sources. William J. Murphy
(Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Inst. for Occupational Safety and Health, 1090
Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998, wjm4@cdc.gov), Elliott H. Berger (Personal Safety Div., E-A-RCAL lab,
3M, Indianapolis, IN), and William A. Ahroon (US Army Aeromedical Res. Lab., US Army, Fort Rucker, AL)
The National Institute for Occupational Safety and Health, in cooperation with scientists from 3M and the U.S. Army Aeromedical Research Laboratory, conducted a series of impulse peak insertion loss (IPIL) tests of the acoustic test fixtures from the Institut de Saint-Louis (ISL) with a .223-caliber rifle and two different acoustic shock tubes. The Etymotic Research ETYPlugs™ earplug, the 3M™ TacticalPro™ communication headset, and the dual-protector combination were tested with all three impulse noise sources. The spectra, IPIL, and the reduction of different damage risk criteria will be presented. The spectra of the noise sources vary considerably, with the rifle having peak energy at about 1000 Hz and the shock tubes having peak levels around 125 and 250 Hz. The IPIL values for the rifle were greater than those measured with the two shock tubes. The shock tubes had comparable IPIL results except at 150 dB for the dual-protector condition. The treatment of the double-protection condition is complicated because the earmuff reduces the shock wave and thus the effective level experienced by the earplug. For the double-protection conditions, bone conduction presents a potential limiting factor for the effective attenuation that can be achieved by hearing protection.
Contributed Paper
1:50
2pNSa2. Evaluation of level-dependent performance of in-ear hearing protection devices using an enclosed sound source. Theodore F. Argo and G. Douglas Meegan (Appl. Res. Assoc., Inc., 7921 Shaffer Parkway, Littleton, CO 80127, targo@ara.com)
Hearing protection devices are increasingly designed with the capability to protect against impulsive sound. Current methods used to test protection from impulsive noise, such as blasts and gunshots, suffer from various drawbacks and from complex, manual experimental procedures. For example, the use of a shock tube to emulate blast waves typically produces a blast wind of much higher magnitude than that generated by an explosive, a specific but important inconsistency between the test conditions and the final application. Shock tube test procedures are also very inflexible and provide only minimal insight into the function and performance of advanced electronic hearing protection devices that may have a relatively complex response as a function of amplitude and frequency content. To address the issue of measuring the amplitude-dependent attenuation provided by a hearing protection device, a method using a compression driver attached to an enclosed waveguide was developed. The hearing protection device is placed at the end of the waveguide, and its response to impulsive and frequency-dependent signals at calibrated levels is measured. Comparisons to shock tube and standard frequency response measurements will be discussed.
Invited Papers
2:05
2pNSa3. Exploration of flat hearing protector attenuation and sound detection in noise. Christian Giguere (Audiology/SLP Program, Univ. of Ottawa, 451 Smyth Rd., Ottawa, ON K1H8M5, Canada, cgiguere@uottawa.ca) and Elliott H. Berger (Personal Safety
Div., 3M, Indianapolis, IN)
Flat-response devices are a class of hearing protectors with nearly uniform attenuation across frequency. These devices can protect
the individual wearer while maintaining the spectral balance of the surrounding sounds. This is typically achieved by reducing the muffling effect of conventional hearing protectors which provide larger attenuation at higher than lower frequencies, especially with
earmuffs. Flat hearing protectors are often recommended when good speech communication or sound perception is essential, especially
for wearers with high-frequency hearing loss, to maintain audibility at all frequencies. However, while flat-response devices are
described in some acoustical standards, the tolerance limits for the definition of flatness are largely unspecified, and relatively little is known about the exact conditions in which such devices can be beneficial. The purpose of this study is to gain insight into the interaction among the spectrum of the noise, the shape of the attenuation-frequency response, and the hearing loss configuration in determining detection thresholds, using a psychoacoustic model of sound detection in noise.
2:25
2pNSa4. Electronic sound transmission hearing protectors and horizontal localization: Training/adaptation effects. John Casali
(Auditory Systems Lab, Virginia Tech, 250 Durham Hall, Blacksburg, VA 24061, jcasali@vt.edu) and Martin Robinette (U.S. Army
Public Health Command, U.S. Army, Aberdeen Proving Ground, MD)
Auditory situation awareness is known to be affected by some hearing protectors, even advanced electronic devices. A horizontal
localization task was employed to determine how use/training with electronic sound transmission hearing protectors affected auditory
localization ability, as compared to open-ear. Twelve normal-hearing participants performed baseline localization testing in a hemianechoic field in three listening conditions: open-ear, in-the-ear (ITE) device (Etymotic EB-15), and over-the-ear (OTE) device (Peltor
ComTac II). Participants then wore either the ITE or OTE protector for 12, almost daily, one-hour training sessions. Post-training, participants again underwent localization testing with all three conditions. A computerized custom software-hardware interface presented
localization sounds and collected accuracy and timing measures. ANOVA and post hoc statistical tests revealed that pre-training localization performance with either the ITE or OTE protector was significantly worse (p<0.05) than open-ear performance. After training
with any given listening condition, performance in that condition improved, in part from a practice effect. However, post-training localization showed near equal performance between the open-ear and the protector on which training occurred. Auditory learning, manifested as significant localization accuracy improvement, occurred for the training device, but not for the non-training device, i.e., no
crossover benefit from the training device to the non-training device occurred.
Contributed Papers
2:45
2pNSa5. Measuring effective detection and localization performance of
hearing protection devices. Richard L. McKinley (Battlespace Acoust.,
Air Force Res. Lab., 2610 Seventh St., AFRL/711HPW/RHCB, Wright-Patterson AFB, OH 45433-7901, richard.mckinley.1@us.af.mil), Eric R.
Thompson (Ball Aerosp. and Technologies, Air Force Res. Lab., WrightPatterson AFB, OH), and Brian D. Simpson (Battlespace Acoust., Air Force
Res. Lab., Wright-Patterson AFB, OH)
Awareness of the surrounding acoustic environment is essential to the
safety of persons. However, the use of hearing protection devices can degrade the ability to detect and localize sounds, particularly quiet sounds.
There are ANSI/ASA standards describing methods for measuring attenuation, insertion loss, and speech intelligibility in noise for hearing protection
devices, but currently there are no standard methods to measure the effects
of hearing protection devices on localization and/or detection performance.
A method for measuring the impact of hearing protectors on effective detection and localization performance has been developed at AFRL. This
method measures the response time in an aurally aided visual search task
where the presentation levels are varied. The performance with several
level-dependent hearing protection devices will be presented.
3:00
2pNSa6. Personal Alert Safety System localization field tests with firefighters. Joelle I. Suits, Casey M. Farmer, Ofodike A. Ezekoye (Dept. of Mech. Eng., The Univ. of Texas at Austin, 204 E. Dean Keeton St., Austin,
TX 78712, jsuits@utexas.edu), Mustafa Z. Abbasi, and Preston S. Wilson
(Dept. of Mech. Eng. and Appl. Res. Labs., The Univ. of Texas at Austin,
Austin, TX)
When firefighters get lost or incapacitated on the fireground, there is little time to find them. This project has focused on a contemporary device
used in this situation, the Personal Alert Safety System. We have studied the
noises on the fireground (i.e., chainsaws, gas powered ventilation fans,
pumper trucks) [J. Acoust. Soc. Am. 134, 4221 (2013)], how the fire environment affects sound propagation [J. Acoust. Soc. Am. 134, 4218 (2013)],
and how firefighter personal protective equipment (PPE) affects human
hearing [POMA 19, 030054 (2013)]. To put all these pieces together, we
have traveled to several fire departments across the country conducting tests
to investigate how certain effects manifest themselves when firefighters
search for the source of a sound. We tasked firefighters to locate a target
sound in various acoustic environments while their vision was obstructed
and while wearing firefighting PPE. We recorded how long it took them to
find the source, what path they took, when they first heard the target sound,
and the frequency content and sound pressure level of the acoustic environment. The results will be presented in this talk. [Work supported by U.S.
Department of Homeland Security Assistance to Firefighters Grants
Program.]
3:15
2pNSa7. Noise level from burning articles on the fireground. Mustafa Z.
Abbasi, Preston S. Wilson (Appl. Res Lab and Dept. of Mech. Eng., Univ.
of Texas at Austin, 204 E Dean Keeton St., Austin, TX 78751, mustafa_
abbasi@utexas.edu), and Ofodike A. Ezekoye (Dept. of Mech. of Eng., The
Univ. of Texas at Austin, Austin, TX)
Firefighters encounter an extremely difficult environment due to the
presence of heat, smoke, falling debris etc. If one of them needs rescue, an
audible alarm is used to alert others of their location. This alarm, known as
the Personal Alert Safety System (PASS) alarm, has been part of firefighter
gear since the early 1980s. The PASS has been enormously successful, but a
review of The National Institute for Occupational Safety and Health
(NIOSH) firefighter fatality report suggests that there are instances when the
alarm is not heard or not localized. In the past, we have studied fireground noise from various pieces of gear, such as chainsaws and fans, to understand the soundscape present during a firefighting operation. However, firefighters and other interested parties have raised the issue of noise caused by the fire itself. The literature shows that buoyancy-controlled, non-premixed flames oscillate aerodynamically in the 10–16 Hz range, depending on the diameter of the fuel base. Surprisingly, few acoustic measurements have been made even for these relatively clean fire conditions. Experience suggests, however, that burning items do create sounds, most likely from the decomposition of the material as it undergoes pyrolysis (turning into gaseous fuel and char). This paper will present noise measurements from various burning articles as well as a characterization of the fire to understand this noise source.
3:30
2pNSa8. Bacterial attachment and insertion loss of earplugs used long-term in the noisy workplace. Jinro Inoue, Aya Nakamura, Yumi Tanizawa, and Seichi Horie (Dept. of Health Policy and Management, Univ. of Occupational and Environ. Health, Japan, 1-1 Iseigaoka, Yahatanishi-ku, Kitakyushu, Fukuoka 807-8555, Japan, j-inoue@med.uoeh-u.ac.jp)
In real noisy workplaces, workers often use earplugs for a long time. We assessed the condition of bacterial attachment and the insertion loss of 197 pairs of earplugs collected from 6 companies. The total viable counts and the presence of Staphylococcus aureus were examined with 3M Petrifilm, and the insertion losses were evaluated with a GRAS 45CB acoustic test fixture. We detected greater viable counts in the foam earplugs than in the premolded earplugs. Staphylococcus aureus was detected in 10 foam earplugs (5.1%). Deterioration of insertion loss was found only in deformed earplugs; workplace conditions such as the presence of dust or the use of oily liquid might cause this deterioration. We observed no correlation between the condition of bacterial attachment and the insertion loss of the earplugs, and neither was related to the duration of long-term use of the earplugs.
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 9/10, 1:00 P.M. TO 4:20 P.M.
Session 2pNSb
Noise and Structural Acoustics and Vibration: Launch Vehicle Acoustics II
R. Jeremy Kenny, Cochair
Marshall Flight Center, NASA, Huntsville, AL 35812
Tracianne B. Neilsen, Cochair
Brigham Young University, N311 ESC, Provo, UT 84602
Chair’s Introduction—1:00
Invited Papers
1:05
2pNSb1. Comparison of the acoustical emissions of multiple full-scale rocket motors. Michael M. James, Alexandria R. Salton
(Blue Ridge Res. and Consulting, 29 N Market St., Ste. 700, Asheville, NC 28801, michael.james@blueridgeresearch.com), Kent L.
Gee, and Tracianne B. Neilsen (Phys. and Astronomy, Brigham Young Univ., Provo, UT)
Development of the next-generation space flight vehicles has prompted a renewed focus on rocket sound source characterization and near-field propagation modeling. Improved measurements of the sound near the rocket plume are critical for direct determination of the acoustical environment in both the near field and the far field. They are also crucial inputs to empirical models and for validating computational aeroacoustics models. Preliminary results from multiple measurements of static horizontal firings of Alliant Techsystems motors, including the GEM-60, the Orion 50S XLG, and the Reusable Solid Rocket Motor (RSRM), performed in Promontory, UT, are analyzed and compared. The usefulness of scaling by physical parameters such as nozzle diameter, velocity, and overall sound power is demonstrated. The sound power spectra, directional characteristics, distribution along the exhaust flow, and pressure statistical metrics are examined across the multiple motors. These data sets play an important role in formulating more realistic sound source models, improving acoustic load estimations, and aiding the development of the next-generation space flight vehicles.
1:25
2pNSb2. Low-dimensional acoustic structures in the near-field of clustered rocket nozzles. Andres Canchero, Charles E. Tinney
(Aerosp. Eng. and Eng. Mech., The Univ. of Texas at Austin, 210 East 24th St., WRW-307, 1 University Station, C0600, Austin, TX
78712-0235, andres.canchero@utexas.edu), Nathan E. Murray (National Ctr. for Physical Acoust., Univ. of MS, Oxford, MS), and
Joseph H. Ruf (NASA Marshall Space Flight Ctr., Huntsville, AL)
The plume and acoustic field produced by a cluster of two and four rocket nozzles is visualized by way of retroreflective shadowgraphy. Both steady state and transient operations of the nozzles (start-up and shut-down) were conducted in the fully-anechoic chamber
and open jet facility of The University of Texas at Austin. The laboratory scale rocket nozzles comprise thrust-optimized parabolic
(TOP) contours, which during start-up, experience free shock separated flow, restricted shock separated flow, and an “end-effects
regime” prior to flowing full. Shadowgraphy images are first compared with several RANS simulations during steady operations. A
proper orthogonal decomposition (POD) of various regions in the shadowgraphy images is then performed to elucidate the prominent
features residing in the supersonic annular flow region, the acoustic near field and the interaction zone that resides between the nozzle
plumes. Synchronized surveys of the acoustic loads produced in close vicinity to the rocket clusters are compared to the low-order shadowgraphy images in order to identify the various mechanisms within the near-field that are responsible for generating sound.
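The proper orthogonal decomposition used above is commonly computed as an SVD of a mean-subtracted snapshot matrix. The sketch below is a generic illustration under that assumption; the function name, array shapes, and random stand-in frames are hypothetical, not the authors' data or code:

```python
import numpy as np

def pod_modes(snapshots):
    """Snapshot POD: decompose a stack of frames into spatial modes.

    snapshots: array (n_frames, ny, nx), e.g., windowed shadowgraph frames.
    Returns spatial modes, fractional energy per mode, temporal coefficients.
    """
    n, ny, nx = snapshots.shape
    X = snapshots.reshape(n, ny * nx).T          # columns are snapshots
    X = X - X.mean(axis=1, keepdims=True)        # subtract the mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U.T.reshape(-1, ny, nx)              # spatial POD modes
    energy = s**2 / np.sum(s**2)                 # energy fraction per mode
    return modes, energy, Vt

# Example with random stand-in frames; a low-order reconstruction would
# keep only the first few (most energetic) modes.
rng = np.random.default_rng(0)
frames = rng.standard_normal((64, 32, 32))
modes, energy, coeffs = pod_modes(frames)
```

Keeping only the leading modes and projecting back gives low-order images of the kind compared against the synchronized acoustic surveys.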
1:45
2pNSb3. Experimental study on lift-off acoustic environments of launch vehicles by scaled cold jet. Hiroki Ashida, Yousuke Takeyama (Integrated Defence & Space Systems, Mitsubishi Heavy Industries, Ltd., 10, Oye-cho, Minato-ku, Nagoya City, Aichi 455-8515,
Japan, hiroki1_ashida@mhi.co.jp), Kiyotaka Fujita, and Aki Azusawa (Technol. & Innovation Headquarters, Mitsubishi Heavy Industries, Ltd., Aichi, Japan)
Mitsubishi Heavy Industries (MHI) has been operating the current Japanese flagship launch vehicles H-IIA and H-IIB, and is developing the next flagship launch vehicle, H-X. The H-X concept is to be affordable, convenient, and comfortable for payloads, including mitigation of the acoustic environment during launch. Acoustic measurements were conducted using a scaled GN2 cold jet and an aperture plate to
facilitate understanding of the lift-off acoustic source and to take appropriate measures against it without the use of water injection. It was seen
that the level of vehicle acoustics in the high-frequency range depends on the amount of interference between the jet and the plate, and that
enlargement of the aperture is effective for acoustic mitigation.
2:05
2pNSb4. Detached-Eddy simulations of rocket plume noise at lift-off. A. Lyrintzis, V. Golubev (Aerosp. Eng., Embry-Riddle Aeronautical Univ., Daytona Beach, FL), K. Kurbatski (Aerosp. Eng., Embry-Riddle Aeronautical Univ., Lebanon, New Hampshire), E. Osman
(Aerosp. Eng., Embry-Riddle Aeronautical Univ., Denver, Colorado), and Reda Mankbadi (Aerosp. Eng., Embry-Riddle Aeronautical
Univ., 600 S. Clyde Morris Blvd, Daytona Beach, FL 32129, Reda.Mankbadi@erau.edu)
The three-dimensional turbulent flow and acoustic field of a supersonic jet impinging on a solid plate at different inclination angles
is studied computationally using the general-purpose CFD code ANSYS Fluent. A pressure-based coupled solver formulation with second-order weighted central-upwind spatial discretization is applied. A hot jet thermal condition is considered. Acoustic radiation of
impingement tones is simulated using a transient time-domain formulation. The effects of turbulence in the steady state are modeled by the
SST k-ω turbulence model. The Wall-Modeled Large-Eddy Simulation (WMLES) model is applied to compute transient solutions. The
near-wall mesh on the impingement plate is fine enough to resolve the viscosity-affected near-wall region all the way to the laminar sublayer. Inclination angle of the impingement plate is parameterized in the model for automatic re-generation of the mesh and results. The
transient solution reproduces the mechanism of impingement tone generation by the interaction of large-scale vortical structures with
the impingement plate. The acoustic near field is directly resolved by computational aeroacoustics (CAA) to accurately propagate
impingement tone waves to near-field microphone locations. Results show the effect of the inclination angle on sound pressure level
spectra and overall sound pressure level directivities.
2:25
2pNSb5. Large-eddy simulations of impinging over-expanded supersonic jet noise for launcher applications. Julien Troyes, François Vuillot (DSNA, Onera, BP72, 29 Ave. de la Div. Leclerc, Châtillon Cedex 92322, France, julien.troyes@onera.fr), and Hadrien
Lambare (DLA, CNES, Paris, France)
During the lift-off phase of a space launcher, powerful rocket motors generate a harsh acoustic environment on the launch pad. Following the blast waves created at ignition, jet noise is a major contributor to the acoustic loads received by the launcher and its payload.
Recent simulations performed at ONERA to compute the noise emitted by solid rocket motors at lift-off conditions are described.
Far-field noise prediction is achieved by associating a LES solution of the jet flow with an acoustics surface integral method. The computations are carried out with in-house codes CEDRE for the LES solution and KIM for Ffowcs Williams & Hawkings porous surface
integration method. The test case is that of a gas generator, fired vertically onto a 45-degree inclined flat plate whose impingement point
is located 10 diameters from the nozzle exit. Computations are run for varied numerical conditions, such as turbulence modeling along the
plate and different porous surface locations and types. Results are discussed and compared with experimental acoustic measurements
obtained by CNES at MARTEL facility.
2:45–3:05 Break
3:05
2pNSb6. Scaling metrics for predicting rocket noise. Gregory Mack, Charles E. Tinney (Ctr. for AeroMech. Res., The Univ. of Texas
at Austin, ASE/EM, 210 East 24th St., Austin, TX 78712, cetinney@utexas.edu), and Joseph Ruf (Combustion and Flow Anal. Team,
ER42, NASA Marshall Space Flight Ctr., Huntsville, AL)
Several years of research at The University of Texas at Austin concerning the sound field produced by large area-ratio rocket nozzles
are presented [Baars et al., AIAA J. 50(1), (2012); Baars and Tinney, Exp. Fluids, 54 (1468), (2013); Donald et al., AIAA J. 52(7),
(2013)]. The focus of these studies is on developing an in-depth understanding of the various acoustic mechanisms that form during
start-up of rocket engines and how they may be rendered less efficient in the generation of sound. The test articles comprise geometrically scaled replicas of large area ratio nozzles and are tested in a fully anechoic chamber under various operating conditions. A framework for scaling laboratory-scale nozzles is presented by combining established methods with new methodologies [Mayes, NASA TN
D-21 (1959); Gust, NASA TN-D-1999 (1964); Eldred, NASA SP-8072 (1972); Sutherland AIAA Paper 1993–4383 (1993); Varnier,
AIAA J. 39:10 (2001); James et al. Proc. Acoust. Soc. Amer. 18(3aNS), (2012)]. In particular, both hot and cold flow tests are reported
which comprise single-, three-, and four-nozzle clusters. An effort to correct for geometric scaling is also presented.
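The scaling frameworks cited here (e.g., Eldred, NASA SP-8072) rest on two well-known ingredients: matching the Strouhal number Sr = fD/U when translating spectra between scales, and estimating radiated power as a small acoustic efficiency times the mechanical power of the exhaust. A minimal sketch with hypothetical function names and an assumed efficiency value, not the authors' framework:

```python
def strouhal(f, d_nozzle, u_exhaust):
    """Strouhal number Sr = f * D / U for frequency f [Hz]."""
    return f * d_nozzle / u_exhaust

def scale_frequency(f_model, d_model, u_model, d_full, u_full):
    """Full-scale frequency at matched Strouhal number."""
    return f_model * (d_model / d_full) * (u_full / u_model)

def acoustic_power(mdot, u_exhaust, eta=0.005):
    """Eldred-style estimate: radiated acoustic power as a fraction eta
    (typically a fraction of a percent) of mechanical power 0.5*mdot*U^2."""
    return eta * 0.5 * mdot * u_exhaust**2

# Illustrative numbers: a model-scale tone at 5 kHz from a 2 cm, 600 m/s
# cold jet maps to 500 Hz on a 1 m, 3000 m/s full-scale motor.
f_full = scale_frequency(5000.0, 0.02, 600.0, 1.0, 3000.0)
```

The frequency mapping preserves Strouhal number by construction, which is why model-scale spectra can be overlaid on full-scale data after the shift.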
3:25
2pNSb7. Acoustic signature characterization of a sub-scale rocket launch. David Alvord and K. Ahuja (Aerosp. & Acoust. Technologies Div., Georgia Tech Res. Inst., 7220 Richardson Rd., Smyrna, GA 30080, david.alvord@gtri.gatech.edu)
Georgia Tech Research Institute (GTRI) conducted a flight test of a sub-scale rocket in 2013 outside Talladega, Alabama, to acquire
the launch acoustics produced. The primary objective of the test was to characterize the acquired data during a sub-scale launch and
compare it with heritage launch data from the STS-1 Space Shuttle flight. Neither launch included acoustic suppression; however, there
were differences in the ground geometry. STS-1 launched from the Mobile Launch Platform at Pad 39B with the RS-25 liquid engines
and Solid Rocket Boosters (SRBs) firing into their respective exhaust ducts and flame trench, while the GTRI flight test vehicle launched
from a flat reflective surface. The GTRI launch vehicle used a properly scaled Solid Rocket Motor (SRM) for propulsion; therefore, the primary analysis will focus on SRM/SRB-centric acoustic events. Differences in the Ignition Overpressure (IOP) wave signature that arise
from these geometric differences will be addressed. Additionally, the classic liftoff acoustics “football shape” is preserved between the full- and sub-scale
flights. The launch signatures will be compared, with note taken of specific launch acoustic events that are more easily investigated with sub-scale launch data or that can supplement current sub-scale static hotfire testing.
3:45
2pNSb8. Infrasonic energy from orbital launch vehicles. W. C. Kirkpatrick Alberts, John M. Noble, and Stephen M. Tenney (US Army Res. Lab., 2800 Powder Mill, Adelphi, MD 20783, kirkalberts@verizon.net)
Large, heavy-lift rockets have significant acoustic and infrasonic energy that can often be detected from a considerable distance.
These sounds, under certain environmental conditions, can propagate hundreds of kilometers from the launch location. Thus, ground-based infrasound arrays can be used to monitor the low frequencies emitted by these large rocket launches. Multiple launches and static
engine tests have been successfully recorded over many years using small infrasound arrays at various distances from the launch location. Infrasonic measurements using a 20 m array and parabolic equation modeling of a recent launch of an Antares rocket at Wallops
Island, Virginia, will be discussed.
Contributed Paper
4:05
2pNSb9. Influence of source level, peak frequency, and atmospheric
absorption on nonlinear propagation of rocket noise. Michael F. Pearson
(Phys., Brigham Young Univ., 560 W 700 S, Lehi, UT 84043, m3po22@
gmail.com), Kent L. Gee, Tracianne B. Neilsen, Brent O. Reichman (Phys.,
Brigham Young Univ., Provo, UT), Michael M. James (Blue Ridge Res.
and Consulting, Asheville, NC), and Alexandria R. Salton (Blue Ridge Res.
and Consulting, Asheville, NC)
Nonlinear propagation effects in rocket noise have been previously
shown to be significant [M. B. Muhlestein et al. Proc. Mtgs. Acoust.
(2013)]. This paper explores the influence of source level, peak frequency,
and ambient atmospheric conditions on predictions of nonlinear propagation. An acoustic pressure waveform measured during a full-scale solid
rocket motor firing is numerically propagated via a generalized Burgers equation model for atmospheric conditions representative of plausible spaceport
locations. Cases are explored where the overall sound pressure level and
peak frequency have been scaled to model engines of different scale or thrust.
The predicted power spectral densities and overall sound pressure levels,
both flat and A-weighted, are compared for nonlinear and linear propagation
for distances up to 30 km. The differences in overall level suggest that further research to appropriately include nonlinear effects in launch vehicle
noise models is worthwhile.
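The abstract does not specify the numerical scheme; a common way to march a waveform under a generalized Burgers equation is operator splitting — a nonlinear time-distortion step followed by frequency-domain absorption. The plane-wave sketch below is illustrative only; the step size, absorption constant, and amplitudes are assumptions, not the authors' values:

```python
import numpy as np

def burgers_step(p, dt, dz, beta=1.2, rho0=1.2, c0=340.0, alpha0=2e-11):
    """One split-step of plane-wave nonlinear propagation over distance dz [m].

    p: pressure waveform [Pa] sampled at interval dt [s].
    alpha0: illustrative thermoviscous absorption coefficient [Np/m/Hz^2].
    """
    n = len(p)
    t = np.arange(n) * dt
    # Nonlinear step: each sample is shifted in time by its own amplitude
    # (Earnshaw solution), then resampled onto the uniform grid.
    t_distorted = t - beta * p / (rho0 * c0**3) * dz
    order = np.argsort(t_distorted)              # keep abscissae increasing
    p = np.interp(t, t_distorted[order], p[order])
    # Absorption step: f^2 attenuation applied in the frequency domain.
    f = np.fft.rfftfreq(n, dt)
    P = np.fft.rfft(p) * np.exp(-alpha0 * f**2 * dz)
    return np.fft.irfft(P, n)
```

Repeated steps steepen the waveform and pump energy into harmonics, the mechanism behind the nonlinear-versus-linear level differences discussed above.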
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA C/D, 1:00 P.M. TO 2:40 P.M.
Session 2pPA
Physical Acoustics and Education in Acoustics: Demonstrations in Acoustics
Uwe J. Hansen, Cochair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
Murray S. Korman, Cochair
Physics Department, U.S. Naval Academy, 572 C Holloway Road, Chauvenet Hall Room 295, Annapolis, MD 21402
Chair’s Introduction—1:00
Invited Papers
1:05
2pPA1. Sharing experiments for home and classroom demonstrations. Thomas D. Rossing (Stanford Univ., Music, Stanford, CA
94305, rossing@ccrma.stanford.edu)
In the third edition of The Science of Sound, we included a list of “Experiments for Home, Laboratory, and Classroom Demonstrations” at the end of each chapter. Some of the demonstrations are done by the instructor in class, some are done by students for extra
credit, some are intended to be done at home. We describe a representative number of these, many of which can be done without special
equipment.
1:30
2pPA2. A qualitative demonstration of the behavior of the human cochlea. Andrew C. Morrison (Dept. of Natural Sci., Joliet Junior
College, 1215 Houbolt Rd., Joliet, IL 60431, amorrison@jjc.edu)
Demonstrations of the motion of the basilar membrane in the human cochlea designed by Keolian [J. Acoust. Soc. Am. 101, 1199–1201 (1997)], Tomlinson et al. [J. Acoust. Soc. Am. 121, 3115 (2007)], and others provide a way for students in a class to visualize the
behavior of the basilar membrane and explore the physical mechanisms leading to many auditory phenomena. The designs of Keolian
and Tomlinson are hydrodynamic. A non-hydrodynamic apparatus has been designed that can be constructed with commonly available
laboratory supplies and items readily available at local hardware stores. The apparatus is easily set up for demonstration purposes and is
compact for storing between uses. The merits and limitations of this design will be presented.
1:55
2pPA3. Nonlinear demonstrations in acoustics. Murray S. Korman (Phys. Dept., U.S. Naval Acad., 572 C Holloway Rd., Chauvenet
Hall Rm. 295, Annapolis, MD 21402, korman@usna.edu)
The world is nonlinear, and in presenting demonstrations in acoustics, one often has to consider the effects of nonlinearity. In this
presentation the nonlinear effects are made to be very pronounced. The nonlinear effects of standing waves on a lightly stretched string
(which is also very elastic) lead to wave shape distortion, mode jumping and hysteresis effects in the resonant behavior of a tuning curve
near a resonance. The effects of hyperelasticity in a rubber string are discussed. Two-dimensional systems like vibrating rectangular
or circular drum-heads are well known. The nonlinear effects of standing waves on a lightly stretched hyperelastic membrane make an
interesting and challenging study. Here, tuning curve behavior demonstrates that there is softening of the system for slightly increasing
vibration amplitudes followed by stiffening of the system at larger vibration amplitudes. The hysteretic behavior of the tuning curve for
sweeping from lower to higher frequencies and then from higher to lower frequencies (for the same drive amplitude) is demonstrated.
Lastly, the nonlinear effects of a column of soil or fine granular material loading a thin elastic circular clamped plate are demonstrated
near resonance. Here again, the nonlinear highly asymmetric tuning curve behavior is demonstrated.
2:20–2:40 Audience Interaction
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 1/2, 2:00 P.M. TO 3:50 P.M.
Session 2pSA
Structural Acoustics and Vibration, Signal Processing in Acoustics, and Engineering Acoustics: Nearfield
Acoustical Holography
Sean F. Wu, Chair
Mechanical Engineering, Wayne State University, 5050 Anthony Wayne Drive, College of Engineering Building, Rm 2133,
Detroit, MI 48202
Chair’s Introduction—2:00
Invited Papers
2:05
2pSA1. Transient nearfield acoustical holography. Sean F. Wu (Mech. Eng., Wayne State Univ., 5050 Anthony Wayne Dr., College
of Eng. Bldg., Rm. 2133, Detroit, MI 48202, sean_wu@wayne.edu)
This paper presents the general formulations for reconstructing the transient acoustic field generated by an arbitrary object
with a uniformly distributed surface velocity in free space. These formulations are derived from the Kirchhoff-Helmholtz integral theory
that correlates the transient acoustic pressure at any field point to those on the source surface. For a class of acoustic radiation problems
involving an arbitrarily oscillating object with a uniformly distributed surface velocity, for example, a loudspeaker membrane, the normal surface velocity is frequency dependent but is spatially invariant. Accordingly, the surface acoustic pressure is expressible as the
product of the surface velocity and the quantity that can be solved explicitly by using the Kirchhoff-Helmholtz integral equation. This
surface acoustic pressure can be correlated to the field acoustic pressure using the Kirchhoff-Helmholtz integral formulation. Consequently, it is possible to use nearfield acoustic holography to reconstruct acoustic quantities in the entire three-dimensional space based on a
single set of acoustic pressure measurements taken in the near field of the target object. Examples of applying these formulations to
reconstructing the transient acoustic pressure fields produced by various arbitrary objects are demonstrated.
2:30
2pSA2. A multisource-type representation statistically optimized near-field acoustical holography method. Alan T. Wall (Battlespace Acoust. Branch, Air Force Res. Lab., Bldg. 441, Wright-Patterson AFB, OH 45433, alantwall@gmail.com), Kent L. Gee, and Tracianne B. Neilsen (Dept. of Phys. and Astronomy, Brigham Young Univ., Provo, UT)
A reduced-order approach to near-field acoustical holography (NAH) that accounts for sound fields generated by multiple spatially
separated sources of different types is presented. In this method, an equivalent wave model (EWM) of a given field is formulated based
on rudimentary knowledge of source types and locations. The statistically optimized near-field acoustical holography (SONAH) algorithm is utilized to perform the NAH projection after the formulation of the multisource EWM. The combined process is called multisource-type representation SONAH (MSTR SONAH). This method is used to reconstruct simulated sound fields generated by
combinations of multiple source types. It is shown that MSTR SONAH can successfully reconstruct the near-field pressures in multisource environments where other NAH methods result in large errors. The MSTR SONAH technique can be extended to general sound
fields where the shapes and locations of sources and scattering bodies are known.
2:55
2pSA3. Bayesian regularization applied to real-time near-field acoustic holography. Thibaut Le Magueresse (MicrodB, 28 chemin
du petit bois, Ecully 69131, France, thibaut-le-magueresse@microdb.fr), Jean-Hugh Thomas (Laboratoire d’Acoustique de l’Université
du Maine, Le Mans, France), Jérôme Antoni (Laboratoire Vibrations Acoustique, Villeurbanne, France), and Sébastien Paillasseur
(MicrodB, Ecully, France)
Real-time near-field acoustic holography is used to recover non-stationary acoustic sound sources using a planar microphone
array. In the forward problem, describing propagation requires the convolution of the spatial spectrum of the source under study with a known
impulse response. When the convolution operator is replaced with a matrix product, the propagation operator is re-written in a Toeplitz
matrix form. Solving the inverse problem is based on a singular value decomposition of this propagator, and Tikhonov regularization is
used to stabilize the solution. The purpose here is to study the regularization process. The formulation of this problem in the Tikhonov
sense estimates the solution from knowledge of the propagation model, the measurements, and the regularization parameter. This parameter is calculated by making a compromise between fidelity to the real measured data and fidelity to available a priori information. A new regularization parameter is introduced based on a Bayesian approach to maximize the information taken into account.
Comparisons of the results are proposed using the L-curve and generalized cross-validation. The superiority of the Bayesian parameter is observed for the reconstruction of a non-stationary experimental source using real-time near-field acoustic holography.
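The SVD-plus-Tikhonov inversion described above can be sketched generically; the function below is an illustration, not the authors' code, and the choice of the regularization parameter `lam` (L-curve, GCV, or the Bayesian criterion discussed here) is left to the caller:

```python
import numpy as np

def tikhonov_svd(G, p_meas, lam):
    """Solve min ||G q - p||^2 + lam^2 ||q||^2 via the SVD of G.

    G: propagation matrix (n_mics x n_sources); p_meas: measured pressures.
    Small singular values, which amplify measurement noise, are rolled off
    by the filter factors s / (s^2 + lam^2).
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    filt = s / (s**2 + lam**2)
    return Vt.T @ (filt * (U.T @ p_meas))
```

With `lam = 0` this reduces to the pseudo-inverse; increasing `lam` trades fidelity to the measurements for a smaller-norm, more stable source estimate — the compromise the abstract describes.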
Contributed Papers
3:20
3:35
2pSA4. Acoustic building infiltration measurement system. Ralph T.
Muehleisen, Eric Tatara (Decision and Information Sci., Argonne National
Lab., 9700 S. Cass Ave., Bldg. 221, Lemont, IL 60439, rmuehleisen@anl.
gov), Ganesh Raman, and Kanthasamy Chelliah (Mater., Mech., and
Aerosp. Eng., Illinois Inst. of Technol., Chicago, IL)
2pSA5. Reversible quasi-holographic line-scan processing for acoustic
imaging and feature isolation of transient scattering. Daniel Plotnick,
Philip L. Marston, David J. Zartman (Phys. and Astronomy, Washington
State Univ., 1510 NW Turner DR, Apartment 4, Pullman, WA 99163,
dsplotnick@gmail.com), and Timothy M. Marston (Appl. Phys. Lab., Univ.
of Washington, Seattle, WA)
Building infiltration is a significant portion of the heating and cooling
load of buildings and accounts for nearly 4% of the total energy use in the
United States. Current measurement methods for locating and quantifying
infiltration in commercial buildings to apply remediation are very limited.
In this talk, the development of a new measurement system, the Acoustic
Building Infiltration Measurement System (ABIMS), is presented. ABIMS
uses Nearfield Acoustic Holography (NAH) to measure the sound field
transmitted through a section of the building envelope. These data are used
to locate and quantify the infiltration sites of a building envelope section.
The basic theory of ABIMS operation and results from computer simulations are presented.
Transient acoustic scattering data from objects obtained using a one-dimensional line scan or two-dimensional raster scan can be processed via a
linear quasi-holographic method [K. Baik, C. Dudley, and P. L. Marston, J.
Acoust. Soc. Am. 130, 3838–3851 (2011)] in a way that is reversible, allowing isolation of spatially or temporally dependent features [T. M. Marston et
al., in Proc. IEEE Oceans 2010]. Unlike nearfield holography, the subsonic
wavenumber components are suppressed in the processing. Backscattering
data collected from a collocated source/receiver (monostatic scattering) and
scattering involving a stationary source and mobile receiver (bistatic) may
be processed in this manner. Distinct image features such as those due to
edge diffraction, specular reflection, and elastic effects may be extracted in
the image domain and then reverse processed to allow examination of those
features in time and spectral domains. Multiple objects may also be isolated
in this manner and clutter may be removed [D. J. Zartman, D. S. Plotnick,
T. M. Marston, and P. L. Marston, Proceedings of Meetings on Acoustics
19, 055011 (2013) http://dx.doi.org/10.1121/1.4800881]. Experimental
examples comparing extracted features with physical models will be discussed, and demonstrations of signal enhancement in an at-sea experiment,
TREX13, will be shown. [Work supported by ONR.]
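The suppression of subsonic wavenumber components mentioned above can be illustrated with a generic 2-D Fourier mask over line-scan data: components with |k_x| > |ω|/c are evanescent and cannot radiate. The function name, grid, and sound speed below are assumptions for illustration, not the authors' processing chain:

```python
import numpy as np

def suppress_subsonic(p_xt, dx, dt, c=1500.0):
    """Zero subsonic wavenumber content of line-scan data p(x, t).

    p_xt: array (n_x, n_t), scan position by time.
    Keeps only components satisfying |k_x| <= |omega| / c.
    """
    n_x, n_t = p_xt.shape
    P = np.fft.fft2(p_xt)                          # to (k_x, omega) domain
    kx = 2 * np.pi * np.fft.fftfreq(n_x, dx)       # rad/m
    w = 2 * np.pi * np.fft.fftfreq(n_t, dt)        # rad/s
    mask = np.abs(kx)[:, None] <= np.abs(w)[None, :] / c
    return np.real(np.fft.ifft2(P * mask))
```

Because the mask acts in the transform domain, the retained components are recoverable exactly, consistent with extracting image-domain features and reverse-processing them into time and spectral domains.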
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 5, 1:00 P.M. TO 5:00 P.M.
Session 2pSC
Speech Communication: Segments and Suprasegmentals (Poster Session)
Olga Dmitrieva, Chair
Purdue University, 640 Oval Drive, Stanley Coulter 166, West Lafayette, IN 47907
All posters will be on display from 1:00 p.m. to 5:00 p.m. To allow contributors an opportunity to see other posters, contributors of odd-numbered papers will be at their posters from 1:00 p.m. to 3:00 p.m. and contributors of even-numbered papers will be at their posters
from 3:00 p.m. to 5:00 p.m.
Contributed Papers
2pSC1. Interactions among lexical and discourse characteristics in
vowel production. Rachel S. Burdin, Rory Turnbull, and Cynthia G. Clopper (Linguist, The Ohio State Univ., 1712 Neil Ave., 222 Oxley Hall,
Columbus, OH 43210, burdin@ling.osu.edu)
Various factors are known to affect vowel production, including word
frequency, neighborhood density, contextual predictability, mention in the
discourse, and audience. This study explores interactions between all five of
these factors on vowel duration and dispersion. Participants read paragraphs
that contained target words which varied in predictability, frequency, and
density. Each target word appeared twice in the paragraph. Participants read
each paragraph twice: as if they were talking to a friend (“plain speech”)
and as if they were talking to a hearing-impaired or non-native interlocutor
(“clear speech”). Measures of vowel duration and dispersion were obtained.
Results from the plain speech passages revealed that second mention and
more predictable words were shorter than first mention and less predictable
words, and that vowels in second mention and low density words were less
peripheral than in first mention and high density words. Interactions
between frequency and mention, and density and mention, were also
observed, with second mention reduction only occurring in low density and
low frequency words. We expect to observe additional effects of speech
style, with clear speech vowels being longer and more dispersed than plain
speech vowels, and that these effects will interact with frequency, density,
predictability, and mention.
2pSC2. Phonetic correlates of phonological quantity of Yakut. Lena Vasilyeva, Juhani Järvikivi, and Anja Arnhold (Dept. of Linguist, Univ. of AB, Edmonton, AB T6G2E7, Canada, lvasilye@ualberta.ca)
We investigated vowel quantity in Yakut (Sakha), a Turkic language spoken in Siberia by over 400,000 speakers in the Republic of Sakha (Yakutia)
in the Russian Federation. Yakut is a quantity language; all vowel and consonant phonemes have short and long contrastive counterparts. The study aims
at revealing acoustic characteristics of the binary quantity distinction in vowels. We used two sets of data: (1) A female native Yakut speaker read a 200-word list containing disyllabic nouns and verbs with four different combinations of vowel length in the two syllables (short–short, short–long, long–short,
and long–long) and a list of 50 minimal pairs differing only in vowel length;
(2) Spontaneous speech data from 9 female native Yakut speakers (aged 19–
77), 200 words with short vowels and 200 words with long vowels, were
extracted for analysis. Acoustic measurements of the short and long vowels’
f0 values, duration, and intensity were made. Mixed-effects models showed a
significant durational difference between long and short vowels for both data
sets. However, the preliminary results indicated that, unlike in quantity languages like Finnish and Estonian, there was no consistent effect of f0 as the
phonetic correlate in the Yakut vowel quantity distinction.
2pSC3. Acoustic and perceptual characteristics of vowels produced by
self-identified gay and heterosexual male speakers. Keith Johnson (Linguist, Univ. of California, Berkeley, Berkeley, CA) and Erik C. Tracy
(Psych., Univ. of North Carolina Pembroke, PO Box 1510, Pembroke, NC
28372, erik.tracy@uncp.edu)
Prior research (Tracy & Satariano, 2011) investigated the perceptual characteristics of gay and heterosexual male speech; it was discovered that listeners primarily relied on vowels to identify sexual orientation. Using single-word utterances produced by those same speakers, the current study examined
both the acoustic characteristics of vowels, such as pitch, duration, and the
size of the vowel space, and how these characteristics relate to the perceived
sexual orientation of the speaker. We found a correlation between pitch and
perceived sexual identity for vowels produced by heterosexual speakers—
higher f0 was associated with perceptual “gayness.” We did not find this correlation for gay speakers. Vowel duration did not reliably distinguish gay and
heterosexual speakers, but speakers who produced longer vowels were perceived as gay and speakers who produced shorter vowels were perceived as
heterosexual. The size of the vowel space did not reliably differ between gay
and heterosexual speakers. However, speakers who produced a larger vowel
space were perceived as more gay-sounding than speakers who produced a
smaller vowel space. The results suggest that listeners rely on these acoustic
characteristics when asked to determine a male speaker’s sexual orientation,
but that the stereotypes that they seem to rely upon are inaccurate.
2pSC4. Acoustic properties of the vowel systems of Bolivian Quechua/
Spanish bilinguals. Nicole Holliday (Linguist, New York Univ., 10 Washington Pl., New York, NY 10003, nrh245@nyu.edu)
This paper describes the vowel systems of Quechua/Spanish bilinguals in
Cochabamba, Bolivia, and examines these systems to illuminate variation
between phonemic and allophonic vowels in this Quechua variety. South Bolivian Quechua is described as phonemically trivocalic, and Bolivian Spanish
is described as pentavocalic (Cerrón-Palomino 1994). Although South Bolivian Quechua has three vowel categories, Quechua uvular stop consonants promote high vowel lowering, effectively lowering /i/ and /u/ toward the space
otherwise occupied by /e/ and /o/ respectively, producing a system with five
surface vowels but three phonemic vowels (Buckley 2000). The project was
conducted with eleven Quechua/Spanish bilinguals from the Cochabamba
department in Bolivia. Subjects participated in a Spanish to Quechua oral
translation task and a word list task in Spanish. Results indicate that Quechua/
Spanish bilinguals maintain separate vowel systems. In the Spanish vowel
systems, each vowel occupies its own region of height and backness. A one-way
ANOVA reveals that /i/ is higher and fronter than /e/, and /u/ is higher than
/o/ (p<0.05). The Quechua vowel systems are somewhat more variable, with
substantial overlap between /i/ and /e/, and between /u/ and /o/. Potential
explanations for this result include lexical conditioning, speaker literacy
effects, and differences in realizations of phonemic versus allophonic vowels.
2pSC5. Cue integration in the perception of fricative-vowel coarticulation in Korean. Goun Lee and Allard Jongman (Linguist, The Univ. of
Kansas, 1541 Lilac Ln., Blake Hall, Rm. 427, Lawrence, KS 66045-3129,
cconni@ku.edu)
Korean distinguishes two fricatives—fortis [s’] and non-fortis [s]. Perception of this distinction was tested in two different vowel contexts, with
three types of stimuli (consonant-only, vowel-only, or consonant-vowel
sequences) (Experiment 1). The relative contribution of consonantal and
vocalic cues was also examined with cross-spliced stimuli (Experiment 2).
Listeners’ weighting of 7 perceptual cues—spectral mean (initial 60%, final
40%), vowel duration, H1-H2* (onset, mid), and cepstral peak prominence
(onset, mid)—was examined. The data demonstrate that identification performance was heavily influenced by vowel context and listener performance
was more accurate in the /a/ vowel context than in the /i/ vowel context. In
addition, the type of stimulus presented changed the perceptual cue weighting. When presented with conflicting cues, listener decisions were driven by
the vocalic cues in the /a/ vowel context. These results suggest that perceptual cues associated with breathy phonation are the primary cues for fricative identification in Korean.
2pSC6. Voicing, devoicing, and noise measures in Shanghainese voiced
and voiceless glottal fricatives. Laura L. Koenig (Haskins Labs and Long
Island Univ., 300 George St., New Haven, CT 06511, koenig@haskins.yale.
edu) and Lu-Feng Shi (Haskins Labs and Long Island Univ., Brooklyn, New
York)
Shanghainese has a rather rare voicing distinction between the glottal
fricatives /h/ and /ɦ/. We evaluate the acoustic characteristics of this contrast in ten male and ten female speakers of the urban Shanghainese dialect. Participants produced 20 CV words with a mid/low central vowel in a short
carrier phrase. All legal consonant-tone combinations were used: /h/ preceded high, low, and short tones, whereas /ɦ/ preceded low and short tones.
Preliminary analyses suggested that the traditional “voiced” and “voiceless”
labels for these sounds are not always phonetically accurate; hence we measure the duration of any voicing break relative to the entire phrase, as well
as the harmonics-to-noise ratio (HNR) over time. We expect longer relative voiceless durations and lower HNR measures for /h/ compared to /ɦ/. A
question of interest is whether any gender differences emerge. A previous
study on American English [Koenig, 2000, JSLHR 43, 1211–1228] found
that men phonated through their productions of /h/ more often than women,
and interpreted that finding in terms of male-female differences in vocal
fold characteristics. A language that contrasts /h/ and /ɦ/ might minimize
any such gender variation. Alternatively, the contrast might be realized in
slightly different ways in men and women.
2pSC7. Incomplete neutralization of sibilant consonants in Penang
Mandarin: A palatographic case study. Ting Huang, Yueh-chin Chang,
and Feng-fan Hsieh (Graduate Inst. of Linguist, National Tsing Hua Univ.,
Rm. B306, HSS Bldg., No. 101, Section 2, Kuang-Fu Rd., Hsinchu City
30013, Taiwan, funting.huang@gmail.com)
It has been anecdotally observed that the three-way contrasts in Standard
Chinese are reduced to two-way contrasts in Penang Mandarin (PM). PM is
a variety of Mandarin Chinese spoken in Penang of Malaysia, which is
influenced by Penang Hokkien. This work shows that the alleged neutralization of contrasts is incomplete (10 consonants × 3 vowel contexts × 5
speakers). More specifically, alveopalatal [ɕ] may range from the postalveolar
zone (73.33%) to the alveolar zone (26.67%), as does retroflex [ʂ] (46.67%
vs. 46.67%). [s] and [n] are apical (or [+anterior]) coronals. The goal of
this study is three-fold: (i) to describe the places of articulation of PM coronals and the patterns of ongoing sound changes, (ii) to show that the neutralization of place contrasts is incomplete, whereby constriction length remains
distinct for these sibilant sounds, and (iii) to demonstrate different coarticulatory patterns of consonants in different vowel contexts. The intricate division of coronal consonants does not warrant a precise constriction location
on the upper palate. These PM data lend support to Ladefoged and Wu’s
(1984) observation that it is not easy to pin down a clear-cut boundary
between dental and alveolar stops, and between alveolar and palatoalveolar
fricatives.
168th Meeting: Acoustical Society of America
2173
2p TUE. PM
2pSC2. Phonetic correlates of phonological quantity of Yakut. Lena
Vasilyeva, Juhani Järvikivi, and Anja Arnhold (Dept. of Linguist, Univ. of
AB, Edmonton, AB T6G2E7, Canada, lvasilye@ualberta.ca)
2pSC8. Final voicing and devoicing in American English. Olga Dmitrieva (Linguistics/School of Lang. and Cultures, Purdue Univ., 100 North
University St., Beering Hall, Rm. 1289, West Lafayette, IN 47907, odmitrie@purdue.edu)
English is typically described as a language in which the voicing contrast is
not neutralized in word-final position. However, a tendency toward (at least partial) devoicing of final voiced obstruents in English has been reported
in previous studies (e.g., Docherty (1992) and references therein). In the
present study, we examine a number of acoustic correlates of obstruent voicing and the robustness with which each one is able to differentiate between
voiced and voiceless obstruents in the word-final position in the speech
recorded by twenty native speakers of the Midwestern dialect of American
English. The examined acoustic properties include preceding vowel duration, closure or frication duration, duration of the release portion, and duration of voicing during the obstruent closure, frication, and release. Initial
results indicate that final voiced obstruents are significantly different from
the voiceless ones in terms of preceding vowel duration and closure/frication duration. However, release duration for stops does not appear to correlate with voicing in an equally reliable fashion. A pronounced
difference in terms of closure voicing between voiced and voiceless final
stops is significantly reduced in fricative consonants, which indicates a tendency towards neutralization of this particular correlate of voicing in the
word-final fricatives of American English.
2pSC9. An analysis of the singleton-geminate contrast in Japanese fricatives and stops. Christopher S. Rourke and Zack Jones (Linguist, The Ohio
State Univ., 187 Clinton St., Columbus, OH 43202, rourke.16@osu.edu)
Previous acoustic analyses of the singleton-geminate contrast in Japanese have focused primarily on read speech. The present study instead analyzed the lengths of singleton and geminate productions of word-medial
fricatives and voiceless stops in spontaneous monologues from the Corpus
of Spontaneous Japanese (Maekawa, 2003). The results of a linear mixed
effects regression model mirrored previous findings in read speech that the
geminate effect (the durational difference between geminates and singletons)
of stops is significantly larger than that of fricatives. This study also found a
large range of variability in the geminate effect size between talkers. The
size of the geminate effect between fricatives and voiceless stops was found
to be slightly correlated, suggesting that they might be related to other rate-associated production differences between individuals. This suggestion was
evaluated by exploring duration differences associated with talker age and
gender. While there was no relationship between age and duration, males
produced shorter durations than females for both fricatives and stops. However, the size of the geminate effect was not related to the gender of the
speaker. The cause of these individual differences may be related to sound
perception. Future research will investigate the cause of these individual differences in geminate effect size.
2pSC10. Quantifying surface phonetic variation using acoustic landmarks as feature cues. Jeung-Yoon Choi and Stefanie Shattuck-Hufnagel
(Res. Lab. of Electronics, MIT, 50 Vassar St., Rm. 36-523, Cambridge, MA
02139, sshuf@mit.edu)
Acoustic landmarks, which are abrupt spectral changes associated with
certain feature sequences in spoken utterances, are highly informative and
have been proposed as the initial analysis stage in human speech perception,
and for automatic speech recognition (Stevens, JASA 111(4), 2002, 1872–
1891). These feature cues and their parameter values also provide an
effective tool for quantifying systematic context-governed surface phonetic
variation (Shattuck-Hufnagel and Veilleux, ICPhS XVI, 2007, 925–928).
However, few studies have provided landmark-based information about the
full range of variation in continuous communicative speech. The current
study examines landmark modification patterns in a corpus of map-task-elicited speech, hand-annotated for whether the landmarks were realized as
predicted from the word forms or modified in context. Preliminary analyses
of a single conversation (400 s, one speaker) show that the majority of landmarks (about 84%) exhibited the canonical form predicted from their lexical
specifications, and that modifications were distributed systematically across
segment types. For example, 90% of vowel landmarks (at amplitude/F1
peaks) were realized as predicted, but only 70% of the closures for non-strident fricatives /v/ and /ð/, and 40–50% of /t/ closures and releases. Further quantification of landmark modification patterns will provide useful information about the processing of surface phonetic variation.
2pSC11. Age- and gender-related variation in voiced stop prenasalization in Japanese. Mieko Takada (Aichi Gakuin Univ., Nisshin, Japan), Eun
Jong Kong (Korea Aerosp. Univ., Goyang-City, South Korea), Kiyoko
Yoneyama (Daito Bunka Univ., 1-9-1 Takashimadaira, Itabashi-ku, Tokyo
175-8571, Japan, yoneyama@ic.daito.ac.jp), and Mary E. Beckman (Ohio
State Univ., Columbus, OH)

Modern Japanese is generally described as having phonologically voiced
(versus voiceless) word-initial stops. However, phonetic details vary across
dialects and age groups; in Takada’s (2011) measurements of recordings of
456 talkers from multiple generations across five dialects, Osaka-area speakers and older speakers in the Tokyo area (Tokyo, Chiba, Saitama,
and Kanagawa prefectures) typically show pre-voicing (lead VOT), but
younger speakers show many “devoiced” (short lag VOT) values, a tendency that is especially pronounced among younger Tokyo-area females.
There is also variation in the duration of the voice bar, with very long values
(up to -200 ms lead VOT) observed in the oldest female speakers. Spectrograms of such tokens show faint formants during the stop closure, suggesting a velum-lowering gesture to vent supra-glottal air pressure to sustain
vocal fold vibration. Further evidence of pre-nasalization in older Tokyo-area females comes from comparing amplitude trajectories for the voice bar
to amplitude trajectories during nasal consonants, adapting a method proposed by Burton, Blumstein, and Stevens (1972) for exploring phonemic
pre-nasalization contrasts. Differences in trajectory shape patterns between
the oldest males and females and between older and younger females are
like the differences that Kong, Syrika, and Edwards (2012) observed across
Greek dialects.
2pSC12. An acoustic comparison of dental and retroflex sibilants in
Chinese Mandarin and Taiwan Mandarin. Hanbo Yan and Allard Jongman (Linguist, Univ. of Kansas, 1732 Anna Dr., Apt. 11, Lawrence, KS
66044, yanhanbo@ku.edu)
Mandarin has both dental and retroflex sibilants. While the Mandarin
varieties spoken in China and Taiwan are often considered the same, native
speakers of Mandarin can tell the difference between the two. One obvious
difference is that between the retroflex ([ʂ], [tʂ], [tʂʰ]) and dental sibilants
([s], [ts], [tsʰ]). This study investigates the acoustic properties of the sibilants of Chinese Mandarin and Taiwan Mandarin. Eight native speakers
each of Chinese and Taiwan Mandarin produced the six target sibilants in
word-initial position. A number of acoustic parameters, including spectral
moments and duration, were analyzed to address two research questions: (a)
which parameters distinguish the dental and retroflex in each type of Mandarin; (b) is there a difference between Chinese and Taiwan Mandarin?
Results show that retroflex sibilants have a lower M1 and M2, and a higher
M3 than dental sibilants in each language. Moreover, Chinese Mandarin has
significantly larger M1, M2, and M3 differences than Taiwan Mandarin.
This pattern suggests that, in contrast to Chinese Mandarin, Taiwan Mandarin is merging the retroflex sibilants in a dental direction.
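The spectral moments reported here (M1–M3) are standard measures obtained by treating the magnitude spectrum as a probability distribution over frequency. A minimal sketch follows; the Hann window and full-band analysis are illustrative choices, not necessarily those of the study:

```python
import numpy as np

def spectral_moments(signal, sr):
    """First three spectral moments of a windowed magnitude spectrum:
    M1 = centroid (Hz), M2 = variance (Hz^2), M3 = skewness (unitless)."""
    mag = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    p = mag / mag.sum()                   # normalize to a distribution
    m1 = (freqs * p).sum()                # centroid
    m2 = (((freqs - m1) ** 2) * p).sum()  # spread about the centroid
    m3 = (((freqs - m1) ** 3) * p).sum() / m2 ** 1.5  # asymmetry
    return m1, m2, m3
```

A retroflex token, with energy concentrated lower in frequency, yields a lower M1 than a dental token, matching the direction of the reported pattern.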
2pSC13. Statistical relationships between phonological categories and
acoustic-phonetic properties of Korean consonants. Noah H. Silbert
(Commun. Sci. & Disord., Univ. of Cincinnati, 3202 Eden Ave., 344 French
East Bldg., Cincinnati, OH 45267, noah.silbert@uc.edu) and Hanyong Park
(Linguist, Univ. of Wisconsin, Milwaukee, WI)
The mapping between segmental contrasts and acoustic-phonetic properties is complex and many-to-many. Contrasts are often cued by multiple
acoustic-phonetic properties, and acoustic-phonetic properties typically provide information about multiple contrasts. Following the approach of de
Jong et al. (2011, JASA 129, 2455), we analyze multiple native speakers’
repeated productions of Korean obstruents using a hierarchical multivariate
statistical model of the relationship between multidimensional acoustics and
phonological categories. Specifically, we model the mapping between categories and multidimensional acoustic measurements from multiple repetitions of 14 Korean obstruent consonants produced by 20 native speakers (10 male, 10 female) in onset position in monosyllables. The statistical model allows us to analyze distinct within- and between-speaker sources of variability in consonant production, and model comparisons allow us to assess the utility of complexity in the assumed underlying phonological category system. In addition, by using the same set of acoustic measurements for the current project’s Korean consonants and the English consonants analyzed by de Jong et al., we can model the within- and between-language acoustic similarity of phonological categories, providing a quantitative basis for predictions about cross-language phonetic perception.
2pSC14. Corpus testing a fricative discriminator: Or, just how invariant is this invariant? Philip J. Roberts (Faculty of Linguist, Univ. of
Oxford, Ctr. for Linguist and Philology, Walton St., Oxford OX1 2HG,
United Kingdom, philip.roberts@ling-phil.ox.ac.uk), Henning Reetz (Institut für Phonetik, Goethe-Universität Frankfurt, Frankfurt am Main, Germany), and Aditi Lahiri (Faculty of Linguist, Univ. of Oxford, Oxford,
United Kingdom)
Acoustic cues to the distinction between sibilant fricatives are claimed
to be invariant across languages. Evers et al. (1998) present a method for
distinguishing automatically between [s] and [ʃ], using the slope of regression lines over separate frequency ranges within a DFT spectrum. They
report accuracy rates in excess of 90% for fricatives extracted from recordings of minimal pairs in English, Dutch and Bengali. These findings are
broadly replicated by Maniwa et al. (2009), using VCV tokens recorded in
the lab. We tested the algorithm from Evers et al. (1998) against tokens of
fricatives extracted from the TIMIT corpus of American English read
speech, and the Kiel corpora of German. We were able to achieve similar
accuracy rates to those reported in previous studies, with the following caveats: (1) the measure relies on being able to perform a DFT for frequencies
from 0 to 8 kHz, so that a minimum sampling rate of 16 kHz is necessary
for it to be effective, and (2) although the measure draws a similarly clear
distinction between [s] and [ʃ] to that found in previous studies, the threshold value between the two sounds is sensitive to the dynamic range of the
input signal.
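The abstract does not reproduce Evers et al.'s actual frequency bands or decision threshold, so the band edges, the zero-slope decision rule, and the Gaussian-shaped test noise below are assumptions — a minimal sketch of the regression-slope idea (which also illustrates the sampling-rate caveat: the analysis band extends to 8 kHz, requiring at least a 16-kHz rate):

```python
import numpy as np

def band_slope(freqs, spec_db, lo, hi):
    """Slope (dB per kHz) of a regression line over one spectral band."""
    sel = (freqs >= lo) & (freqs < hi)
    return np.polyfit(freqs[sel] / 1000.0, spec_db[sel], 1)[0]

def classify_sibilant(signal, sr=16000, band=(2500, 8000)):
    """[s] vs. [ʃ] by spectral slope: [s] concentrates energy near 6-8 kHz
    (rising slope over the band), [ʃ] near 2.5-4 kHz (falling slope)."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    spec_db = 20 * np.log10(spec + 1e-12)
    return "s" if band_slope(freqs, spec_db, *band) > 0 else "sh"

def shaped_noise(center_hz, sr=16000, dur=0.1, seed=1):
    """White noise given a Gaussian spectral peak at center_hz (a crude
    stand-in for a fricative token, for illustration only)."""
    rng = np.random.default_rng(seed)
    n = int(sr * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    shape = np.exp(-0.5 * ((freqs - center_hz) / 800.0) ** 2)
    return np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * shape, n)
```

The corpus finding above amounts to saying that the slope statistic separates the categories robustly, but the single decision threshold hard-coded here would in practice need calibrating to the dynamic range of the input.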
2pSC15. Discriminant variables for plosive- and fricative-type single
and geminate stops in Japanese. Shigeaki Amano and Kimiko Yamakawa
(Faculty of Human Informatics, Aichi Shukutoku Univ., 2-9 Katahira, Nagakute, Aichi 480-1197, Japan, psy@asu.aasa.ac.jp)
Previous studies suggested that a plosive-type geminate stop in Japanese
is discriminated from a single stop with variables of stop closure duration
and subword duration that spans from the mora preceding the geminate stop
to the vowel following the stop. However, this suggestion does not apply to
a fricative-type geminate stop that does not have a stop closure. To overcome this problem, this study proposes Inter-Vowel Interval (IVI) and Successive Vowel Interval (SVI) as discriminant variables. IVI is the duration
between the end of the vowel preceding the stop and the beginning of the
vowel following the stop. SVI is the duration between the beginning of the
vowel preceding the stop and the end of the vowel following the stop. When
discriminant analysis was conducted between single and geminate stops of
plosive and fricative types using IVI and SVI as independent variables, the
discriminant ratio was very high (99.5%, n = 368). This result indicates that
IVI and SVI are the general variables that represent acoustic features distinguishing Japanese single and geminate stops of both plosive and fricative
types. [This study was supported by JSPS KAKENHI Grant Numbers
24652087, 25284080, 26370464 and by Aichi-Shukutoku University Cooperative Research Grant 2013-2014.]
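The two proposed intervals are plain differences of vowel boundary times. The sketch below uses invented segmentation values for illustration and does not reproduce the discriminant analysis itself:

```python
def ivi_svi(v1_start, v1_end, v2_start, v2_end):
    """Intervals (in seconds) as defined in the abstract.

    IVI: end of the vowel preceding the stop -> start of the following vowel.
    SVI: start of the preceding vowel -> end of the following vowel.
    Both are defined even for fricative-type stops, which lack a stop closure.
    """
    return v2_start - v1_end, v2_end - v1_start

# Invented boundary times for a hypothetical geminate token (seconds):
ivi, svi = ivi_svi(v1_start=0.10, v1_end=0.18, v2_start=0.30, v2_end=0.40)
```

A geminate lengthens the consonantal interval, so it stretches IVI more than SVI; plotting tokens in the (IVI, SVI) plane is what lets a single linear discriminant separate singletons from geminates for plosive- and fricative-type stops alike.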
2pSC16. Perceptual affinity of Mandarin palatals and retroflexes. Yunghsiang Shawn Chang (Dept. of English, National Taipei Univ. of Technol.,
Zhongxiao E. Rd., Sec. 3, No. 1, Taipei 106, Taiwan, shawnchang@ntut.
edu.tw)
Mandarin palatals [tɕ, tɕʰ, ɕ], which only occur before [i, y] vowels, are in
complementary distribution with the alveolars [ts, tsʰ, s], the velars [k, kʰ, x],
and the retroflexes [tʂ, tʂʰ, ʂ]. Upon investigating perceptually motivated
accounts for the phonological representation of the palatals, Wan (2010)
reported that Mandarin palatals were considered more similar to the alveolars
than the velars, whereas Lu (2014) found categorical results for palatal-alveolar discrimination. The current study extended the investigation to the perceptual affinity between Mandarin palatals and retroflexes by having 15 native
listeners identify two 8-step resynthesized [ʂ-s] continua (adapted from Chang
et al. (2013)) cross-spliced with [i, y] vowels, respectively. To avoid phonotactic restrictions from biasing perception, all listeners were trained on identifying
the [ɕi, ʂi, si] and [ɕy, ʂy, sy] syllables produced by a phonetician before the
experiment. The results showed that all resynthesized stimuli, though lacking
palatal-appropriate vocalic transitions, were subject to palatal perception. In particular, two intermediate steps along the [ʂi-si] continuum and five along the
[ʂy-sy] continuum were identified as palatal syllables by over 70% of the listeners. The results suggest that Mandarin palatals could be identified with both
the retroflexes and alveolars based on perceptual affinity.
2pSC17. Perceptual distinctiveness of dental vs. palatal sibilants in different vowel contexts. Mingxing Li (Linguist, The Univ. of Kansas, 1541
Lilac Ln., Blake Hall 427, Lawrence, KS 66045-3129, mxlistar@gmail.
com)
This paper reports a similarity rating experiment and a speeded AX discrimination experiment to test the perceptual distinctiveness of dental vs.
palatal sibilants in different vowel contexts. The stimuli were pairs of CV
sequences where the onsets were [s, ts, tsʰ] vs. [ɕ, tɕ, tɕʰ] as in Mandarin
Chinese and the vowels were [i, a, o]; the durations of the consonants and
vowels were set to values close to those in natural speech; the inter-stimulus interval was set at 100 ms to facilitate responses based on psychoacoustic
similarity. A significant effect of vowel contexts was observed in the similarity rating by 20 native American English speakers, whereby the dental vs.
palatal sibilants were judged to be the least distinct in the [i] context. A similar pattern was observed in the speeded AX discrimination, whereby the [i]
context introduced slower “different” responses than other vowels. In general, this study supports the view that the perceptual distinctiveness of a
consonant pair may vary with different vowel contexts. Moreover, the
experimental results match the typological pattern of dental vs. palatal sibilants across Chinese dialects, where contrasts like /si, tsi, tsʰi/ vs. /ɕi, tɕi,
tɕʰi/ are often enhanced by vowel allophony.
2pSC18. Phonetic correlates of stance-taking. Valerie Freeman, Richard
Wright, Gina-Anne Levow (Linguist, Univ. of Washington, Box 352425,
Seattle, WA 98195, valerief@uw.edu), Yi Luan (Elec. Eng., Univ. of Washington, Seattle, WA), Julian Chan (Linguist, Univ. of Washington, Seattle,
WA), Trang Tran, Victoria Zayats (Elec. Eng., Univ. of Washington, Seattle, WA), Maria Antoniak (Linguist, Univ. of Washington, Seattle, WA),
and Mari Ostendorf (Elec. Eng., Univ. of Washington, Seattle, WA)
Stance, or a speaker’s attitudes or opinions about the topic of discussion,
has been investigated textually in conversation- and discourse analysis and
in computational models, but little work has focused on its acoustic-phonetic properties. This is a difficult problem, given that stance is a complex
activity that must be expressed along with several other types of meaning
(informational, social, etc.) using the same acoustic channels. In this presentation, we begin to identify some acoustic indicators of stance in natural
speech using a corpus of collaborative conversational tasks which have been
hand-annotated for stance strength (none, weak, moderate, and strong) and
polarity (positive, negative, and neutral). A preliminary analysis of 18 dyads
completing two tasks suggests that increases in stance strength are correlated with increases in speech rate and pitch and intensity medians and
ranges. Initial results for polarity also suggest correlations with speech rate
and intensity. Current investigations center on local modulations in pitch
and intensity, durational and spectral differences between stressed and
unstressed vowels, and disfluency rates in different stance conditions. Consistent male/female differences are not yet apparent but will also be examined further.
2pSC19. Compounds in modern Greek. Angeliki Athanasopoulou and
Irene Vogel (Linguist and Cognit. Sci., Univ. of Delaware, 125 East Main
St., Newark, DE 19716, angeliki@udel.edu)
The difference between compounds and phrases has been studied extensively in English (e.g., Farnetani, Torsello, & Cosi, 1988; Plag, 2006;
Stekauer, Zimmermann, & Gregova, 2007). However, little is known about
the analogous difference in Modern Greek (Tzakosta, 2009). Greek compounds (Ralli, 2003) form a single phonological word, and thus, they only contain one primary stress. That means that the individual words lose their
primary stress. The present study is the first acoustic investigation of the stress
properties of Greek compounds and phrases. Native speakers of Greek produce
ten novel adjective + noun compounds and their corresponding phrases (e.g.,
phrase: [kocino dodi] “a red tooth” vs. compound: [kocinododis] “someone
with red teeth”) in the sentence corresponding to “The XXX is at the top/bottom of the screen.” Preliminary results confirm the earlier descriptive claims
that compounds only have a single stress, while phrases have one on each
word. Specifically, the first word (i.e., adjective) in compounds is reduced in
F0 (101 Hz), duration (55 ms), and intensity (64 dB) compared to phrases
(F0 = 117 Hz, duration = 85 ms, and intensity = 67 dB). Also, both words are
very similar for all of the measures in phrases. The second word (i.e., noun) is
longer than the first word, possibly indicating phrase-final lengthening.
2pSC22. The role of prosody in English sentence disambiguation. Taylor
L. Miller (Linguist & Cognit. Sci., Univ. of Delaware, 123 E Main St., Newark, DE 19716, tlmiller@udel.edu)

Only certain ambiguous sentences are perceptually disambiguable. Some
researchers argue that this is due to syntactic structure (Lehiste 1973, Price
1991, Kang & Speer 2001), while others argue prosodic structure is responsible (Nespor & Vogel 1986 = N&V, Hirshberg & Avesani 2000). The present
study further tests the role of prosodic constituents in sentence disambiguation
in English. Target sentences were recorded in disambiguating contexts;
twenty subjects listened to the recordings and chose one of two meanings.
Following N&V’s experimental design with Italian, the meanings of each target structurally corresponded to different syntactic constituents and varied
with respect to phonological phrases (/) and intonational phrases (I). The
results confirm N&V’s Italian findings: listeners are only able to disambiguate
sentences with different prosodic constituent structures (p < 0.05); those differing in (I) but not (/) have the highest success rate—86% (e.g., [When danger threatens your children]I [call the police]I vs. [When danger threatens]I
[your children call the police]I ). As reported elsewhere (e.g., Lehiste 1973),
we also observed a meaning bias in some cases (e.g., in “Julie ordered some
large knife sharpeners,” listeners preferred “large [knife sharpeners]” but in
“Jill owned some gold fish tanks,” they preferred “[goldfish] tanks”).
2pSC20. Evoked potentials during voice error detection at register
boundaries. Anjli Lodhavia (Dept. of Commun. Disord. and Sci., Rush
Univ., 807 Reef Court, Wheeling, IL 60090, alodhavia1@gmail.com), Sona
Patel (Dept. of Speech-Lang. Pathol., Seton Hall Univ., South Orange, NJ),
Saul Frankford (Dept. of Commun. Sci. and Disord., Northwestern Univ.,
Tempe, Arizona), Oleg Korzyukov, and Charles R. Larson (Dept. of Commun. Sci. and Disord., Northwestern Univ., Evanston, IL)
Singers require great effort to avoid vocal distortion at register boundaries, as they are trained to diminish the prominence of register breaks. We
examined neural mechanisms underlying voice error detection in singers at
their register boundaries. We hypothesized that event-related potentials
(ERPs), reflecting brain activity, would be larger if a singer’s pitch was
unexpectedly shifted toward, rather than away from, their register break.
Nine trained singers sustained a musical note for ~3 seconds near their
modal register boundaries. As the singers sustained these notes, they heard
their voice over headphones shift in pitch (±400 cents, 200 ms) either toward or away from the register boundary. This procedure was repeated for
200 trials. The N1 and P2 ERP amplitudes for three central electrodes (FCz,
Cz, Fz) were computed from the EEGs of all participants. Results of a multivariate analysis of variance for shift direction ( + 400c, -400c) and register
(low, high) showed significant differences in N1 and P2 amplitude for direction at the low boundary of modal register, but not the high register boundary. These results may suggest increased neural activity in singers when
trying to control the voice when crossing the lower register boundary.
2pSC21. The articulatory tone-bearing unit: Gestural coordination of
lexical tone in Thai. Robin P. Karlin and Sam Tilsen (Linguist, Cornell
Univ., 103 W Yates St., Ithaca, NY 14850, karlin.robin@gmail.com)
Recently, tones have been analyzed as articulatory gestures that can coordinate with segmental gestures. In this paper, we show that the tone gestures
that make up an HL contour tone are differentially coordinated with articulatory gestures in Thai syllables, and that the coordinative patterns are influenced by the segments and moraic structure of the syllables. The
autosegmental approach to lexical tone describes tone as a suprasegment that
must be associated to some tone-bearing unit (TBU); in Thai, the language of
study, the proposed TBU is the mora. Although the autosegmental account
largely describes the phonological patterning of tones, it remains unclear how
the abstract representation of tone is implemented. An electromagnetic articulograph (EMA) study of four speakers of Thai was conducted to examine the
effects of segment type and moraic structure on the coordination of tone gestures. In an HL contour tone, tone gestures behave similarly to consonant gestures, and show patterns of coordination with gestures that correspond to
moraic segments. However, there is also a level of coordination between the
H and L tone gestures. Based on these results, a model of TBUs is proposed
within the Articulatory Phonology framework that incorporates tone-segment
coordination as well as tone-tone coordination.
2pSC23. Perceptual isochrony and prominence in spontaneous speech.
Tuuli Morrill (Linguist, George Mason Univ., 4400 University Dr., 3E4,
Fairfax, VA, tmorrill@msu.edu), Laura Dilley (Commun. Sci. and Disord.,
Michigan State Univ., East Lansing, MI), and Hannah Forsythe (Linguist,
Michigan State Univ., East Lansing, MI)

While it has been shown that stressed syllables do not necessarily occur
at equal time intervals in speech (Cummins, 2005; Dauer, 1983), listeners
frequently perceive stress as occurring regularly, a phenomenon termed perceptual isochrony (Lehiste, 1977). A number of studies have shown that in
controlled experimental materials, a perceptually isochronous sequence of
stressed syllables generates expectations which affect word segmentation
and lexical access in subsequent speech (e.g., Dilley & McAuley, 2008).
The present research used the Buckeye Corpus of Conversational Speech
(Pitt et al., 2007) to address two main questions: (1) What acoustic and linguistic factors are associated with the occurrence of perceptual isochrony?
and (2) What are the effects of perceptually isochronous speech passages on
the placement of prominence in subsequent speech? In particular, we investigate the relationship between perceptual isochrony and lexical items traditionally described as “unstressed” (e.g., grammatical function words),
testing whether these words are more likely to be perceived and/or produced
as prominent when they are preceded and/or followed by a perceptually isochronous passage. These findings will contribute to our understanding of the
relationship between acoustic correlates of phrasal prosody and lexical perception. [Research partially supported by NSF CAREER Award BCS
0874653 to L. Dilley.]
2pSC24. French listeners’ processing of prosodic focus. Jui Namjoshi
(French, Univ. of Illinois at Urbana-Champaign, 2090 FLB, MC-158, S.
Mathews Ave, Urbana, IL 61801, namjosh2@illinois.edu)
Focus in French, typically conveyed by syntax (e.g., clefting) together with prosody, can be signaled by prosody alone (contrastive pitch accents on the first
syllable of focused constituents, cf. nuclear pitch accents on the last non-reduced syllable of the Accentual Phrase) (Fery, 2001; Jun & Fougeron,
2000). Do French listeners, like L1-English listeners (Ito & Speer, 2008),
use contrastive accents to anticipate upcoming referents? Twenty French listeners
completed a visual-world eye-tracking experiment. Cross-spliced, amplitude-neutralized stimuli included context (1) and critical (2) sentences in a
2×2 design, with accent on object (nuclear/contrastive) and person’s information status (new/given) as within-subject variables (see (1)–(2)). Average
amplitudes and durations for object words were 67 dB and 0.68 s for contrastive accents, and 63.8 dB and 0.56 s for nuclear accents, respectively.
Mixed-effects models showed a significant effect of the accent-by-information-status interaction on competitor fixation proportions in the post-disambiguation time window (p < 0.05). Contrastive accents yielded lower competitor
fixation proportions with a given person than with a new person, suggesting
that contrastive accents constrain lexical competition in French. (1) Clique
2pSC25. Prominence, contrastive focus and information packaging in
Ghanaian English discourse. Charlotte F. Lomotey (Texas A&M University-Commerce, 1818D Hunt St., Commerce, TX 75428, cefolatey@yahoo.
com)
Contrastive focus refers to the coding of information that is contrary to
the presuppositions of the interlocutor. Thus, in everyday speech, speakers
employ prominence to mark contrastive focus such that it gives an alternative answer to an explicit or implicit statement provided by the previous discourse or situation (Rooth, 1992), and plays an important role in facilitating
language understanding. Even though contrastive focus has been investigated in native varieties of English, there is little or no knowledge of similar
studies as far as non-native varieties of English, including that of Ghana, are
concerned. The present study investigates how contrastive focus is marked
with prosodic prominence in Ghanaian English, and how such a combination creates understanding among users of this variety. To achieve this, data
consisting of 6½ hours of English conversations from 200 Ghanaians were
analyzed using both auditory and acoustic means. Results suggest that Ghanaians tend to shift the contrastive focus from the supposed focused syllable
onto the last syllable of the utterance, especially when that syllable ends the
utterance. Although such tendencies may shift the focus of the utterance, the
data suggest that listeners do not seem to have any problem with speakers’
packaging of such information.
2pSC26. The representation of tone 3 sandhi in Mandarin: A psycholinguistic study. Yu-Fu Chien and Joan Sereno (Linguist, The Univ. of Kansas, 1407 W 7th St., Apt. 18, Lawrence, KS 66044-6716, whouselefthand@
gmail.com)
In Mandarin, tone 3 sandhi is a tonal alternation phenomenon in which a
tone 3 syllable changes to a tone 2 syllable when it is followed by another
tone 3 syllable. Thus, the initial syllable of Mandarin bisyllabic sandhi
words is tone 3 underlyingly but becomes tone 2 on the surface. An auditory-auditory priming lexical decision experiment was conducted to
investigate how Mandarin tone 3 sandhi words are processed by Mandarin
native listeners. The experiment examined prime-target pairs, with monosyllabic primes and bisyllabic Mandarin tone 3 sandhi targets. Each tone sandhi
target word was preceded by one of three corresponding monosyllabic
primes: a tone 2 prime (Surface-Tone overlap) (chu2-chu3li3), a tone 3
prime (Underlying-Tone overlap) (chu3-chu3li3), or a control prime (Baseline condition) (chu1-chu3li3). In order to assess the contribution of frequency of occurrence, 15 High Frequency and 15 Low Frequency sandhi
target words were used. Thirty native speakers of Mandarin participated.
Results showed that tone 3 sandhi targets elicited significantly stronger
facilitation effects in the Underlying-Tone condition than in the Surface-Tone condition, with little effect of frequency of occurrence. The data will
be discussed in terms of lexical access and the nature of the representation
of Mandarin words.
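Facilitation in such priming designs is conventionally computed as the baseline (control) reaction time minus the reaction time in the overlap condition; a minimal sketch of that arithmetic, using hypothetical RT values rather than the study's data:

```python
# Sketch: priming facilitation per condition relative to a baseline.
# The mean RTs below are hypothetical placeholders, not data from the study.
mean_rt = {
    "baseline": 720.0,         # control prime (e.g., chu1-chu3li3), in ms
    "surface_tone": 695.0,     # tone 2 prime (surface-tone overlap)
    "underlying_tone": 668.0,  # tone 3 prime (underlying-tone overlap)
}

def facilitation(rts, condition, baseline="baseline"):
    """Facilitation = baseline RT minus condition RT (positive = speedup)."""
    return rts[baseline] - rts[condition]

for cond in ("surface_tone", "underlying_tone"):
    print(cond, facilitation(mean_rt, cond))
```

A stronger facilitation effect in the underlying-tone condition would correspond to a larger difference for that condition, as reported in the abstract.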
2pSC27. Perception of sound symbolism in mimetic stimuli: The voicing
contrast in Japanese and English. Kotoko N. Grass (Linguist, Univ. of
Kansas, 9953 Larsen St., Overland Park, KS 66214, nakata.k@ku.edu) and
Joan Sereno (Linguist, Univ. of Kansas, Lawrence, KS)
Sound symbolism is a concept in which the sound of a word and the
meaning of the word are systematically related. The current study investigated whether the voicing contrast between voiced /d, g, z/ and voiceless /t,
k, s/ consonants systematically affects categorization of Japanese mimetic
stimuli along a number of perceptual and evaluative dimensions. For the
nonword stimuli, voicing of consonants was also manipulated, creating a
continuum from voiced to voiceless endpoints (e.g., [gede] to [kete]), in
order to examine the categorical nature of the perception. Both Japanese
native speakers and English native speakers, who had no knowledge of Japanese, were examined. Stimuli were evaluated on size (big–small) and shape
(round–spiky) dimensions as well as two evaluative dimensions (good–bad,
graceful–clumsy). In the current study, both Japanese and English listeners
associated voiced sounds with largeness, badness, and clumsiness and voiceless sounds with smallness, goodness, and gracefulness. For the shape
dimension, however, English and Japanese listeners showed contrastive categorization, with English speakers associating voiced stops with roundness
and Japanese listeners associating voiced stops with spikiness. Interestingly,
sound symbolism was very categorical in nature. Implications of the current
data for theories of sound symbolism will be discussed.
sur le macaRON de Marie-Hélène. (2) Puis clique sur le chocoLAT/CHOcolat de Marie-Hélène/Jean-Sébastien. (nuclear/contrastive accent, given/new person) '(Then) Click on the macaron/chocolate of Marie-Hélène/Jean-Sébastien.'
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA F, 1:00 P.M. TO 4:30 P.M.
Session 2pUW
Underwater Acoustics: Propagation and Scattering
Megan S. Ballard, Chair
Applied Research Laboratories, The University of Texas at Austin, P.O. Box 8029, Austin, TX 78758
Contributed Papers
1:00
1:30
2pUW1. Low frequency propagation experiments in Currituck Sound.
Richard D. Costley (GeoTech. and Structures Lab., U.S. Army Engineer
Res. & Development Ctr., 3909 Halls Ferry Rd., Vicksburg, MS 39180,
dan.costley@usace.army.mil), Kent K. Hathaway (Coastal & Hydraulics
Lab., US Army Engineer Res. & Development Ctr., Duck, NC), Andrew
McNeese, Thomas G. Muir (Appl. Res. Lab., Univ. of Texas at Austin, Austin, TX), Eric Smith (GeoTech. and Structures Lab., U.S. Army Engineer
Res. & Development Ctr., Vicksburg, MS), and Luis De Jesus Diaz (GeoTech. and Structures Lab., U.S. Army Engineer Res. & Development Ctr.,
Vicksburg, MS)
2pUW3. Results from a scale model acoustic propagation experiment
over a translationally invariant wedge. Jason D. Sagers (Environ. Sci.
Lab., Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd.,
Austin, TX 78758, sagers@arlut.utexas.edu)
In water depths on the order of a wavelength, sound propagates with
considerable involvement of the bottom, whose velocities and attenuation
vary with depth into the sediment. In order to study propagation in these
types of environments, experiments were conducted in Currituck Sound on
the Outer Banks of North Carolina using a Combustive Sound Source (CSS)
and bottom mounted hydrophones and geophones as receivers. The CSS
was deployed at a depth of approximately 1 meter and generated transient
signals, several wavelengths long, at frequencies around 300 Hz. The results
are used to determine transmission loss in water depths of approximately 3
meters, as well as to examine the generation and propagation of Scholte-type
interface waves. The measurements are compared to numerical models generated with a two-dimensional finite-element code. [Work supported by the
U.S. Army Engineer Research and Development Center. Permission to publish was granted by Director, Geotechnical & Structures Laboratory.]
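The shallow-water condition stated above ("water depths on the order of a wavelength") is easy to verify with a nominal sound speed; the numbers below follow directly from the quoted 300 Hz and 3 m values:

```python
# Quick check of the shallow-water condition, using a nominal sound speed.
c = 1500.0   # nominal sound speed in water, m/s (assumed, not measured)
f = 300.0    # CSS signal frequency from the abstract, Hz
wavelength = c / f
print(wavelength)          # 5.0 m
depth = 3.0                # approximate water depth from the abstract, m
print(depth / wavelength)  # 0.6, i.e., depth is on the order of one wavelength
```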
1:15
2pUW2. Three-dimensional acoustic propagation effect in subaqueous
sand dune field. Andrea Y. Chang, Chi-Fang Chen (Dept. of Eng. Sci. and
Ocean Eng., National Taiwan Univ., No. 1, Sec. 4, Roosevelt Rd., Taipei
10617, Taiwan, yychang@ntu.edu.tw), Linus Y. Chiu (Inst. of Undersea
Technol., National Sun Yat-sen Univ., Kaohsiung, Taiwan), Emily Liu
(Dept. of Eng. Sci. and Ocean Eng., National Taiwan Univ., Taipei, Taiwan), Ching-Sang Chiu, and Davis B. Reeder (Dept. of Oceanogr., Naval
Postgrad. School, Monterey, CA)
Very large subaqueous sand dunes, composed of fine to medium sand, have been discovered on the upper continental slope of the northern South China Sea in water depths of 160–600 m. The amplitude and crest-to-crest wavelength of the sand dunes are about 5–15 m and 200–400 m, respectively. This topographic feature could cause strong acoustic scattering, mode coupling, and out-of-plane propagation effects, which consequently redistribute sound energy within the ocean waveguide. This research focuses on the three-dimensional propagation effects (e.g., horizontal refraction) induced by the sand dunes in the South China Sea, which are expected to strengthen as the angle of propagation relative to the bedform crests decreases. The three-dimensional propagation effects are studied by numerical modeling and model-data comparison. For the numerical modeling, in-situ topographic data of the subaqueous sand dunes and sound speed profiles were used as input to calculate the acoustic fields, which were further decomposed into mode fields to show the modal horizontal refraction effects. The modeling results were corroborated by the observed data. [This work is sponsored by the Ministry of Science and Technology of Taiwan.]
A 1:7500 scale underwater acoustic propagation experiment was conducted in a laboratory tank to investigate three-dimensional (3D) propagation effects, with the objective of providing benchmark quality data for
comparison with numerical models. A computer controlled positioning system accurately moves the receiving hydrophone in 3D space while a stationary source hydrophone emits band-limited pulse waveforms between 200
kHz and 1 MHz. The received time series can be post-processed to estimate
travel time, transmission loss, and vertical and horizontal arrival angle. Experimental results are shown for a 1.22 × 2.13 m bathymetric part possessing both a flat bottom bathymetry and a translationally invariant wedge with
a 10° slope. Comparisons between the experimental data and numerical
models are also shown. [Work supported by ONR.]
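Under the geometric scaling implied by a 1:7500 model (lengths scale up by the factor, frequencies scale down by it, assuming the same sound speed in tank and ocean), the tank quantities map to full-scale values as in this short sketch; the conversions are illustrative arithmetic, not reported results:

```python
# Sketch: converting scale-model tank quantities to the implied full-scale
# values under geometric acoustic scaling (same sound speed assumed):
#   length_full = length_model * scale,  frequency_full = frequency_model / scale
SCALE = 7500  # 1:7500 model, from the abstract

def to_full_scale_length(model_m):
    return model_m * SCALE

def to_full_scale_frequency(model_hz):
    return model_hz / SCALE

# The 200 kHz - 1 MHz model band maps to roughly 27-133 Hz at full scale.
print(to_full_scale_frequency(200e3), to_full_scale_frequency(1e6))
# The 1.22 x 2.13 m bathymetric part represents roughly 9.2 x 16 km of seafloor.
print(to_full_scale_length(1.22), to_full_scale_length(2.13))
```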
1:45
2pUW4. Numerical modeling of measurements from an underwater
scale-model tank experiment. Megan S. Ballard and Jason D. Sagers
(Appl. Res. Labs., The Univ. of Texas at Austin, P.O. Box 8029, Austin, TX
78758, meganb@arlut.utexas.edu)
Scale-model tank experiments are beneficial because they offer a controlled environment in which to make underwater acoustic propagation
measurements, which is helpful when comparing measured data to calculations from numerical propagation models. However, to produce agreement
with the measured data, experimental details must be carefully included in
the model. For example, the frequency-dependent transmitting and receiving
sensitivity and vertical directionality of both hydrophones must be included.
In addition, although it is possible to measure the geometry of the tank
experiment, including water depth and source and receiver positions, positional uncertainty exists due to the finite resolution of the measurements.
The propagated waveforms from the experiment can be used to resolve
these parameters using inversion techniques. In this talk, model-data comparisons of measurements made in a 1:7500 scale experiment are presented.
The steps taken to produce agreement between the measured and modeled
data are discussed in detail for both range-independent and range-dependent
configurations.
2:00
2pUW5. A normal mode inner product to account for acoustic propagation over horizontally variable bathymetry. Charles E. White, Cathy Ann
Clark (Naval Undersea Warfare Ctr., 1176 Howell St., Newport, RI 02841,
charlie.e.white@navy.mil), Gopu Potty, and James H. Miller (Ocean Eng.,
Univ. of Rhode Island, Narragansett, RI)
This talk will consider the conversion of normal mode functions over
local variations in bathymetry. Mode conversions are accomplished through
an inner product, which enables the modes comprising the field at each
range-dependent step to be written as a function of those in the preceding
step. The efficiency of the method results from maintaining a stable number
2:15
2pUW6. An assessment of the effective density fluid model for backscattering from rough poroelastic interfaces. Anthony L. Bonomo, Nicholas P.
Chotiros, and Marcia J. Isakson (Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78713, anthony.bonomo@gmail.com)
The effective density fluid model (EDFM) was developed to approximate the behavior of sediments governed by Biot’s theory of poroelasticity.
Previously, it has been shown that the EDFM predicts reflection coefficients
and backscattering strengths that are in close agreement with those of the
full Biot model for the case of a homogeneous poroelastic half-space. However, it has not yet been determined to what extent the EDFM can be used in
place of the full Biot model for other cases. In this work, the finite element
method is used to compare the backscattering strengths predicted using the
EDFM with the predictions of the full Biot model for three cases: a homogeneous poroelastic half-space with a rough interface, a poroelastic layer
overlying an elastic half-space with both interfaces rough, and an inhomogeneous poroelastic half-space consisting of a shear modulus gradient with a
rough interface. [Work supported by ONR, Ocean Acoustics.]
2:30
2pUW7. Scattering by randomly rough surfaces. I. Analysis of slope
approximations. Patrick J. Welton (Appl. Res. Lab., The Univ. of Texas at
Austin, 1678 Amarelle St., Thousand Oaks, CA 91320-5971, patrickwelton@verizon.net)
Progress in numerical methods now allows scattering in two dimensions
to be computed without resort to approximations. However, scattering by
three-dimensional random surfaces is still beyond the reach of current numerical techniques. Within the restriction of the Kirchhoff approximation
(single scattering), some common approximations used to predict scattering
by randomly rough surfaces will be examined. In this paper, two widely
used approximate treatments for the surface slopes will be evaluated and
compared to the exact slope treatment.
2:45–3:00 Break
3:00
2pUW8. Scattering by randomly rough surfaces. II. Spatial spectra
approximations. Patrick J. Welton (Appl. Res. Lab., The Univ. of Texas at
Austin, 1678 Amarelle St., Thousand Oaks, CA 91320-5971, patrickwelton@verizon.net)
The spatial spectrum describing a randomly rough surface is crucial to
the theoretical analysis of the scattering behavior of the surface. Most of the
models assume that the surface displacements are a zero-mean process. It is
shown that a zero-mean process requires that the spatial spectrum vanish
when the wavenumber is zero. Many of the spatial spectra models used in
the literature do not meet this requirement. The impact of the zero-mean
requirement on scattering predictions will be discussed, and some spectra
models that meet the requirement will be presented.
3:15
2pUW9. Scattering by randomly rough surfaces. III. Phase approximations. Patrick J. Welton (Appl. Res. Lab., The Univ. of Texas at Austin,
1678 Amarelle St., Thousand Oaks, CA 91320-5971, patrickwelton@verizon.net)
solution. Approximate image solutions for an infinite, pressure-release plane
surface are studied for an omnidirectional source using the 2nd, 3rd, and 4th
order phase approximations. The results are compared to the exact image solution to examine the effects of the phase approximations. The result based
on the 2nd order (Fresnel phase) approximation reproduces the image solution for all geometries. Surprisingly, the results for the 3rd and 4th order
phase approximations are never better than the Fresnel result, and are substantially worse for most geometries. This anomalous behavior is investigated and the cause is found to be the multiple stationary phase points
produced by the 3rd and 4th order phase approximations.
3:30
2pUW10. Role of binding energy (edge-to-face contact of mineral platelets) in the acoustical properties of oceanic mud sediments. Allan D.
Pierce (Retired, PO Box 339, 399 Quaker Meeting House Rd., East Sandwich, MA 02537, allanpierce@verizon.net) and William L. Siegmann
(Mathematical Sci., Rensselaer Polytechnic Inst., Troy, NY)
A theory for mud sediments presumes a card-house model, where the
platelets arrange themselves in a highly porous configuration; electrostatic
forces prevent face-to-face contacts. The primary type of contact is where
the edge of one platelet touches a face of another. Why such is not also prevented by electrostatic forces is because of van der Waals (vdW) forces
between the molecular structures within the two platelets. A quantitative
assessment is given of such forces, taking into account the atomic composition and crystalline structure of the platelets, proceeding from the London
theory of interaction between non-polar molecules. Double-integration over
both platelets leads to a quantitative and simple prediction for the potential
energy of vdW interaction as a function of the separation distance, edgefrom-face. At moderate nanoscale distances, the resulting force is attractive
and is much larger than the electrostatic repulsion force. But, at very close
(touching) distances, the intermolecular force becomes also repulsive, so
that there is a minimum potential energy, which is identified as the binding
energy. This finite binding energy, given a finite environmental temperature,
leads to some statistical mechanical theoretical implications. Among the
acoustical implications is a relaxation mechanism for the attenuation of
acoustic waves propagating through mud.
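The double-integration step can be written schematically. Assuming the standard London pairwise attraction between non-polar molecules, the edge-to-face interaction energy is the pairwise potential integrated over both platelet volumes (a generic Hamaker-type form, not necessarily the authors' exact expression):

```latex
% London attraction between two non-polar molecules separated by r:
%   U_{mol}(r) = -C_6 / r^6
% Integrating pairwise over the two platelets (molecular number densities
% n_1, n_2) gives the edge-from-face interaction energy at separation d:
U(d) \;=\; -\,C_6 \int_{V_1}\!\int_{V_2}
  \frac{n_1(\mathbf{r}_1)\, n_2(\mathbf{r}_2)}
       {\left|\mathbf{r}_1 - \mathbf{r}_2\right|^{6}}
  \, dV_2 \, dV_1
```

The short-range repulsion described in the abstract would add a steeper positive term, producing the potential minimum identified as the binding energy.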
3:45
2pUW11. Near bottom self-calibrated measurement of normal reflection coefficients by an integrated deep-towed camera/acoustical system.
Linus Chiu, Chau-Chang Wang, Hsin-Hung Chen (Inst. of Undersea Technol., National Sun Yat-sen Univ., No. 70, Lienhai Rd., Kaohsiung 80424,
Taiwan, linus@mail.nsysu.edu.tw), Andrea Y. Chang (Asia-Pacific Ocean
Res. Ctr., National Sun Yat-sen Univ., Kaohsiung, Taiwan), and Chung-Ray
Chu (Inst. of Undersea Technol., National Sun Yat-sen Univ., Kaohsiung,
Taiwan)
Normal-incidence echo data (bottom reflections) can provide acoustic reflectivity estimates used to predict sediment properties with seabed sediment models. The accuracy of the normal reflection coefficient measurement is therefore critical to the bottom inversion result. A deep-towed camera platform with an acoustical recording system, developed by the Institute of Undersea Technology, National Sun Yat-sen University, Taiwan, is capable of photographically surveying the seafloor at close range while acquiring sound data. Real-time data transfer allows the photographic (optical) and reflection (acoustic) measurements to be made at the same site simultaneously. The deep-towed camera was used near the bottom in several experiments in the southwestern sea off Taiwan in 2014 to acquire the LFM signals sent by a surface shipboard source as incident signals, as well as the seafloor reflections, in frequency bands within 4–6 kHz. By compensating for the varying altitude of the vehicle (propagation loss), the associated error can be eliminated, which constitutes a near-bottom self-calibrated measurement of the normal reflection coefficient. The collected reflection coefficients were used to invert for the sediment properties with the Effective Density Fluid model (EDFM), corroborated by the coring and camera images. [This work is sponsored by the Ministry of Science and Technology of Taiwan.]
In the limit as the roughness vanishes, the solution for the pressure scattered by a rough surface of infinite extent should reduce to the image
of modes throughout the calculation of the acoustic field. A verification of
the inner product is presented by comparing results from its implementation
in a simple mode model to that of a closed-form solution for the acoustic
wedge environment. A solution to the more general problem of variable bottom slope, which involves a decomposition of bathymetric profiles into a
sequence of wedge environments, will also be discussed. The overall goal of
this research is the development and implementation of a rigorous shallow
water acoustic propagation solution which executes in a time window to
support tactical applications.
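The inner-product step described above can be sketched numerically: with the local modes sampled on a common depth grid, the amplitudes at the next range step follow from a coupling matrix of mode overlap integrals. This is an illustrative sketch (orthonormal sine modes of an idealized isovelocity layer), not the authors' implementation:

```python
import numpy as np

def couple_step(a_old, phi_old, phi_new, z):
    """Project amplitudes across one range step via the inner product:
    a_new[n] = sum_m a_old[m] * integral phi_old[m](z) * phi_new[n](z) dz."""
    # C[m, n] = overlap integral of old mode m with new mode n (trapezoid rule)
    C = np.trapz(phi_old[:, None, :] * phi_new[None, :, :], x=z, axis=-1)
    return C.T @ a_old

# Illustration: identical water depth on both sides of the step, so the
# orthonormal modes match and the amplitudes pass through unchanged.
H, N, M = 100.0, 2001, 3
z = np.linspace(0.0, H, N)
modes = np.sqrt(2.0 / H) * np.sin(np.outer(np.arange(1, M + 1), np.pi * z / H))
a = np.array([1.0, 0.5, 0.25])
a_new = couple_step(a, modes, modes, z)
print(np.round(a_new, 3))  # ~ [1.0, 0.5, 0.25]
```

In a wedge decomposed into stair steps, `phi_new` would be the mode set of the next (shallower or deeper) segment, and the coupling matrix would no longer be the identity.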
4:00
4:15
2pUW12. Backscattering from an obstacle immersed in an oceanic
waveguide covered with ice. Natalie S. Grigorieva (St. Petersburg State
Electrotech. Univ., 5 Prof. Popova Str., St. Petersburg 197376, Russian Federation, nsgrig@natalie.spb.su), Daria A. Mikhaylova, and Dmitriy B.
Ostrovskiy (JSC “Concern Oceanpribor”, St. Petersburg, Russian
Federation)
2pUW13. Emergence of striation patterns in acoustic signals reflected
from dynamic surface waves. Youngmin Choo, Woojae Seong (Seoul
National Univ., 1, Gwanak-ro, Gwanak-gu, Seoul, Seoul 151 - 744, South
Korea, sks655@snu.ac.kr), and Heechun Song (Scripps Inst. of Oceanogr.,
Univ. of California, San Diego, CA)
The presentation describes the theory and implementation issues of modeling the backscattering from an obstacle immersed in a homogeneous, range-independent waveguide covered with ice. The obstacle is assumed to be a spherical rigid or fluid body. The bottom of the waveguide and the ice cover are fluid, attenuating half-spaces. The properties of the ice cover and the scatterer may coincide. To calculate the scattering coefficients of a sphere [R. M. Hackman et al., J. Acoust. Soc. Am. 84, 1813–1825 (1988)], the normal mode evaluation is applied. The number of normal modes forming the backscattered field is determined by a given directivity of the source. The obtained analytical expression for the backscattered field is applied to evaluate its dependence on source frequency, depth of the water layer, bottom and ice properties, and distance between the source and obstacle. Two cases are analyzed and compared: when the upper boundary of the waveguide is sound-soft and when the water layer is covered with ice. Computational results are obtained in a wide frequency range (8–12 kHz) for conditions of a shallow-water testing area. [Work supported by Russian Ministry of Educ. and Sci., Grant 02.G25.31.0058.]
A striation pattern can emerge in high-frequency acoustic signals interacting with dynamic surface waves. The striation pattern is analyzed using a
ray tracing algorithm for both a sinusoidal and a rough surface. With a
source or receiver close to the surface, it is found that part of the surface on
either side of the specular reflection point can be illuminated by rays, resulting in time-varying later arrivals in channel impulse response that form the
striation pattern. In contrast to wave focusing associated with surface wave
crests, the striation occurs due to reflection off convex sections around
troughs. Simulations with a sinusoidal surface show both upward
(advancing) and downward (retreating) striation patterns that depend on the
surface-wave traveling direction and the location of the illuminated area. In
addition, the striation length is determined mainly by the depth of the source
or receiver, whichever is closer in range to the illuminated region. Even
with a rough surface, the striation emerges in both directions. However,
broadband (7–13 kHz) simulations in shallow water indicate that the longer striation in one direction is likely to be pronounced against a quiet noise background, as observed in at-sea experimental data. The simulation is
extended for various surface wave spectra and it shows consistent patterns.
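A minimal stand-in for the ray computation described above evaluates two-leg travel times via each point of a moving sinusoidal surface; arrivals away from the specular point produce the time-varying later arrivals that form the striations. All parameters here are illustrative, not those of the study:

```python
import numpy as np

# Sketch: two-leg travel times for reflection off a moving sinusoidal surface.
# Parameters are illustrative placeholders, not the study's values.
c = 1500.0                   # sound speed, m/s
A, L, T = 1.0, 40.0, 5.0     # surface wave amplitude (m), length (m), period (s)
zs, zr, R = 2.0, 2.0, 200.0  # source depth, receiver depth, range (m)

def delay(x, t):
    """Travel time source -> surface point at horizontal position x -> receiver,
    evaluated at surface time t (depths measured below the mean surface)."""
    h = A * np.sin(2 * np.pi * (x / L - t / T))  # surface elevation (up positive)
    leg1 = np.hypot(x, zs + h)
    leg2 = np.hypot(R - x, zr + h)
    return (leg1 + leg2) / c

x = np.linspace(0.0, R, 2001)
for t in (0.0, T / 4, T / 2):  # snapshots across half a surface period
    tau = delay(x, t)
    print(f"t={t:.2f} s, earliest surface-bounce arrival: {tau.min()*1e3:.2f} ms")
```

Tracking the later (non-minimal) arrivals of `delay` over many time snapshots would trace out the advancing or retreating striations described in the abstract.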
TUESDAY EVENING, 28 OCTOBER 2014
8:00 P.M. TO 9:30 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Tuesday, Wednesday, and Thursday
evenings. On Tuesday the meetings will begin at 8:00 p.m., except for Engineering Acoustics which will hold its meeting starting at
4:30 p.m. On Thursday evening, the meetings will begin at 7:30 p.m.
These are working, collegial meetings. Much of the work of the Society is accomplished by actions that originate and are taken in these
meetings including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially invited to
attend these meetings and to participate actively in the discussion.
Committees meeting on Tuesday are as follows:
Engineering Acoustics (4:30 p.m.)       Santa Fe
Acoustical Oceanography                 Indiana G
Architectural Acoustics                 Marriott 7/8
Physical Acoustics                      Indiana C/D
Speech Communication                    Marriott 3/4
Structural Acoustics and Vibration      Marriott 1/2
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 7/8, 8:20 A.M. TO 11:45 A.M.
Session 3aAA
Architectural Acoustics and Noise: Design and Performance of Office Workspaces in High Performance
Buildings
Kenneth P. Roy, Chair
Armstrong World Industries, 2500 Columbia Ave., Lancaster, PA 17604
Chair’s Introduction—8:20
Invited Papers
8:25
3aAA1. Architecture and acoustics … form and function—What comes 1st? Kenneth Roy (Armstrong World Industries, 2500
Columbia Ave., Lancaster, PA 17604, kproy@armstrong.com)
When I first studied architecture, it was expected that “form fits function” was pretty much a mantra to design. But is that the case
today, or has it ever been where acoustics are concerned? Numerous post-occupancy studies of worker satisfaction with office IEQ indicate that things are not as they should be. And, as a matter of fact, high performance green buildings seem to fare much worse than normal office buildings when acoustic quality is considered. So what are we doing wrong? Maybe the Gensler Workplace Study and other related studies could shed light on what is wrong, and how we might think differently about office design. From an acoustician's viewpoint it's all about "acoustic comfort," meaning the right amount of intelligibility, privacy, and distraction for the specific work function.
Times change and work functions change, so maybe we should be looking for a new mantra … like “function drives form.” We may
also want to consider that office space may need to include a “collaboration zone” where teaming takes place, a “focus zone” where concentrated thought can take place, and a “privacy zone” where confidential discussions can take place. Each of these requires different
architecture and acoustic performance.
8:45
3aAA2. Acoustics in collaborative open office environments. John J. LoVerde, Samantha Rawlings, and David W. Dong (Veneklasen
Assoc., 1711 16th St., Santa Monica, CA 90404, jloverde@veneklasen.com)
Historically, acoustical design for open office environments has focused on creating workspaces that maximize speech privacy and minimize aural distractions. Hallmark elements of the traditional open office environment include barriers, sound-absorptive surfaces, and
consideration of workspace orientation, size, and background sound level. In recent years, development of “collaborative” office environments has been desired, which creates an open work setting, allowing immediate visual and aural communication between team
members. This results in reducing the size of workstations, lowering barriers, and reducing distance between occupants. Additionally,
group meeting areas have also become more open, with the popularization of “huddle zones” where small groups hold meetings in an
open space adjacent to workstations rather than within enclosed conference rooms. Historically, this type of office environment would
have poor acoustical function, with limited speech privacy between workstations and minimal attenuation of distracting noises, leading
to occupant complaints. However, these collaborative open office environments function satisfactorily and seem to be preferred by occupants and employers alike. This paper investigates the physical acoustical parameters of collaborative open office spaces.
9:05
3aAA3. Lessons learned in reconciling high performance building design with acoustical comfort. Valerie Smith and Ethan Salter
(Charles M. Salter Assoc., 130 Sutter St., Fl. 5, San Francisco, CA 94104, valerie.smith@cmsalter.com)
In today’s diverse workplace, “the one size fits all” approach to office design is becoming less prevalent. The indoor environmental
quality of the workplace is important to owners and occupants. Architects are developing innovative ways to encourage interaction and
collaboration while also increasing productivity. Many of these ideas are at odds with the traditional acoustical approaches used for
office buildings. Employees are asking for, and designers are incorporating, amenities such as kitchens, game rooms, and collaboration
spaces into offices. Architects and end users are becoming increasingly aware of acoustics in their environment. The U.S. General Services Administration (GSA) research documents, as well as those from other sources, discuss the importance of acoustics in the workplace. Private companies are also creating acoustical standards documents for use in the design of new facilities. As more buildings strive to achieve sustainable benchmarks (whether corporate, common green building rating systems such as LEED, or code-required), the understanding of the need for acoustical items (such as sound isolation, speech privacy, and background noise) also becomes critical. The challenge is how to reconcile sustainable goals with acoustical features. This presentation discusses several of the approaches that our firm
has recently used in today’s modern office environment.
Contributed Papers
9:25
9:40
3aAA4. I can see clearly now, but can I also hear clearly now too? Patricia Scanlon, Richard Ranft, and Stephen Lindsey (Longman Lindsey, 1410
Broadway, Ste. 508, New York, NY 10018, patricias@longmanlindsey.
com)
3aAA5. Acoustics in an office building. Sergio Beristain (IMA, ESIME,
IPN, P.O.Box 12-1022, Narvarte, Mexico City 03001, Mexico, sberista@
hotmail.com)
The trend in the corporate workplace has been away from closed-plan gypsum board offices to open-plan workstations and offices with glass fronts, sliding doors, and clerestories or glass fins in the wall between offices.
These designs are often a kit of parts supplied by manufacturers, who offer
minimal information on the sound transmission that will be achieved in
practice. This results in end users who are often misled into believing they
will enjoy a certain level of speech privacy in their offices. Our presentation
will discuss the journey from benchmarking the NIC rating of an existing office construction to reviewing the STC ratings for various glass front options and evaluating details including door frames, door seals, and the intersection between office demising walls and front partition systems. We will then
present how this information is transferred to the client, allowing them to
make an informed decision on the construction requirements for their new
space. We will highlight the difference in acoustical environment between
what one might expect from reading manufacturers' literature, and what is
typically achieved in practice.
New building techniques tend to make better use of materials, temperature, and energy, besides reducing costs. A building company had to plan the adaptation of a very old building in order to install private offices of different sizes on each floor, taking advantage of a large solid construction and reducing building time, total weight, etc., while at the same time fulfilling new requirements related to comfort, general quality, functionality, and economy. Among several other topics, sound and vibration had to be considered during the process, including noise control and speech privacy, because a combination of private rooms and open-plan offices was needed, as well as limiting environmental vibrations. Aspects such as the use of lightweight materials and the installation of many climate conditioning systems also had to be addressed, and these were dealt with throughout the project in the search for a long-lasting, low-maintenance construction.
9:55–10:10 Break
Invited Papers
10:10
3aAA6. A case history in architectural acoustics: Security, acoustics, the protection of personally identifiable information (PII),
and accessibility for the disabled. Donna A. Ellis (The Div. of Architecture and Eng., The Social Security Administration, 415 Riggs
Ave., Severna Park, MD 21146, Donna.a.ellis@ssa.gov)
This paper discusses the re-design of a field office to enhance the protection of Personally Identifiable Information (PII), physical security, and accessibility for the disabled at the Social Security Administration (SSA) field office in Roxbury, MA. The study and its results can be used at federal, civil, and private facilities where transaction-window-type public interviews occur. To protect the public
and its staff, the SSA has mandated heightened security requirements in all field offices. The increased security measures include: Installation of barrier walls to provide separation between the public and private zones; maximized lines of sight, and increased speech privacy for the protection of PII. This paper discusses the use of the Speech Transmission Index (STI) measurement method used to
determine the post construction intelligibility of speech through the transaction window, the acoustical design of the windows and their
surrounding area, how appropriate acoustic design helps safeguard personal and sensitive information so that it may be securely communicated verbally, as well as improved access for the disabled community, especially the hearing impaired.
10:30
3aAA7. High performance medical clinics: Evaluation of speech privacy in open-plan offices and examination rooms. Steve Pettyjohn (The Acoust. & Vib. Group, Inc., 5765 9th Ave., Sacramento, CA, spettyjohn@acousticsandvibration.com)
Speech privacy evaluations of open plan doctors’ offices and examination rooms were done at two clinics. One was in Las Vegas
and the other in El Dorado Hills. The buildings were designed to put doctors closer to patients and to save costs. ASTM E1130,
ASTM E336, and NRC guidelines were used to evaluate these spaces. For E1130, sound is produced at the source location with calibrated speakers, then measurements are made at receiver positions. The speaker faces the receiver. Only open plan furniture separated
the source from the receiver. The examination rooms used partial-height walls with a single layer of gypsum board on each face. Standard doors without seals were used. CAC 40 rated ceiling tiles were installed. The cubicle furniture included sound absorption and was 42
to 60 in. tall. The Privacy Index was quite low, ranging from 30 to 66%. The NIC rating of the walls without doors ranged from 38 to
39, giving PI ratings of 83 to 84%. With a door, the NIC rating was 30 to 31 with PI ratings of 72. These results do not meet the requirements of the Facility Guideline Institute or ANSI 12 Working Group 44.
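The Privacy Index cited above is defined in ASTM E1130 as PI = (1 − AI) × 100%, where AI is the Articulation Index computed from source-to-receiver signal-to-noise ratios in one-third-octave bands. A minimal sketch, with equal band weights standing in for the tabulated E1130/ANSI S3.5 weights:

```python
# One-third-octave bands used by the Articulation Index (200 Hz-5 kHz).
BANDS_HZ = [200, 250, 315, 400, 500, 630, 800, 1000, 1250,
            1600, 2000, 2500, 3150, 4000, 5000]

def articulation_index(snr_db, weights):
    """AI from band signal-to-noise ratios: each SNR is clipped to the
    [-12, +18] dB range, mapped to 0..1, weighted, and summed."""
    ai = 0.0
    for snr, w in zip(snr_db, weights):
        snr = max(-12.0, min(18.0, snr))
        ai += w * (snr + 12.0) / 30.0
    return ai

def privacy_index(snr_db, weights):
    """ASTM E1130 Privacy Index: PI = (1 - AI) * 100%."""
    return (1.0 - articulation_index(snr_db, weights)) * 100.0

# Equal weights stand in for the standard's tabulated weights.
weights = [1.0 / len(BANDS_HZ)] * len(BANDS_HZ)
print(privacy_index([-12.0] * len(BANDS_HZ), weights))  # -> 100.0 (complete privacy)
```

A PI in the 30–66% range, as measured in the open-plan areas above, corresponds to an AI of 0.34–0.70, i.e., readily intelligible speech at the receiver.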
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
10:50
3aAA8. Exploring the impacts of consistency in sound masking. Niklas Moeller and Ric Doedens (K.R. Moeller Assoc. Ltd., 1050
Pachino Court, Burlington, ON L7L 6B9, Canada, rdoedens@logison.com)
Electronic sound masking systems control the noise side of the signal-to-noise ratio in interior environments. Their effectiveness
relates directly to how consistently the specified masking curve is achieved. Current system specifications generally allow a relatively
wide range in performance, in large part reflecting expectations set by legacy technologies. This session presents a case study of sound
masking measurements and speech intelligibility calculations conducted in office spaces. These are used as a foundation to discuss the
impacts of local inconsistencies in the masking sound and to begin a discussion of appropriate performance requirements for masking
systems.
11:10
3aAA9. Evaluating the effect of prominent tones in noise on human task performance. Joonhee Lee and Lily M. Wang (Durham
School of Architectural Eng. and Construction, Univ. of Nebraska - Lincoln, 1110 S. 67th St., Omaha, NE 68182-0816, joonhee.lee@
huskers.unl.edu)
Current noise guidelines for the acoustic design of offices generally specify limits on loudness and sometimes spectral shape, but do
not typically address the presence of tones in noise as may be generated by building services equipment. Numerous previous studies
indicate that the presence of prominent tones significantly degrades indoor environmental quality. Results on how prominent tones in background noise affect human task performance, though, are less conclusive. This paper presents results from recent studies at Nebraska on how tones in noise may influence task performance in a controlled office-like environment. Participants were asked
to complete digit span tasks as a measure of working memory capacity, while exposed to assorted noise signals with tones at varying frequencies and tonality levels. Data on the percent correct and reaction time in which participants responded to the task are analyzed statistically. The results can provide guidance for setting limits on the tonality levels in offices and other spaces in which building users must
be task-productive.
11:30
3aAA10. Optimal design of multi-layer microperforated sound absorbers. Nicholas Kim, Yutong Xue, and J. S. Bolton (Ray W. Herrick Labs.,
School of Mech. Eng., Purdue Univ., 177 S. Russell St., West Lafayette, IN,
kim505@purdue.edu)
Microperforated polymer films can offer an effective solution when fiber-free sound absorption systems are desired. The acoustic performance of the film is determined by hole size and shape, by the surface
porosity, by the mass per unit area of the film, and by the depth of the backing air layer. Single sheets can provide good absorption over a one- to two-octave range, but if absorption over a broader range is desired, it is necessary to use multilayer treatments. Here the design of a multilayer sound
absorption system is described, where the film is considered to have a finite
mass per unit area and also to have conical perforations. It will be shown
that it is possible to design compact absorbers that yield good performance
over the whole speech interference range. In the course of the optimization
it has been found that there is a tradeoff between cone angle and surface porosity. The design of lightweight, multilayer functional absorbers will also
be described, and it will be shown, for example, that it is possible to design
systems that simultaneously possess good sound absorption and barrier
characteristics.
WEDNESDAY MORNING, 29 OCTOBER 2014
LINCOLN, 8:25 A.M. TO 11:45 A.M.
Session 3aAB
Animal Bioacoustics: Predator–Prey Relationships
Simone Baumann-Pickering, Cochair
Scripps Institution of Oceanography, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093
Ana Sirovic, Cochair
Scripps Institution of Oceanography, 9500 Gilman Drive MC 0205, La Jolla, CA 92093-0205
Chair’s Introduction—8:25
Invited Papers
8:30
3aAB1. Breaking the acoustical code of ants: The social parasite’s pathway. Francesca Barbero, Luca P. Casacci, Emilio Balletto,
and Simona Bonelli (Life Sci. and Systems Biology, Univ. of Turin, Via Accademia Albertina 13, Turin 10123, Italy, francesca.barbero@unito.it)
Ant colonies represent a well-protected and stable environment (temperature, humidity) where essential resources are stored (e.g.,
the ants themselves, their brood, stored food). To maintain their social organization, ants use a variety of communication channels, such
as the exchange of chemical and tactile signals, as well as caste specific stridulations (Casacci et al. 2013 Current Biology 23, 323–327).
By intercepting and manipulating their host’s communication code, about 10,000 arthropod species live as parasites and exploit ant
nests. Here, we review results of our studies on Maculinea butterflies, a group of social parasites which mimic the stridulations produced
by their host ants to promote (i) their retrieval into the colony (adoption: Sala et al. 2014, PLoS ONE 9(4), e94341), (ii) their survival
inside the nest/brood chambers (integration: Barbero et al. 2009 J. Exp. Biol. 218, 4084–4090), or (iii) their achievement of the highest
possible social status within the colony’s hierarchy (full integration: Barbero et al. 2009, Science 323, 782–785). We strongly believe
that the study of acoustic communication in ants will bring about significant advances in our understanding of the complex mechanisms
underlying the origin, evolution, and stabilization of many host–parasite relationships.
8:50
3aAB2. How nestling birds acoustically monitor parents and predators. Andrew G. Horn and Martha L. Leonard (Biology, Dalhousie Univ., Life Sci. Ctr., 1355 Oxford St., PO Box 15000, Halifax, NS B3H 4R2, Canada, aghorn@dal.ca)
The likelihood that nestling songbirds survive until leaving the nest depends largely on how well they are fed by parents and how well
they escape detection by predators. Both factors in turn are affected by the nestling’s begging display, a combination of gaping, posturing,
and calling that stimulates feedings from parents but can also attract nest predators. If nestlings are to be fed without being eaten themselves, they must beg readily to parents but avoid begging when predators are near the nest. Here we describe experiments to determine
how nestling tree swallows, Tachycineta bicolor, use acoustic cues to detect the arrival of parents with food and to monitor the presence of
predators, in order to beg optimally relative to their need for food. We also discuss how their assessments vary in relation to two constraints:
their own poor perceptual abilities and ambient background noise. Together with similar work on other species, our research suggests that
acoustically conveyed information on predation risk has been an important selective force on parent-offspring communication. More generally, how birds acoustically monitor their environment to avoid predation is an increasingly productive area of research.
9:10
3aAB3. Acoustic preferences of frog-biting midges in response to intra- and inter-specific signal variation. Ximena Bernal (Dept.
of Biological Sci., Purdue Univ., 915 W. State St., West Lafayette, IN 47906, xbernal@purdue.edu)
Eavesdropping predators and parasites intercept mating signals emitted by their prey and host gaining information that increases the
effectiveness of their attack. This kind of interspecific eavesdropping is widespread across taxonomic groups and sensory modalities. In
this study, sound traps and a sound imaging device system were used to investigate the acoustic preferences of frog-biting midges, Corethrella spp. (Corethrellidae). In these midges, females use the advertisement call produced by male frogs to localize them and obtain
a blood meal. As in mosquitoes (Culicidae), a closely related family, female midges require blood from their host for egg production.
The acoustic preferences of the midges were examined in the wild in response to intra- and interspecific call variation. When responding
to call variation in túngara frogs (Engystomops pustulosus), frogs producing vocalizations with higher call complexity and call rate were
preferentially attacked by the midges. Túngara frog calls were also preferred by frog-biting midges over the calls produced by a sympatric frog of similar size, the hourglass frog (Dendropsophus ebraccatus). The role of call site selection in multi-species aggregations is
explored in relation to the responses of frog-biting midges. In addition, the use of acoustic traps and sound imaging devices to investigate
eavesdropper–victim interactions is discussed.
Contributed Papers
9:30
3aAB4. Foraging among acoustic clutter and competition: Vocal behavior of paired big brown bats. Michaela Warnecke (Psychol. and Brain Sci., The Johns Hopkins Univ., 3400 N Charles St., Baltimore, MD 21218, michaela.warnecke@jhu.edu), Chen Chiu, Wei Xian (Psychol. and Brain Sci., The Johns Hopkins Univ., Baltimore, MD), Clement Cechetto (AGROSUP, Inst. Nationale Superieur des Sci. Agronomique, Dijon, France), and Cynthia F. Moss (Psychol. and Brain Sci., The Johns Hopkins Univ., Baltimore, MD)
In their natural environment, big brown bats forage for small insects in open spaces, as well as in the presence of acoustic clutter. While searching and hunting for prey, these bats experience sonar interference not only from densely cluttered environments, but also from the calls of conspecifics foraging nearby. Previous work has shown that when two bats fly in a relatively open environment, one of them may go silent for extended periods of time (Chiu et al., 2008), which may serve to minimize such sonar interference between conspecifics. Additionally, big brown bats have been shown to adjust the frequency characteristics of their vocalizations to avoid acoustic interference from conspecifics (Chiu et al., 2009). It remains an open question, however, in what way environmental clutter and the presence of conspecifics influence the bat’s call behavior. By recording multichannel audio and video data of bats engaged in insect capture in an open and a cluttered space, we quantified the bats’ vocal behavior. Bats were flown individually and in pairs in an open and a cluttered room, and the results of this study shed light on the strategies animals employ to negotiate a complex and dynamic environment.
9:45
3aAB5. Sensory escape from a predator–prey arms race: Low-amplitude biosonar beats moth hearing. Holger R. Goerlitz (Acoust. and Functional Ecology, Max Planck Inst. for Ornithology, Eberhard-Gwinner-Str., Seewiesen 82319, Germany, hgoerlitz@orn.mpg.de), Hannah M. ter Hofstede (Biological Sci., Dartmouth College, Hanover, NH), Matt Zeale, Gareth Jones, and Marc W. Holderied (School of Biological Sci., Univ. of Bristol, Bristol, United Kingdom)
Ultrasound-sensitive ears evolved in many nocturnal insects, including some moths, to detect bat echolocation calls and evade capture. Although there is evidence that some bats emit echolocation calls that are inconspicuous to eared moths, it is difficult to determine whether this was an adaptation to moth hearing or originally evolved for a different purpose. Here we present the first example of an echolocation counterstrategy to overcome prey hearing at the cost of reduced detection distance, providing an example of a predator outcompeting its prey despite the life–dinner principle. Aerial-hawking bats generally emit high-amplitude echolocation calls to maximize detection range. Using comparative acoustic flight-path tracking of free-flying bats, we show that the barbastelle, Barbastella barbastellus, emits calls that are 10 to 100 times lower in amplitude than those of other aerial-hawking bats. Model calculations demonstrate that only bats emitting such low-amplitude calls hear moth echoes before their calls are conspicuous to moths. Using moth neurophysiology in the field and fecal DNA analysis, we confirm that the barbastelle remains undetected by moths until close and preys mainly on eared moths. This adaptive stealth echolocation allows the barbastelle to access food resources that are difficult to catch for high-intensity bats.
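The model calculation this abstract describes can be illustrated with a toy sonar-equation sketch: the moth overhears the call after one-way spreading and absorption losses, while the bat must detect the echo after two-way losses plus the moth's target strength. Every number below (source levels, hearing thresholds, target strength, absorption coefficient) is an illustrative stand-in, not a value from the study:

```python
import math

def one_way_level(SL, r, alpha, r0=0.1):
    """Received level (dB SPL) of a call at range r (m); SL referenced to r0,
    with spherical spreading plus linear atmospheric absorption alpha (dB/m)."""
    return SL - 20.0 * math.log10(r / r0) - alpha * (r - r0)

def echo_level(SL, r, alpha, TS, r0=0.1):
    """Level of the echo returning to the bat from a target of strength TS (dB)."""
    return SL - 2.0 * (20.0 * math.log10(r / r0) + alpha * (r - r0)) + TS

def max_range(level_fn, threshold, lo=0.11, hi=100.0, iters=60):
    """Bisection for the largest range at which level_fn(r) >= threshold."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if level_fn(mid) >= threshold:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative stand-in values: absorption ~1 dB/m at bat call frequencies,
# moth hearing threshold 65 dB SPL, bat echo-detection threshold 0 dB SPL,
# moth target strength -30 dB.
alpha, moth_thr, bat_thr, TS = 1.0, 65.0, 0.0, -30.0

for name, SL in [("loud aerial hawker (SL 121 dB)", 121.0),
                 ("quiet barbastelle-like call (SL 94 dB)", 94.0)]:
    r_moth = max_range(lambda r: one_way_level(SL, r, alpha), moth_thr)
    r_bat = max_range(lambda r: echo_level(SL, r, alpha, TS), bat_thr)
    print(f"{name}: moth hears bat at {r_moth:.1f} m, bat hears echo at {r_bat:.1f} m")
```

With these toy numbers the loud bat is overheard by the moth well before its own echo is detectable, whereas the quieter call reverses the order of detection at the price of a much shorter working range — the trade-off the abstract describes.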
10:00–10:20 Break
Invited Papers
10:20
3aAB6. Cues, creaks, and decoys: Using underwater acoustics to study sperm whale interactions with the Alaskan black cod
longline fishery. Aaron Thode (SIO, UCSD, 9500 Gilman Dr, MC 0238, La Jolla, CA 92093-0238, athode@ucsd.edu), Janice Straley
(Univ. of Alaska, Southeast, Sitka, AK), Lauren Wild (Sitka Sound Sci. Ctr., Sitka, AK), Jit Sarkar (SIO, UCSD, La Jolla, CA), Victoria
O’Connell (Sitka Sound Sci. Ctr., Sitka, AK), and Dan Falvey (Alaska Longline Fisherman’s Assoc., Sitka, AK)
For decades off SE Alaska, sperm whales have located longlining fishing vessels and removed, or “depredated,” black cod from the
hauls. In 2004, the Southeast Alaska Sperm Whale Avoidance Project (SEASWAP) began deploying passive acoustic recorders on longline fishing gear in order to identify acoustic cues that may alert whales to fishing activity. It was found that when hauling, longlining
vessels generate distinctive cavitation sounds, which served to attract whales to the haul site. The combined use of underwater recorders
and video cameras also confirmed that sperm whales generated “creak/buzz” sounds while depredating, even under good visual conditions. By deploying recorders with federal sablefish surveys over two years, a high correlation was found between sperm whale creak
rate detections and visual evidence for depredation. Thus passive acoustics is now being used as a low-cost, remote sensing method to
quantify depredation activity in the presence and absence of various deterrents. Two recent developments will be discussed in detail: the
development and field testing of acoustic “decoys” as a potential means of attracting animals away from locations of actual fishing activity, and the use of “TadPro” cameras to provide combined visual and acoustic observations of longline deployments. [Work supported
by NPRB, NOAA, and BBC.]
10:40
3aAB7. Follow the food: Effects of fish and zooplankton on the behavioral ecology of baleen whales. Joseph Warren (Stony Brook
Univ., 239 Montauk Hwy, Southampton, NY 11968, joe.warren@stonybrook.edu), Susan E. Parks (Dept. of Biology, Syracuse Univ.,
Syracuse, NY), Heidi Pearson (Univ. of Alaska, Southeast, Juneau, AK), and Kylie Owen (Univ. of Queensland, Gatton, QLD,
Australia)
Active acoustics were used to collect information on the type, distribution, and abundance of baleen whale prey species such as zooplankton and fish at fine spatial (sub-meter) and temporal (sub-minute) scales. Unlike other prey measurement methods, scientific
echosounder surveys provide prey data at a resolution similar to what a predator would detect in order to efficiently forage. Data from
several studies around the world show that differences in prey type or distribution result in distinctly different baleen whale foraging
behaviors. Humpback whales in coastal waters of Australia altered their foraging pattern depending on the presence and abundance of
baitfish or krill. In Southeast Alaska, humpback whales foraged cooperatively or independently depending on prey type and abundance.
Humpback whales in the Northwest Atlantic with multiple prey species available foraged on an energetically costly (and presumably
rewarding) species. The vertical and horizontal movements of North Atlantic right whales in Cape Cod Bay were strongly correlated
with very dense aggregations of copepods. In all of these cases, active acoustics were used to estimate numerical densities of the prey,
which provides quantitative information about the energy resource available to foraging animals.
Contributed Papers
11:00
3aAB8. Association of low oxygen waters with the depths of acoustic
scattering layers in the Gulf of California and implications for the success of Humboldt squid (Dosidicus gigas). David Cade (BioSci., Stanford
Univ., 120 Oceanview Boulevard, Pacific Grove, CA 93950, davecade@
stanford.edu) and Kelly J. Benoit-Bird (CEOAS, Oregon State Univ., Corvallis, OR)
The ecology of the Gulf of California has undergone dramatic changes
over the past century as Humboldt squid (Dosidicus gigas) have become a
dominant predator in the region. The vertical overlap between acoustic scattering layers, which consist of small pelagic organisms that make up the
bulk of D. gigas prey, and regions of severe hypoxia have led to a hypothesis linking the shoaling of oxygen minimum zones over the past few decades
to compression of acoustic scattering layers, which in turn would promote
the success of D. gigas. We tested this hypothesis by looking for links
between specific oxygen values and acoustic scattering layer boundaries.
We applied an automatic layer detection algorithm to shipboard
echosounder data from four cruises in the Gulf of California. We then used
CTD data and a combination of logistic modeling, contingency tables, and
linear correlations with parameter isolines to determine which parameters
had the largest effects on scattering layer boundaries. Although results were
inconsistent, we found scattering layer depths to be largely independent of
the oxygen content in the water column, and the recent success of D. gigas
in the Gulf of California is therefore not likely to be attributable to the
effects of shoaling oxygen minimum zones on acoustic scattering layers.
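One of the simpler tests in the analysis described above — whether a scattering-layer boundary tracks the depth of a particular dissolved-oxygen isoline across stations — can be sketched as a per-station correlation. Everything here (the station count, the depth series, and their independence) is invented for illustration:

```python
import random

random.seed(1)
n = 40  # hypothetical number of co-located CTD/echosounder stations

# Invented per-station depths (m) of a fixed O2 isoline and of the detected
# scattering-layer lower boundary; drawn independently here, mimicking the
# abstract's finding that layer depth was largely independent of oxygen.
isoline_depth = [150.0 + random.gauss(0, 25) for _ in range(n)]
layer_depth = [180.0 + random.gauss(0, 30) for _ in range(n)]

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

print(f"r = {pearson_r(isoline_depth, layer_depth):.2f}")
```

A strong, consistent correlation across cruises would have supported the layer-compression hypothesis; the abstract reports the opposite.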
11:15
3aAB9. Understanding the relationship between ice, primary producers,
and consumers in the Bering Sea. Jennifer L. Miksis-Olds (Appl. Res.
Lab, Penn State, PO Box 30, Mailstop 3510D, State College, PA 16804,
jlm91@psu.edu) and Stauffer A. Stauffer (Office of Res. and Development,
US Environ. Protection Agency, Washington, DC)
Technology has progressed to the level of allowing for investigations of
trophic level interactions over time scales of months to years which were
previously intractable. A combination of active and passive acoustic technology has been integrated into sub-surface moorings on the Eastern Bering
Sea shelf, and seasonal transition measurements were examined to better
understand how interannual variability of hydrographic conditions, phytoplankton biomass, and acoustically derived consumer abundance and community structure are related. Ocean conditions were significantly different in
2012 compared to relatively similar conditions in 2009, 2010, and 2011.
Differences were largely associated with variations in sea ice extent, thickness, retreat timing, and water column stratification. There was a high
degree of variability in the relationships between different classes of consumers and hydrographic condition, and evidence for intra-consumer interactions and trade-offs between different size classes was apparent.
Phytoplankton blooms in each year stimulated different components of the
consumer population. Acoustic technology now provides the opportunity to
explore the ecosystem dynamics in a remote, ice-covered region that was
previously limited to ship-board measurements during ice-free periods. The
new knowledge we are gaining from remote, long-term observations is
resulting in a re-examination of previously proposed ecosystem theories
related to the Bering Sea.
11:30
3aAB10. Temporal and spatial patterns of marine soundscape in a
coastal shallow water environment. Shane Guan (Office of Protected
Resources, National Marine Fisheries Service, 1315 East-West Hwy.,
SSMC-3, Ste. 13728, Silver Spring, MD 20910, shane.guan@noaa.gov),
Tzu-Hao Lin (Inst. of Ecology & Evolutionary Biology, National Taiwan
Univ., Taipei, Taiwan), Joseph F. Vignola (Dept. of Mech. Eng., The Catholic Univ. of America, Washington, DC), Lien-Siang Chou (Inst. of Ecology
& Evolutionary Biology, National Taiwan Univ., Taipei, Taiwan), and John
A. Judge (Dept. of Mech. Eng., The Catholic Univ. of America, Washington, DC)
Underwater acoustic recordings were made at two coastal shallow water
locations, Yunlin (YL) and Waishanding (WS), off Taiwan between June
and December 2012. The purpose of the study was to establish soundscape
baselines and characterize the acoustic habitat of the critically endangered
Eastern Taiwan Strait Chinese white dolphin by investigating: (1) major
contributing sources that dominate the soundscape, (2) temporal, spatial,
and spectral patterns of the soundscape, and (3) correlations of known sources and their potential effects on dolphins. Results show that choruses from
croaker fish (family Sciaenidae) were the dominant sound sources in the 1.2–2.4 kHz frequency band at both locations at night, and that noise from container ships in the 150–300 Hz frequency band defines the relatively higher
broadband sound levels at YL. In addition, extreme temporal variation in
the 150–300 Hz frequency band was observed at WS, which was shown to
be linked to the tidal cycle and current velocity. Furthermore, croaker choruses are found to be most intense around the time of high tide at night, but
not so around the time of low tide. These results illustrate interrelationships
among different biotic, abiotic, and anthropogenic environmental elements
that shape the unique fine-scale soundscape in a coastal environment.
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA E, 8:00 A.M. TO 11:55 A.M.
Session 3aAO
Acoustical Oceanography, Underwater Acoustics, and Education in Acoustics: Education in Acoustical
Oceanography and Underwater Acoustics
Andone C. Lavery, Cochair
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, 98 Water Street, MS 11, Bigelow 211,
Woods Hole, MA 02536
Preston S. Wilson, Cochair
Mech. Eng., Univ. of Texas at Austin, 1 University Station, C2200, Austin, TX 78712-0292
Arthur B. Baggeroer, Cochair
Mechanical and Electrical Engineering, Massachusetts Inst. of Technology, Room 5-206, MIT, Cambridge, MA 02139
Chair’s Introduction—8:00
Invited Papers
8:05
3aAO1. Ocean acoustics education—A perspective from 1970 to the present. Arthur B. Baggeroer (Mech. and Elec. Eng., Massachusetts Inst. of Technol., Rm. 5-206, MIT, Cambridge, MA 02139, abb@boreas.mit.edu)
A very senior ocean acoustician is credited with a quote to the effect that “one does not start in ocean acoustics, but rather ends up in
it.” This may well summarize the issues confronting education in ocean acoustics. Acoustics was once part of the curriculum in physics
departments, whereas now it is spread across many departments. Acoustics, and perhaps ocean acoustics, is most often found in mechanical or ocean engineering departments, but seldom in physics. Almost all our pioneers from the WWII era were educated in physics, and
some more recently in engineering departments. Yet only a few places have maintained in-depth curricula in ocean acoustics; most education
was done by one-on-one mentoring. Now the number of students is diminishing; whether this is because of perceptions of employment opportunities or the number of available assistantships is uncertain. ONR is the major driver in supporting graduate students in ocean acoustics.
The concern about this is hardly new: twenty-plus years ago it was codified in the so-called “Lackie Report,” which established ocean
acoustics as “Navy unique,” giving it priority as a National Naval Responsibility (NNR). With fewer students enrolled in ocean acoustics,
university administrators are balking at sponsoring faculty slots, so very significant issues are arising for education in
ocean acoustics. Perhaps reverting to the original model of fundamental training in a related discipline followed by on-the-job training
may be the only option for the future.
8:25
3aAO2. Joint graduate education program: Massachusetts Institute of Technology and Woods Hole Oceanographic Institution.
Timothy K. Stanton (Dept. Appl. Ocean. Phys. & Eng., Woods Hole Oceanographic Inst., Woods Hole, MA 02543, tstanton@whoi.edu)
The 40 + year history of this program will be presented, with a focus on the underwater acoustics and signal processing component.
Trends in enrollment will be summarized.
8:35
3aAO3. Graduate studies in underwater acoustics at the University of Washington. Peter H. Dahl, Robert I. Odom, and Jeffrey A.
Simmen (Appl. Phys. Lab. and Mech. Eng. Dept., Univ. of Washington, Mech. Eng., 1013 NE 40th St., Seattle, WA 98105, dahl@apl.
washington.edu)
The University of Washington through its Departments of Mechanical and Electrical Engineering (College of Engineering), Department of Earth and Space Sciences, and School of Oceanography (College of the Environment), and by way of its Applied Physics Laboratory, which links all four of these academic units, offers a diverse graduate education experience in underwater acoustics. A summary
is provided of the research infrastructure, primarily made available through Applied Physics Laboratory, which allows for ocean going
and arctic field opportunities, and course options offered through the four units that provide the multi-disciplinary background essential
for graduate training in the field of underwater acoustics. Students in underwater acoustics can also mingle in or extend their interests
into medical acoustics research. Degrees granted include both the M.S. and Ph.D.
8:45
3aAO4. Acoustical Oceanography and Underwater Acoustics; their role in the Pennsylvania State University Graduate Program
in Acoustics. David Bradley (Penn State Univ., PO Box 30, State College, PA 16870, dlb25@psu.edu) and Victor Sparrow (Penn State
Univ., University Park, PA)
The Pennsylvania State University Graduate Program in Acoustics has a long and successful history in Acoustics Education. A brief
history together with the current status of the program will be discussed. An important aspect of the program has been the strong role of
the Applied Research Laboratory, both in support of the program and of the graduate students enrolled. The presentation includes
details of course content, its variability to fit student career goals, and program structure, including resident and distance education opportunities. The future of the program at Penn State will also be addressed.
8:55
3aAO5. Ocean acoustics at the University of Victoria. Ross Chapman (School of Earth and Ocean Sci., Univ. of Victoria, 3800 Finnerty Rd., Victoria, BC V8P5C2, Canada, chapman@uvic.ca)
This paper describes the academic program in Ocean Acoustics and Acoustical Oceanography at the University of Victoria in Canada. The program was established when a Research Chair in Ocean Acoustics consisting of two faculty members was funded in 1995 by
the Canadian Natural Sciences and Engineering Research Council (NSERC). The Research Chair graduate program offered two courses
in Ocean Acoustics, and courses in Time Series Analysis and Inverse Methods. Funding for students was obtained entirely through partnership research programs with Canadian marine industry, the Department of National Defence in Canada and the Office of Naval
Research. The program has graduated around 30 M.Sc. and Ph.D. students to date, about half of whom were Canadians.
Notably, all the students obtained positions in marine industry, government, or academia after their degrees. The undergraduate program
consisted of one course in Acoustical Oceanography at the senior level (3rd year) that was designed to appeal to students in physics,
biology, and geology. The course attracted about 30 students each time, primarily from biology. The paper concludes with perspectives
on difficulties in operating an academic program with a low critical mass of faculty and in isolation from colleagues in the research
field.
9:05
3aAO6. Ocean acoustics away from the ocean. David R. Dowling (Mech. Eng., Univ. of Michigan, 1231 Beal Ave., Ann Arbor, MI
48109-2133, drd@umich.edu)
Acoustics represents a small portion of the overall educational effort in engineering and science, and ocean acoustics is one of many
topic areas in the overall realm of acoustics. Thus, maintaining teaching and research efforts involving ocean acoustics is challenging
but not impossible, even at a university that is more than 500 miles from the ocean. This presentation describes the author’s two decades
of experience in ocean acoustics education and research. Success is possible by first attracting students to acoustics, and then helping
them wade into a research topic in ocean acoustics that overlaps with their curiosity, ambition, or both. The first step occurs almost naturally since college students’ experience with their ears and voice provides intuition and motivation that allows them to readily grasp
acoustic concepts and to persevere through mathematical courses. The second step is typically no more challenging since ocean acoustics is a leading and fascinating research area that provides stable careers. Plus, there are even some advantages to studying ocean acoustics away from the ocean. For example, matched-field processing, a common ocean acoustic remote sensing technique, appears almost
magical to manufacturing or automotive engineers when applied to assembly line and safety problems involving airborne sound.
9:15
3aAO7. Office of Naval Research special research awards in ocean acoustics. Robert H. Headrick (Code 32, Office of Naval Res.,
875 North Randolph St., Arlington, VA 22203, bob.headrick@navy.mil)
The Ocean Acoustics Team of the Office of Naval Research manages the Special Research Awards that support graduate traineeship,
postdoctoral fellowship, and entry-level faculty awards in ocean acoustics. The graduate traineeship awards provide for study and
research leading to a doctoral degree and are given to individuals who have demonstrated a special aptitude and desire for advanced
training in ocean acoustics or the related disciplines of undersea signal processing, marine structural acoustics and transducer materials
science. The postdoctoral fellowship and entry-level faculty awards are similarly targeted. These programs were started as a component
of the National Naval Responsibility in Ocean Acoustics to help ensure a stable pipeline of talented individuals would be available to
support the needs of the Navy in the future. They represent only a fraction of the students, postdocs, and early-career faculty researchers who
are actively involved in the basic research supported by the Ocean Acoustics Program. A better understanding of the true size of the pipeline, and of the capacity of the broader acoustics-related research and development community to absorb its output, is needed to maintain
a balance in priorities for the overall Ocean Acoustics Program.
9:25
3aAO8. Underwater acoustics education at the University of Texas at Austin. Marcia J. Isakson (Appl. Res. Labs., The Univ. of
Texas at Austin, 10000 Burnet Rd., Austin, TX 78713, misakson@arlut.utexas.edu), Mark F. Hamilton (Mech. Eng. Dept. and Appl.
Res. Labs., The Univ. of Texas at Austin, Austin, TX), Clark S. Penrod, Frederick M. Pestorius (Appl. Res. Labs., The Univ. of Texas at
Austin, Austin, TX), and Preston S. Wilson (Mech. Eng. Dept. and Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX)
The University of Texas at Austin has supported education and research in acoustics since the 1930s. The Cockrell School of Engineering currently offers a wide range of graduate courses and two undergraduate courses in acoustics, not counting the many courses in
hearing, speech, seismology, and other areas of acoustics at the university. An important adjunct to the academic program in acoustics
has been the Applied Research Laboratories (ARL). Spun off in 1945 from the WW II Harvard Underwater Sound Laboratory (1941–
1949) and founded as the Defense Research Laboratory, ARL is one of five University Affiliated Research Centers formally recognized
by the US Navy for their prominence in underwater acoustics research and development. ARL is an integral part of UT Austin, and this
symbiotic combination of graduate and undergraduate courses, and laboratory and field work, provides one of the leading underwater
acoustics education programs in the nation. In this talk, the underwater acoustics education program will be described with special emphasis on the underwater acoustics course and its place in the larger acoustics program. Statistics on education, funding, and placement
of graduate students in the program will also be presented.
9:35
3aAO9. Acoustical Oceanography and Underwater Acoustics Graduate Programs at the Scripps Institution of Oceanography of
the University of California, San Diego. William A. Kuperman (Scripps Inst. of Oceanogr., Univ. of California, San Diego, Marine
Physical Lab., La Jolla, CA 92093-0238, wkuperman@ucsd.edu)
The Scripps Institution of Oceanography (SIO) of the University of California, San Diego (UCSD), has graduate programs in all
areas of acoustics that intersect oceanography. These programs are associated mostly with internal SIO divisions that include the Marine
Physical Laboratory, Physical Oceanography, Geophysics, and Biological Oceanography as well as SIO opportunities for other UCSD
graduate students in the science and engineering departments. Course work includes basic wave physics, graduate mathematics, acoustics and signal processing, oceanography and biology, digital signal processing, and geophysics/seismology. Much of the emphasis at
SIO is on at-sea experience. Recent examples of thesis research have been in marine mammal acoustics, ocean tomography and seismic/acoustic inversion methodology, acoustical signal processing, ocean ambient noise inversion, ocean/acoustic exploration, and acoustic sensing of the air-sea interaction. An overview of the SIO/UCSD graduate program is presented.
9:45
3aAO10. Underwater acoustics education at Portland State University. Martin Siderius and Lisa M. Zurk (Elec. and Comput. Eng.,
Portland State Univ., 1900 SW 4th Ave., Portland, OR 97201, siderius@pdx.edu)
The Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab) is in the Electrical and Computer Engineering
Department at Portland State University (PSU) in Portland, Oregon. The NEAR-Lab was founded in 2005 and is co-directed by Lisa M.
Zurk and Martin Siderius. A primary interest is underwater acoustics, and students at undergraduate and graduate levels (occasionally
also high school students) regularly participate in research. This is synergistic with underwater acoustics education at PSU, which
includes a course curriculum that provides opportunities for theoretical and experimental research and multiple course offerings at both
the undergraduate and graduate levels. The research generally involves modeling and analysis of acoustic propagation and scattering,
acoustic signal processing, algorithm development, environmental acoustics, and bioacoustics. The lab maintains a suite of equipment
for experimentation including hydrophone arrays, sound projectors, a Webb Slocum glider, an electronics lab, and an acoustic tank.
Large-scale experiments that include student participation have been routinely conducted through successful collaborations with the
APL-University of Washington, the NATO Centre for Maritime Research and Experimentation, and the University of Hawaii. In this talk,
the state of the PSU underwater acoustics program will be described along with the courses offered, research activities, experimental
program, collaborations, and student success.
9:55
3aAO11. Underwater acoustics education in Harbin Engineering University. Desen Yang, Xiukun Li, and Yang Li (Acoust. Sci.
and Technol. Lab., Harbin Eng. Univ., Harbin, Heilongjiang Province, China, dsyang@hrbeu.edu.cn)
The College of Underwater Acoustic Engineering at Harbin Engineering University is the earliest institution engaged in underwater
acoustics education among Chinese universities, and it offers a complete range of higher-education levels and subject directions. There are
124 teachers in the college engaged in underwater acoustics research, including 30 professors and 36 associate professors. The college's
developments in underwater acoustic transducer technology, underwater positioning and navigation, underwater target
detection, underwater acoustic communication, multi-beam echo sounding, and high-resolution imaging
sonar, along with new theory and technology of underwater acoustics, are at the leading level in China. Every year, the college attracts
more than 200 excellent students whose entrance examination scores are 80 points higher than the key admission cutoff line. There are three education program levels in this specialty (undergraduate, graduate, and Ph.D.), and students may study underwater acoustics within any of the three programs; in addition, the college has special education programs for international students. Graduates
are employed by underwater acoustics institutes, electronics institutes, communication companies, and IT enterprises. In this paper,
the underwater acoustics education programs, curriculum systems, and teaching content of the acoustics courses will be
described.
10:05–10:20 Break
Contributed Papers
10:20
3aAO12. Graduate education in underwater acoustics, transduction,
and signal processing at UMass Dartmouth. David A. Brown (Elec. and
Comput. Eng., Univ. of Massachusetts Dartmouth, 151 Martine St., Fall
River, MA 02723, dbAcoustics@cox.net), John Buck, Karen Payton, and
Paul Gendron (Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth,
Dartmouth, MA)
The University of Massachusetts Dartmouth established a Ph.D. degree
in Electrical Engineering with a specialization in Marine Acoustics in 1996,
building on the strength of the existing M.S. program. Current enrollment in
these programs includes 26 M.S. students and 16 Ph.D. students. The program offers courses and research opportunities in the areas of underwater
acoustics, transduction, and signal processing. Courses include the Fundamentals of Acoustics, Random Signals, Underwater Acoustics, Introduction
to Transducers, Electroacoustic Transduction, Digital Signal Processing,
Detection Theory, and Estimation Theory. The university’s indoor underwater acoustic test and calibration facility is one of the largest in academia
and supports undergraduate and graduate thesis work and sponsored research. The
university also owns three Iver-2 fully autonomous underwater vehicles.
The graduate program capitalizes on collaborations with many marine technology companies resident at the university’s Advanced Technology and
Manufacturing Center (ATMC) and the nearby Naval Undersea Warfare
Center in Newport, RI. The presentation will highlight recent theses and dissertations, course offerings, and industry and government collaborations
that support underwater acoustics research.
10:30
3aAO13. Ocean acoustics at the University of Rhode Island. Gopu R.
Potty and James H. Miller (Dept. of Ocean Eng., Univ. of Rhode Island, 115
Middleton Bldg., Narragansett, RI 02882, potty@egr.uri.edu)
The undergraduate and graduate program in Ocean Engineering at the
University of Rhode Island is one of the oldest such programs in the United
States. This program offers Bachelor's, Master's (thesis and non-thesis
options), and Ph.D. degrees. At the undergraduate level, students are
exposed to ocean acoustics through a number of required and elective
courses, laboratory and field work, and capstone projects. Examples of student projects will be presented. At the graduate level, students can specialize
in several areas including geoacoustic inversion, propagation modeling, marine mammal acoustics, ocean acoustic instrumentation, transducers, etc. A
historical review of the evolution of ocean acoustics education in the department will be presented. This will include examples of some of the research
carried out by different faculty and students over the years, enrollment
trends, collaborations, external funding, etc. Many graduates from the program hold faculty positions at a number of universities in the US and
abroad. In addition, graduates from the ocean acoustics program at URI are
key staff at many companies and organizations. A number of companies
have spun off from the program in the areas of forward-looking sonar, sub-bottom
profiling, and other applications. The opportunities and challenges facing
the program will be summarized.
10:40
3aAO14. An underwater acoustics program far from the ocean: The Georgia Tech case. Karim G. Sabra (Mech. Eng., Georgia Inst. of Technol., 771
Ferst Dr., NW, Atlanta, GA 30332-0405, karim.sabra@me.gatech.edu)
The underwater acoustics education program at the Georgia Institute of
Technology (Georgia Tech) is run by members of the Acoustics and Dynamics research area group from the School of Mechanical Engineering.
We will briefly review the scope of this program in terms of education and
research activities as well as discuss current challenges related to the future
of underwater acoustics education.
10:50
3aAO15. Graduate education in ocean acoustics at Rensselaer Polytechnic Institute. William L. Siegmann (Dept. of Mathematical Sci., Rensselaer
Polytechnic Inst., 110 Eighth St., Troy, NY 12180-3590, siegmw@rpi.
edu)
Doctoral and master’s students in Rensselaer’s Department of Mathematical Sciences have had opportunities for research in Ocean Acoustics
since 1957. Since then, only one or two faculty members at any time have been
directly involved with OA education. Consequently, collaboration with colleagues at other centers of OA research has been essential. The history will
be briefly reviewed, focusing on the education of a small group of OA doctoral students in an environment with relatively limited institutional resources. Graduate education in OA at RPI has persisted because of sustained
support by the Office of Naval Research.
11:00–11:45 Panel Discussion
11:45
3aAO16. Summary of panel discussion on education in Acoustical
Oceanography and Underwater Acoustics. Andone C. Lavery (Appl.
Ocean Phys. and Eng., Woods Hole Oceanographic Inst., 98 Water St., MS
11, Bigelow 211, Woods Hole, MA 02536, alavery@whoi.edu)
Following the presentations by the speakers in the session, a panel discussion will offer a platform for those in the audience, particularly those from
institutions and universities that did not formally participate in the session
but have active education programs in Acoustical Oceanography and/or
Underwater Acoustics, to ask relevant questions and contribute to the
assessment of the national health of education in the fields of Acoustical
Oceanography and Underwater Acoustics. A summary of the key points presented in the special sessions and panel discussion is provided.
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA A/B, 8:00 A.M. TO 11:30 A.M.
Session 3aBA
Biomedical Acoustics: Kidney Stone Lithotripsy
Tim Colonius, Cochair
Mechanical Engineering, Caltech, 1200 E. California Blvd., Pasadena, CA 91125
Wayne Kreider, Cochair
CIMU, Applied Physics Laboratory, University of Washington, 1013 NE 40th Street, Seattle, WA 98105
Contributed Papers
8:00
3aBA1. Comparable clinical outcomes with two lithotripters having
substantially different acoustic characteristics. James E. Lingeman,
Naeem Bhojani (Urology, Indiana Univ. School of Medicine, 1801 N. Senate Blvd., Indianapolis, IN 46202, jlingeman@iuhealth.org), James C. Williams, Andrew P. Evan, and James A. McAteer (Anatomy and Cell Biology,
Indiana Univ. School of Medicine, Indianapolis, IN)
A consecutive case study was conducted to assess the clinical performance of the Lithogold, an electrohydraulic lithotripter having a relatively
low P+ and broad focal width (FW) (~20 MPa, ~20 mm), and the electromagnetic Storz-SLX having higher P+ and narrower FW (~50 MPa, 3–4
mm). Treatment was at 60 SW/min with follow-up at ~2 weeks. Stone-free
rate (SFR) was defined as no residual fragments remaining after single-session SWL. SFR was similar for the two lithotripters (Lithogold 29/76 =
38.2%; SLX 69/142 = 48.6%, p = 0.15), with no difference in outcome for renal stones (Lithogold 20/45 = 44.4%; SLX 33/66 = 50%, p = 0.70) or stones
in the ureter (Lithogold 9/31 = 29%; SLX 36/76 = 47.4%, p = 0.08). Stone
size did not differ between the two lithotripters for patients who were not
stone free (9.1 ± 3.7 mm for Lithogold vs. 8.5 ± 3.5 mm for SLX, p = 0.42),
but the stone-free patients in the Lithogold group had larger stones on average than the stone-free patients treated with the SLX (7.6 ± 2.5 mm vs.
6.2 ± 3.2 mm, p = 0.005). The percentage of stones that did not break was
similar (Lithogold 10/76 = 13.2%; SLX 23/142 = 16.2%). These data present a realistic picture of clinical outcomes using modern lithotripters:
although the acoustic characteristics of the Lithogold and SLX differ considerably, outcomes were similar. [NIH-DK43881.]
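For readers who want to check figures like the overall stone-free comparison above (29/76 vs. 69/142), a standard two-proportion z-test reproduces the reported non-significance. This is an illustrative, standard-library-only sketch; the authors' actual test (likely a chi-square or Fisher exact test) may give a slightly different p-value than this normal approximation.

```python
import math

def two_proportion_z_test(k1, n1, k2, n2):
    """Two-sided z-test for equality of two proportions (pooled standard error)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)                     # pooled success rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))         # = 2 * (1 - Phi(|z|))
    return z, p_value

# Overall stone-free rates from the abstract: Lithogold 29/76, SLX 69/142.
z, p = two_proportion_z_test(29, 76, 69, 142)
print(f"z = {z:.2f}, p = {p:.3f}")   # -> z = -1.48, p = 0.140
# The abstract reports p = 0.15; the small difference reflects the choice of test.
```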
8:15
3aBA2. Characterization of an electromagnetic lithotripter using transient acoustic holography. Oleg A. Sapozhnikov, Sergey A. Tsysar (Phys.
Faculty, Moscow State Univ., Leninskie Gory, Moscow 119991, Russian
Federation, oa.sapozhnikov@gmail.com), Wayne Kreider (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Guangyan Li (Dept. of Anatomy and Cell Biology, Indiana Univ.
School of Medicine, Indianapolis, IN), Vera A. Khokhlova (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), and Michael R. Bailey (Dept. of Urology, Univ. of Washington
Medical Ctr., Seattle, WA)
Shock wave lithotripters radiate high intensity pulses that are focused on
a kidney stone. High pressure, short rise time, and path-dependent nonlinearity make characterization in water and extrapolation to tissue difficult.
Here acoustic holography is applied for the first time to characterize a lithotripter. Acoustic holography is a method to determine the distribution of
acoustic pressure on the surface of the source (source hologram). The electromagnetic lithotripter characterized in this effort is a commercial model
(Dornier Compact S, Dornier MedTech GmbH, Wessling, Germany) with
6.5 mm focal width. A broadband hydrophone (HGL-0200, sensitive diameter 200 μm, Onda Corp., Sunnyvale, CA) was used to sequentially measure
the field over a set of points in a plane in front of the source. Following the
previously developed transient holography approach, the recorded pressure
field was numerically back-propagated to the source surface and then used
for nonlinear forward propagation to predict waveforms at different points
in the focal region. Pressure signals predicted from the source hologram
coincide well with the waveforms measured by a fiber optic hydrophone.
Moreover, the method provides an accurate boundary condition from which
the field in tissue can be simulated. [Work supported by RSF 14-15-00665
and NIH R21EB016118, R01EB007643, and DK043881.]
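The back-propagation step described above rests on the angular-spectrum idea: decompose the measured field into plane waves and multiply each component by exp(±i k_z d) to move between planes. The sketch below is a minimal 1D, linear, monochromatic illustration with hypothetical parameters (the actual work handles transient, nonlinear, 3D fields): a smooth source field is propagated forward 30 mm and then back-propagated to recover the source.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for small demos)."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def angular_spectrum_propagate(field, dx, wavelength, distance):
    """Propagate a 1D monochromatic field by `distance` (negative = back-propagate)."""
    n = len(field)
    k = 2 * math.pi / wavelength
    spectrum = dft(field)
    out = []
    for i, amp in enumerate(spectrum):
        # Spatial frequency of bin i in standard DFT ordering.
        kx = 2 * math.pi * (i if i <= n // 2 else i - n) / (n * dx)
        kz2 = k * k - kx * kx
        if kz2 > 0:                      # propagating component: apply phase advance
            amp *= cmath.exp(1j * math.sqrt(kz2) * distance)
        else:                            # evanescent component: discard for stability
            amp = 0
        out.append(amp)
    return idft(out)

# Hypothetical setup: ~1 MHz in water (wavelength 1.5 mm), 0.5 mm sampling.
n, dx, wl = 64, 0.5e-3, 1.5e-3
source = [math.exp(-(((i - n / 2) * dx) / 4e-3) ** 2) for i in range(n)]  # smooth aperture
measured = angular_spectrum_propagate(source, dx, wl, 30e-3)       # forward 30 mm
recovered = angular_spectrum_propagate(measured, dx, wl, -30e-3)   # back to the source
err = max(abs(r - s) for r, s in zip(recovered, source))           # recovery error
```

Because the smooth source has essentially no evanescent content, the round trip recovers it almost exactly; back-propagating a real measurement is the same operation with the measured plane as input.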
8:30
3aBA3. Multiscale model of comminution in shock wave lithotripsy.
Sorin M. Mitran (Mathematics, Univ. of North Carolina, CB 3250, Chapel
Hill, NC 27599-3250, mitran@amath.unc.edu), Georgy Sankin, Ying
Zhang, and Pei Zhong (Mech. Eng. and Mater. Sci., Duke Univ., Durham,
NC)
A previously introduced model for stone comminution in shock wave
lithotripsy is extended to include damage produced by cavitation. At the
macroscopic, continuum level a 3D elasticity model with time-varying material constants capturing localized damage provides the overall stress field
within kidney stone simulants. Regions of high stress are identified, and a
mesoscopic crack propagation model is used to dynamically update localized damage. The crack propagation model in turn is linked with a microscopic grain
dynamics model. Continuum stresses and surface pitting are provided by a
multiscale cavitation model (see related talk). The overall procedure is capable of tracking stone fragments and surface cavitation of the fragments
through several levels of breakdown. Computed stone fragment distributions
are compared to experimental results. [Work supported by NIH through
5R37DK052985-18.]
8:45
3aBA4. Exploring the limits of treatment used to invoke protection
from extracorporeal shock wave lithotripsy-induced injury. Bret A. Connors, Andrew P. Evan, Rajash K. Handa, Philip M. Blomgren, Cynthia D.
Johnson, James A. McAteer (Anatomy and Cell Biology, IU School of Medicine, Medical Sci. Bldg., Rm. 5055, 635 Barnhill Dr., Indianapolis, IN
46202, bconnors@iupui.edu), and James E. Lingeman (Urology, IU School
of Medicine, Indianapolis, IN)
Previous studies with our juvenile pig model have shown that a clinical
dose of 2000 shock waves (SWs) (Dornier HM-3, 24 kV, 120 SWs/min)
produces a lesion ~3–5% of the functional renal volume (FRV) of the SW-treated kidney. This injury was significantly reduced (to ~0.4% FRV) when
a priming dose of 500 low-energy SWs immediately preceded this clinical
dose, but not when using a priming dose of 100 SWs [BJU Int. 110, E1041
(2012)]. The present study examined whether using only 300 priming dose
SWs would initiate protection against injury. METHODS: Juvenile pigs
were treated with 300 SWs (12 kV) delivered to a lower pole calyx using an
HM-3 lithotripter. After a pause of 10 s, 2000 SWs (24 kV) were delivered
to that same kidney. The kidneys were then perfusion-fixed and processed
to quantitate the size of the parenchymal lesion. RESULTS: Pigs (n = 9)
treated using a protocol with 300 low-energy priming dose SWs had a lesion
measuring 0.84 ± 0.43% FRV (mean ± SE). This lesion was smaller than
that seen with a clinical dose of 2000 SWs at 24 kV. CONCLUSIONS: A
treatment protocol including 300 low-energy priming dose SWs can provide
protection from injury during shock wave lithotripsy. [Research supported
by NIH grant P01 DK43881.]
9:00
3aBA5. Shockwave lithotripsy with renoprotective pause is associated
with vasoconstriction in humans. Franklin Lee, Ryan Hsi, Jonathan D.
Harper (Dept. of Urology, Univ. of Washington School of Medicine, Seattle,
WA), Barbrina Dunmire, Michael Bailey (Ctr. for Industrial and Medical
Ultrasound, Appl. Phys. Lab, Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, bailey@apl.washington.edu), Ziyue Liu (Dept. of Biostatistics, Indiana Univ. School of Medicine, Indianapolis, IN), and
Mathew D. Sorensen (Dept. of Urology, Dept. of Veteran Affairs Medical
Ctr., Seattle, WA)
A pause early in shock wave lithotripsy (SWL) increased vasoconstriction as measured by resistive index (RI) during treatment and mitigated renal injury in an animal model. The purpose of our study was to investigate
whether RI rose during SWL in humans. Prospectively recruited patients
underwent SWL of renal stones with a Dornier Compact S lithotripter. The
renal protective protocol consisted of treatment at 1 Hz and slow power
ramping for the initial 250 shocks followed by a 2 min pause. RI was measured using ultrasound prior to treatment, after 250 shocks, after 750 shocks,
after 1500 shocks, and after SWL. A linear mixed-effects model was used to
compare RI at the different time points and to account for additional covariates in fifteen patients. RI was significantly higher than baseline at all time
points from 250 shocks onward. Age, gender, body mass index, and treatment
side were not significantly associated with RI. Monitoring for a rise in RI
during SWL is possible and may provide real-time feedback as to when the
kidney is protected. [Work supported by NIH DK043881, NSBRI through
NASA NCC 9-58, and resources from the VA Puget Sound Health Care
System.]
9:15
3aBA6. Renal shock wave lithotripsy may be a risk factor for early-onset hypertension in metabolic syndrome: A pilot study in a porcine
model. Rajash Handa (Anatomy & Cell Biology, Indiana Univ. School of
Medicine, 635 Barnhill Dr., MS 5035, Indianapolis, IN 46202-5120,
rhanda@iupui.edu), Ziyue Liu (Biostatistics, Indiana Univ. School of Medicine, Indianapolis, IN), Bret Connors, Cynthia Johnson, Andrew Evan
(Anatomy & Cell Biology, Indiana Univ. School of Medicine, Indianapolis,
IN), James Lingeman (Kidney Stone Inst., Indiana Univ. Health Methodist
Hospital, Indianapolis, IN), David Basile, and Johnathan Tune (Cellular &
Integrative Physiol., Indiana Univ. School of Medicine, Indianapolis,
IN)
A pilot study was conducted to assess whether extracorporeal shock
wave lithotripsy (SWL) treatment of the kidney influences the onset and severity of metabolic syndrome (MetS)—a cluster of conditions that includes
central obesity, insulin resistance, impaired glucose tolerance, dyslipidemia,
and hypertension. Methods: Three-month-old juvenile female Ossabaw miniature pigs were treated with either SWL (2000 SWs, 24 kV, 120 SWs/min
using the HM3 lithotripter; n = 2) or sham-SWL (no SWs; n = 2). SWs were
targeted to the upper pole of the left kidney so as to model treatment that
would also expose the pancreas—an organ involved in blood glucose homeostasis—to SWs. The pigs were then instrumented for direct measurement
of arterial blood pressure via implanted radiotelemetry devices, and later fed
a hypercaloric atherogenic diet for ~7 months to induce MetS. The development of MetS was assessed from intravenous glucose tolerance tests.
Results: The progression and severity of MetS were similar in the shamtreated and SWL-treated groups. The only exception was arterial blood pressure, which remained relatively constant in the sham-treated pigs and rose
toward hypertensive levels in SW-treated pigs. Conclusions: These preliminary results suggest that renal SWL may be a risk factor for early-onset hypertension in MetS.
9:30–9:45 Break
9:45
3aBA7. Modeling vascular injury due to shock-induced bubble collapse
in lithotripsy. Vedran Coralic and Tim Colonius (Mech. Eng., Caltech,
1200 E. California Blvd., Pasadena, CA 91125, colonius@caltech.edu)
Shock-induced collapse (SIC) of preexisting bubbles is investigated as a
potential mechanism for vascular injury in shockwave lithotripsy (SWL).
Preexisting bubbles exist under normal physiological conditions and grow
larger and more numerous with ongoing treatment. We compute the three-dimensional SIC of a bubble using the multi-component Euler equations,
and determine the resulting three-dimensional finite-strain deformation field
in the material surrounding the collapsing bubble. We propose a criterion
for vessel rupture and estimate the minimum bubble size, across clinical
SWL pressures, which could result in rupture of microvasculature. Postprocessing of the results and comparison to viscoelastic models for spherical
bubble dynamics demonstrate that our results are insensitive to a wide range
of estimated viscoelastic tissue properties during the collapse phase. During
the jetting phase, however, viscoelastic effects are non-negligible. The minimum bubble size required to rupture a vessel is then estimated by adapting a
previous model for the jet’s penetration depth as a function of tissue
viscosity.
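The spherical-bubble models used for comparison are, in their simplest inviscid form, described by the Rayleigh-Plesset equation, R R'' + (3/2) R'^2 = (p_gas - p_inf)/rho. The sketch below is a simplified illustration under assumed parameters (polytropic gas, no viscosity or surface tension, a modest 2-atm step overpressure rather than a lithotripter shock): an RK4 integration showing the bubble inertially overshooting its new, smaller equilibrium radius before rebounding.

```python
import math

RHO = 1000.0       # liquid density [kg/m^3]
P0 = 101325.0      # initial ambient pressure [Pa]
GAMMA = 1.4        # polytropic exponent of the bubble gas
R0 = 1e-6          # initial (equilibrium) bubble radius [m]
P_INF = 2 * P0     # assumed step overpressure driving the collapse

def rhs(state):
    """Inviscid Rayleigh-Plesset: R*R'' + 1.5*R'^2 = (p_gas - p_inf)/rho."""
    r, v = state
    p_gas = P0 * (R0 / r) ** (3 * GAMMA)   # adiabatic gas pressure in the bubble
    a = ((p_gas - P_INF) / RHO - 1.5 * v * v) / r
    return v, a

def rk4_step(state, dt):
    def add(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = rhs(state)
    k2 = rhs(add(state, k1, dt / 2))
    k3 = rhs(add(state, k2, dt / 2))
    k4 = rhs(add(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

state, dt = (R0, 0.0), 1e-11
r_min = R0
for _ in range(40000):                     # simulate 0.4 microseconds
    state = rk4_step(state, dt)
    r_min = min(r_min, state[0])
# Doubling the ambient pressure shifts the equilibrium to ~0.85 R0;
# inertia carries the collapse below that (to roughly 0.7 R0) before rebound.
```

Real SIC is far more violent and non-spherical, which is exactly why the abstract resolves the full 3D Euler equations; the spherical model serves only as the comparison baseline mentioned above.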
10:00
3aBA8. Multiscale model of cavitation bubble formation and breakdown. Isaac Nault, Sorin M. Mitran (Mathematics, Univ. of North Carolina,
CB3250, Chapel Hill, NC, naulti@live.unc.edu), Georgy Sankin, and Pei
Zhong (Mech. Eng. and Mater. Sci., Duke Univ., Durham, NC)
Cavitation damage is responsible for initial pitting of kidney stone surfaces, damage that is thought to play an important role in shock wave lithotripsy. We introduce a multiscale model of the formation of cavitation
bubbles in water, and subsequent breakdown. At a macroscopic, continuum
scale cavitation is modeled by the 3D Euler equations with a Tait equation
of state. Adaptive mesh refinement is used to provide increased resolution at
the liquid/vapor boundary. Cells with both liquid and vapor phases are
flagged by the continuum solver for mesoscale, kinetic modeling by a lattice
Boltzmann description capable of capturing non-equilibrium behavior (e.g.,
phase change, energetic jet impingement). Isolated and interacting two-bubble configurations are studied. Computational simulation results are compared with high-speed experimental imaging of individual bubble dynamics
and bubble–bubble interaction. The model is used to build a statistical
description of multiple-bubble interaction, with input from cavitation cloud
imaging. [Work supported by NIH through 5R37DK052985-18.]
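The Tait equation of state mentioned above closes the Euler equations for water by tying pressure stiffly to density, p = (p0 + B)(rho/rho0)^n − B. A minimal sketch, assuming commonly quoted water constants (B ≈ 3.046e8 Pa, n ≈ 7.15; the abstract does not give the exact values used):

```python
import math

RHO0 = 1000.0      # reference density of water [kg/m^3]
P0 = 101325.0      # reference (atmospheric) pressure [Pa]
B = 3.046e8        # Tait stiffness constant for water [Pa] (assumed value)
N = 7.15           # Tait exponent for water (assumed value)

def tait_pressure(rho):
    """Pressure from density via the Tait equation of state."""
    return (P0 + B) * (rho / RHO0) ** N - B

def tait_sound_speed(rho):
    """Sound speed c = sqrt(dp/drho) for the Tait EOS."""
    return math.sqrt(N * (P0 + B) / RHO0 * (rho / RHO0) ** (N - 1))

# At the reference density the EOS returns atmospheric pressure and a
# sound speed close to the familiar ~1480 m/s of water.
print(tait_pressure(RHO0))             # -> 101325.0
print(round(tait_sound_speed(RHO0)))   # -> 1476
# The stiffness is why water is nearly incompressible in such simulations:
# a 1% compression already produces ~23 MPa.
```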
10:15
3aBA9. Preliminary results of the feasibility to reposition kidney stones
with ultrasound in humans. Jonathan D. Harper, Franklin Lee, Susan
Ross, Hunter Wessells (Dept. of Urology, Univ. of Washington School of
Medicine, Seattle, WA), Bryan W. Cunitz, Barbrina Dunmire, Michael Bailey (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab, Univ. of
Washington, 1013 NE 40th St., Seattle, WA 98105, bailey@apl.washington.
edu), Jeff Thiel (Dept. of Radiology, Univ. of Washington School of Medicine, Seattle), Michael Coburn (Dept. of Urology, Baylor College of Medicine, Houston, TX), James E. Lingeman (Dept. of Urology, Indiana Univ.
School of Medicine, Indianapolis, IN), and Mathew Sorensen (Dept. of
Urology, Dept. of Veteran Affairs Medical Ctr., Seattle)
Preliminary investigational use of ultrasound to reposition human kidney stones is reported. The three study arms include: de novo stones, post-lithotripsy fragments, and large stones within the preoperative setting. A
pain questionnaire is completed immediately prior to and following propulsion. A maximum of 40 push attempts are administered. Movement is classified as no motion, movement with rollback or jiggle, or movement to a new
location. Seven subjects have been enrolled and undergone ultrasonic propulsion to date. Stones were identified, targeted, and moved in all subjects.
Subjects who did not have significant movement were in the de novo arm.
None of the subjects reported pain associated with the treatment. One subject in the post-lithotripsy arm passed two small stones immediately following treatment corresponding to the two stones displaced from the interpolar
region. Three post-lithotripsy subjects reported passage of multiple small
fragments within two weeks of treatment. In four subjects, ultrasonic
10:30
3aBA10. Nonlinear saturation effects in ultrasound fields of diagnostic-type transducers used for kidney stone propulsion. Maria M. Karzova
(Phys. Faculty, Dept. of Acoust., M.V. Lomonosov Moscow State Univ.,
Leninskie Gory 1/2, Moscow 119991, Russian Federation, masha@acs366.
phys.msu.ru), Bryan W. Cunitz (Ctr. for Industrial and Medical Ultrasound,
Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Petr V. Yuldashev
(Phys. Faculty, M.V. Lomonosov Moscow State Univ., Moscow, Russian
Federation), Vera A. Khokhlova, Wayne Kreider (Ctr. for Industrial and
Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA),
Oleg A. Sapozhnikov (Phys. Faculty, M.V. Lomonosov Moscow State
Univ., Moscow, Russian Federation), and Michael R. Bailey (Dept. of Urology, Univ. of Washington Medical Ctr., Seattle, WA)
A novel therapeutic application of ultrasound for repositioning kidney
stones is being developed. The method uses acoustic radiation force to expel
mm-sized stones or to dislodge even larger obstructing stones. A standard
diagnostic 2.3 MHz C5-2 array probe has been used to generate pushing
acoustic pulses. The probe comprises 128 elements equally spaced along the 55-mm-long convex cylindrical surface with a 41.2 mm radius of curvature. The
efficacy of the treatment can be increased by using higher transducer output
to provide stronger pushing force; however, nonlinear acoustic saturation
effect can be a limiting factor. In this work, nonlinear propagation effects
were analyzed for the C5-2 transducer using a combined measurement and
modeling approach. Simulations were based on the 3D Westervelt equation;
the boundary condition was set to match low power measurements. Focal
waveforms simulated for several output power levels were compared with
the fiber-optic hydrophone measurements and were found to be in good agreement. It was shown that saturation effects do limit the acoustic pressure in
the focal region of the transducer. This work has application to standard
diagnostic probes and imaging. [Work supported by RSF 14-12-00974, NIH
EB007643, DK43881 and DK092197, and NSBRI through NASA NCC 9-58.]
10:45
3aBA11. Evaluating kidney stone size in children using the posterior
acoustic shadow. Franklin C. Lee, Jonathan D. Harper, Thomas S. Lendvay
(Urology, Univ. of Washington, Seattle, WA), Ziyue Liu (Biostatistics, Indiana Univ. School of Medicine, Indianapolis, IN), Barbrina Dunmire (Appl.
Phys. Lab, Univ. of Washington, 1013 NE 40th St, Seattle, WA 98105,
mrbean@uw.edu), Manjiri Dighe (Radiology, Univ. of Washington, Seattle,
WA), Michael R. Bailey (Appl. Phys. Lab, Univ. of Washington, Seattle,
WA), and Mathew D. Sorensen (Urology, Dept. of Veteran Affairs Medical
Ctr., Seattle, WA)
Ultrasound, not x-ray, is preferred for imaging kidney stones in children;
however, stone size determination is less accurate with ultrasound. In vitro
we found stone sizing was improved by measuring the width of the acoustic
shadow behind the stone. We sought to determine the prevalence and accuracy of the acoustic shadow in pediatric patients. A retrospective analysis
was performed of all initial stone events at a children’s hospital over the last
10 years. Included subjects had a computed tomography (CT) scan and renal
ultrasound within 3 months of each other. The width of the stone and acoustic shadow were measured on ultrasound and compared to the stone size as
determined by CT. Thirty-seven patients with 49 kidney stones were
included. An acoustic shadow was seen in 85% of stones evaluated. Stone
width resulted in an average overestimation of 1.2 ± 2.2 mm while shadow
2193
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Burst wave lithotripsy is a novel technology that uses focused, sinusoidal
bursts of ultrasound to fragment kidney stones. Prior research laid the
groundwork to design an extracorporeal, image-guided probe for in-vivo
testing and potentially human clinical testing. Toward this end, a 12-element
330 kHz array transducer was designed and built. The probe frequency, geometry, and shape were designed to break stones up to 1 cm in diameter into
fragments <2 mm. A custom amplifier capable of generating output bursts
up to 3 kV was built to drive the array. To facilitate image guidance, the
transducer array was designed with a central hole to accommodate co-axial
attachment of an HDI P4-2 probe. Custom B-mode and Doppler imaging
sequences were developed and synchronized on a Verasonics ultrasound
engine to enable real-time stone targeting and cavitation detection. Preliminary data suggest that natural stones will exhibit the Doppler “twinkling” artifact in the BWL focus and that the Doppler power increases as the stone
begins to fragment. This feedback allows accurate stone targeting while
both types of imaging sequences can also detect cavitation in bulk tissue that
may lead to injury. [Work supported by NIH grants DK043881, EB007643,
EB016118, T32 DK007779, and NSBRI through NASA NCC 9-58.]
11:15
3aBA13. Removal of residual bubble nuclei to enhance histotripsy kidney stone erosion at high rate. Alexander P. Duryea (Biomedical Eng.,
Univ. of Michigan, 2131 Gerstacker Bldg., 2200 Bonisteel Blvd., Ann
Arbor, MI 48109, duryalex@umich.edu), William W. Roberts (Urology,
Univ. of Michigan, Ann Arbor, MI), Charles A. Cain, and Timothy L. Hall
(Biomedical Eng., Univ. of Michigan, Ann Arbor, MI)
Previous work has shown that histotripsy can effectively erode model
kidney stones to tiny, sub-millimeter debris via a cavitational bubble cloud
localized on the stone surface. Similar to shock wave lithotripsy, histotripsy
stone treatment displays a rate-dependent efficacy, with pulses applied at
low repetition frequency producing more efficient erosion compared to
those applied at high repetition frequency. This is attributed to microscopic
residual cavitation bubble nuclei that can persist for hundreds of milliseconds following bubble cloud collapse. To mitigate this effect, we have
developed low amplitude (MI<1) acoustic pulses to actively remove residual nuclei from the field. These bubble removal pulses utilize the Bjerknes
forces to stimulate the aggregation and subsequent coalescence of remnant
nuclei, consolidating the population from a very large number to a countably
small number of remnant bubbles within several milliseconds. Incorporation
of this bubble removal scheme in histotripsy model stone treatments performed at high rate (100 pulses/second) produced drastic improvement in
treatment efficiency, with an average erosion rate increase of 12-fold in
comparison to treatment without bubble removal. High speed imaging indicates that the influence of remnant nuclei on the location of bubble cloud
collapse is the dominant contributor to this disparity in treatment efficacy.
168th Meeting: Acoustical Society of America
3a WED. AM
propulsion identified a collection of stones previously characterized as a single stone on KUB and ultrasound. There have been no treatment related
adverse events reported with mean follow-up of 3 months. [Trial supported
by NSBRI through NASA NCC 9-58. Development supported by NIH
DK043881 and DK092197.]
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 9/10, 8:00 A.M. TO 11:50 A.M.
Session 3aEA
Engineering Acoustics and Structural Acoustics and Vibration: Mechanics of Continuous Media
Andrew J. Hull, Cochair
Naval Undersea Warfare Center, 1176 Howell St, Newport, RI 02841
J. Gregory McDaniel, Cochair
Mechanical Engineering, Boston Univ., 110 Cummington St., Boston, MA 02215
Invited Papers
8:00
3aEA1. Fundamental studies of zero Poisson ratio metamaterials. Elizabeth A. Magliula (Div. Newport, Naval Undersea Warfare
Ctr., 1176 Howell St., Bldg. 1302, Newport, RI 02841, elizabeth.magliula@navy.mil), J. Gregory McDaniel, and Andrew Wixom
(Mech. Eng. Dept., Boston Univ., Boston, MA)
As material fabrication advances, new materials with special properties will become possible, accommodating new design boundaries. An
emerging and promising field of investigation is to study the basic phenomena of materials with a negative Poisson ratio (NPR). This
work seeks to develop zero Poisson ratio (ZPR) metamaterials for use in reducing acoustic radiation from compressional waves. Such a
material would neither contract nor expand laterally when compressed or stretched, and therefore would not radiate sound. Previous work has
provided procedures for creating NPR copper foam through transformation of the foam cell structure from a convex polyhedral shape to
a concave “re-entrant” shape. A ZPR composite will be developed and analyzed in an effort to achieve desired wave propagation characteristics. Dynamic investigations have been conducted using ABAQUS, in which a ZPR material is placed under load to observe its displacement
behavior. Inspection of the results at 1 kHz and 5 kHz shows that the top and bottom surfaces experience much less displacement compared to the respective conventional reference layer build-ups. However, at 11 kHz, small lateral displacements were experienced at the outer
surfaces. Results indicate that the net zero Poisson effect was successfully achieved at frequencies where half the wavelength is greater
than the thickness.
8:20
3aEA2. Scattering by targets buried in elastic sediment. Angie Sarkissian, Saikat Dey, Brian H. Houston (Code 7130, Naval Res.
Lab., Code 7132, 4555 Overlook Ave. S.W., Washington, DC 20375, angie.sarkissian@nrl.navy.mil), and Joseph A. Bucaro (Excet,
Inc., Springfield, VA)
Scattering results are presented for targets of various shapes buried in elastic sediment with a plane wave incident from air above.
The STARS3D finite element program, recently extended to layered elastic sediments, is used to compute the displacement field just
below the interface. Evidence of the presence of Rayleigh waves is observed in the elastic sediment, and an algorithm based on the Rayleigh waves subtracts their contribution to simplify the resultant scattering pattern. Results are presented for scatterers buried in uniform elastic media as well as layered media. [This work was supported by ONR.]
8:40
3aEA3. Response shaping and scale transition in dynamic systems with arrays of attachments. Joseph F. Vignola, Aldo A. Glean
(Mech. Eng., The Catholic Univ. of America, 620 Michigan Ave., NE, Washington, DC 20064, vignola@cua.edu), John Sterling (Carderock Div., Naval Surface Warfare Ctr., West Bethesda, MD), and John A. Judge (Mech. Eng., The Catholic Univ. of America, Washington, DC)
Arrays of elastic attachments can be designed to act as energy sinks in dynamic systems. This presentation describes design strategies
for drawing off mechanical energy to achieve specific objectives such as mode suppression and response tailoring in both extended and
discrete systems. The design parameters are established using numerical simulations for both propagating and standing compressional
waves in a one-dimensional system. The attachments were chosen to be cantilevers so that their higher modes would have limited interaction
with the higher modes of the primary structure. The two cases considered here are concentrated groups of cantilevers and spatial distributions of similar cantilevers. Relationships between the number and placement of the attachments and their masses and frequency distributions are of particular interest, along with the energy density distribution between the primary structure and the attachments. The
simulations are also used to show how fabrication error degrades performance and how energy scale transition can be managed to maintain linear behavior.
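The attachment-array idea can be sketched numerically. The following is a minimal illustration, not the authors' model: a single-degree-of-freedom primary structure carries a bank of sprung-mass (cantilever-like) attachments whose natural frequencies are spread about the primary resonance, and the drive-point response at resonance is compared with and without the array. All parameter values are hypothetical.

```python
import numpy as np

def drivepoint_response(omega, M, K, m, k, eta=0.02):
    """|X/F| of a primary oscillator with an array of sprung-mass attachments.

    Each attachment adds the dynamic-stiffness term -w^2 m k / (k - w^2 m)
    to the primary; eta is a small hysteretic loss factor on every spring.
    """
    Kc = K * (1 + 1j * eta)          # complex (damped) primary stiffness
    kc = k * (1 + 1j * eta)          # complex attachment stiffnesses
    D = Kc - omega**2 * M
    D = D + np.sum(-omega**2 * m * kc / (kc - omega**2 * m))
    return abs(1.0 / D)

# Hypothetical primary structure: 1 kg with resonance at 100 rad/s
M, K = 1.0, 1.0e4
w0 = np.sqrt(K / M)

# 20 attachments, 5% total mass, frequencies spread +/-10% about w0
N = 20
m = np.full(N, 0.05 * M / N)
wa = w0 * np.linspace(0.9, 1.1, N)
k = m * wa**2

bare = drivepoint_response(w0, M, K, np.array([]), np.array([]))
with_array = drivepoint_response(w0, M, K, m, k)
print(f"|X/F| at resonance: bare={bare:.3e}, with array={with_array:.3e}")
```

The frequency distribution of the attachments is the key design variable: spreading their resonances across a band around the primary mode suppresses the resonant response far more than a single tuned absorber of the same total mass would over that band.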
9:00
3aEA4. Accelerated general method for computing noise effects in arrays. Heather Reed, Jeffrey Cipolla, Mahesh Bailakanavar, and
Patrick Murray (Weidlinger Assoc., 40 Wall St 18th Fl., New York, NY 10005, heather.reed@wai.com)
Noise in an acoustic array can be defined as any unwanted signal, and understanding how noise interacts with a structural system is
paramount for optimal design. For example, in an underwater vehicle we may want to understand how structural vibrations radiate
through a surrounding fluid; or an engineer may want to evaluate the level of sound inside a car resulting from the turbulent boundary
layer (TBL) induced by a moving vehicle. This talk will discuss a means of modeling noise at a point of interest (e.g., at a sensor location) stemming from a known source by utilizing a power transfer function between the source and the point of interest, a generalization
of the work presented in [1]. The power transfer function can be readily computed from the acoustic response to an incident wave field,
requiring virtually no additional computation. The acoustic solution may be determined via analytic frequency domain approaches or
through a finite element analysis, enabling the noise solution to be a fast post processing exercise. This method is demonstrated by modeling the effects of a TBL pressure and noise induced by structural vibrations on a sensor array embedded in an elastic, multi-layer solid.
Additionally, uncertainties in the noise model can be easily quantified through Monte Carlo techniques due to the fast evaluation of the
noise spectrum. [1] Ko, S. H. and Schloemer, H. H., “Flow noise reduction techniques for a planar array of hydrophones,” J. Acoust. Soc.
Am. 92, 3409 (1992).
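As a toy illustration of the transfer-function idea (not the implementation described above), the sketch below propagates a hypothetical TBL input spectrum through a single-degree-of-freedom stand-in for the power transfer function and Monte Carlo samples an uncertain loss factor. Because each sample only reweights a precomputed response rather than requiring a new finite element solve, the noise-spectrum statistics are cheap to obtain, which is the point the abstract emphasizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def transfer_psd(omega, omega0, zeta):
    """|H|^2 for a single-DOF path; an illustrative stand-in for the power
    transfer function computed from the incident-wave acoustic response."""
    return 1.0 / ((omega0**2 - omega**2)**2 + (2 * zeta * omega0 * omega)**2)

omega = np.linspace(1.0, 300.0, 500)
S_tbl = 1.0 / (1.0 + (omega / 50.0)**2)      # hypothetical TBL input spectrum

# Monte Carlo over an uncertain damping ratio: each sample reuses the same
# analytic response, so quantifying uncertainty is a fast post-processing step.
samples = np.array([transfer_psd(omega, 100.0, z) * S_tbl
                    for z in rng.uniform(0.02, 0.08, 200)])
mean_psd = samples.mean(axis=0)
print(mean_psd.max())
```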
9:20
3aEA5. Response of distributed fiber optic sensor cables to spherical wave incidence. Jeffrey Boisvert (NAVSEA Div. Newport,
1176 Howell St., Newport, RI 02841, cboisvertj@cox.net)
A generalized multi-layered infinite-length fiber optic cable is modeled using the exact theory of three-dimensional elasticity in cylindrical coordinates. A cable is typically composed of a fiber optic (glass) core surrounded by various layered materials such as plastics,
metals, and elastomers. The cable is excited by an acoustic spherical wave radiated by a monopole source at an arbitrary location in the
acoustic field. For a given source location and frequency, the radial and axial strains within the cable are integrated over a desired sensor
zone length to determine the optical phase sensitivity using an equation that relates the strain distribution in an optical fiber to changes
in the phase of an optical signal. Directivity results for the cable in a free-field water environment are presented at several frequencies
for various monopole source locations. Some comparisons of the sensor directional response resulting from nearfield (spherical wave)
incidence and farfield (plane wave) incidence are made. [Work supported by NAVSEA Division Newport ILIR Program.]
9:40
3aEA6. Testing facility concepts for the material characterization of porous media consisting of relatively limp foam and stiff
fluid. Michael Woodworth and Jeffrey Cipolla (ASI, Weidlinger Assoc., Inc., 1825 K St NW, #350, Washington, DC 20006, michael.
woodworth@wai.com)
Fluid filled foams are important components of acoustical systems. Most are made up of a stiff skeleton medium relative to the fluid
considered, usually air. Biot’s theory of poroelasticity is appropriate for characterizing and modeling these foams. The use of relatively
stiff fluid (such as water) and limp foam media poses a greater challenge. Recently, modifications to Biot’s theory have generated the mechanical relationships required to model these systems. Necessary static material properties for the model can be obtained through in vacuo
measurement. Frequency dependent properties are more difficult to obtain. Traditional impedance tube methods suffer from fluid structure interaction when the bulk modulus of the fluid media approaches that of the waveguide. The current investigation derives the theory
for, and investigates the feasibility of, several rigid impedance tube alternatives for characterizing limp foams in stiff fluid media. Alternatives considered include a sufficiently rigid impedance tube, a pressure-relief impedance tube, and, the most promising, a piston-excited
oscillating chamber of small aspect ratio. The chamber concept can recover the descriptive properties of a porous medium described by
Biot’s theory or by complex-impedance equivalent-fluid models. The advantages of this facility are small facility size, low cost, and
small sample size.
10:00–10:20 Break
10:20
3aEA7. Adomian decomposition identifies an approximate analytical solution for a set of coupled strings. David Segala (Naval
Undersea Warfare Ctr., 1176 Howell St., Newport, RI 02841, david.segala@navy.mil)
The Adomian decomposition method (ADM) has been successfully applied in various applications across the applied mechanics and mathematics communities. Originally, Adomian developed this method to derive approximate analytical solutions to nonlinear
functional equations. It was shown that the solution to the given nonlinear functional equation can be approximated by an infinite series
solution of the linear and nonlinear terms, provided the nonlinear terms are represented by a series of Adomian polynomials.
Here, ADM is used to derive an approximate analytical solution to a set of partial differential equations (PDEs) describing the motion of
two coupled strings that lie orthogonal to each other. The PDEs are derived using Euler-Lagrange equations of motion. The ends of the
strings are pinned and the strings are coupled with a nonlinear spring. A finite element model of the system is developed to provide a
comparative baseline. Both the finite element model and analytical solution were driven by an initial displacement condition. The results
from both the FEA and analytical solution were compared at six different equally spaced time points over the course of a 1.2 second
simulation.
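The ADM recursion is easiest to see on a scalar model problem. The sketch below is illustrative only (the abstract treats coupled string PDEs, not this equation): it applies ADM to u' = -u^2 with u(0) = 1, whose exact solution is 1/(1+t). For a quadratic nonlinearity the Adomian polynomials reduce to a Cauchy convolution of the series components, and each component is obtained by integrating the previous polynomial.

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(n)]

def poly_int(a):
    """Definite integral from 0 to t, returned as a polynomial in t."""
    return [0.0] + [c / (i + 1) for i, c in enumerate(a)]

def adm_solution(n_terms, t):
    """Partial sum of the ADM series for u' = -u^2, u(0) = 1, evaluated at t."""
    comps = [[1.0]]                          # u_0 = initial condition
    for n in range(n_terms - 1):
        # Adomian polynomial for N(u) = u^2: A_n = sum_k u_k u_{n-k}
        A = [0.0]
        for k in range(n + 1):
            A = poly_add(A, poly_mul(comps[k], comps[n - k]))
        # recursion: u_{n+1} = -integral_0^t A_n ds
        comps.append([-c for c in poly_int(A)])
    total = [0.0]
    for c in comps:
        total = poly_add(total, c)
    val = 0.0                                # Horner evaluation at t
    for c in reversed(total):
        val = val * t + c
    return val

print(adm_solution(25, 0.5))   # converges toward 1/(1 + 0.5) = 0.666...
```

Here each component comes out as (-t)^n, so the partial sums converge geometrically for |t| < 1; for the coupled-string PDEs the same recursion applies component-wise, with the nonlinear spring supplying the Adomian polynomials.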
10:40
3aEA8. Comprehensive and practical explorations of nonlinear energy harvesting from stochastic vibrations. Ryan L. Harne and
Kon-Well Wang (Mech. Eng., Univ. of Michigan, 2350 Hayward St., 2250 GG Brown Bldg., Ann Arbor, MI 48109-2125, rharne@
umich.edu)
Conversion of ambient vibrational energies to electrical power is a recent, popular motivation for research that seeks to realize self-sustaining electronic systems, including biomedical implants and remote wireless structural sensors. Many vibration resources are stochastic, with spectra concentrated at extremely low frequencies, which is a challenging bandwidth to target in the design of compact, resonant electromechanical harvesters. Exploitation of design-based nonlinearities has uncovered means to reduce and broaden a
harvester’s frequency range of greatest sensitivity to be more compatible with ambient spectra, thus dramatically improving energy conversion performance. However, studies to date draw differing conclusions regarding the viability of the most promising nonlinear harvesters, namely those designed around the elastic stability limit, and the investigations present findings with limited verification.
To help resolve the outstanding questions about energy harvesting from stochastic vibrations using systems designed near the elastic stability limit, this research integrates rigorous analytical, numerical, and experimental explorations. The harvester architecture considered
is a cantilever beam, the common focus of contemporary studies, and the work evaluates critical practical factors involved in its effective implementation. From the investigations, the most favorable incorporations of nonlinearity are identified and useful design guidelines are proposed.
11:00
3aEA9. Response of infinite length bars and beams with periodically varying area. Andrew J. Hull and Benjamin A. Cray (Naval
Undersea Warfare Ctr., 1176 Howell St., Newport, RI 02841, andrew.hull@navy.mil)
This talk develops a solution method for the longitudinal motion of a rod or the flexural motion of a beam of infinite length whose
area varies periodically. The conventional rod or beam equation of motion is used with the area and moment of inertia expressed using
analytical functions of the longitudinal (horizontal) spatial variable. The displacement field is written as a series expansion using a periodic form for the horizontal wavenumber. The area and moment of inertia expressions are each expanded into a Fourier series. These are
inserted into the differential equations of motion and the resulting algebraic equations are orthogonalized to produce a matrix equation
whose solution provides the unknown wave propagation coefficients, thus yielding the displacement of the system. Example problems
of both a rod and a beam are analyzed for three different geometrical shapes. The solutions to both problems are compared to results from
finite element analysis for validation. Dispersion curves of the systems are shown graphically. Convergence of the series solutions is
illustrated and discussed.
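The orthogonalization step described above can be sketched for the rod case. Assuming a Bloch-type expansion (a plausible reading of the "periodic form for the horizontal wavenumber"; the matrices and parameter values below are illustrative, not the authors'), the Fourier coefficients of the area enter as Toeplitz matrices and the wave propagation coefficients follow from a generalized eigenproblem:

```python
import numpy as np

def bloch_frequencies(k, A_hat, E=1.0, rho=1.0, L=1.0, N=8):
    """Eigenfrequencies omega of a rod with periodically varying area.

    A_hat maps m to the Fourier coefficient of A(x) = sum_m A_hat[m] e^{i 2 pi m x / L}.
    Expanding u = sum_n c_n e^{i (k + 2 pi n / L) x} and orthogonalizing
    E (A u')' + rho w^2 A u = 0 gives K c = w^2 M c with
    K_{pn} = E k_p k_n A_{p-n} and M_{pn} = rho A_{p-n} (Toeplitz in A_hat).
    """
    idx = np.arange(-N, N + 1)
    kp = k + 2 * np.pi * idx / L
    Amat = np.array([[A_hat.get(p - n, 0.0) for n in idx] for p in idx])
    K = E * np.outer(kp, kp) * Amat
    M = rho * Amat
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.abs(w2.real)))

# Uniform rod (A_hat has only the m=0 term): recovers omega = c |k + 2 pi n / L|
uniform = bloch_frequencies(1.0, {0: 1.0})
print(uniform[0])    # lowest branch, approximately c*k = 1.0

# 30% sinusoidal area modulation (hypothetical example geometry)
modulated = bloch_frequencies(1.0, {0: 1.0, 1: 0.15, -1: 0.15})
print(modulated[:3])
```

Convergence with the truncation order N can be checked directly by increasing N until the low branches stop moving, mirroring the convergence discussion in the abstract.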
Contributed Papers
11:20
3aEA10. On the exact analytical solutions to equations of nonlinear acoustics. Alexander I. Kozlov (Medical and Biological Phys., Vitebsk State Medical Univ., 27, Frunze Ave., Vitebsk 210023, Belarus, albapasserby@yahoo.com)
Several equations derived as second-order approximations to the complete system of equations of nonlinear acoustics of Newtonian media (such as the Lighthill-Westervelt equation, the Kuznetsov equation, etc.) are usually solved numerically or at least approximately. A general exact analytical method of solution of these problems, based on a short chain of changes of variables, is presented in this work. It is shown that neither traveling-wave solutions nor classical soliton-like solutions obey these equations. There are three possible forms of the acoustic pressure, depending on the parameters of the initial equation: a so-called continuous shock (or diffusive soliton), a monotonically decaying solution, and a sectionally continuous periodic one. The obtained results are in good qualitative agreement with previously published numerical calculations by different authors.
11:35
3aEA11. A longitudinal shear wave and transverse compressional wave in solids. Ali Zorgani (LabTAU, INSERM, Univ. of Lyon, Bron, France), Stefan Catheline (LabTAU, INSERM, Univ. of Lyon, 151 Cours Albert Thomas, Lyon, France, stefan.catheline@inserm.fr), and Nicolas Benech (Instituto de Física, Facultad de Ciencias, Montevideo, Uruguay)
What general definition can one give to elastic P- and S-waves, especially when they are transversely and longitudinally polarized, respectively? This question is the main motivation for the analysis of the Green’s function reported here. By separating the Green’s function into a divergence-free term and a rotation-free term, not only a longitudinal S-wave but also a transverse P-wave is described. These waves are shown to be parts of the solution of the wave equation known as coupling terms. Similarly to surface water waves, they are divergence- and rotation-free. Their particular motion is carefully described and illustrated.
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 6, 9:00 A.M. TO 11:00 A.M.
Session 3aID
Student Council, Education in Acoustics and Acoustical Oceanography: Graduate Studies in Acoustics
(Poster Session)
Zhao Peng, Cochair
Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, 1110 S. 67th Street, Omaha,
NE 68182
Preston S. Wilson, Cochair
Mech. Eng., The University of Texas at Austin, 1 University Station, C2200, Austin, TX 78712
Whitney L. Coyle, Cochair
The Pennsylvania State University, 201 Applied Science Building, University Park, PA 16802
All posters will be on display from 9:00 a.m. to 11:00 a.m. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 9:00 a.m. to 10:00 a.m. and contributors of even-numbered papers will be at their
posters from 10:00 a.m. to 11:00 a.m.
Invited Papers
3aID1. The Graduate Program in Acoustics at The Pennsylvania State University. Victor Sparrow and Daniel A. Russell (Grad.
Prog. Acoust., Penn State, 201 Appl. Sci. Bldg., University Park, PA 16802, vws1@psu.edu)
In 2015, the Graduate Program in Acoustics at Penn State will be celebrating 50 years as the only program in the United States offering the Ph.D. in Acoustics as well as M.S. and M.Eng. degrees in Acoustics. An interdisciplinary program with faculty from a variety of
academic disciplines, the Acoustics Program is administratively aligned with the College of Engineering and closely affiliated with the
Applied Research Laboratory. The research areas include: ocean acoustics, structural acoustics, signal processing, aeroacoustics, thermoacoustics, architectural acoustics, transducers, computational acoustics, nonlinear acoustics, marine bioacoustics, noise and vibration
control, and psychoacoustics. The course offerings include fundamentals of acoustics and vibration, electroacoustic transducers, signal
processing, acoustics in fluid media, sound-structure interaction, digital signal processing, experimental techniques, acoustic measurements and data analysis, ocean acoustics, architectural acoustics, noise control engineering, nonlinear acoustics, ultrasonic NDE, outdoor
sound propagation, computational acoustics, flow induced noise, spatial sound and 3D audio, marine bioacoustics, and the acoustics of
musical instruments. Penn State Acoustics graduates serve widely throughout military and government labs, academic institutions, consulting firms, and industry. This poster will summarize faculty, research areas, facilities, student demographics, successful graduates,
and recent enrollment and employment trends.
3aID2. Graduate studies in acoustics and noise control in the School of Mechanical Engineering at Purdue University. Patricia
Davies, J. Stuart Bolton, and Kai Ming Li (Ray W. Herrick Labs., School of Mech. Eng., Purdue Univ., 177 South Russell St., West Lafayette, IN 47907-2099, daviesp@purdue.edu)
The acoustics community at Purdue University will be described with special emphasis on the graduate program in Mechanical Engineering (ME). Purdue is home to around 30 faculty who study various aspects of acoustics and related disciplines, and so, there are
many classes to choose from as graduate students structure their plans of study to complement their research activities and to broaden
their understanding of the various aspects of acoustics. In Mechanical Engineering, the primary emphasis is on understanding noise generation, noise propagation, and the impact of noise on people, as well as development of noise control strategies, experimental techniques, and noise and noise impact prediction tools. The ME acoustics research is conducted at the Ray W. Herrick Laboratories, which
houses several large acoustics chambers that are designed to facilitate testing of a wide array of mechanical systems, reflecting the Laboratories’ long history of industry-relevant research. Complementing the acoustics research, Purdue has vibrations, dynamics, and electromechanical systems research programs and is home to a collaborative group of engineering and psychology professors who study human
perception and its integration into engineering design. There are also very strong ties between ME acoustics faculty and faculty in Biomedical Engineering and Speech Language and Hearing Sciences.
3aID3. Acoustics program at the University of Rhode Island. Gopu R. Potty, James H. Miller, Brenton Wallin (Dept. of Ocean Eng.,
Univ. of Rhode Island, 115 Middleton Bldg., Narragansett, RI 02882, potty@egr.uri.edu), Charles E. White (Naval Undersea Warfare
Ctr., Newport, RI), and Jennifer Giard (Marine Acoust., Inc., Middletown, RI)
The undergraduate and graduate program in Ocean Engineering at the University of Rhode Island is one of the oldest such programs
in the United States. This program offers Bachelors, Masters (thesis and non-thesis options), and Ph.D. degrees in Ocean Engineering.
The Ocean Engineering program has a strong acoustic component both at the undergraduate and graduate level. At the graduate level,
students can specialize in several areas including geoacoustic inversion, propagation modeling, marine mammal acoustics, ocean
acoustic instrumentation, transducers, etc. Current acoustics related research activities of various groups will be presented. Information
regarding the requirements of entry into the program will be provided. Many graduates from the program hold faculty positions at a
number of universities in the United States and abroad. In addition, graduates from the ocean acoustics program at URI are key staff at
many companies and organizations. The opportunities and challenges facing the program will be summarized.
3aID4. Graduate education and research in architectural acoustics at Rensselaer Polytechnic Institute. Ning Xiang, Jonas
Braasch, and Todd Brooks (Graduate Program in Architectural Acoust., School of Architecture, Rensselaer Polytechnic Inst., Troy, NY
12180, xiangn@rpi.edu)
The rapid pace of change in the fields of architectural, physical, and psycho-acoustics has constantly advanced the Graduate Program in Architectural Acoustics since its inception in 1998 with an ambitious mission of educating future experts and leaders in architectural acoustics. In recent years, we have reshaped its pedagogy using “STEM” (science, technology, engineering, and mathematics)
methods, including intensive, integrative hands-on experimental components that fuse theory and practice in a collaborative environment. Our pedagogy enables graduate students from a broad range of fields to succeed in this rapidly changing field. The graduate program has attracted graduate students from a variety of disciplines including individuals with B.S., B.A., or B.Arch. degrees in
Engineering, Physics, Mathematics, Computer Science, Electronic Media, Sound Recording, Music, Architecture, and related fields.
RPI’s Graduate Program in Architectural Acoustics has since graduated more than 100 graduates with both M.S. and Ph.D. degrees.
Along with faculty members, they have also actively contributed to the program’s research in architectural acoustics, psychoacoustics,
communication acoustics, signal processing in acoustics as well as our scientific exploration at the intersection of cutting edge research
and traditional architecture/music culture. This paper shares the growth and evolution of the graduate program.
3aID5. Graduate training opportunities in the hearing sciences at the University of Louisville. Pavel Zahorik, Jill E. Preminger
(Div. of Communicative Disord., Dept. of Surgery, Univ. of Louisville School of Medicine, Psychol. and Brain Sci., Life Sci. Bldg. 317,
Louisville, KY 40292, pavel.zahorik@louisville.edu), and Christian E. Stilp (Dept. of Psychol. and Brain Sci., Univ. of Louisville,
Louisville, KY)
The University of Louisville currently offers two branches of training opportunities for students interested in pursuing graduate training in the hearing sciences: A Ph.D. degree in experimental psychology with concentration in hearing science, and a clinical doctorate in
audiology (Au.D.). The Ph.D. degree program offers mentored research training in areas such as psychoacoustics, speech perception,
spatial hearing, and multisensory perception, and guarantees students four years of funding (tuition plus stipend). The Au.D. program is
a 4-year program designed to provide students with the academic and clinical background necessary to enter audiologic practice. Both
programs are affiliated with the Heuser Hearing Institute, which, along with the University of Louisville, provides laboratory facilities
and clinical populations for both research and training. An accelerated Au.D./Ph.D. training program that integrates key components of
both programs for training of students interested in clinically based research is under development. Additional information is available
at http://louisville.edu/medicine/degrees/audiology and http://louisville.edu/psychology/graduate/vision-hearing.
3aID6. Graduate studies in acoustics, Speech and Hearing at the University of South Florida, Department of Communication Sciences and Disorders. Catherine L. Rogers (Dept. of Commun. Sci. and Disord., Univ. of South Florida, USF, 4202 E. Fowler Ave.,
PCD1017, Tampa, FL 33620, crogers2@usf.edu)
This poster will provide an overview of programs and opportunities for students who are interested in learning more about graduate
studies in the Department of Communication Sciences and Disorders at the University of South Florida. Ours is a large and active
department, offering students the opportunity to pursue either basic or applied research in a variety of areas. Current strengths of the
research faculty in the technical areas of Speech Communication and Psychological and Physiological Acoustics include the following:
second-language speech perception and production, aging, hearing loss and speech perception, auditory physiology, and voice acoustics
and voice quality. Entrance requirements and opportunities for involvement in student research and professional organizations will also
be described.
3aID7. Graduate programs in Hearing and Speech Sciences at Vanderbilt University. G. Christopher Stecker and Anna C. Diedesch
(Hearing and Speech Sci., Vanderbilt Univ. Medical Ctr., 1215 21st Ave. South, Rm. 8310, Nashville, TN 37232-8242, g.christopher.
stecker@vanderbilt.edu)
The Department of Hearing and Speech Sciences at Vanderbilt University is home to several graduate programs in the areas of Psychological and Physiological Acoustics and Speech Communication. Programs include the Ph.D. in Audiology, Speech-Language Pathology, and Hearing or Speech Science; the Doctor of Audiology (Au.D.); and Master’s programs in Speech-Language Pathology and
Education of the Deaf. The department is closely affiliated with Vanderbilt University’s Graduate Program in Neurobiology. Several
unique aspects of the research and training environment in the department provide exceptional opportunities for students interested in
studying the basic science as well as clinical-translational aspects of auditory function and speech communication in complex environments. These include anechoic and reverberation chambers capable of multichannel presentation, the Dan Maddox Hearing Aid Laboratory, and close connections to active Audiology, Speech-Pathology, Voice, and Otolaryngology clinics. Students interested in the
neuroscience of communication utilize laboratories for auditory and multisensory neurophysiology and neuroanatomy, human electrophysiology and neuroimaging housed within the department and at the neighboring Vanderbilt University Institute for Imaging Science.
Finally, department faculty and students engage in numerous engineering and industrial collaborations, which benefit from our home
within Vanderbilt University and setting in Music City, Nashville, Tennessee.
3aID8. Underwater acoustics graduate study at the Applied Physics Laboratory, University of Washington. Robert I. Odom
(Appl. Phys. Lab, Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, odom@apl.washington.edu)
With faculty representation in the Departments of Electrical Engineering and Mechanical Engineering within the College of Engineering, the School of Oceanography, and the Department of Earth and Space Sciences within the College of the Environment, underwater acoustics at APL-UW touches on topics as diverse as long range controlled source acoustics, very low frequency seismics,
sediment acoustics, marine mammal vocalizations, and noise generated by industrial activities such as pile driving, among other things.
Graduate studies leading to both M.S. and Ph.D. degrees are available. Examples of projects currently being pursued and student opportunities are highlighted in this poster.
3aID9. Graduate acoustics at Brigham Young University. Timothy W. Leishman, Kent L. Gee, Tracianne B. Neilsen, Scott D. Sommerfeldt, Jonathan D. Blotter, and William J. Strong (Brigham Young Univ., N311 ESC, Provo, UT 84602, tbn@byu.edu)
Graduate studies in acoustics at Brigham Young University prepare students for jobs in industry, research, and academia by complementing in-depth coursework with publishable research. In the classroom, a series of five graduate-level core courses provides students
with a solid foundation in acoustics principles and practices. The associated lab work is substantial and provides hands-on experience in diverse areas of acoustics: calibration, directivity, scattering, absorption, Doppler vibrometry, lumped-element mechanical systems, equivalent circuit modeling, arrays, filters, room acoustics measurements, active noise control, and near-field acoustical
holography. In addition to coursework, graduate students complete independent research projects with faculty members. Recent thesis
and dissertation topics have included active noise control, directivity of acoustic sources, room acoustics, radiation and directivity of
musical instruments, energy-based acoustics, aeroacoustics, propagation modeling, nonlinear propagation, and high-amplitude noise
analysis. In addition to their individual projects, graduate students often serve as peer mentors to undergraduate students on related projects and often participate in field experiments to gain additional experience. Students are expected to develop their communication
skills, present their research at multiple professional meetings, and publish it in peer-reviewed acoustics journals. In the past five years,
nearly all graduate students have published at least one refereed paper.
3a WED. AM
3aID10. Acoustics-related research in the Department of Speech and Hearing Sciences at Indiana University. Tessa Bent, Steven
Lulich, Robert Withnell, and William Shofner (Dept. of Speech and Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN
47405, tbent@indiana.edu)
In the Department of Speech and Hearing Sciences at Indiana University, there are many highly active laboratories that conduct
research on a wide range of areas in acoustics. Four of these laboratories are described below. The Biophysics Lab (PI: Robert Withnell)
focuses on the mechanics of hearing. Acoustically based signal processing and data acquisition provide experimental data for model-based analysis of peripheral sound processing. The Comparative Perception Lab (PI: William Shofner) focuses on how the physical features of complex sounds are related to their perceptual attributes, particularly pitch and speech. Understanding behavior and perception
in animals, particularly in chinchillas, is an essential component of the research. The Speech Production Laboratory (PI: Steven Lulich)
conducts research on imaging of the tongue and oral cavity, speech breathing, and acoustic modeling of the whole vocal/respiratory
tract. Laboratory equipment includes 3D/4D ultrasound, digitized palate impressions, whole-body and inductive plethysmography, electroglottography, oral and nasal pressure and flow recordings, and accelerometers. The Speech Perception Lab (PI: Tessa Bent) focuses
on the perceptual consequences of phonetic variability in speech, particularly foreign-accented speech. The main topics under investigation are perceptual adaptation, individual differences in word recognition, and developmental speech perception.
3aID11. Biomedical research at the image-guided ultrasound therapeutics laboratories. Christy K. Holland (Internal Medicine,
Univ. of Cincinnati, 231 Albert Sabin Way, CVC 3935, Cincinnati, OH 45267-0586, Christy.Holland@uc.edu), T. Douglas Mast (Biomedical Eng., Univ. of Cincinnati, Cincinnati, OH), Kevin J. Haworth, Kenneth B. Bader, Himanshu Shekhar, and Kirthi Radhakrishnan
(Internal Medicine, Univ. of Cincinnati, Cincinnati, OH)
The Image-guided Ultrasound Therapeutic Laboratories (IgUTL) are located at the University of Cincinnati in the Heart, Lung, and
Vascular Institute, a key component of efforts to align the UC College of Medicine and UC Health research, education, and clinical programs. These extramurally funded laboratories, directed by Prof. Christy K. Holland, comprise graduate and undergraduate students, postdoctoral fellows, principal investigators, and physician-scientists with backgrounds in physics and biomedical engineering,
and clinical and scientific collaborators in fields including cardiology, neurosurgery, neurology, and emergency medicine. Prof. Holland’s research focuses on biomedical ultrasound including sonothrombolysis, ultrasound-mediated drug and bioactive gas delivery, development of echogenic liposomes, early detection of cardiovascular diseases, and ultrasound-image guided tissue ablation. The
Biomedical Ultrasonics and Cavitation Laboratory within IgUTL, directed by Prof. Kevin J. Haworth, employs ultrasound-triggered
phase-shift emulsions (UPEs) for image-guided treatment of cardiovascular disease, especially thrombotic disease. Imaging algorithms
incorporate both passive and active cavitation detection. The Biomedical Acoustics Laboratory within IgUTL, directed by Prof. T.
Douglas Mast, employs ultrasound for monitoring thermal therapy, ablation of cancer and vascular targets, transdermal drug delivery,
and noninvasive measurement of tissue deformation.
3aID12. Graduate acoustics education in the Cockrell School of Engineering at The University of Texas at Austin. Michael R.
Haberman (Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX), Neal A. Hall (Elec. and Comp. Eng. Dept., The Univ. of Texas
at Austin, Austin, TX), Mark F. Hamilton (Mech. Eng. Dept., The Univ. of Texas at Austin, 1 University Station, C2200, Austin, TX
78712), Marcia J. Isakson (Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX), and Preston S. Wilson (Mech. Eng. Dept., The
Univ. of Texas at Austin, Austin, TX, pswilson@mail.utexas.edu)
While graduate study in acoustics takes place in several colleges and schools at The University of Texas at Austin (UT Austin),
including Communication, Fine Arts, Geosciences, and Natural Sciences, this poster focuses on the acoustics program in Engineering.
The core of this program resides in the Departments of Mechanical Engineering (ME) and Electrical and Computer Engineering (ECE).
Acoustics faculty in each department supervise graduate students in both departments. One undergraduate and seven graduate acoustics
courses are cross-listed in ME and ECE. Instructors for these courses include staff at Applied Research Laboratories at UT Austin, where
many of the graduate students have research assistantships. The undergraduate course, taught every fall, begins with basic physical
acoustics and proceeds to draw examples from different areas of engineering acoustics. Three of the graduate courses are taught every
year: a two-course sequence on physical acoustics, and a transducers course. The remaining four graduate acoustics courses, taught in
alternate years, are on nonlinear acoustics, underwater acoustics, ultrasonics, and architectural acoustics. An acoustics seminar is held
most Fridays during the long semesters, averaging over ten per semester since 1984. The ME and ECE departments both offer Ph.D.
qualifying exams in acoustics.
3aID13. Graduate studies in Ocean Acoustics in the Massachusetts Institute of Technology and Woods Hole Oceanographic Institution Joint Program. Andone C. Lavery (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., 98 Water St., MS 11, Bigelow 211, Woods Hole, MA 02536, alavery@whoi.edu)
An overview of graduate studies in Ocean Acoustics within the framework of the Massachusetts Institute of Technology (MIT) and
Woods Hole Oceanographic Institution (WHOI) Joint Program is presented, including a brief history of the program, facilities, details of
the courses offered, alumni placement, funding opportunities, current program status, faculty members, and research. Emphasis is given
to the key role of the joint strengths provided by MIT and WHOI, the strong sea-going history of the program, and the potential for
highly interdisciplinary research.
3aID14. Graduate studies in acoustics at the University of Notre Dame. Christopher Jasinski and Thomas C. Corke (Aerosp. and
Mech. Eng., Univ. of Notre Dame, 54162 Ironwood Rd., South Bend, IN 46635, chrismjasinski@gmail.com)
The University of Notre Dame Department of Aerospace and Mechanical Engineering conducts cutting-edge research in aeroacoustics, structural vibration, and wind turbine noise. Expanding facilities are housed in two buildings of the Hessert Laboratory for
Aerospace Engineering and include two 25 kW wind turbines, a Mach 0.6 wind tunnel, and an anechoic wind tunnel. Several faculty
members conduct research related to acoustics and multiple graduate level courses are offered in general acoustics and aeroacoustics.
This poster presentation will give an overview of the current research activities, laboratory facilities, and graduate students and faculty
involved at Notre Dame’s Hessert Laboratory for Aerospace Engineering.
3aID15. Graduate study in Architectural Acoustics within the Durham School at the University of Nebraska—Lincoln. Lily M.
Wang, Matthew G. Blevins, Zhao Peng, Hyun Hong, and Joonhee Lee (Durham School of Architectural Eng. and Construction, Univ. of
Nebraska-Lincoln, 1110 South 67th St., Omaha, NE 68182-0816, lwang4@unl.edu)
Persons interested in pursuing graduate study in architectural acoustics are encouraged to consider joining the Architectural Engineering Program within the Durham School of Architectural Engineering and Construction at the University of Nebraska—Lincoln
(UNL). Among the 21 ABET-accredited Architectural Engineering (AE) programs across the United States, the Durham School’s program is one of the few that offers graduate engineering degree programs (MAE, MS, and PhD) and one of only two that offers an area of
concentration in architectural acoustics. Acoustics students in the Durham School benefit both from the multidisciplinary environment
in an AE program and from our particularly strong ties to the building industry, since three of the largest architectural engineering companies in the United States are headquartered in Omaha, Nebraska. Descriptions will be given of the graduate-level acoustics courses,
newly renovated acoustic lab facilities, the research interests and achievements of our acoustics faculty and students, and where our
graduates are to date. Our group is also active in extracurricular activities, particularly through the University of Nebraska Acoustical
Society of America Student Chapter. More information on the “Nebraska Acoustics Group” at the Durham School may be found online
at http://nebraskaacousticsgroup.org/.
3aID16. Pursuing the M.Eng. in acoustics through distance education from Penn State. Daniel A. Russell and Victor W. Sparrow
(Graduate Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg, University Park, PA 16802, drussell@engr.psu.edu)
Since 1987, the Graduate Program in Acoustics at Penn State has been providing remote access to graduate level education leading
to the M.Eng. degree in Acoustics. Course lecture content is currently broadcast as a live-stream via Adobe Connect to distance students
scattered throughout North America and around the world, while archived recordings allow distance students to access lecture material
at their convenience. Distance Education students earn the M.Eng. in Acoustics degree by completing 30 credits of coursework (six
required core courses and four electives) and writing a capstone paper. Courses offered for distance education students include: fundamentals of acoustics and vibration, electroacoustic transducers, signal processing, acoustics in fluid media, sound and structure interaction, digital signal processing, aerodynamic noise, acoustic measurements and data analysis, ocean acoustics, architectural acoustics,
noise control engineering, nonlinear acoustics, outdoor sound propagation, computational acoustics, flow induced noise, spatial sound
and 3D audio, marine bioacoustics, and acoustics of musical instruments. This poster will summarize the distance education experience
leading to the M.Eng. degree in Acoustics from Penn State, showcasing student demographics, capstone paper topics, enrollment statistics and trends, and the success of our graduates.
3aID17. Graduate studies in acoustics at Northwestern University. Ann Bradlow (Linguist, Northwestern Univ., 2016 Sheridan Rd.,
Evanston, IL, abradlow@northwestern.edu)
Northwestern University has a vibrant and highly interdisciplinary community of acousticians. Of the 13 ASA technical areas, three
have strong representation at Northwestern: Speech Communication, Psychological and Physiological Acoustics, and Musical Acoustics.
Sound-related work is conducted across a wide range of departments including Linguistics (in the Weinberg College of Arts and Sciences), Communication Sciences & Disorders, and Radio/Television/Film (both in the School of Communication), Electrical Engineering
& Computer Science (in the McCormick School of Engineering), Music Theory & Cognition (in the Bienen School of Music), and Otolaryngology (in the Feinberg School of Medicine). In addition, The Knowles Hearing Center involves researchers and labs across the
university dedicated to the prevention, diagnosis and treatment of hearing disorders. Specific acoustics research topics across the university range from speech perception and production across the lifespan and across languages, dialect and socio-indexical properties of
speech, sound design, machine perception of music and audio, musical communication, the impact of long-term musical experience on
auditory encoding and representation, and auditory perceptual learning, to the cellular, molecular, and genetic bases of hearing function.
We invite you to visit our poster to learn more about the “sonic boom” at Northwestern University!
WEDNESDAY MORNING, 29 OCTOBER 2014
SANTA FE, 9:00 A.M. TO 11:45 A.M.
Session 3aMU
Musical Acoustics: Topics in Musical Acoustics
Jack Dostal, Chair
Physics, Wake Forest University, P.O. Box 7507, Winston-Salem, NC 27109
Contributed Papers
9:00

3aMU1. Study of free reed attack transients using high speed video. Spencer Henessee (Phys., Coe College, GMU #447, 1220 First Ave. NE, Cedar Rapids, IA 52402, sahenessee@coe.edu), Daniel M. Wolff (Univ. of North Carolina at Greensboro, Greensboro, NC), and James P. Cottingham (Phys., Coe College, Cedar Rapids, IA)

Earlier methods of studying the motion of free reeds have been augmented with the use of high-speed video, resulting in a more detailed picture of reed oscillation, especially the initial transients. Displacement waveforms of selected points on the reed tongue image can be obtained using appropriate tracking software. The waveforms can be analyzed for the presence of higher modes of vibration and other features of interest in reed oscillation, and they can be used in conjunction with displacement or velocity waveforms obtained by other means, along with finite element simulations, to obtain detailed information about reed oscillation. The high-speed video data have a number of advantages: they can provide a two-dimensional image of the motion of any point tracked on the reed tongue, and the freedom to change the points selected for tracking provides flexibility in data acquisition. In addition, the high-speed camera is capable of simultaneous triggering of other motion sensors as well as oscilloscopes and spectrum analyzers. Some examples of the use of high-speed video are presented and some difficulties in the use of this technique are discussed. [Work partially supported by US National Science Foundation REU Grant PHY-1004860.]

9:15

3aMU2. Detailed analysis of free reed initial transients. Daniel M. Wolff (Univ. of North Carolina at Greensboro, 211 McIver St. Apt. D, Greensboro, NC 27403, dmwolff@uncg.edu), Spencer Henessee, and James P. Cottingham (Phys., Coe College, Cedar Rapids, IA)

The motion of the reed tongue in early stages of the attack transient has been studied in some detail for reeds from a reed organ. Oscillation waveforms were obtained using a laser vibrometer system, variable impedance transducer proximity sensors, and high-speed video with tracking software. Typically, the motion of the reed tongue begins with an initial displacement of the equilibrium position, often accompanied by a few cycles of irregular oscillation. This is followed by a short transitional period in which the amplitude of oscillation gradually increases and the frequency stabilizes at the steady-state oscillation frequency. In the next stage, the amplitude of oscillation continues to increase to the steady-state value. The spectra derived from the waveforms in each stage have been analyzed, showing that the second transverse mode and the first torsional mode are both observed in the transient, with the amplitude of the torsional mode apparently especially significant in the earlier stages of oscillation. Comparisons of reed tongues of different design have been made to explore the role of the torsional mode in the initial excitation. Finite element simulations have been used to aid in the verification and interpretation of some of the results. [Work supported by US National Science Foundation REU Grant PHY-1004860.]
9:30

3aMU3. Comparison of traditional and matched grips: Rhythmic sequences played in jazz drumming. E. K. Ellington Scott (Oberlin College, OCMR2639, Oberlin College, Oberlin, OH 44074, escott@oberlin.edu) and James P. Cottingham (Phys., Coe College, Cedar Rapids, IA)

Traditional and matched grips have been compared using a series of measurements involving rhythmic sequences played by experienced jazz drummers using each of the two grips. Rhythmic sequences played on the snare drum were analyzed using high-speed video as well as other measurement techniques including laser vibrometry and spectral analysis of the sound waveforms. The high-speed video images, used with tracking software, allow observation of several aspects of stick-drum head interaction. These include two-dimensional trajectories of the drum stick tip, a detailed picture of the stick-drum head interaction, and velocities of both the stick and the drum head during the contact phase of the stroke. Differences between the two grips in timing during the rhythmic sequences were investigated, and differences in sound spectrum were also analyzed. Some factors that may be player dependent have been explored, such as the effect of tightness of the grip, but an effort has been made to concentrate on factors that are independent of the player. [Work supported by US National Science Foundation REU Grant PHY-1004860.]

9:45

3aMU4. A harmonic analysis of oboe reeds. Julia Gjebic, Karen Gipson (Phys., Grand Valley State Univ., 10255 42nd Ave., Apt. 3212, Allendale, MI 49401, gjebicj@mail.gvsu.edu), and Marlen Vavrikova (Music and Dance, Grand Valley State Univ., Allendale, MI)

Because oboists make their own reeds to satisfy personal and physiological preferences, no two reed-makers construct their reeds in the same manner, just as no two oboe players have the same sound. The basic structure of an oboe reed consists of two curved blades of the grass Arundo donax bound to a conical metal tube (a staple) such that the edges of the blades meet and vibrate against one another when stimulated by a change in the surrounding pressure. While this basic structure is constant across reed-makers, the physical measurements of the various portions of the reed (tip, spine, and heart) resulting from the final stage of reed-making (scraping) can vary significantly between individual oboists. In this study, we investigated how the physical structure of individual reeds relates to the acoustic spectrum. We performed statistical analyses to discern which areas of the finished reed influence the harmonic series most strongly. This information is of great interest to oboists, as it allows them quantitative insight into how their individual scrape affects their overall tone quality and timbre.

10:00

3aMU5. Modeling and numerical simulation of a harpsichord. Rossitza Piperkova, Sebastian Reiter, Martin Rupp, and Gabriel Wittum (Goethe Ctr. for Sci. Computing, Goethe Univ. Frankfurt, Kettenhofweg 139, Frankfurt am Main 60325, Germany, Wittum@gcsc.uni-frankfurt.de)

This research studies the influence that various properties of a soundboard may have on its acoustic response, to gain a better understanding of how different properties affect the sound characteristics. It may also help to improve the quality of simulations. We performed a modal analysis of a real harpsichord soundboard using a laser Doppler vibrometer and also simulated several models of the same soundboard in three space dimensions using the simulation software UG4. The models of the soundboard differed from each other in that several properties and components were changed or omitted. We then compared the simulated vibration patterns with the patterns measured on the real soundboard to gain a better understanding of their influence on the vibrations. In particular, we used models with and without soundboard bars and bridge, and also varied the thickness of the soundboard itself.

10:15–10:30 Break

10:30

3aMU6. Temporal analysis, manipulation, and resynthesis of musical vibrato. Mingfeng Zhang, Gang Ren, Mark Bocko (Dept. Elec. and Comput. Eng., Univ. of Rochester, Rochester, NY 14627, mzhang43@hse.rochester.edu), and James Beauchamp (Dept. Elec. and Comput. Eng., Univ. of Illinois at Urbana–Champaign, Urbana, IL)

Vibrato is an important performance technique for both voice and various musical instruments. In this paper, a signal processing framework for vibrato analysis, manipulation, and resynthesis is presented. In the analysis part, vibrato is treated as a generalized descriptor of musical timbre, and the signal magnitude and instantaneous frequency are implemented as temporal features. Specifically, the magnitude track shows the dynamic variations of audio loudness, and the frequency track shows the frequency deviations varying with time. In the manipulation part, several manipulation methods for the magnitude track and the frequency track are implemented. The tracking results are manipulated in both the time and the frequency domain. These manipulation methods are implemented as an interactive process to allow musicians to manually adjust the processing parameters. In the resynthesis part, the simulated vibrato audio is created using a sinusoidal resynthesis process. The resynthesis part serves three purposes: to imitate human music performance, to migrate sonic features across music performances, and to serve as a creative audio design tool, e.g., to create non-existing vibrato characteristics. The source audio from human music performance and the resynthesized audio are compared using subjective listening tests to validate the proposed framework.
10:45
3aMU7. Shaping musical vibratos using multi-modal pedagogical interactions. Mingfeng Zhang, Fangyu Ke (Dept. Elec. and Comput. Eng., Univ.
of Rochester, Rochester, NY 14627, mzhang43@hse.rochester.edu), James
Beauchamp (Dept. Elec. and Comput. Eng., Univ. of Illinois at Urbana–
Champaign, Urbana, IL), and Mark Bocko (Dept. Elec. and Comput. Eng.,
Univ. of Rochester, Rochester, NY)
Musical vibrato has been termed a "pulsation in pitch, intensity, and timbre" because of its effectiveness in artistic rendering. However, this sonic device remains largely a challenge in music pedagogy across music conservatories. In classroom practice, music teachers use demonstration, body gestures, and metaphors to convey their artistic intentions, and modern computer tools are seldom employed. In our proposed framework, we use vibrato visualization and sonification tools as a multi-modal computer interface for pedagogical purposes. Specifically, we compare master performance audio with student performance audio using signal analysis tools. We then obtain various similarity measures based on these signal analysis results. Based on these similarity measures, we implement multi-modal interactions for music students to shape their learning process. The visualization interface is based on audio features including dynamics, pitch, and timbre. The sonification interface is based on recorded audio and synthesized audio. To enhance the musical relevance of our proposed framework, both visualization and sonification tools are targeted at musical communication, conveying musical concepts in an intuitive manner. The proposed framework is evaluated using subjective ratings from music students and objective assessment of measurable training goals.
11:00

3aMU8. Absolute memory for popular songs is predicted by auditory working memory ability. Stephen C. Van Hedger, Shannon L. Heald, Rachelle Koch, and Howard C. Nusbaum (Psych., The Univ. of Chicago, 5848 S. University Ave., Beecher 406, Chicago, IL 60637, stephen.c.hedger@gmail.com)

While most individuals do not possess absolute pitch (AP)—the ability to name an isolated musical note in the absence of a reference note—they do show some limited memory for the absolute pitch of melodies. For example, most individuals are able to recognize when a well-known song has been subtly pitch shifted. Presumably, individuals are able to select the correct absolute pitch at above-chance levels because well-known songs are frequently heard at a consistent pitch. In the current studies, we ask whether individual differences in absolute pitch judgments for people without AP can be explained by general differences in auditory working memory. Working memory capacity has been shown to predict the perceptual fidelity of long-term category representations in vision; thus, it is possible that auditory working memory capacity explains individual differences in recognizing the tuning of familiar songs. We found that participants were reliably above chance in classifying popular songs as belonging to the correct or incorrect key. Moreover, individual differences in this recognition performance were predicted by auditory working memory capacity, even after controlling for overall music experience and stimulus familiarity. Implications for the interaction between working memory and AP are discussed.

11:15

3aMU9. Constructing alto saxophone multiphonic space. Keith A. Moore (Music, Columbia Univ., 805 W Church St., Savoy, Illinois 10033, kam101@columbia.edu)

Multiphonics are sonorities with two or more independent tones arising from instruments, or portions of instruments, associated with the production of single pitches. Since the 1960s, multiphonics have been probed in two ways. Acousticians have explored the role of nonlinearity in multiphonic sound production (Benade 1976; Backus 1978; Keefe & Laden 1991), and musicians have created instrumental catalogs of multiphonic sounds (Bartolozzi 1967; Rehfeldt 1977; Kientzy 1982; Levine 2002). These lines of inquiry have at times been combined (Veale & Mankopf 1994). However, a meta-level analysis has not yet emerged from this work that answers basic questions such as how many kinds of multiphonics are found on one particular instrument and which physical conditions underlie such variety. The present paper suggests a database-driven approach to the problem, producing a "quantitative resonant frequency curve" that shows every audible appearance of each frequency in a large—if not permutationally exhaustive—set of alto saxophone multiphonics. Compelling data emerge, including sonority prototypes, prototype transposition levels, and register-specific distortions. Notably, true difference tones—audible difference tones unsustainable apart from a sounding multiphonic—are found to be register specific, not sonority specific, suggesting that physical locations (rather than harmonic contexts) underpin these sounds.

11:30

3aMU10. Linear-response reflection coefficient of the recorder air-jet amplifier. John C. Price (Phys., Univ. of Colorado, 390 UCB, Boulder, CO 80309, john.price@colorado.edu), William Johnston (Phys., Colorado State Univ., Fort Collins, CO), and Daniel McKinnon (Chemical Eng., Univ. of Colorado, Boulder, CO)

Steady-state oscillations in a duct flute, such as the recorder, are controlled by (1) closing tone holes and (2) adjusting the blowing pressure or air-jet velocity. The acoustic amplitude in steady state cannot be controlled independent of the jet velocity, because it is determined by the gain saturation properties of the air-jet amplifier. Consequently, the linear-response gain of the air-jet amplifier has only very rarely been studied [Thwaites and Fletcher, J. Acoust. Soc. Am. 74, 400–408 (1983)]. Efforts have focused instead on the more complex gain-saturated behavior, which is controlled by vortex shedding at the labium. We replace the body of a Yamaha YRT-304B tenor recorder with a multi-microphone reflectometer and measure the complex reflection coefficient of the head at small acoustic amplitudes as a function of air-jet velocity and acoustic frequency. We find that the gain (reflection coefficient magnitude) has a maximum value of 2.5 at a Strouhal number of 0.3 (jet transit time divided by acoustic period), independent of jet velocity. Surprisingly, the frequency where the gain peaks for a given blowing pressure is not close to the in-tune pitch of a note that is played at the same blowing pressure.

WEDNESDAY MORNING, 29 OCTOBER 2014

MARRIOTT 3/4, 8:45 A.M. TO 12:00 NOON

Session 3aNS

Noise and ASA Committee on Standards: Wind Turbine Noise

Nancy S. Timmerman, Cochair
Nancy S. Timmerman, P.E., 25 Upton Street, Boston, MA 02118

Robert D. Hellweg, Cochair
Hellweg Acoustics, 13 Pine Tree Rd., Wellesley, MA 02482

Paul D. Schomer, Cochair
Schomer and Associates Inc., 2117 Robert Drive, Champaign, IL 61821

Kenneth Kaliski, Cochair
RSG Inc., 55 Railroad Row, White River Junction, VT 05001

Invited Papers

8:45

3aNS1. Massachusetts Wind Turbine Acoustics Research Project—Goals and preliminary results. Kenneth Kaliski, David Lozupone (RSG Inc., 55 RailRd. Row, White River Junction, VT 05001, ken.kaliski@rsginc.com), Peter McPhee (Massachusetts Clean Energy Ctr., Boston, MA), Robert O'Neal (Epsilon Assoc., Maynard, MA), John Zimmerman (Northeast Wind, Waterbury, VT), Keith Wilson (Keith Wilson, Hanover, NH), and Carol Rowan-West (Massachusetts Dept. of Environ. Protection, Boston, MA)

The Commonwealth of Massachusetts (USA) has 43 operating wind turbine projects of 100 kW or more. At several of these projects, noise complaints have been made to state authorities. The Massachusetts Clean Energy Center, which provides funding for early stage analysis and development of wind power projects, and the Massachusetts Department of Environmental Protection, which regulates
noise, launched the project to increase understanding of (1) wind turbine acoustic impacts, taking into account variables such as wind
turbine size, technology, wind speed, topography and distance, and (2) the generation, propagation, and measurement of sound around
wind turbine projects, to inform policy-makers on how pre- and post-construction wind turbine noise studies should be conducted. This
study involved the collection of detailed sound and meteorological data at five locations. The resulting database and interim reports contain information on infrasound and audible frequencies, including amplitude modulation, tonality, and level. Analyses will include how
the effects of wind shear and other variables may affect these parameters. Preliminary findings reflect the effects of meteorological conditions on wind turbine sound generation and propagation.
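The Strouhal-number relation reported in the duct-flute (recorder) abstract above lends itself to a quick numerical sketch. This is an illustrative calculation only: the window length and jet speed below are hypothetical values, not data from the study; only the peak value St ≈ 0.3 comes from the abstract.

```python
def strouhal(f_hz, jet_transit_time_s):
    """St = jet transit time divided by acoustic period, i.e., f * tau."""
    return f_hz * jet_transit_time_s

def gain_peak_frequency(jet_velocity, window_length, st_peak=0.3):
    """Frequency of maximum air-jet gain, assuming the peak sits at St ~ 0.3.

    jet_velocity and window_length are hypothetical illustrative inputs;
    tau = window_length / jet_velocity is the jet transit time across the mouth.
    """
    tau = window_length / jet_velocity
    return st_peak / tau

# Hypothetical numbers: 4 mm window, 20 m/s jet -> gain peak near 1500 Hz
f_peak = gain_peak_frequency(20.0, 0.004)
```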
9:05
3aNS2. Wind turbine annoyance—A clue from acoustic room modes. William K. Palmer (TRI-LEA-EM, 76 SideRd. 33-34 Saugeen,
RR 5, Paisley, ON N0G2N0, Canada, trileaem@bmts.com)
When one admits to not knowing all the answers and sets out to listen to the stories of people annoyed by wind turbines, the clues can seem confusing. Why would some people report that they could get a better night’s sleep in an outdoor tent rather than their bedroom? Others reported that they could sleep better in the basement recreation room of their home than in bedrooms. That made little sense either. A third mysterious clue came from acoustic measurements at homes near wind turbines. Analysis of the sound signature
revealed low frequency spikes, but at amplitudes well below those expected to cause annoyance. The clues merged while studying the
acoustic room modes in a home, to reveal a remarkable hypothesis as to the cause of annoyance from wind turbines. In rooms where
annoyance was felt, the frequencies flagged by room mode calculations and the low frequency spikes observed from wind turbine measurements coincided. This paper will discuss the research and the results, which revealed a finding that provides a clue to the annoyance,
and potentially even a manner of providing limited relief.
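The room-mode calculation invoked in 3aNS2 can be sketched for a rectangular room with rigid walls, for which the mode frequencies are f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). The room dimensions below are hypothetical, chosen only to show that the lowest modes fall in the low-frequency range the abstract discusses.

```python
import itertools
import math

def room_modes(Lx, Ly, Lz, c=343.0, nmax=4, fmax=100.0):
    """Rectangular rigid-wall room mode frequencies (Hz) below fmax.

    Returns a sorted list of (frequency, (nx, ny, nz)) tuples.
    """
    modes = []
    for nx, ny, nz in itertools.product(range(nmax + 1), repeat=3):
        if nx == ny == nz == 0:
            continue  # skip the trivial (0,0,0) term
        f = (c / 2.0) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
        if f <= fmax:
            modes.append((f, (nx, ny, nz)))
    return sorted(modes)

# Hypothetical 5 m x 4 m x 2.5 m bedroom: lowest axial mode (1,0,0)
# falls at c/(2*Lx) = 343/10 = 34.3 Hz, squarely in the low-frequency
# range where the abstract reports spikes in turbine sound signatures.
```

Comparing such a list against measured low-frequency spectral peaks is the kind of coincidence check the abstract describes.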
9:25
3aNS3. A perspective on wind farm complaints and the Acoustical Society of America’s public policy. Paul D. Schomer (Schomer
and Assoc., Inc., 2117 Robert Dr., Champaign, IL 61821, schomer@SchomerAndAssociates.com) and George Hessler (Hessler Assoc.,
Haymarket, VA)
Worldwide, hundreds of wind farms have been built and commissioned. A sizeable fraction of these have had some complaints about
wind farm noise, perhaps 10 to 50%. A smaller percentage of wind farms have engendered more widespread complaints and claims of
adverse health effects, perhaps 1 to 10%. And in the limit (0 to 1%), there have been very widespread, vociferous complaints and in
some cases people have abandoned their houses. Some advocates for potentially affected communities have opined that many will be
made ill, while living miles from the nearest turbine, and some, who are wind power advocates, have opined that there is no possibility
anyone can possibly be made ill from wind turbine acoustic emissions. In an attempt to ameliorate this frequently polarized situation,
the ASA has established a public policy statement that calls for the development of a balanced research agenda to establish facts, where
“balanced” means the research should resolve issues for all parties with a material interest, and all parties should have a seat at the table
where the research plans are developed. This paper presents some thoughts and suggestions as to how this ASA public policy statement
can be nurtured and brought to fruition.
9:45
3aNS4. Balancing the research approach on wind turbine effects through improving psychological factors that affect community
response. Brigitte Schulte-Fortkamp (Inst. of Fluid Mech. and Eng. Acoust., TU Berlin, Einsteinufer 25, Berlin 10587, Germany, b.schulte-fortkamp@tu-berlin.de)
There is a substantial need to find a balanced approach to deal with people’s concern about wind turbine effects. Indeed, the psychological factors that affect community response will be an important facet in this complete agenda development. Many of these relevant
issues are related to the soundscape concept which was adopted as an approach to provide a more holistic evaluation of “noise” and its
effects on the quality of life. Moreover, the soundscape technique uses a variety of investigation techniques, taxonomy and measurement
methods. This is a necessary protocol to approach a subject or phenomenon, to improve the validity of the research or design outcome
and to reduce the uncertainty of relying on only one approach. This presentation will use recent data to improve understanding of the role of psychoacoustic parameters beyond the equivalent continuous sound level in wind turbine effects, in order to discuss relevant psychological factors based on soundscape techniques.
10:05–10:25 Break
Contributed Papers
10:25
3aNS5. Measurement and synthesis of wind turbine infrasound. Bruce
E. Walker (Channel Islands Acoust., 676 W Highland Dr., Camarillo, CA
93010, noiseybw@aol.com) and Joseph W. Celano (Newson-Brown Acoust.
LLC, Santa Monica, CA)
As part of an ongoing investigation into the putative subjective effects
of sub-20 Hz acoustical emissions from large industrial wind turbines, measurement techniques for faithful capture of emissions waveforms have been
developed and reported. To evaluate perception thresholds, Fourier synthesis and high fidelity low-frequency playback equipment has been used to
duplicate in a residential-like listening environment the amplitudes and
wave slopes of the actual emissions, with pulsation rate in the range of 0.5–
1.0 per second. Further, the amplitudes and slopes of the synthesized waves
can be parametrically varied and the harmonic phases “scrambled” to assess
the relative effects on auditory and other subjective responses. Measurement
and synthesis system details and initial subjective response results will be
shown.
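The Fourier synthesis with scrambled harmonic phases described in 3aNS5 can be sketched as follows. The fundamental, harmonic count, and 1/k amplitude rolloff are assumptions for illustration; the abstract specifies only a pulsation rate of 0.5–1.0 per second. Scrambling the phases leaves the amplitude spectrum unchanged while altering the crest structure of the waveform.

```python
import numpy as np

def synthesize_pulses(f0=0.75, n_harmonics=20, fs=200.0, dur=8.0,
                      amplitudes=None, scramble_phases=False, seed=0):
    """Fourier synthesis of a periodic infrasonic pulse train.

    f0: pulsation rate in Hz (0.5-1.0/s in the abstract); harmonics at k*f0.
    amplitudes: per-harmonic amplitudes; defaults to an assumed 1/k rolloff.
    scramble_phases: randomize harmonic phases while keeping the amplitude
    spectrum fixed, as the abstract describes.
    """
    t = np.arange(0, dur, 1.0 / fs)
    if amplitudes is None:
        amplitudes = 1.0 / np.arange(1, n_harmonics + 1)
    rng = np.random.default_rng(seed)
    x = np.zeros_like(t)
    for k, a in enumerate(amplitudes, start=1):
        phi = rng.uniform(0, 2 * np.pi) if scramble_phases else 0.0
        x += a * np.cos(2 * np.pi * k * f0 * t + phi)
    return t, x
```

Because only the phases change, the two variants have identical magnitude spectra, which is what lets the listening test isolate the effect of wave slope from the effect of spectral content.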
10:40

3aNS6. Propagation of wind turbine noise through the turbulent atmosphere. Yuan Peng, Nina Zhou, Jun Chen, and Kai Ming Li (Mech. Eng., Purdue Univ., 177 South Russel St., West Lafayette, IN 47907-2099, peng45@purdue.edu)

It is well known that turbulence can cause fluctuations in the resulting sound fields. For wind turbine noise, this effect is non-negligible, since either the inflow turbulence from nearby turbine wakes or the atmospheric turbulence generated by rotating turbine blades can increase the sound output of individual turbines. In this study, a combined approach of the Finite Element Method (FEM) and the Parabolic Equation (PE) method is employed to predict the sound levels from a wind turbine. In the prediction procedure, the near-field acoustic data are obtained by means of a computational fluid dynamics program, which serves as a good starting field for the sound propagation. It is then possible to advance wind turbine noise in range by using the FEM/PE marching algorithm. By incorporating simulated turbulence profiles near the wind turbine, more accurate predictions of the sound field in realistic atmospheric conditions are obtained.

10:55–12:00 Panel Discussion
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA C/D, 8:20 A.M. TO 11:30 A.M.
Session 3aPA
Physical Acoustics, Underwater Acoustics, Structural Acoustics and Vibration, and Noise: Acoustics of Pile
Driving: Models, Measurements, and Mitigation
Kevin M. Lee, Cochair
Applied Research Laboratories, The University of Texas at Austin, 10000 Burnet Road, Austin, TX 78758
Mark S. Wochner, Cochair
AdBm Technologies, 1605 McKinley Ave., Austin, TX 78702
Invited Papers
8:20
3aPA1. Understanding effects of man-made sound on fishes and turtles: Gaps and guidelines. Arthur N. Popper (Biology, Univ. of
Maryland, Biology/Psych. Bldg., College Park, MD 20742, apopper@umd.edu) and Anthony D. Hawkins (Loughine Ltd, Aberdeen,
United Kingdom)
Mitigating measures may be needed to protect animals and humans that are exposed to sound from man-made sources. In this context, the levels of man-made sound that will disrupt behavior or physically harm the receiver should drive the degree of mitigation that
is needed. If a particular sound does not affect an animal adversely, then there is no need for mitigation! The problem then is to know
the sound levels that can affect the receiving animal. For most marine animals, there are relatively few data to develop guidelines that
can help formulate the levels at which mitigation is needed. In this talk, we will review recent guidelines for fishes and turtles. Since so
much remains to be determined in order to make guidelines more useful, it is important that priorities be set for future research. The
most critical data, with broadest implications for marine life, should be obtained first. This paper will also consider the most critical gaps
and present recommendations for future research.
8:40
3aPA2. The relationship between underwater sounds generated by pile driving and fish physiological responses. Michele B. Halvorsen (CSA Ocean Sci. Inc., 8502 SW Kanner Hwy, Stuart, FL 34997, mhalvorsen@conshelf.com)

Assessment of fish physiology after exposure to impulsive sound has been limited to quantifying physiological injuries, which range from mortal to recoverable. A complex panel of injuries was reduced to a single metric by a model called the Fish Index of Trauma. Over several years, six species of fishes from different morphological groupings (e.g., physoclistous, physostomous, and lacking a swim bladder) were studied. The onset of physiological tissue effects was determined across a range of cumulative sound exposure levels with varying numbers of pile strikes. Follow-up studies included investigation of healing from incurred injuries. The level of injury that animals expressed was influenced by their morphological grouping. Finally, investigation of the inner ear sensory hair cells showed that damage occurred at higher sound exposure levels than those at which tissue injury begins.
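The cumulative sound exposure level used in 3aPA2 (and by the regulatory agencies mentioned in 3aPA3) combines a per-strike level with the number of strikes; under an equal-energy assumption it is SEL_cum = SEL_ss + 10 log10(N). A minimal sketch, with hypothetical numbers:

```python
import math

def sel_cumulative(sel_single_strike_db, n_strikes):
    """Cumulative sound exposure level for N identical pile strikes.

    Equal-energy assumption: SEL_cum = SEL_ss + 10*log10(N).
    The single-strike level below is a hypothetical illustrative value.
    """
    return sel_single_strike_db + 10.0 * math.log10(n_strikes)

# e.g., a 180 dB single-strike SEL accumulated over 1000 strikes
# gives 180 + 10*log10(1000) = 210 dB cumulative SEL
sel = sel_cumulative(180.0, 1000)
```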
9:00
3aPA3. A model to predict tissue damage in fishes from vibratory and impact pile driving. Mardi C. Hastings (George W. Woodruff School of Mech. Eng., Georgia Inst. of Technol., Atlanta, GA 30332-0405, mardi.hastings@gatech.edu)
Predicting effects of underwater pile driving on marine life requires coupling of pile source models with biological receiver models.
Fishes in particular are very vulnerable to tissue damage and hearing loss from pile driving activities, especially since they are often restricted to specific habitat sites and migratory routes. Cumulative sound exposure level is the metric used by government agencies for
sound exposure criteria to protect marine animals. In recent laboratory studies, physical injury and hearing loss in fish from simulated
impact pile driving signals have even been correlated with this metric. Mechanisms for injury and hearing loss in fishes, however,
depend on relative acoustic particle motion within the body of the animal, which can be disproportionately large in the vicinity of a pile.
Modeling results will be presented showing correlation of auditory tissue damage in three species of fish with relative particle motion
that can be generated 10–20 m from driving a 24-in diameter steel pile with an impact hammer. Comparative results with vibratory piling based on measured waveforms indicate that particle motion mechanisms may provide an explanation why the very large cumulative
sound exposure levels associated with vibratory pile driving do not produce tissue damage.
9:20
3aPA4. Pile driving pressure and particle velocity at the seabed: Quantifying effects on crustaceans and groundfish. James H.
Miller, Gopu R. Potty, and Hui-Kwan Kim (Ocean Eng., Univ. of Rhode Island, URI Bay Campus, 215 South Ferry Rd., Narragansett,
RI 02882, miller@egr.uri.edu)
In the United States, offshore wind farms are being planned and construction could begin in the near future along the East Coast of
the US. Some of the sites being considered are known to be habitat for crustaceans such as the American lobster, Homarus americanus,
which has a range from New Jersey to Labrador along the coast of North America. Groundfish such as summer flounder, Paralichthys
dentatus, and winter flounder, Pseudopleuronectes americanus, also are common along the East Coast of the US. Besides sharing the
seafloor in locations where wind farms are planned, all three of these species are valuable commercially. We model the effects on crustaceans, groundfish, and other animals near the seafloor due to pile driving. Three different waves are investigated including the compressional wave, shear wave and interface wave. A Finite Element (FE) technique is employed in and around the pile while a Parabolic
Equation (PE) code is used to predict propagation at long ranges from the pile. Pressure, particle displacement, and particle velocity are
presented as a function of range at the seafloor for a shallow water environment near Rhode Island. We will discuss the potential effects
on animals near the seafloor.
9:40
3aPA5. Finite difference computational modeling of marine impact pile driving. Alexander O. MacGillivray (JASCO Appl. Sci.,
2305–4464 Markham St., Victoria, BC V8Z7X8, Canada, alex@jasco.com)
Computational models based on the finite difference (FD) method can be successfully used to predict underwater pressure waves
generated by marine impact pile driving. FD-based models typically discretize the equations of motion for a cylindrical shell to model
the vibrations of a submerged pile in the time-domain. However, because the dynamics of a driven pile are complex, realistic models
must also incorporate physics of the driving hammer and surrounding acousto-elastic media into the FD formulation. This paper discusses several of the different physical phenomena involved, and shows some approaches to simulating them using the FD method.
Topics include dynamics of the hammer and its coupling to the pile head, transmission of axial pile vibrations into the soil, energy dissipation at the pile wall due to friction, acousto-elastic coupling to the surrounding media, and near-field versus far-field propagation modeling. Furthermore, this paper considers the physical parameters required for predictive modeling of pile driving noise in conjunction
with some practical considerations about how to determine these parameters for real-world scenarios.
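As a much-simplified illustration of the time-domain finite-difference approach in 3aPA5: the paper discretizes a cylindrical shell coupled to hammer and soil, but the core marching scheme can be sketched on a 1-D axial bar with an assumed initial-velocity "hammer" pulse. Everything here (geometry, boundary conditions, pulse shape) is a hypothetical stand-in, not the author's model.

```python
import numpy as np

def axial_pile_fd(L=20.0, c=5000.0, nx=200, n_steps=400, cfl=0.9):
    """Explicit finite-difference sketch of axial waves in a pile.

    Solves the 1-D wave equation u_tt = c^2 u_xx. The hammer is modeled
    as an initial velocity pulse over the top few nodes; the head is
    stress-free and the toe is held fixed. Radiation into water/soil,
    friction, and hammer-cushion dynamics (all discussed in the
    abstract) are deliberately omitted.
    """
    dx = L / nx
    dt = cfl * dx / c                  # CFL < 1 keeps the scheme stable
    u_prev = np.zeros(nx + 1)          # displacement at t - dt
    v0 = np.zeros(nx + 1)
    v0[:5] = 1.0                       # hypothetical hammer velocity pulse
    u = u_prev + dt * v0               # first step from the initial velocity
    r2 = (c * dt / dx) ** 2
    for _ in range(n_steps):
        u_next = np.empty_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[0] = u_next[1]          # stress-free pile head
        u_next[-1] = 0.0               # fixed toe (soil coupling not modeled)
        u_prev, u = u, u_next
    return u
```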
10:00–10:20 Break
10:20
3aPA6. On the challenges of validating a profound pile driving noise model. Marcel Ruhnau, Tristan Lippert, Kristof Heitmann, Stephan Lippert, and Otto von Estorff (Inst. of Modelling and Computation, Hamburg Univ. of Technol., Denickestraße 17, Hamburg,
Hamburg 21073, Germany, mub@tuhh.de)
When predicting underwater sound levels for offshore pile driving with numerical simulation models, appropriate model validation is of major importance. In fact, the different parallel transmission paths for sound emission into the water column, i.e., pile-to-water, pile-to-soil, and soil-to-water, make validation at each of the involved interfaces necessary. As the offshore environment comes with difficult and often unpredictable conditions, measurement campaigns are very time consuming and cost intensive. Model developers have to keep in mind that even thorough planning cannot overcome practical restrictions and technical limits, so a reasonable balance must be struck in the model. The current work presents the validation approach chosen for a comprehensive pile driving noise
model—consisting of a near field finite element model as well as a far field propagation model—that is used for the prediction of noise
levels at offshore wind farms.
10:40
3aPA7. Underwater noise and transmission loss from vibratory pile driving. Peter H. Dahl and Dara M. Farrell (Appl. Phys. Lab.
and Mech. Eng. Dept., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, dahl@apl.washington.edu)
High levels of underwater sound can be produced in vibratory pile driving that can carry regulatory implications. In this presentation,
observations of underwater noise from vibratory pile driving made with a vertical line array placed at range 17 m from the source (water
depth 7.5 m) are discussed, along with simultaneous measurements made at ranges of order 100 m. It is shown that the dominant spectral
features are related to the frequency of the vibratory pile driving hammer (typically 15–35 Hz), producing spectral lines at intervals of
this frequency. Homomorphic analysis removes these lines to reveal the underlying variance spectrum. The mean square pressure versus
depth is subsequently studied in octave bands in view of the aforementioned spectral line property, with depth variation well modeled by
an incoherent sum of sources distributed over the water column. Adiabatic mode theory is used to model the range dependent local bathymetry, including the effect of elastic seabed, and comparisons are made with simultaneous measurements of the mean-square acoustic
pressure at ranges 200 and 400 m. This approach makes clear headway into the problem of predicting transmission loss versus range for
this method of pile driving.
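The spectral-line structure described in 3aPA7 (lines at intervals of the vibratory hammer frequency) can be illustrated with a synthetic signature; the 25 Hz fundamental, harmonic amplitudes, and peak-picking threshold below are all assumed for illustration, not taken from the measurements.

```python
import numpy as np

def line_frequencies(x, fs):
    """Return frequencies (Hz) of dominant spectral peaks of signal x."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    thresh = 0.1 * spec.max()  # keep bins above 10% of the largest line
    return [freqs[i] for i in range(1, len(spec) - 1)
            if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]
            and spec[i] > thresh]

# Hypothetical vibratory-hammer signature: 25 Hz fundamental (within the
# 15-35 Hz range quoted in the abstract) plus three weaker harmonics.
fs, f0 = 1000.0, 25.0
t = np.arange(0, 2.0, 1.0 / fs)        # 2 s record -> 0.5 Hz resolution
x = sum(np.cos(2 * np.pi * k * f0 * t) / k for k in range(1, 5))
# line_frequencies(x, fs) -> lines at 25, 50, 75, 100 Hz
```

Removing such lines to expose the underlying continuous spectrum is the role the abstract assigns to homomorphic analysis.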
Contributed Papers

11:00

3aPA8. Using arrays of air-filled resonators to attenuate low frequency underwater sound. Kevin M. Lee, Andrew R. McNeese (Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78758, klee@arlut.utexas.edu), Preston S. Wilson (Mech. Eng. Dept. and Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX), and Mark S. Wochner (AdBm Technologies, Austin, TX)

This paper investigates the acoustic behavior of underwater air-filled resonators that could potentially be used in an underwater noise abatement system. The resonators are similar to Helmholtz resonators without a neck, consisting of underwater inverted air-filled cavities with combinations of rigid and elastic wall members, and they are intended to be fastened to a framework to form a stationary array surrounding a noise source, such as a marine pile driving operation, a natural resource production platform, or an air gun array, or to protect a receiving area from outside noise. Previous work has demonstrated the potential of surrounding low frequency sound sources with arrays of large stationary encapsulated bubbles that can be designed to attenuate sound levels over any desired frequency band and with levels of reduction up to 50 dB [Lee and Wilson, Proceedings of Meetings on Acoustics 19, 075048 (2013)]. Open water measurements of underwater sound attenuation using resonators were obtained during a set of lake experiments, where a low-frequency electromechanical sound source was surrounded by different arrays of resonators. The results indicate that air-filled resonators are a potential alternative to using encapsulated bubbles for low frequency underwater noise mitigation. [Work supported by AdBm Technologies.]

11:15

3aPA9. Axial impact driven buckling dynamics of slender beams. Josh R. Gladden (Phys. & NCPA, Univ. of MS, 108 Lewis Hall, University, MS 38677, jgladden@olemiss.edu), Nestor Handzy, Andrew Belmonte (Dept. of Mathematics, The Penn State Univ., University Park, PA), and E. Villermaux (Institut de Recherche sur les Phenomenes Hors Equilibre, Universite de Provence, Marseille, France)

We present experiments on the dynamic buckling of slender rods axially impacted by a projectile. By combining the results of Saint-Venant and elastic beam theory, we derive a preferred wavelength for the buckling instability, and experimentally verify the resulting scaling law for a range of materials using high speed video analysis. The scaling law for the preferred excitation mode depends on the ratio of the longitudinal speed of sound in the beam to the impact speed of the projectile. We will briefly present the imprint of this deterministic mechanism on the fragmentation statistics for brittle beams.

WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 1/2, 8:00 A.M. TO 9:20 A.M.

Session 3aSAa

Structural Acoustics and Vibration, Architectural Acoustics, and Noise: Vibration Reduction in Air-Handling Systems

Benjamin M. Shafer, Chair
Technical Services, PABCO Gypsum, 3905 N 10th St, Tacoma, WA 98406

Chair’s Introduction—8:00

Invited Papers

8:05

3aSAa1. Vibration reduction in air handling systems. Angelo J. Campanella (Acculab, Campanella Assoc., 3201 Ridgewood Dr., Hilliard, OH 43026, a.campanella@att.net)

Air handling units (AHU) mounted on elevated floors in old and new buildings can create floor vibrations that transmit through the building structure to perturb nearby occupants and sensitive equipment such as electron microscopes. Vibration sources include rotating fan imbalance and air turbulence. Isolation springs and the deflecting floor then create a two degree of freedom system. The analysis discussed here was originally published in “Sound and Vibration,” October 1987, pp. 26–30. Analysis parameters will be discussed along with inertia block effects and spring design strategy for floors of finite mass.
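The two degree of freedom system described in 3aSAa1 (unit on isolation springs, springs on a floor of finite mass) has two undamped natural frequencies obtainable from a standard eigenvalue problem. A minimal sketch with hypothetical masses and stiffnesses, not values from the paper:

```python
import numpy as np

def two_dof_natural_freqs(m_unit, k_isolator, m_floor, k_floor):
    """Undamped natural frequencies (Hz) of an AHU on isolation springs
    sitting on a flexible floor of finite mass.

    m_unit, k_isolator: equipment (plus inertia block) mass and spring rate.
    m_floor, k_floor: effective floor mass and stiffness.
    Solves the generalized eigenvalue problem K x = w^2 M x.
    """
    M = np.diag([m_unit, m_floor])
    K = np.array([[k_isolator, -k_isolator],
                  [-k_isolator, k_isolator + k_floor]])
    evals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(evals.real)) / (2 * np.pi)

# Hypothetical: 1000 kg AHU on 4e5 N/m springs over a 10000 kg floor
freqs = two_dof_natural_freqs(1000.0, 4e5, 10000.0, 4e7)
```

In the rigid-floor limit the lower frequency reduces to the familiar single-degree value sqrt(k/m)/2π, which is a useful sanity check on the model.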
8:25
3aSAa2. Determining fan generated dynamic forces for use in predicting and controlling vibration and structure-borne noise
from air handling equipment. James E. Phillips (Wilson, Ihrig & Assoc., Inc., 6001 Shellmound St., Ste. 400, Emeryville, CA 94608,
jphillips@wiai.com)
Vibration measurements were conducted to determine the dynamic forces imparted by an operating fan to the floor of an existing rooftop mechanical room. The calculated forces were then used as inputs to a Finite Element Analysis (FEA) computer model to predict the vibration and structure-borne noise in a future building with a similar fan. This paper summarizes the vibration measurements,
analysis of the measured data, the subsequent FEA analysis of the future building and the recommendations developed to control fan
generated noise and vibration in the future building.
8:45
3aSAa3. Vibration isolation of mechanical equipment: Case studies from light weight offices to casinos. Steve Pettyjohn (The Acoust. & Vib. Group, Inc., 5765 9th Ave., Sacramento, CA, spettyjohn@acousticsandvibration.com)

Whether to vibration isolate HVAC equipment or not is often left to the discretion of the mechanical engineer or the equipment supplier. Leaving the isolators out saves money in materials and installation; the value of putting them in is not so clear. The cost of not installing the isolators is seldom understood, nor is the cost of installing them later and the loss of trust by the client. Vibration is generated by all rotating and reciprocating equipment. The resulting unbalanced forces are seldom known with certainty, nor are they quantified. This paper explores the isolation of HVAC equipment on roofs and in penthouses without consideration for the stiffness of the structures or resonances of other building elements. The influence of horizontal forces, and the installation of the equipment to account for these forces, is seldom considered. The application of restraining forces must consider where the force is applied and what the moment arm is. A quick review of the basic formulas will be given for one-degree and multi-degree systems. Examples of problems that arose when vibration isolation was not considered will be presented for a variety of conditions. The corrective actions taken will also be given.
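For the one-degree formulas mentioned in 3aSAa3, the classical force transmissibility of a damped single-degree-of-freedom isolator is a compact example. The damping ratio and frequencies below are hypothetical illustrative values:

```python
import math

def transmissibility(f_drive, f_natural, damping_ratio=0.05):
    """Force transmissibility of a one-degree-of-freedom isolator.

    T = sqrt((1 + (2*z*r)^2) / ((1 - r^2)^2 + (2*z*r)^2)),  r = f_drive/f_natural.
    Isolation (T < 1) only begins above r = sqrt(2); well above that,
    T falls roughly as 1/r^2 for light damping.
    """
    r = f_drive / f_natural
    num = 1.0 + (2.0 * damping_ratio * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * damping_ratio * r) ** 2
    return math.sqrt(num / den)

# e.g., a fan running at 10x the isolator natural frequency transmits
# only a small fraction of its unbalanced force to the structure
t_high = transmissibility(10.0, 1.0)
```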
Contributed Paper
9:05
3aSAa4. Transition of steady air flow into an anharmonic acoustic
pulsed flow in a prototype reactor column: Experimental results and
mathematical modeling. Hasson M. Tavossi (Phys., Astronomy, & Geosci., Valdosta State Univ., 2402 Spring Valley Cir, Valdosta, GA 31602, htavossi@valdosta.edu)
A prototype experimental setup is designed to convert steady air flow
into an oscillatory anharmonic acoustic pulsed flow, under special experimental conditions. The steady flow in a cylindrical reactor column of 3 m
height and 15 cm in diameter with a porous layer, transforms itself abruptly
into an oscillatory acoustic pulsed flow. Experimental results show the existence of a threshold for flow-rate, beyond which this transformation into
anharmonic oscillatory flow takes place. This change in flow regime is analogous to the phenomenon of bifurcation in a chaotic system, with abrupt
change from one energy state into another. Experimental results show that the amplitude of the acoustic oscillations depends on system size. A preliminary mathematical model will be presented that includes relaxation oscillations, non-equilibrium thermodynamics, and the Joule-Thomson effect. The frequencies at peak amplitude for the acoustic vibrations in the reactor column are expressed in terms of flow-rate, pressure-drop, viscosity, and dimensionless characteristic numbers of the air flow in the system.
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 1/2, 10:00 A.M. TO 12:00 NOON
Session 3aSAb
Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration
Benjamin Shafer, Chair
Technical Services, PABCO Gypsum, 3905 N 10th St., Tacoma, WA 98406
Contributed Papers
10:00
3aSAb1. Design of an experiment to measure unsteady shear stress and
wall pressure transmitted through an elastomer in a turbulent boundary layer. Cory J. Smith (Appl. Res. Lab., The Penn State Univ., 1109
Houserville Rd., State College, PA 16801, coryjonsmith@gmail.com), Dean
E. Capone, and Timothy A. Brungart (Graduate Program in Acoust., Appl.
Res. Lab., The Penn State Univ., State College, PA)
A flat plate that is exposed to a turbulent boundary layer (TBL) experiences unsteady velocity fluctuations which result in fluctuating wall
pressures and shear stresses on the surface of the plate. There is an interest
in understanding how fluctuating shear stresses and normal pressures generated on the surface of an elastomer layer exposed to a TBL in water are
transmitted through the layer onto a rigid backing plate. Analytic models
exist which predict these shear stress and normal pressure spectra on the surface of the elastomer as well as those transmitted through the elastomer.
The design of a novel experiment is proposed which will utilize Surface
Stress Sensitive Films (S3F) to measure the fluctuating shear stress and
hydrophones to measure fluctuating normal pressure at the elastomer-plate
interface. These experimental measurements would then be compared to
10:15
3aSAb2. Exploration into the sources of error in the two-microphone
transfer function impedance tube method. Hubert S. Hall (Naval Surface
Warfare Ctr. Carderock Div., 9500 MacArthur Blvd., West Bethesda, MD
20817, hubert.hall@navy.mil), Joseph Vignola, John Judge (Dept. of Mech.
Eng., The Catholic Univ. of America, Washington, DC), and Diego Turo
(Dept. of Biomedical Eng., George Mason Univ., Fairfax, VA)
solutions for mitigating the noise and vibration in adjoining spaces due to
floor impact problems. Also discussed in this paper are the qualitative
results of some preliminary tests performed in order to better understand the
mechanics of impacts on floating floor assemblies.
11:00
3aSAb5. Stethoscope-based detection of detorqued bolts using impactinduced acoustic emissions. Joe Guarino (Mech. and Biomedical Eng.,
Boise State Univ., Boise, ID) and Robert Hamilton (civil Eng., Boise State
Univ., 1910 University Dr., Boise, ID 83725, rhamilton@boisestate.edu)
The two-microphone transfer function method has become the most
widely used method of impedance tube testing. Due to its measurement
speed and ease of implementation, it has surpassed the standing-wave ratio
method in popularity despite inherent frequency limitations due to tube geometry. Currently, the two-microphone technique is described in test standards ASTM E1050 and ISO 10534-2 to ensure accurate measurement.
However, while detailed for correct test execution, the standards contain
vague recommendations for a variety of measurement parameters. For
instance, it is only stated in ASTM E1050 that “tube construction shall be
massive so sound transmission through the tube wall is negligible.” To
quantify this requirement, damping of the tube was varied to determine how
different loss factor values effect measured absorption coefficient values.
Additional sources of error explored are the amount of required absorbing
material within the tube for reflective material measurements, additional calibration methods needed for test of excessive reflective materials, and alternate methods of combating microphone phase error and tube attenuation.
Non-invasive impact analysis can be used to detect loosened bolts in a
steel structure composed of construction-grade I beams. An electronically
enhanced stethoscope was used to acquire signals from a moderate to light
impact of a hammer on a horizontal steel I beam. Signals were recorded by
placing the diaphragm of the stethoscope on the flange of either the horizontal beam or the vertical column proximal to a bolted connection connecting
the two members. Data were taken using a simple open-loop method; the
input signal was not recorded, nor was it used to reference the output signal.
The bolted connection had eight bolts arranged in a standard configuration.
Using the “turn of the nut” standard outlined by the Research Council on
Structural Connections (RCSC, TDS-012 2-18-08), the bolted joint was
tested in three conditions: turn of the nut tight, finger tight, and loose. We
acquired time-based data from each of 52 patterns of the eight bolts in three
conditions of tightness. Results of both time and frequency-based analyses
show that open-loop responses associated with detorqued bolts vary in both
amplitude decay and frequency content. We conclude that a basic mechanism can be developed to assess the structural health of bolted joints.
Results from this project will provide a framework for further research,
including the analysis of welded joints using the same approach.
10:30
11:15
3aSAb3. Analysis of the forced response and radiation of a singledimpled beam with different boundary conditions. Kyle R. Myers and
Koorosh Naghshineh (Mech. & Aerosp. Eng., Western Michigan Univ.,
College of Eng. & Appl. Sci., 4601 Campus Dr., Kalamazoo, MI 49008,
kyle.r.myers@wmich.edu)
3aSAb6. Creep behavior of composite interlayer and its influence on
impact sound of floating floor. Tongjun Cho, Byung Kwan Oh, Yousok
Kim, and Hyo Seon Park (Architectural Eng., Yonsei Univ., Yonseino 50
Seodaemun-gu, Seoul 120749, South Korea, tjcho@yonsei.ac.kr)
Beading and dimpling via the stamping process has been used for decades to stiffen structures (e.g., beams, plates, and shells) against static loads
and buckling. Recently, this structural modification technique has been used
as a means to shift a structure’s natural frequencies and to reduce its radiated sound power. Most studies to date have modeled dimpled beams and
dimpled/beaded plates using the finite element method. In this research, an
analytical model is developed for a beam with any number of dimples using
Hamilton’s Principle. First, the natural frequencies and mode shapes are predicted for a dimpled beam in free transverse vibration. A comparison with
those obtained using the finite element method shows excellent agreement.
Second, the forced response of a dimpled beam is calculated for a given
input force. Mode shapes properly scaled from the forced response are used
in order to calculate the beam strain energy, thus demonstrating the effect of
dimpling on beam natural frequencies. Finally, some preliminary results are
presented on the changes in the radiation properties of dimpled beams.
10:45
3aSAb4. The impact of CrossFit training—Weight drops on floating
floors. Richard S. Sherren (Kinetics Noise Control, 6300 Irelan Pl., Dublin,
OH 43062, rsherren@kineticsnoise.com)
CrossFit training is a popular fitness training method. Some facilities
install lightweight plywood floating floor systems as a quick, inexpensive
method to mitigate impact generated noise and vibration into adjoining
spaces. Part of the CrossFit training regimen involves lifting significant
weight overhead, and then dropping the weight on the floor. The energy
transferred to the floor system can cause severe damage to floor surfaces
and structures; and, when using a lightweight floating floor system, even the
isolators can be damaged. This paper describes a spreadsheet-based analytical model being used to study the effects of such impacts on various floor systems. This study is a prelude to experiments that will be performed on a full-scale model test floor. The results of those experiments will be used to
verify the model so that it can be used as a design tool for recommending
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Creep-induced changes in the dynamic stiffness of the resilient interlayer used in a floating floor are an important parameter of the vibration isolator in long-term use. The compressive creep behavior of a composite layer made from closed-cell foam and fibrous material is investigated using a Findley-equation-based method recommended by the International Organization for Standardization (ISO). Quasi-static mechanical analysis is used to evaluate the dynamic stiffness as influenced by creep deformation of the composite layer. It is shown in the present work that the long-term creep strain of the interlayer under the nominal load of the floor and furniture lies within the zone where dynamic stiffness increases. The changes in low-frequency impact sound caused by the long-term creep deformation are estimated through real-scale laboratory experiments and numerical vibro-acoustic analysis.
11:30
3aSAb7. Investigation of damping in the polymer concrete sleeper for
use in reduction of rolling noise from railway. SangKeun Ahn, Eunbeom
Jeon, Junhong Park, Hak-sung Kim (Mech. Eng., Hanyang Univ., 222,
Wangsimni-ro, Seongdong-gu, Appendix of Eng. Ctr., 211, Seoul 133-791,
South Korea, ask9156@hanyang.ac.kr), and Hyo-in Kho (Korea RailRd.
Res. Inst., Uiwang, South Korea)
The purpose of this study was to measure the damping of various polymer concretes to be used as railway sleepers. The polymer concretes consisted of epoxy monomer, hardener, and aggregates. Various polymer concrete specimens were made by changing the epoxy resin weight ratio and the curing temperature. The dynamic properties of the specimens were measured using the beam transfer function method. To predict the noise-reduction performance of the polymer concrete sleepers, an infinite Timoshenko beam model was investigated after applying the measured concrete properties. The moving loads from wheels rolling on rails of different roughness were used in the railway vibration analysis. The vibration response was predicted, from which the effects of the supporting stiffness and loss factor of the sleeper were investigated. The radiated sound power was predicted using the
calculated rail vibration response. Consequently, the sound power levels were compared for rails supported by the different polymer concrete sleepers. The results of this study assist in constructing low-noise railways.
168th Meeting: Acoustical Society of America
models of unsteady shear and unsteady pressure spectra within a TBL for purposes of model validation. This work will present the design of an experiment to measure the unsteady pressure and unsteady shear at the elastomer-plate interface and the methodology for comparing the measured results to the analytic model predictions.
11:45
3aSAb8. Study on impulsive noise radiation from a gasoline direct injector. Yunsang Kwak and Junhong Park (Mech. Eng., Hanyang Univ., 515 FTC Hanyang Univ. 222, Wangsimni-ro Seongdong-gu, Seoul ASI|KR|KS013|Seoul, South Korea, toy0511@hanmail.net)
A gasoline direct injection (GDI) engine uses its own injectors to supply high-pressure fuel to the combustion chamber. High-frequency impact sound during the injection process is one of the main contributors to engine combustion noise. This impact noise is generated during opening and closing of an injector rod operated by a solenoid. To design an injector with reduced noise generation, it is necessary to analyze its sound radiation mechanism and propose a consequent evaluation method. Spectral and modal characteristics of the injectors were measured through vibration induced by external hammer excitation. The injector modal characteristics were analyzed using a simple beam model after representing its boundaries by complex transverse and rotational springs. To evaluate impulsive sounds more effectively, Prony analysis of the sounds was used to verify the influence of the injector modal characteristics.
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 12:00 NOON
Session 3aSC
Speech Communication: Vowels = Space + Time, and Beyond: A Session in Honor of Diane Kewley-Port
Catherine L. Rogers, Cochair
Dept. of Communication Sciences and Disorders, University of South Florida, USF, 4202 E. Fowler Ave., PCD1017, Tampa,
FL 33620
Amy T. Neel, Cochair
Dept. of Speech and Hearing Sci., Univ. of New Mexico, MSC01 1195, University of New Mexico, Albuquerque, NM 87131
Chair’s Introduction—8:00
Invited Papers
8:05
3aSC1. Vowels and intelligibility in dysarthric speech. Amy T. Neel (Speech and Hearing Sci., Univ. of New Mexico, MSC01 1195,
University of New Mexico, Albuquerque, NM 87131, atneel@unm.edu)
Diane Kewley-Port’s work in vowel perception under challenging listening conditions and in the relation between vowel perception
and production in second language learners has important implications for disordered speech. Vowel space area has been widely used as
an index of articulatory working space in speakers with hypokinetic dysarthria related to Parkinson disease (PD), with the assumption
that a larger vowel space is associated with higher speech intelligibility. Although many studies have reported acoustic measures of vowels in Parkinson disease, vowel identification and transcription tasks designed to relate changes in production with changes in perception
are rarely performed. This study explores the effect of changes in vowel production by six talkers with PD speaking at habitual and loud
levels of effort on listener perception. The relation among vowel acoustic measures (including vowel space area and measures of temporal and spectral distinctiveness), vowel identification scores, speech intelligibility ratings, and sentence transcription accuracy for speakers with dysarthria will be discussed.
8:25
3aSC2. Vowels in clear and conversational speech: Within-talker variability in acoustic characteristics. Sarah H. Ferguson and
Lydia R. Rogers (Commun. Sci. and Disord., Univ. of Utah, 390 South 1530 East, Rm. 1201, Salt Lake City, UT 84112, sarah.ferguson@hsc.utah.edu)
The Ferguson Clear Speech Database was developed for the first author’s doctoral dissertation, which was directed by Diane Kewley-Port at Indiana University. While most studies using the Ferguson Database have examined variability among the 41 talkers, the
present investigation considered within-talker differences. Specifically, this study examined the amount of variability each talker showed
among the 7 tokens of each of 10 vowels produced in clear versus conversational speech. Steady-state formant frequencies have been
measured for 5740 vowels in /bVd/ context using PRAAT, and a variety of measures of spread will be used to determine variability for
each vowel in each speaking style for each talker. Results will be compared to those of the only known previous study that included a
sufficiently large number of tokens for this type of analysis, an unpublished thesis from 1980. Based on this study, we predict that
within-token variability will be smaller in clear speech than in conversational speech.
8:45
3aSC3. Understanding speech from partial information: The contributions of consonants and vowels. Daniel Fogerty (Commun.
Sci. and Disord., Univ. of South Carolina, 1621 Greene St., Columbia, SC 29208, fogerty@sc.edu)
In natural listening environments, speech is commonly interrupted by background noise. These environments require the listener to
extract meaningful speech cues from the partially preserved acoustic signal. A number of studies have now investigated the relative contribution of preserved consonant and vowel segments to speech intelligibility using an interrupted speech paradigm that selectively preserves these segments. Results have demonstrated that preservation of vowel segments results in greater intelligibility for sentences
compared to consonant segments, especially after controlling for preserved duration. This important contribution from vowels is specific
to sentence contexts and appears to result from suprasegmental acoustic cues. Converging evidence from acoustic and behavioral investigations suggests that these cues are primarily conveyed through temporal amplitude modulation of vocalic energy. Additional empirical evidence suggests that these temporal cues of vowels, conveying the rhythm and stress of speech, are important for interpreting
global linguistic cues about the sentence, such as those involved in syntactic processing. In contrast, consonant contributions appear to be specific to lexical access regardless of the linguistic context. Work testing older adults with normal and impaired hearing demonstrates their
preserved sensitivity to contextual cues conveyed by vowels, but not consonants. [Work supported by NIH.]
9:05
3aSC4. Vowel intelligibility and the second-language learner. Catherine L. Rogers (Dept. of Commun. Sci. and Disord., Univ. of
South Florida, USF, 4202 E. Fowler Ave., PCD1017, Tampa, FL 33620, crogers2@usf.edu)
Diane Kewley-Port’s work has contributed to our understanding of vowel perception and production in a wide variety of ways, from
mapping the discriminability of vowel formants in conditions of minimal uncertainty to vowel processing in challenging conditions,
such as increased presentation rate and noise. From the results of these studies, we have learned much about the limits of vowel perception for normal-hearing listeners and the robustness of vowels in speech perception. Continuously intertwined with this basic research
has been its application to our understanding of vowel perception and vowel acoustics across various challenges, such as hearing impairment and second-language learning. Diane’s work on vowel perception and production by second-language learners and ongoing
research stemming from her influence will be considered in light of several factors affecting communicative success and challenge for
second-language learners. In particular, we will compare the influence of speaking style, noise, and syllable disruption on the intelligibility of vowels perceived and produced by native and non-native English-speaking listeners.
9:25
3aSC5. Vowel formant discrimination: Effects of listeners’ hearing status and language background. Chang Liu (Commun. Sci.
and Disord., The Univ. of Texas at Austin, 1 University Station A1100, Austin, TX 78712, changliu@utexas.edu)
The goal of this study was to examine effects of listeners’ hearing status (e.g., normal and impaired hearing) and language background (e.g., native and non-native) on vowel formant discrimination. Thresholds of formant discrimination were measured for F1 and
F2 of English vowels at 70 dB SPL for normal- (NH) and impaired-hearing (HI) listeners using a three-interval, two-alternative forced-choice procedure with a two-down, one-up tracking algorithm. Formant thresholds of HI listeners were comparable to those of NH listeners for F1, but significantly higher than those of NH listeners for F2. Results of a further experiment indicated that an amplification of the F2
peak could markedly improve formant discrimination for HI listeners, but a simple amplification of the sound level did not provide any
benefit to them. On the other hand, another experiment showed that the vowel density of listeners' native language appeared to affect vowel formant discrimination, i.e., the more crowded the vowel space of the listeners' native language, the better their vowel formant discrimination. For
example, English-native listeners showed significantly lower thresholds of formant discrimination for both English and Chinese vowels
than Chinese-native listeners. However, the two groups of listeners had similar psychophysical capacity to discriminate formant frequency changes in non-speech sounds.
9:45
3aSC6. Consonant recognition in noise for bilingual children with simulated hearing loss. Kanae Nishi, Andrea C. Trevino (Boys
Town National Res. Hospital, 555 N. 30th St., Omaha, NE 68131, kanae.nishi@boystown.org), Lydia Rosado Rogers (Commun. Sci.
and Disord., Univ. of Utah, Omaha, Nebraska), Paula B. Garcia, and Stephen T. Neely (Boys Town National Res. Hospital, Omaha, NE)
The negative impacts of noisy listening environments and hearing loss on speech communication are known to be greater for children
and non-native speakers than adult native speakers. Naturally, the synergistic influence of listening environment and hearing loss is
expected to be greater for bilingual children than their monolingual or normal-hearing peers, but limited studies have explored this issue.
The present study compared the consonant recognition performance of highly fluent school-age Spanish-English bilingual children to that
of monolingual English-speaking peers. Stimulus materials were 13 English consonants embedded in three symmetrical vowel-consonantvowel (VCV) syllables. To control for variability in hearing loss profiles, mild-to-moderate sloping sensorineural hearing loss modeled after
Pittman & Stelmachowicz [Ear Hear 24, 198–205 (2003)] was simulated following the method used by Desloge et al. [Trends Amplification 16(1), 19–39 (2012)]. Listeners heard VCVs in quiet and in the background of speech-shaped noise with and without simulated hearing
loss. Overall performance and the recognition of individual consonants will be discussed in terms of the influence of language background
(bilingual vs. monolingual), listening condition, simulated hearing loss, and vowel context. [Work supported by NIH.]
10:05–10:20 Break
10:20
3aSC7. Distributions of confusions for the 109 syllable constituents that make up the majority of spoken English. James D. Miller,
Charles S. Watson, and Roy Sillings (Res., Commun. Disord. Technol., Inc., 3100 John Hinkle Pl, Ste. 107, Bloomington, IN 47408,
jamdmill@indiana.edu)
Among the interests of Kewley-Port have been the perception and production of English speech sounds by native speakers of other
languages. ESL students from four language backgrounds (Arabic, Chinese, Korean, and Spanish) were enrolled in a speech perception
training program. Similarities and differences between these L1 groups in their primary confusions were determined for onsets, nuclei
and codas utilized in spoken English. An analysis in terms of syllable constituents is more meaningful than analyses in terms of phonemes as individual phonemes have differing articulatory and acoustic structures depending on their roles in the syllable and their phonetic environments. An important observation is that only a few of all the possible confusions that might occur do occur. Another
interesting characteristic of confusions among syllable constituents is that many more confusions are observed than those popularly
cited, e.g., the /r/ vs. /l/ confusion for Japanese speakers. As noted by many, the perceptual problems encountered by learners of English are conditioned on the relations between the sound structures of English and each talker's L1. These data suggest that the intrinsic similarities within the sounds of English also play an important role.
10:40
3aSC8. Identification and response latencies for Mandarin-accented isolated words in quiet and in noise. Jonathan Dalby (Commun. Sci. and Disord., Indiana-Purdue, Fort Wayne, 2101 East Coliseum Blvd., Fort Wayne, IN 46805, dalbyj@ipfw.edu), Teresa Barcenas (Speech and Hearing Sci., Portland State Univ., Portland, OR), and Tanya August (Speech-Lang. Pathol., G-K-B Community
School District, Garrett, IN)
This study compared the intelligibility of native and foreign-accented American English speech presented in quiet and mixed with
two different levels of background noise. Two native American English speakers and two native Mandarin Chinese speakers for whom
English is a second language read three 50-word lists of phonetically balanced words (Stuart, 2004). The words were mixed with noise
at three different signal-to-noise levels—no noise (quiet), SNR + 10 dB (signal 10 dB louder than noise) and SNR 0 (signal and noise at
equal loudness). These stimuli were presented to ten native American English listeners who were simply asked to repeat the words they
heard the speakers say. Listener response latencies were measured. The results showed that for both native and accented speech,
response latencies increased as the noise level increased. For words identified correctly, response times to accented speech were longer
than for native speech but the noise conditions affected both types equally. For words judged incorrectly, however, the noise conditions
increased latencies for accented speech more than for native speech. Overall, these results support the notion that processing accented
speech requires more cognitive effort than processing native speech.
11:00
3aSC9. The contribution of vowels to auditory-visual speech recognition and the contributions of Diane Kewley-Port to the field
of speech communication. Carolyn Richie (Commun. Sci. & Disord., Butler Univ., 4600 Sunset Ave, Indianapolis, IN 46208, crichie@
butler.edu)
Throughout her career, Diane Kewley-Port has made enduring contributions to the field of Speech Communication in two ways—
through her research on vowels and through her mentoring. Diane has contributed greatly to current knowledge about vowel acoustics,
vowel discrimination and identification, and the role of vowels in speech recognition. Within that line of research, Richie & Kewley-Port (2008) investigated the effects of visual cues to vowels on speech recognition. Specifically, we demonstrated that an auditory-visual
vowel-identification training program benefited sentence recognition under difficult listening conditions more than consonant-identification training and no training. In this presentation, I will describe my continuing research on the relationship between auditory-visual
vowel-identification training and listening effort, for adults with normal hearing. In this study, listening effort was measured in terms of
response time and participants were tested on auditory-visual sentence recognition in noise. I will discuss the ways that my current work
has been inspired by past research with Diane, and how her mentoring legacy lives on.
11:20
3aSC10. Individual differences in the perception of nonnative speech. Tessa Bent (Dept. of Speech and Hearing Sci., Indiana Univ.,
200 S. Jordan Ave., Bloomington, IN 47405, tbent@indiana.edu)
As a mentor, Diane Kewley-Port was attentive to each student’s needs and took a highly hands-on, individualized approach. In many
of her collaborative research endeavors, she has also taken a fine-grained approach toward both discovering individual differences in
speech perception and production as well as explaining the causes and consequences of this range of variation. I will present research
investigating several cognitive-linguistic factors that may contribute to individual differences in the perception of nonnative speech.
Recognizing words from nonnative talkers can be particularly difficult when combined with environmental degradation (e.g., background noise) or listener limitations (e.g., child listener). Under these conditions, the range of performance across listeners is substantially wider than observed under more optimal conditions. My work has investigated these issues in monolingual and bilingual adults
and children. Results have indicated that age, receptive vocabulary, and phonological awareness are predictive of nonnative word recognition. Factors supporting native word recognition, such as phonological memory, were less strongly associated with nonnative word
recognition. Together, these results suggest that the ability to accurately perceive nonnative speech may rely, at least partially, on different underlying cognitive-linguistic abilities than those recruited for native word recognition. [Work supported by NIH-R21DC010027.]
11:40
3aSC11. Individual differences in sensory and cognitive processing across the adult lifespan. Larry E. Humes (Indiana Univ., Dept.
Speech & Hearing Sci., Bloomington, IN 47405-7002, humes@indiana.edu)
A recent large-scale (N = 245) cross-sectional study of threshold sensitivity and temporal processing in hearing, vision and touch for
adults ranging in age from 18 through 82 years of age questioned the long-presumed link between aging and declines in cognitive-processing [Humes, L.E., Busey, T.A., Craig, J. & Kewley-Port, D. (2013). Attention, Perception and Psychophysics, 75, 508–524]. The
results of this extensive psychophysical investigation suggested that individual differences in sensory processing across multiple tasks
and senses drive individual differences in cognitive processing in adults regardless of age. My long-time colleague at IU, Diane Kewley-Port, was instrumental in the design, execution, and interpretation of results for this large study, especially for the measures of auditory
temporal processing. The methods used and the results obtained in this study will be reviewed, with a special emphasis on the auditory
stimuli and tasks involved. The potential implications of these findings, including possible interventions, will also be discussed. Finally,
future research designed to better evaluate the direction of the association between sensory-processing and cognitive-processing deficits
will be described. [Work supported, in part, by research grant R01 AG008293 from the NIA.]
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA G, 8:30 A.M. TO 10:00 A.M.
Session 3aSPa
Signal Processing in Acoustics: Beamforming and Source Tracking
Contributed Papers
8:30
3aSPa1. An intuitive look at the unscented Kalman filter. Edmund Sullivan (Res., prometheus, 46 Lawton Brook Ln., Portsmouth, RI 02871, bewegungslos@fastmail.fm)
The Unscented Kalman Filter, or UKF, is a powerful and easily used modification of the Kalman filter that permits its use in the case of a nonlinear process or measurement model. Its power lies in its ability to allow the mean and covariance of the data to be correctly passed through a nonlinearity, regardless of the form of the nonlinearity. There is a great deal of literature on the UKF that describes the method and gives instruction on its use, but there are no clear descriptions of why it works. In this paper, we show
that by computing the mean and covariance as the expectations of a Gaussian process, passing the results through a nonlinearity and solving the
resulting integrals using Gauss-Hermite quadrature, the reason for the ability
of the UKF to maintain the correct mean and covariance is explained by the
fact that the Gauss-Hermite quadrature uses the same abscissas and weights
regardless of the form of the integrand.
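To make the mechanism concrete, the sketch below implements the unscented transform at the core of the UKF, propagating a Gaussian mean and covariance through a quadratic nonlinearity with the common 2n+1 sigma-point set; the κ scaling and the example nonlinearity are illustrative textbook choices, not parameters taken from this paper.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=2.0):
    """Propagate (mean, cov) through a nonlinearity f via 2n+1 sigma points."""
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root of scaled cov
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)                  # center-point weight
    ys = np.array([f(s) for s in sigma])        # pass each point through f
    y_mean = w @ ys
    diff = ys - y_mean
    y_cov = (w[:, None] * diff).T @ diff
    return y_mean, y_cov

# Scalar example: y = x^2 with x ~ N(1, 0.25)
m, P = np.array([1.0]), np.array([[0.25]])
y_mean, y_cov = unscented_transform(m, P, lambda x: x ** 2)
# True mean is mu^2 + sigma^2 = 1.25; the sigma points recover it exactly
```

Because the same abscissas and weights serve any integrand, nothing need be re-derived when the nonlinearity changes, which mirrors the Gauss-Hermite quadrature argument made in the abstract.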
8:45
3aSPa2. Tracking unmanned aerial vehicles using a tetrahedral microphone array. Geoffrey H. Goldman (U.S. Army Res. Lab., 2800 Powder
Mill Rd., Adelphi, MD 20783-1197, geoffrey.h.goldman.civ@mail.mil) and
R. L. Culver (Appl. Res. Lab., Penn State Univ., State College, PA)
Unmanned Aerial Vehicles (UAVs) present a difficult localization problem for traditional radar systems due to their small radar cross section and
relatively slow speeds. To help address this problem, the U.S. Army
Research Laboratory (ARL) is developing and testing acoustic-based detection and tracking algorithms for UAVs. The focus has been on detection,
bearing and elevation angle estimation using either minimum mean square
error or adaptive beamforming methods. A model-based method has been
implemented which includes multipath returns, and a Kalman filter has been
implemented for tracking. The acoustic data were acquired using ARL’s
tetrahedral microphone array against several UAVs. While the detection
and tracking algorithms perform reasonably well, several challenges remain.
For example, interference from other sources resulted in a lower signal-to-interference ratio (SIR), which can significantly degrade performance. The
presence of multipath nearly always results in greater variance in elevation
angle estimates than in bearing angle estimates.
9:00
3aSPa3. An ultrasonic echo characterization approach based on particle
swarm optimization. Adam Pedrycz (Sonic/LWD, Schlumberger, 2-2-1
Fuchinobe, Sagamihara, Kanagawa 229-0006, Japan, APedrycz@slb.com),
Henri-Pierre Valero, Hiroshi Hori, Kojiro Nishimiya, Hitoshi Sugiyama,
and Yoshino Sakata (Sonic/LWD, Schlumberger, Sagamihara, Kanagawaken, Japan)
Presented is a hands-free approach for the extraction and characterization
of ultrasonic echoes embedded in noise. By means of model-based nondestructive evaluation approaches, echoes can be represented parametrically by arrival
time, amplitude, frequency, etc. Inverting for such parameters is a non-linear
task, usually employing gradient-based least-squares minimization such as
Gauss-Newton (GN). To improve inversion stability, suitable initial echo parameter guesses are required which may not be possible under the presence of
noise. To mitigate this requirement, particle swarm optimization (PSO) is
employed in lieu of GN. PSO is a population-based optimization technique
wherein a swarm of particles explores a multidimensional search space of candidate solutions. Particles seek out the global optimum by iteratively moving to
improve their position by evaluating their individual performance as well as
that of the collective. Since the inversion problem is non-linear, multiple suboptimal solutions exist, and in this regard PSO has a much lower propensity for becoming trapped in a local minimum compared to gradient-based approaches. Because of this, initial guesses can be omitted in favor of a broad search range, which is far simpler to specify. Real pulse echoes were used to evaluate the efficacy of the PSO approach under varying noise severity. In all cases,
PSO characterized the echo correctly while GN required an initial guess within
30% of the true value to converge.
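The guess-free inversion described above can be sketched with a basic global-best PSO; the Gaussian-windowed sinusoid echo model, the bounds, and all swarm hyperparameters below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def echo(t, tau, amp, fc, bw=2.0):
    """Gaussian-windowed sinusoid: a common parametric echo model."""
    return amp * np.exp(-bw * (t - tau) ** 2) * np.cos(2 * np.pi * fc * (t - tau))

# Synthetic noisy observation with true parameters (tau, amp, fc)
t = np.linspace(0.0, 10.0, 1000)
true_p = np.array([4.0, 1.0, 0.8])
obs = echo(t, *true_p) + 0.05 * rng.standard_normal(t.size)

def cost(p):
    """Least-squares misfit between observation and parametric echo."""
    return np.sum((obs - echo(t, *p)) ** 2)

# Global-best PSO over broad bounds -- no initial guess required
lo = np.array([0.0, 0.1, 0.3])
hi = np.array([10.0, 2.0, 1.5])
n_particles, n_iters = 60, 200
pos = rng.uniform(lo, hi, (n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
init_cost = pbest_cost.min()
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 3))
    # inertia + cognitive (pbest) + social (gbest) velocity update
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

# gbest is the swarm's best (tau, amp, fc); on this landscape it
# typically lands near the true values without any initial guess
```

The broad `lo`/`hi` box replaces the initial guess a Gauss-Newton scheme would need, which is the practical advantage the abstract emphasizes.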
R. Lee Culver, Chair
ARL, Penn State University, PO Box 30, State College, PA 16804
9:15
3aSPa4. Beamspace compressive spatial spectrum estimation on large
aperture acoustic arrays. Geoffrey F. Edelmann, Jeffrey S. Rogers, and
Steve L. Means (Acoust., Code 7160, U. S. Naval Res. Lab., 4555 Overlook
Ave SW, Code 7162, Washington, DC 20375, edelmann@nrl.navy.mil)
For large aperture sonar arrays, the number of acoustic elements can be
quite sizable and thus increase the dimensionality of the l1 minimization
required for compressive beamforming. This leads to high computational
complexity that scales by the cube of the number of array elements. Furthermore, in many applications, raw sensor outputs are often not available since
computation of the beamformer power is a common initial processing step
performed to reduce subsequent computational and storage requirements. In
this paper, a beamspace algorithm is presented that computes the compressive spatial spectrum from conventional beamformer output power. Results
from the CALOPS-07 experiment will be presented and shown to significantly
reduce the computational load as well as increase robustness when detecting
low SNR targets. [This work was supported by ONR.]
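To make the dimensionality issue concrete, here is a generic element-space compressive DOA sketch that solves the l1-regularized problem with plain ISTA; this is not the authors' beamspace algorithm (which starts from conventional beamformer output power), and the array size, angle grid, and regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Element-space model: n_el half-wavelength-spaced sensors, DOA grid
n_el, n_grid = 20, 181
grid = np.deg2rad(np.linspace(-90.0, 90.0, n_grid))
A = np.exp(1j * np.pi * np.outer(np.arange(n_el), np.sin(grid)))  # steering dictionary

# Two plane waves (on-grid at -30 and +30 degrees) plus sensor noise
x = 1.0 * A[:, 60] + 0.5 * A[:, 120] + 0.05 * (
    rng.standard_normal(n_el) + 1j * rng.standard_normal(n_el))

# ISTA for min 0.5*||A s - x||^2 + lam*||s||_1 (complex soft threshold)
lam = 1.0
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / (largest singular value)^2
s = np.zeros(n_grid, dtype=complex)
for _ in range(500):
    g = s - step * (A.conj().T @ (A @ s - x))   # gradient step on the misfit
    mag = np.abs(g)
    shrink = np.maximum(mag - step * lam, 0.0)  # soft-threshold magnitudes
    s = np.where(mag > 0, g / np.maximum(mag, 1e-12) * shrink, 0.0)

# |s| is a sparse spatial spectrum concentrated near the true DOAs; the
# size of this l1 problem grows with the element count, which is the
# computational burden a beamspace formulation is meant to reduce
```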
9:30
3aSPa5. Experimental investigations on coprime microphone arrays for
direction-of-arrival estimation. Dane R. Bush, Ning Xiang (Architectural
Acoust., Rensselaer Polytechnic Inst., 2609 15th St., Troy, NY 12180, danebush@gmail.com), and Jason E. Summers (Appl. Res. in Acoust. LLC
(ARiA), Washington, DC)
Linear microphone arrays are powerful tools for determining the direction of a sound source. Traditionally, uniform linear arrays (ULAs) have
inter-element spacing of half of the wavelength in question. This produces
the narrowest possible beam without introducing grating lobes—a form of
aliasing governed by the spatial Nyquist theorem. Grating lobes are often
undesirable because they make direction of arrival indistinguishable among
their passband angles. Exploiting coprime number theory, however, an array
can be arranged sparsely with fewer total elements, with inter-element spacing exceeding the aforementioned spatial-sampling limit. Two sparse ULA sub-arrays with coprime numbers of elements, when nested properly, each produce narrow
grating lobes that overlap with one another exactly in just one direction. By
combining the sub-array outputs it is possible to retain the shared beam
while mostly canceling the other superfluous grating lobes. This work
implements two coprime microphone arrays with different lengths and subarray spacings. Experimental beam patterns are shown to correspond with
simulated results even at frequencies above and below the array’s design
frequency. Side lobes in the directional pattern are inversely correlated with
bandwidth of analyzed signals.
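The grating-lobe cancellation described above can be checked numerically; the minimal sketch below uses an assumed (M, N) = (4, 5) coprime pair with half-wavelength unit spacing, not the arrays built in this work.

```python
import numpy as np

# Coprime pair: sub-array A has M elements at spacing N*d, sub-array B
# has N elements at spacing M*d, where d is a half wavelength
M, N = 4, 5
lam = 1.0
d = lam / 2.0

def beampattern(positions, theta, lam):
    """Magnitude of the conventional pattern, steered to broadside."""
    k = 2.0 * np.pi / lam
    a = np.exp(1j * k * np.outer(positions, np.sin(theta)))  # element phases
    return np.abs(a.sum(axis=0)) / positions.size

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
pa = beampattern(np.arange(M) * N * d, theta, lam)  # sparse: has grating lobes
pb = beampattern(np.arange(N) * M * d, theta, lam)  # sparse: has grating lobes
combined = pa * pb  # lobes of the two sub-arrays align only at broadside

# combined peaks only at theta = 0, using far fewer elements than a
# filled half-wavelength ULA spanning the same aperture would require
```

Each sub-array's grating lobes fall where the other sub-array's pattern is small, so only the shared broadside beam survives the product, which is the coprime principle the abstract exploits.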
9:45
3aSPa6. Shallow-water waveguide invariant parameter estimation and
source ranging using narrowband signals. Andrew Harms (Elec. and
Comput. Eng., Duke Univ., 129 Hudson Hall, Durham, NC 27708, andrew.harms@duke.edu), Jonathan Odom (Georgia Tech Res. Inst., Durham,
North Carolina), and Jeffrey Krolik (Elec. and Comput. Eng., Duke Univ.,
Durham, NC)
This paper concerns waveguide invariant parameter estimation using narrowband underwater acoustic signals from multiple sources at known range, or
alternatively, the ranges of multiple sources assuming known waveguide invariant parameters. Previously, the waveguide invariant has been applied to estimate the range or bottom properties from intensity striations observed from a
single broadband signal. The difficulty in separating striations from multiple
broadband sources, however, motivates the use of narrowband components,
which generally have higher signal-to-noise ratios and are non-overlapping in
frequency. In this paper, intensity fluctuations of narrowband components are shown to be related across frequency by a time-warping (i.e., stretching or contracting) of the intensity profile, assuming constant radial source velocity and the waveguide invariant β. A maximum likelihood estimator for the range with β known, or for the invariant parameter β with known source range, is derived, as well as Cramér-Rao bounds on estimation accuracy assuming a Gaussian noise model. Simulations demonstrate algorithm performance for constant radial velocity sources in a representative shallow-water ocean waveguide.
[Work supported by ONR.]
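The time-warping relation invoked above can be illustrated with a toy striation model. All parameters below (β = 1, the two tone frequencies, and the cosine striation pattern) are assumptions for illustration only: because striation intensity depends on f/r^β, the range profile observed at one frequency is a stretched copy of the profile at another:

```python
import numpy as np

# Toy striation model (illustrative, not the paper's estimator):
# intensity depends only on f / r**beta, so profiles at two
# frequencies are related by a stretch of the range axis.
beta = 1.0
f1, f2 = 200.0, 260.0                 # Hz, hypothetical narrowband tones

def intensity(r, f):
    """Any function of f / r**beta produces striations."""
    return np.cos(2 * np.pi * 5.0 * f / r**beta)

r = np.linspace(1000.0, 2000.0, 2000)  # ranges, m
p1 = intensity(r, f1)
p2 = intensity(r, f2)

# The profile at f2 should equal the profile at f1 evaluated on the
# warped grid r * (f1/f2)**(1/beta).
warp = (f1 / f2) ** (1.0 / beta)
p1_warped = np.interp(r * warp, r, p1)

# compare only where the warped grid stays inside the sampled range
mask = (r * warp >= r[0]) & (r * warp <= r[-1])
err = np.max(np.abs(p2[mask] - p1_warped[mask]))
```

For a source at constant radial velocity, range is linear in time, so this range-warping is exactly the time-warping of the narrowband intensity profile that the estimator exploits.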
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA G, 10:15 A.M. TO 12:00 NOON
Session 3aSPb
Signal Processing in Acoustics: Spectral Analysis, Source Tracking, and System Identification
(Poster Session)
R. Lee Culver, Chair
ARL, Penn State University, PO Box 30, State College, PA 16804
All posters will be on display from 10:15 a.m. to 12:00 noon. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 10:15 a.m. to 11:00 a.m. and contributors of even-numbered papers will be at their
posters from 11:00 a.m. to 12:00 noon.
Contributed Papers
3aSPb1. Improvement of the histogram in the degenerate unmixing estimation technique algorithm. Junpei Mukae, Yoshihisa Ishida, and Takahiro
Murakami (Dept. of Electronics and Bioinformatics, Meiji Univ., 1-1-1 Higashi-mita, Tama-ku, Kawasaki-shi 214-8571, Japan, ce41094@meiji.ac.jp)
A method of improving the histogram in the degenerate unmixing estimation technique (DUET) algorithm is proposed. The DUET algorithm is
one of the methods of blind signal separation (BSS). The BSS framework is
2214
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
to retrieve source signals from mixtures of them without a priori information
about the source signals and the mixing process. In the DUET algorithm,
the histogram of both the directions of arrival (DOAs) and the distances is formed from the mixtures, which are observed using two sensors. Signal separation is then achieved using time-frequency masking based on the histogram. Consequently, the capability of the DUET algorithm strongly depends on the quality of the histogram. In general, the histogram is degraded by reverberation or reflections of the source signals when the DUET algorithm is applied in real environments. Our approach is to
improve the histogram of the DUET algorithm. In our method, the phase component of the observed mixture at each frequency bin is modified using those at the neighboring frequency bins. The proposed method gives a sharper histogram in comparison with the conventional approach.
168th Meeting: Acoustical Society of America
2214
3aSPb2. Start point estimation of a signal in a frame. Anri Ota (Dept. of
Electronics and Bioinformatics, Meiji Univ., 1-1-1 Higashi-mita, Tama-ku,
Kawasaki-shi 214-8571, Japan, ce41017@meiji.ac.jp), Yoshihisa Ishida,
and Takahiro Murakami (Dept. of Electronics and Bioinformatics, Meiji
Univ., Kawasaki-shi, Japan)
An algorithm for start-point estimation of a signal from a frame is presented. In many applications of speech signal processing, the signal to be
processed is often segmented into several frames, and then the frames are
categorized into speech and non-speech frames. Instead, we focus on only
the frame in which the speech starts. To simplify the problem, we assume
that the speech is modeled by a number of complex sinusoidal signals.
When a complex sinusoidal signal that starts within a frame is observed, it can be modeled as the multiplication of an infinitely long complex sinusoidal signal by a window function of finite duration in the time domain. In the frequency domain, the spectrum of the signal in the frame is
given by the shifted spectrum of the window function. Sharpness of the
spectrum of the window function depends on the start point of the signal.
Hence, the start point of the signal is estimated by the sharpness of the
observed spectrum. This approach can be extended to signals that consist of a number of complex sinusoidal components. Simulation results using artificially generated signals show the validity of our method.
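The sharpness argument can be checked with a minimal numerical sketch (not the authors' estimator; the bin-aligned test frequency is chosen only to make the numbers clean):

```python
import numpy as np

# Minimal illustration: a complex sinusoid starting at sample n0 of an
# L-sample frame is the product of an infinite sinusoid and a window
# covering samples n0..L-1. A later start means a shorter window and a
# broader spectral mainlobe, i.e., a less sharp observed spectrum.
L = 256
f0 = 32.0 / L      # bin-aligned frequency, chosen for a clean example
n = np.arange(L)

def sharpness(n0):
    """Energy concentration at the spectral peak for start point n0."""
    x = np.exp(2j * np.pi * f0 * n)
    x[:n0] = 0.0                       # signal absent before the start
    spec = np.abs(np.fft.fft(x)) ** 2
    return spec.max() / spec.sum()

s_early = sharpness(0)       # full-frame sinusoid: all energy in one bin
s_late = sharpness(L // 2)   # half-frame start: energy spread over bins
```

Here s_early is essentially 1.0 while s_late drops to 0.5, so the concentration of spectral energy at the peak indexes the start point, as described above.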
3aSPb3. Examination and development of numerical methods and algorithms designed for the determination of an enclosure’s acoustical characteristics via the Schroeder Function. Miles Possing (Acoust., Columbia
College Chicago, 1260 N Dearborn, 904, Chicago, IL 60610, miles@possing.com)
A case study was conducted to measure the acoustical properties of a church auditorium. While modeling the project using EASE 2.1, problems arose when attempting to determine the reverberation time using the Schroeder back-integrated impulse function within EASE 2.1. An auxiliary investigation was launched to better understand the Schroeder algorithm in order to produce a potentially improved version in MATLAB. It was then theorized that a single linear regression is not sufficient to characterize the nature of the decay, due to the non-linearity of the curve, particularly during the initial decay. Rather, it is hypothesized that using numerical methods to find instantaneous rates of change over the entire initial decay, along with a Savitzky-Golay filter, could yield much more robust and accurate results when deriving the local reverberation time from reflectogram data.
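The backward integration at issue is standard and compact in code. The impulse response below is synthetic (exponentially decaying noise with a known RT60), so the single-regression estimate can be checked against ground truth; the paper's point is precisely that real reflectograms are less well behaved than this idealized decay:

```python
import numpy as np

# Schroeder backward-integrated energy decay curve (standard method;
# the synthetic impulse response and RT60 here are illustrative):
# EDC(t) = integral from t to infinity of h(tau)^2 dtau.
fs = 8000
rt60_true = 1.2                                # seconds, known decay
t = np.arange(int(fs * 2.0)) / fs
rng = np.random.default_rng(0)
# exponentially decaying noise as a toy room impulse response
h = rng.standard_normal(t.size) * 10 ** (-3 * t / rt60_true)

edc = np.cumsum(h[::-1] ** 2)[::-1]            # backward integration
edc_db = 10 * np.log10(edc / edc[0])

# single linear regression over the -5 to -25 dB span (T20-style)
i5 = np.argmax(edc_db <= -5.0)
i25 = np.argmax(edc_db <= -25.0)
slope, _ = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)   # dB per second
rt60_est = -60.0 / slope                       # close to rt60_true
```

On a clean exponential decay the single regression recovers RT60 well; multi-slope or noisy decays are where the local-rate-of-change approach proposed above would differ.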
3aSPb4. A modified direction-of-arrival estimation algorithm for acoustic vector sensors based on Unitary Root-MUSIC. Junyuan Shen, Wei Li,
Yuanming Guo, and Yongjue Chen (Electronics and Commun. Eng., Harbin
Inst. of Technol., XiLi University Town HIT C#101, Shenzhen, GuangDong
GD 755, China, Juny_Shen@hotmail.com)
A novel method for direction-of-arrival (DOA) estimation using acoustic vector sensors (AVS), based on the Unitary Root-MUSIC (URM) algorithm, is proposed in this paper. An AVS array exhibits coherence between sound pressure and particle velocity, a property that can significantly improve DOA detection performance by reducing the influence of white Gaussian noise. We apply this property, together with the extra velocity information of the AVS, to construct a modified covariance matrix. In particular, the modified covariance matrix does not require extending the dimensions in the calculation of the AVS covariance matrix, which saves computing time. In addition, we combine the characteristics of the modified matrix with the URM algorithm to design a new algorithm, which minimizes the impact of environmental noise and further reduces the computational complexity to a lower order of magnitude. The proposed method can therefore not only improve the accuracy of DOA detection but also reduce the computational complexity compared to classic DOA algorithms. Theoretical analysis and simulation experiments show that the proposed algorithm for AVS based on URM can significantly improve the DOA resolution at low signal-to-noise ratios and with few snapshots.
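For reference, a baseline pressure-only Root-MUSIC sketch is given below; the scenario (8-element half-wavelength ULA, one source at 20 degrees, 200 snapshots) is invented for illustration, and none of the paper's vector-sensor or unitary refinements are reproduced:

```python
import numpy as np

# Baseline Root-MUSIC for a pressure-only ULA (illustrative scenario).
rng = np.random.default_rng(1)
M, d = 8, 0.5                      # sensors, spacing in wavelengths
theta_true = 20.0                  # degrees
snapshots = 200

a = np.exp(2j * np.pi * d * np.arange(M)
           * np.sin(np.radians(theta_true)))
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
x = np.outer(a, s) + noise

R = x @ x.conj().T / snapshots     # sample covariance matrix
w, v = np.linalg.eigh(R)           # eigenvalues ascending
En = v[:, :-1]                     # noise subspace (one source)

# Root-MUSIC polynomial coefficients: sums along diagonals of En En^H
C = En @ En.conj().T
coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
roots = np.roots(coeffs)
roots = roots[np.abs(roots) < 1.0]            # keep roots inside circle
z = roots[np.argmin(1.0 - np.abs(roots))]     # root nearest the circle
theta_est = np.degrees(np.arcsin(np.angle(z) / (2 * np.pi * d)))
```

The paper's contribution sits on top of this baseline: the unitary transform keeps the arithmetic real-valued, and the pressure-velocity coherence of the AVS modifies the covariance matrix before the rooting step.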
3aSPb5. Multiple pitch estimation using comb filters considering overlap of frequency components. Kyohei Tabata, Ryo Tanaka, Hiroki Tanji,
Takahiro Murakami, and Yoshihisa Ishida (Dept. of Electronics and Bioinformatics, Meiji Univ., 1-1-1 Higashimita, Tama-ku, Kawasaki-shi, Kanagawa 214-8571, Japan, ce31063@meiji.ac.jp)
We propose a method of multiple pitch estimation using comb filters for music transcription. The pitches of a musical sound can be identified by detecting the larger outputs among comb filters connected in parallel. Each comb filter has peaks corresponding to a pitch and its harmonic frequencies. The outputs of the comb filters corresponding to the input pitch frequencies contain larger frequency components, and are therefore larger than those of the other comb filters. However, when the fundamental frequency of a higher tone lies near a harmonic of a lower tone, the pitch estimation often fails, and the estimate is assigned to a wrong note because frequency components are shared. The proposed method estimates the correct pitches by correcting the outputs using a matrix defined as the power ratio of the harmonic frequencies to the fundamental frequency. The effectiveness of the proposed method is confirmed by simulations; it enables more accurate pitch estimation than other conventional methods.
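The comb-filter scoring step can be sketched as follows; the sampling rate, note grid, and five-harmonic synthetic tone are illustrative assumptions, and the paper's power-ratio correction matrix for shared harmonics is not implemented here:

```python
import numpy as np

# Comb-filter pitch scoring sketch (illustrative parameters): each
# pitch candidate is scored by the spectral energy at its harmonics;
# the candidate whose harmonics coincide with the input partials wins.
fs, N = 16000, 8192
f0_true = 220.0                         # synthesized note (A3)
t = np.arange(N) / fs
x = sum(np.sin(2 * np.pi * f0_true * k * t) / k for k in range(1, 6))
spec = np.abs(np.fft.rfft(x * np.hanning(N)))
freqs = np.fft.rfftfreq(N, 1 / fs)

candidates = 110.0 * 2 ** (np.arange(25) / 12.0)   # semitone grid, A2 up

def comb_score(f0):
    """Mean spectral magnitude at the first five harmonics of f0."""
    idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, 6)]
    return spec[idx].sum() / 5.0

scores = np.array([comb_score(f) for f in candidates])
best = candidates[np.argmax(scores)]
```

A candidate an octave below the true pitch still scores on the shared harmonics (the overlap problem described above); the proposed correction matrix reweights such shared components by their expected harmonic power ratios.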
3aSPb6. Evaluating microphones and microphone placement for signal
processing and automatic speech recognition of teacher-student dialog.
Michael C. Brady, Sydney D’Mello, Nathan Blanchard (Comput. Sci., Univ. of
Notre Dame, Fitzpatrick Hall, South Bend, IN 46616, mbrady8@nd.edu),
Andrew Olney (Psych., Univ. of Memphis, Memphis, TN), and Martin
Nystrand (Education, English, Univ. of Wisconsin, Madison, WI)
We evaluate a variety of audio recording techniques for a project on the
automatic analysis of speech dialog in middle school and high school classrooms. In our scenario, the teacher wears a headset microphone or a lapel
microphone. A second microphone is then used to collect speech and related
sounds from students in the classroom. Various boundary microphones,
omni-directional microphones, and cardioid microphones are tested as this
second classroom microphone. A commercial microphone array [Microsoft
Xbox Kinect] is also tested. We report on how well digital source-separation
techniques work for segregating the teacher and student speech signals from
one another based on these various microphones and placements. We also
test the recordings using various automatic speech recognition engines for
word recognition error rates under different levels of background noise. Preliminary results indicate one boundary microphone, the Crown PZM-30, to
be superior for the classroom recordings. This is based on its performance at
capturing near and distant student signals for ASR in noisy conditions, as
measured by ASR error rates across different ASR engines.
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA F, 9:00 A.M. TO 11:30 A.M.
Session 3aUW
Underwater Acoustics, Acoustical Oceanography, Animal Bioacoustics, and ASA Committee on Standards:
Standardization of Measurement, Modeling, and Terminology of Underwater Sound
Susan B. Blaeser, Cochair
Acoustical Society of America Standards Secretariat, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747
Michael A. Ainslie, Cochair
Underwater Tech. Dept., TNO, P.O. Box 96864, The Hague 2509JG, Netherlands
George V. Frisk, Cochair
Dept. of Ocean Eng., Florida Atlantic Univ., Dania Beach, FL 33004-3023
Chair’s Introduction—9:00
Invited Papers
9:05
3aUW1. Strawman outline for a standard on the use of passive acoustic towed arrays for marine mammal monitoring and mitigation. Aaron Thode (SIO, UCSD, 9500 Gilman Dr., MC 0238, La Jolla, CA 92093-0238, athode@ucsd.edu)
There is a perceived need from several U.S. federal agencies and departments to develop consistent standards for how passive acoustic monitoring (PAM) for marine mammals is implemented for mitigation and regulatory monitoring purposes. The use of towed array
technology is already being required for geophysical exploration activities in the Atlantic Ocean and the Gulf of Mexico. However, to
date no specific standards have been developed or implemented for towed arrays. Here, a strawman outline for an ANSI standard is presented (http://wp.me/P4j34t-a) to cover requirements and recommendations for the following aspects of towed array operations: initial
planning (including guidelines for when PAM is not appropriate), hardware, software, and operator training requirements, real-time mitigation and monitoring procedures, and required steps for performance validation. The outline scope, at present, does not cover operational shutdown decision criteria, sound source verification, or defining the required detection range of the system. Instead of specifying
details of towed array systems, the current strategy is to focus on the process of defining the required system performance for a given
application, and then stepping through how the system hardware, software, and operations should be selected and validated to meet or
exceed these requirements. [Work supported by BSEE.]
9:30
3aUW2. Towards a standard for the measurement of underwater noise from impact pile driving in shallow water. Peter H. Dahl
(Appl. Phys. Lab. and Mech. Eng. Dept., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, dahl@apl.washington.edu), Pete D. Theobald, and Stephen P. Robinson (National Physical Lab., Teddington, Middlesex, United Kingdom)
Measurements of the underwater noise field from impact pile driving are essential to address environmental regulations in effect in both Europe and North America to protect marine life. For impact pile driving in shallow water there exists a range scale R* = H/tan(θ) that delineates important features in the propagation of underwater sound from impact pile driving, where θ is the Mach angle of the wavefront radiated into the water from the pile and H is the water depth. This angle is about 17° for many steel piles typically used, and thus R* is approximately 3H. For ranges R such that R/R* ≈ 0.5, depth variation in the noise field is highest, more so for peak pressure than for sound exposure level (SEL); for R/R* > 1 the field becomes more uniform with depth. This effect of measurement range can thus have implications for environmental monitoring designed to obtain a close-range datum, which is often used with a transmission loss model to infer the noise level at farther range. More consistent results are likely obtained if the measurement range is at least 3H. Ongoing standardization activities for the measurement and reporting of sound levels radiated from impact pile driving will also be discussed.
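As a quick numerical check of the range scale quoted above, R* = H/tan(θ) with θ the Mach angle and H the water depth (the depth value below is only an example):

```python
import math

# Range scale from the abstract: R* = H / tan(theta). For the quoted
# Mach angle of about 17 degrees, R* comes out to roughly 3H.
theta_deg = 17.0
H = 10.0                                   # example water depth, m
R_star = H / math.tan(math.radians(theta_deg))
ratio = R_star / H                         # about 3.27
```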
9:55
3aUW3. Importance of metrics standardization involving the effects of sound on fish. Michele B. Halvorsen (CSA Ocean Sci. Inc,
8502 SW Kansas Hwy, Stuart, FL 34997, mhalvorsen@conshelf.com)
Reporting accurate metrics while employing good measurement practices is a topic that is gaining awareness. Although this seems a simple and expected task, reading the current and past literature shows that the sound metrics utilized are often not clearly reported. It is clear that increased awareness and development of standardized acoustic metrics are necessary. When reviewing previously published literature on the effects of sound on fish, it is often difficult to fully understand how metrics were calculated, leaving the reader to make assumptions. Furthermore, the lack of standardization and definition decreases the amount of data and research studies that can be directly compared. In a field that has a paucity of data on the effects of sound on fish, this situation underscores the importance of and need for standardization.
10:20
3aUW4. Developments in standards and calibration methods for hydrophones and electroacoustic transducers for underwater
acoustics. Stephen P. Robinson (National Physical Lab., National Physical Lab., Hampton Rd., Teddington TW11 0LW, United Kingdom, stephen.robinson@npl.co.uk), Kenneth G. Foote (Woods Hole Oceanographic Inst., Woods Hole, MA), and Pete D. Theobald
(National Physical Lab., Teddington, United Kingdom)
If they are to be meaningful, underwater acoustic measurements must be related to common standards of measurement. In this paper,
a description is given of the existing standards for the calibration of hydrophones and electroacoustic transducers for underwater acoustics. The description covers how primary standards are currently realized and disseminated, and how they are validated by international
comparisons. A report is also given of the status of recent developments in specification standards, for example within the International
Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The discussion focuses on the revision
of standards for transducer calibration, and the inclusion of extended guidance on uncertainty assessment, and on the criteria for determining the locations of the acoustic near-field and far-field. A description is then provided of recent developments using non-traditional
techniques such as optical sensing, which may lead to the next generation of standards. A report is also given of a number of current initiatives to promote best measurement practice.
Contributed Papers
10:45
3aUW5. All clicks are not created equally: Variations in high-frequency
acoustic signal parameters of the Amazon river dolphin (Inia geoffrensis). Marie Trone (Math and Sci., Valencia College, 1800 Denn John Ln., Kissimmee, FL 34744, mtronedolphin@yahoo.com), Randall Balestriero (Université de Toulon, La Garde, France), Hervé Glotin (Université de Toulon, Toulon, France), and Bonnett E. David (None, None, Silverdale, WA)
The quality and quantity of acoustical data available to researchers are
rapidly increasing with advances in technology. Recording cetaceans with a
500 kHz sampling rate provides a more complete signal representation than
traditional sampling at 96 kHz and lower. Such sampling provides a profusion of data concerning various parameters, such as click duration, interclick intervals, frequency, amplitude, and phase. However, there is disagreement in the literature in the use and definitions of these acoustic terms and
parameters. In this study, Amazon River dolphins (Inia geoffrensis) were
recorded using a 500 kHz sampling rate in the Peruvian Amazon River
watershed. Subsequent spectral analyses, including time waveforms, fast
Fourier transforms and wavelet scalograms, demonstrate acoustic signals
with differing characteristics. These high frequency, broadband signals are
compared, and differences are highlighted, despite the fact that currently an
unambiguous way to describe these acoustic signals is lacking. The need for
standards in cetacean bioacoustics with regard to terminology and collection
techniques is emphasized.
11:00
3aUW6. Acoustical terminology in the Sonar Modelling Handbook.
Andrew Holden (Dstl, Dstl Portsdown West, Fareham PO17 6AD, United
Kingdom, apholden@dstl.gov.uk)
The UK Sonar Modelling Handbook (SMH) defines the passive and
active Sonar Equations, and their individual terms and units, which are
extensively used for sonar performance modelling. The new Underwater
Acoustical Terminology ISO standard, which is currently being developed by the ISO working group TC43/SC3/WG2 to standardize terminology, will have an impact on the SMH definitions. Work will be presented comparing
the current SMH terminology with both the future ISO standard and other
well-known definitions to highlight the similarities and differences between
each of these.
11:15
3aUW7. The definitions of “level,” “sound pressure,” and “sound pressure level” in the International System of Quantities, and their implications for international standardization in underwater acoustics. Michael
A. Ainslie (Acoust. and Sonar, TNO, P.O. Box 96864, The Hague 2509JG,
Netherlands, michael.ainslie@tno.nl)
The International System of Quantities (ISQ), incorporating definitions
of physical quantities and their units, was completed in 2009 following an
extensive collaboration between two major international standards organizations, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The ISQ encompasses all SI
units as well as selected units outside the SI such as the byte (including both
decimal and binary multiples), bel, neper, and decibel. The ISQ, which
includes definitions of the terms “level,” “sound pressure,” and “sound pressure level,” is presently being used to underpin an underwater acoustics terminology standard under development by ISO. For this purpose, pertinent
ISQ definitions are analyzed and compared with alternative standard definitions, and with conventional use of the same terms. The benefits of combining IEC and ISO definitions into a single standard, solving some longstanding problems, are described. The comparison also reveals some teething problems, such as internal inconsistencies within the ISQ, and discrepancies with everyday use of some of the terms, demonstrating the need for
continued collaboration between the major standards bodies. As of 2014,
the ISQ is undergoing a major revision, leading to a unique opportunity to
resolve these discrepancies.
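One concrete consequence of the definitional questions discussed above: the numerical value of a "sound pressure level" depends on the reference pressure convention, which differs between airborne (20 µPa) and underwater (1 µPa) practice. A toy check of the resulting offset:

```python
import math

# The same rms pressure maps to different level values depending on
# the reference pressure convention (20 uPa in air vs 1 uPa in water),
# which is one reason terminology standardization matters.
p = 1.0                                    # Pa, example rms pressure
spl_air = 20 * math.log10(p / 20e-6)       # dB re 20 uPa
spl_water = 20 * math.log10(p / 1e-6)      # dB re 1 uPa
offset = spl_water - spl_air               # about 26 dB, from the
                                           # reference convention alone
```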
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
MARRIOTT 7/8, 1:00 P.M. TO 3:00 P.M.
Session 3pAA
Architectural Acoustics: Architectural Acoustics Medley
Norman H. Philipp, Chair
Geiler & Associates, 1840 E. 153rd Circle, Olathe, KS 66062
Contributed Papers
1:00
3pAA1. From the sound up: Reverse-engineering room shapes from
sound signatures. Willem Boning and Alban Bassuet (Acoust., ARUP, 77
Water St., New York, NY 10005, willem.boning@arup.com)
Typically, architects and acousticians design rooms for music starting
from a model room shape known from past experience to perform well
acoustically. We reverse the typical design process by using a model sound
signature to generate room shapes. Our method builds on previous research
on reconstructing room shapes from recorded impulse responses, but takes
an instrumental, design-oriented approach. We demonstrate how an abstract
sound signature constructed in a hybrid image source-statistical acoustical
simulator can be translated into a room shape with the aid of a parametric
design interface. As a proof of concept, we present a study in which we generated a series of room shapes from the same sound signature, analyzed
them with commercially available room acoustic software, and found objective parameters for comparable receiver positions between shapes to be
within just-noticeable-difference ranges of each other.
1:15
3pAA2. Achieving acoustical comfort in restaurants. Paul Battaglia
(Architecture, Univ. at Buffalo, 31 Rose Ct Apt. 4, Snyder, NY 14226,
plb@buffalo.edu)
The achievement of a proper acoustical ambiance for restaurants has
long been described as a problem of controlling noise to allow for speech
intelligibility among patrons at the same table. This simplification of the
acoustical design problem for restaurants does not entirely result in achieving either a sensation of acoustical comfort or a preferred condition for
social activity sought by architects. In order to more fully study the subjective impression of acoustical comfort a large data base from 11 restaurants
with 75 patron surveys for each (825 total) was assembled for analysis. The
results indicate that a specific narrow range of reverberation time can produce acoustical comfort for restaurant patrons of all ages. Other physical
and acoustical conditions of the dining space are shown to have little to no
consistent effect on the registration of comfort. The results also indicate that
different subjective components of acoustical comfort—quietude, communication, privacy—vary significantly by age group with specific consequences
for the acoustical design of restaurants for different clienteles.
1:30
3pAA3. 500-seat theater in the city of Qom; Computer simulation vs.
acoustics measurements. Hassan Azad (Architecture, Univ. of Florida,
3527 SW, 20th Ave., 1132B, Gainesville, FL 32607, h.azad@ufl.edu)
A 500-seat theater is under construction in the city of Qom, Iran, for which the author was part of the acoustics design team. The team went through the steps of the acoustics design using the Odeon software package, which made it possible to go back and forth in the design process and make proper improvements, despite limitations on the choice of materials. As the theater is now being built, it will soon be feasible to carry out acoustics measurements with the help of the Building and Housing Research Center (BHRC) in Iran, as well as subjective evaluations during the very first performances. This paper aims to juxtapose the results of the computer simulation and the acoustics measurements and to make a comparison between them to see whether there are any discrepancies.
1:45
3pAA4. Acoustical materials and sustainability analyses. Hassan Azad
(Architecture, Univ. of Florida, 3527 SW, 20th Ave., 1132B, Gainesville,
FL 32607, h.azad@ufl.edu)
Acoustical materials can perform a variety of functions, from absorption and diffusion to insulation and noise control. Materials may have similar acoustical performance but very different characteristics in terms of sustainability. It is important to evaluate the environmental effects of materials that exhibit the same acoustical performance in order to choose wisely among the available alternatives. This study introduces and compares the different tools and methods commonly used in the environmental sustainability analysis of materials, including Eco-profile, Eco-indicator, Eco-invent, and software packages such as IMPACT. In addition, a computer model is proposed in which both the acoustic properties and the sustainability assessment of a given material can be calculated through computer-aided techniques. The model consists of a simple cubic room with a given set of materials for its elements, such as walls, floor, ceiling, and windows or doors (if any). The acoustic properties that can be calculated are the reverberation time, with the help of either Odeon or CATT-Acoustic software, and airborne/impact sound insulation, with the help of the recently developed software SonArchitect. For the sustainability assessment, both the LCA method and software packages like IMPACT are the main tools.
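The reverberation-time half of the proposed cubic-room model reduces, in its simplest statistical form, to Sabine's equation; the dimensions and absorption coefficients below are invented for illustration (the paper itself relies on Odeon/CATT and SonArchitect rather than this hand calculation):

```python
# Sabine estimate for a simple cubic test room (all values invented
# for illustration): RT60 = 0.161 * V / A, with V the volume in m^3
# and A the total absorption in m^2 sabins.
L = 5.0                               # cube edge length, m
V = L ** 3                            # volume, m^3
S_face = L ** 2                       # area of one face, m^2
# absorption coefficients: floor, ceiling, then four walls
alphas = [0.10, 0.60] + [0.05] * 4
A = sum(a * S_face for a in alphas)   # total absorption, m^2 sabins
rt60 = 0.161 * V / A                  # seconds
```

A model like this gives the acoustic side of the comparison; the same material list then feeds the LCA-based sustainability assessment described above.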
2:00
3pAA5. Influence of the workmanship on the airborne sound insulation
properties of light weight building plasterboard steel frame wall systems. Herbert Muellner (Acoust. and Bldg. Phys., Federal Inst. of Technol.
TGM Vienna, Wexstrasse 19-23, Vienna A-1200, Austria, herbert.muellner@tgm.ac.at) and Thomas Jakits (Appl. Res. and Development, Saint-Gobain Rigips Austria GesmbH, Vienna, Austria)
Building elements built according to the lightweight mode of construction, e.g., plasterboard steel-frame wall systems, show a large variation in airborne sound insulation properties even though the elements appear identical. According to several studies conducted in recent years, certain aspects of workmanship have a significant influence on the airborne sound insulation characteristics of lightweight building elements. The method used to fasten the planking (e.g., gypsum boards, gypsum fiber boards), as well as the number and position of the screws, can lead to considerable variations in the sound insulation properties. Above 200 Hz, the sound reduction index R can differ by more than 10 dB with variation of the screw positions. Applying prefabricated composite panels of adhesively connected plasterboards not only considerably reduces the depth of the critical-frequency dip, owing to the higher damping provided by the interlayer, but can also significantly decrease the negative influence of workmanship on the airborne sound insulation properties of these kinds of lightweight walls in comparison with the standard planking of double-layer plasterboard systems. The influence of secondary construction details and workmanship will be discussed in the paper.
2:15
3pAA6. Contribution of floor treatment characteristics to background noise levels in health care facilities, Part 1. Adam L. Paul, David A. Arena, Eoin A. King, Robert Celmer (Acoust. Prog. & Lab., Univ. of Hartford, 200 Bloomfield Ave., West Hartford, CT 06117, celmer@hartford.edu), and John J. LoVerde (Paul S. Veneklasen Res. Foundation, Santa Monica, CA)
Acoustical tests were conducted on five types of commercial-grade flooring to assess their potential contribution to noise generated within health care facilities outside of patient rooms. The floor types include sheet vinyl (with and without a 5 mm rubber backing), virgin rubber (with and without a 5 mm rubber backing), and a rubber-backed commercial-grade carpet for comparison. The acoustical tests conducted were ISO 3741-compliant sound power level testing (using two source types: a tapping machine to simulate footfalls and a rolling hospital cart) and sound absorption testing as per ASTM C423. Among the non-carpet samples, the material type that produced the least sound power was determined to be the rubber-backed sheet vinyl. While both 5 mm-backed samples showed a significant difference compared to their un-backed counterparts with both source types, the rubber-backed sheet vinyl performed slightly better than the rubber-backed virgin rubber in the higher frequency bands in both tests. The performance and suitability of these flooring materials in a health care facility compared to commercial carpeting will be discussed. [Work supported by Paul S. Veneklasen Research Foundation.]
2:30
3pAA7. Visualization of auditory masking for firefighter alarm detection. Casey Farmer (Dept. of Mech. Eng., Univ. of Texas at Austin, 1208 Enfield Rd., Apt. 203, Austin, TX 78703, caseymfarmer@utexas.edu), Mustafa Z. Abbasi, Preston S. Wilson (Appl. Res. Labs, Dept. of Mech. Eng., Univ. of Texas at Austin, Austin, TX), and Ofodike A. Ezekoye (Dept. of Mech. Eng., Univ. of Texas at Austin, Austin, TX)
An essential piece of firefighter equipment is the Personal Alert Safety System (PASS), which emits an alarm when a firefighter has been inactive for a specified period of time and is used to find and rescue downed firefighters. The National Institute for Occupational Safety and Health (NIOSH) firefighter fatality reports suggest that there have been instances when the PASS alarm was not audible to other firefighters on the scene. This paper seeks to use acoustic models to measure the sound pressure level of various signals throughout a structure. With this information, a visual representation will be created to map where a PASS alarm is audible and where it is masked by noise sources. This paper presents an initial audibility study, including temporal masking and frequency analysis. The results will be compared to auralizations and experimental data. Some other potential applications will be briefly explored.
2:45
3pAA8. Investigations on acoustical coupling within single-space monumental structures using a diffusion equation model. Zühre Sü Gül (R&D / Architecture, MEZZO Studyo / METU, METU Technopolis KOSGEB-TEKMER No. 112, ODTU Cankaya, Ankara 06800, Turkey, zuhre@mezzostudyo.com), Ning Xiang (Graduate Program in Architectural Acoust., School of Architecture, Rensselaer Polytechnic Inst., Troy, NY), and Mehmet Çalışkan (Dept. of Mech. Eng., Middle East Tech. Univ. / MEZZO Studyo, Ankara, Turkey)
Sound energy distributions and flows within single-space rooms can be exploited to understand the occurrence of multi-slope decays. In this work, a real-size monumental worship space is selected for investigations of non-exponential sound energy decays. Previous field tests in this single-space venue indicate multi-slope formation within such a large volume with a multiple-dome upper structure layout. In order to reveal the probable reasons for non-exponential sound energy decays within such an architectural venue, sound energy distributions and energy flows are investigated. Due to its computational efficiency and its advantages in spatial energy density and flow vector analysis, a diffusion equation model (DEM) is applied for modeling the sound field of the monumental worship space. Preliminary studies indicate good agreement in overall energy decay time estimations between experimental field and DEM results. The energy flow vector and energy distribution analyses indicate the upper central dome structure to be the potential energy accumulation/concentration zone, contributing to the later energy decays.
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
INDIANA A/B, 1:00 P.M. TO 3:20 P.M.
Session 3pBA
Biomedical Acoustics: History of High Intensity Focused Ultrasound
Lawrence A. Crum, Cochair
Applied Physics Laboratory, University of Washington, Center for Industrial and Medical Ultrasound, Seattle, WA 98105
Narendra T. Sanghvi, Cochair
R & D, SonaCare Medical, 4000 Pendleton Way, Indianapolis, IN 46226
Invited Papers
1:00
3pBA1. History of high intensity focused ultrasound, Bill and Frank Fry and the Bioacoustics Research Laboratory. William O'Brien and Floyd Dunn (Elec. Eng., Univ. of Illinois, 405 N. Mathews, Urbana, IL 61801, wdo@uiuc.edu)
1946 is a key year in the history of HIFU. That year, sixty-eight years ago, the Bioacoustics Research Laboratory was established at the University of Illinois. Trained in theoretical physics, William J. (Bill) Fry (1918–1968) left his graduate studies at Penn State University to work at the Naval Research Laboratory in Washington, DC on underwater sound during World War II. Bill was hired by the
University of Illinois in 1946, wanting to continue to conduct research activities of his own choosing in the freer university atmosphere.
Like Bill, Francis J. (Frank) Fry (1920–2005) went to Penn State as well as the University of Pittsburgh where he studied electrical engineering. Frank joined Bill at the University of Illinois, also in 1946, having worked at Westinghouse Electric Corporation where his division was a prime contractor on the Manhattan Project. Floyd Dunn also arrived at the University of Illinois in 1946 as an undergraduate
student, having served in the European Theater during World War II. The talk will recount some of the significant HIFU contributions
that emerged from BRL faculty, staff, and students. [NIH Grant R37EB002641.]
1:20
3pBA2. Transforming ultrasound basic research into clinical systems. Narendra T. Sanghvi and Thomas D. Franklin (R & D, SonaCare Medical, 4000 Pendleton Way, Indianapolis, IN 46226, narensanghvi@sonacaremedical.com)
In the late 1960s, Robert F. Heimburger, MD, Chief of Neurosurgery at the Indiana University School of Medicine, started collaborating with William J. Fry and Francis J. Fry at the Interscience Research Institute (IRI) in Champaign, IL, and treated brain cancer patients with HIFU. In 1970, Dr. Heimburger and the Indiana University School of Medicine (IUSM) invited IRI to join IUSM and the Indianapolis Center for Advanced Research, Inc. (ICFAR). In 1972, the dedicated Fortune Fry Research Laboratory (FFRL) was inaugurated to advance ultrasound research relevant to clinical use. In the 1970s, an automated, computer-controlled, integrated B-mode, image-guided HIFU system ("the candy machine") was developed that successfully treated brain cancer patients at IUSM. HIFU was found to be safe for the destruction of brain tumors. Later, a second-generation brain HIFU device was developed to work with CAT or MR images. In 1974, the FFRL developed the first real-time, 2-D cardiac ultrasound scanner. Prof. H. Feigenbaum pioneered this imaging technique and formed the "Echocardiography Society." In 1978, an automated breast ultrasound system was successfully developed, leading to the formation of Labsonics, Inc., which deployed 300 scanners in 4 years. In 1986, the Sonablate system to treat prostate cancer was developed; the Sonablate has since been used worldwide.
1:40
3pBA3. The development of high intensity focused ultrasound in Europe, what could we have done better? Gail ter Haar (Phys.,
Inst. of Cancer Res., Phys. Dept., Royal Marsden Hospital, Sutton, Surrey SM2 5PT, United Kingdom, gail.terhaar@icr.ac.uk)
The clinical uptake of HIFU has been disappointingly slow, despite its promise as a minimally invasive, ultimately conformal technique. It may be instructive to look at the way in which this technique has evolved from its early days, with an eye to whether a different approach might have resulted in its more rapid acceptance. Examples will be drawn from HIFU's development in the United Kingdom.
2:00
3pBA4. LabTau's experience in therapeutic ultrasound: From lithotripsy to high intensity focused ultrasound. Jean-Yves
Chapelon, Michael Canney, David Melodelima, and Cyril Lafon (U1032, INSERM, 151 Cours Albert Thomas, Lyon 69424, France,
jean-yves.chapelon@inserm.fr)
Research on therapeutic ultrasound at LabTau (INSERM, Lyon, France) began in the early 1980s with work on shock waves that led to the development of the first ultrasound-guided lithotripter. In 1989, this research shifted toward new developments in the field of HIFU, with applications in urology and oncology. The most significant developments have been obtained in urology with the Ablatherm™ project, a transrectal HIFU device for the thermal ablation of the prostate. This technology has since become an effective therapeutic alternative for patients with localized prostate cancer. Since 2000, three generations of the Ablatherm™ have been CE marked and commercialized by EDAP-TMS. The latest version, the FocalOne™, allows for the focal treatment of prostate cancer and combines dynamic focusing with fusion of MR images to ultrasound images acquired in real time by the imaging probe integrated in the HIFU transducer. Using toroidal ultrasound transducers, a HIFU device was also recently validated clinically for the treatment of liver metastases. Another novel application that has reached the clinic is the treatment of glaucoma using a miniature, disposable HIFU device. Today, new approaches are also being investigated for treating cerebral and cardiac diseases.
2:20
3pBA5. High intensity therapeutic ultrasound research in the former USSR in the 1950s–1970s. Vera Khokhlova (Dept. of Acoust.,
Phys. Faculty, Moscow State Univ., 1013 NE 40th St., Seattle, Washington 98105, va.khokhlova@gmail.com), Valentin Burov (Dept.
of Acoust., Phys. Faculty, Moscow State Univ., Moscow, Russian Federation), and Leonid Gavrilov (Andreev Acoust. Inst., Moscow,
Russian Federation)
A historical overview of therapeutic ultrasound research performed in the former USSR in the 1950s–1970s is presented. In the 1950s, the team of A. K. Burov in Moscow proposed the use of non-thermal, non-cavitational mechanisms of high intensity unfocused ultrasound to induce specific immune responses in treating Brown-Pearce tumors in an animal model and melanoma tumors in a number of patients. Later, in the early 1970s, new studies began at the Acoustics Institute in Moscow jointly with several medical institutions. Significant results included the first measurements of cavitation thresholds in animal brain tissues in vivo and demonstration of the feasibility of applying high intensity focused ultrasound (HIFU) for local ablation of brain structures through the intact skull. Another direction was ultrasound stimulation of superficial and deep receptors in humans and animals using short HIFU pulses; these studies became the basis for ultrasound stimulation of different neural structures and have found useful clinical applications in the diagnosis of skin, neurological, and hearing disorders. Initial studies on the synergism between ultrasound in therapeutic doses and consecutive application of ionizing radiation were carried out. Later, hyperthermia research was also performed for brain tissues and for ophthalmology. [Work supported by grant RSF 14-12-00974.]
2:40
3pBA6. The development of MRI-guided focused ultrasound at Brigham & Women's Hospital. Nathan McDannold (Radiology, Brigham and Women's Hospital, 75 Francis St., Boston, MA, njm@bwh.harvard.edu)
The Focused Ultrasound Laboratory was created in the Department of Radiology at Brigham & Women's Hospital in the early 1990s, when Ferenc Jolesz invited Kullervo Hynynen to join him in collaborating with GE Medical Systems to develop MRI-guided Focused Ultrasound surgery. This collaboration between Dr. Hynynen, an experienced researcher in therapeutic ultrasound, Dr. Jolesz, who developed MRI-guided laser ablation, and the engineers at GE and later InSightec, with their decades of experience developing MRI and ultrasound systems, established a program that over two decades produced important contributions to HIFU. In this talk, Nathan McDannold, the current director of the laboratory, will review the achievements made by the team of researchers, which include the development of the first MRI-guided FUS system, the creation of the first MRI-compatible phased arrays, important contributions to the validation and implementation of MR temperature mapping and thermal dosimetry, the development of an MRI-guided transcranial system, and the discovery that ultrasound and microbubbles can temporarily disrupt the blood–brain barrier. The output of this team, which led to clinical systems that have treated tens of thousands of patients at sites around the world, is an excellent example of how academic research can be translated to the clinic.
3:00
3pBA7. What have we learned about shock wave lithotripsy in the past thirty years? Pei Zhong (Mech. Eng. and Mater. Sci., Duke
Univ., 101 Sci. Dr., Durham, NC 27708, pzhong@duke.edu)
Shock wave lithotripsy (SWL) has revolutionized the treatment of kidney stone disease since its introduction in the early 1980s. Considering the paucity of knowledge 30 years ago about the bioeffects of shock waves in various tissues and renal concretions, the success of SWL is a truly remarkable feat on its own. We have learned a lot since then. New technologies have been introduced for shock wave generation, focusing, and measurement, among others. In parallel, new knowledge has been acquired progressively about the mechanisms of stone comminution and tissue injury. Yet there are still outstanding issues that are constantly debated, awaiting resolution. In this talk, the quest for a better understanding of the interaction of shock waves with stones and renal tissue in the field of SWL will be reviewed in chronological order. The focus will be on stress waves and cavitation, for their distinctly different (in origin) yet often synergistically combined (in action) roles in the critical processes of SWL. This historical review will be followed by a discussion of recent developments and future prospects of SWL technologies that may ultimately help improve the clinical performance and safety of contemporary shock wave lithotripters. [Work supported by NIH through 5R37DK052985-18.]
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
INDIANA C/D, 2:00 P.M. TO 3:05 P.M.
Session 3pED
Education in Acoustics: Acoustics Education Prize Lecture
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, 64 Heritage Dr, Terre Haute, IN 47803-2374
Chair’s Introduction—2:00
Invited Paper
2:05
3pED1. Educating mechanical engineers in the art of noise control. Colin Hansen (Mech. Eng., Univ. of Adelaide, 33 Parsons St.,
Marion, SA 5043, Australia, chansen@bigpond.net.au)
Acoustics and noise control is one of the disciplines in which the material that students learn during a well-structured undergraduate course can be immediately applied to many problems they may encounter during their employment. However, in order to find optimal solutions to noise control problems, it is vitally important that students have a good fundamental understanding of the physical principles underlying the subject, as well as a good understanding of how these principles may be applied in practice. Ideally, they should have access to affordable software and be confident in their ability to interpret and apply the results of any computer-based modelling that they may undertake. Students must fully understand any ethical issues that may arise, such as their obligation to ensure that their actions do not contribute to any negative impact on the health and welfare of communities. How do we ensure that our mechanical engineering graduates develop the understanding and knowledge required to tackle noise control problems they may encounter after graduation? This presentation attempts to answer this question by discussing the process of educating undergraduate and postgraduate mechanical engineering students at the University of Adelaide, including details of lab classes, example problems, textbooks, and software developed for the dual purpose of educating students and assisting graduates in solving practical noise control problems.
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
INDIANA E, 1:00 P.M. TO 2:35 P.M.
Session 3pID
Interdisciplinary: Hot Topics in Acoustics
Paul E. Barbone, Chair
Mechanical Engineering, Boston University, 110 Cummington St, Boston, MA 02215
Chair’s Introduction—1:00
Invited Papers
1:05
3pID1. Online education: From classrooms to outreach, the internet is changing the way we teach and learn. Michael B. Wilson
(Phys., North Carolina State Univ., 1649 Highlandon Ct, State College, PA 16801, wilsomb@gmail.com)
The internet is changing the face of education in the world today. More people have access to more information than ever before,
and new programs are organizing and providing educational content for free to millions of internet users worldwide. This content ranges
from interesting facts and demonstrations that introduce a topic to entire university courses. Some of these programs look familiar and
draw from the media and education of the past, building off the groundwork laid by television programs like Watch Mr. Wizard, Bill
Nye the Science Guy, and Reading Rainbow, with others more reminiscent of traditional classroom lectures. Some programs, on the
other hand, are truly a product of modern internet culture and fan communities. While styles and target audiences vary greatly, the focus
is education, clarifying misconceptions, and sparking an interest in learning. Presented will be a survey of current online education,
resources, and outreach, as well as the state of acoustics in online education.
1:35
3pID2. Advanced methods of signal processing in acoustics. R. Lee Culver (School of Architecture, Rensselaer Polytechnic Inst.,
State College, Pennsylvania) and Ning Xiang (School of Architecture, Rensselaer Polytechnic Inst., Greene Bldg., 110 8th St., Troy, NY
12180, xiangn@rpi.edu)
Signal processing is applied in virtually all areas of modern acoustics to extract, classify, and/or quantify relevant information from
acoustic measurements. Methods range from classical approaches based on Fourier and time-frequency analysis, to array signal processing, feature extraction, computational auditory scene analysis, and Bayesian inference, which incorporates physical models of the acoustic system under investigation together with advanced sampling techniques. This talk highlights new approaches to signal processing
recently applied in a broad variety of acoustical problems.
2:05
3pID3. Hot topics in fish acoustics (active). Timothy K. Stanton (Dept. Appl. Ocean. Phys. & Eng., Woods Hole Oceanographic Inst.,
Woods Hole, MA 02543, tstanton@whoi.edu)
It is important to quantify the spatial distribution of fish in their natural environment (ocean, lake, and river), and how that distribution evolves in time, for a variety of applications, including (1) management of fish stocks to maintain a sustainable source of food and (2) improvement of our understanding of the ecosystem (such as how climate change impacts fish) through quantifying predator–prey relationships and other behavior. Active fish acoustics provides an attractive complement to nets, given the great distances sound travels in water and its ability to rapidly survey a large region at high resolution. The method involves studying distributions of fish in the water by analyzing their echoes. While this field has enjoyed decades of development, a number of "hot topics" remain under active investigation today. These include (1) broadband acoustics as an emerging tool for advanced classification of, and discrimination between, species; (2) multi-beam imaging systems used to classify fish schools by size and shape; (3) long-range (km to tens of km) detection of fish; and (4) the use of transmission loss to classify fish on one-way propagation paths. Recent advances in these and other topics will be presented.
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
MARRIOTT 3/4, 1:00 P.M. TO 3:15 P.M.
Session 3pNS
Noise: Sonic Boom and Numerical Methods
Jonathan Rathsam, Cochair
NASA Langley Research Center, MS 463, Hampton, VA 23681
Alexandra Loubeau, Cochair
NASA Langley Research Center, MS 463, Hampton, VA 23681
Contributed Papers
1:00
3pNS1. Source parameters for the numerical simulation of lightning as
a nonlinear acoustic source. Andrew Marshall, Neal Evans, Chris Hackert,
and Karl Oelschlaeger (Southwest Res. Inst., 6220 Culebra Rd., San Antonio, TX 78238-5166, andrew.marshall@swri.org)
Researchers have proposed using acoustic data to obtain additional insight into aspects of lightning physics. However, it is unclear how much information is retained in the nonlinear acoustic waveform as it propagates and evolves away from the lightning channel. Prior research on tortuous lightning has used simple N-waves as the initial acoustic emission. It is not clear whether more complex properties of the lightning channel physics are also transmitted in the far-field acoustic signal, or whether simple N-waves are a sufficient source term for predicting far-field propagation. To investigate this, the authors have conducted a numerical study of acoustic emissions from a linear lightning channel. Using a hybrid strong-shock/weak-shock code, the authors compare the propagation of a simple N-wave with that of emissions from a source derived from simulated strong shock waves from the lightning channel. The implications of these results for the measurement of sound from nearby lightning sources will be discussed.
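The ideal N-wave used as a source term in such studies is simply a linear pressure ramp between a leading and a trailing shock. A minimal sketch (the peak pressure p0 and duration T below are illustrative placeholders, not values from this study):

```python
import numpy as np

def n_wave(t, p0=1.0, T=1.0):
    """Ideal N-wave: pressure ramps linearly from +p0 at t = 0 to
    -p0 at t = T, and is zero outside the pulse."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0.0) & (t <= T), p0 * (1.0 - 2.0 * t / T), 0.0)
```

The two jump discontinuities at t = 0 and t = T are the shocks; the ramp in between carries no additional information about the channel, which is exactly the adequacy question the study above addresses.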
1:15
3pNS2. Nearfield acoustic measurements of triggered lightning using a
one-dimensional microphone array. Maher A. Dayeh and Neal Evans
(Southwest Res. Inst., Div 18, B77, 6220 Culebra Rd., San Antonio, TX
78238, neal.evans@swri.org)
For the first time, acoustic signatures from rocket-triggered lightning are
measured by a 15 m long, one-dimensional microphone array consisting of
16 receivers, situated 79 m from the lightning channel. Measurements were
taken at the International Center for Lightning Research and Testing
(ICLRT) in Camp Blanding, FL, during the summer of 2014. We describe
the experimental setup and report on the first observations obtained to date.
We also discuss the implications of these novel measurements for the thunder initiation process and its energy budget during lightning discharges. Challenges of obtaining measurements in these harsh ambient conditions, and their countermeasures, will also be discussed.
1:30
3pNS3. The significance of edge diffraction in sonic boom propagation
within urban environments. Jerry W. Rouse (Structural Acoust. Branch,
NASA Langley Res. Ctr., 2 North Dryden St., MS 463, Hampton, VA
23681, jerry.w.rouse@nasa.gov)
Advances in aircraft design, computational fluid dynamics, and sonic boom propagation modeling suggest that commercial supersonic aircraft can be designed to produce quiet sonic booms. Driven by these advances, the decades-long government ban on overland supersonic commercial air transportation may be lifted. The ban would be replaced with a noise-based certification standard, the development of which requires knowledge of community response to quiet sonic booms. For inner city environments, the estimation of community exposure to sonic booms is challenging due to the complex topography created by buildings, the large spatial extent, and the required frequency range. Such analyses are currently intractable for traditional wave-based numerical methods such as the Boundary Element Method. Numerical methods based upon geometrical acoustics show promise; however, edge diffraction is not inherent in geometrical acoustics and may be significant. This presentation discusses an initial investigation into the relative importance of edge diffraction in inner city sound fields caused by sonic booms. Results will provide insight into the degree to which edge diffraction effects are necessary for accurate predictions of inner city community exposure.
1:45
3pNS4. Sonic boom noise exposure inside homes. Jacob Klos (Structural
Acoust. Branch, NASA Langley Res. Ctr., 2 N. Dryden St., MS 463, Hampton, VA 23681, j.klos@nasa.gov)
Commercial supersonic overland flight is presently banned both nationally and internationally due to the sonic boom noise that is produced in
overflown communities. However, within the next decade, NASA and
industry may develop and demonstrate advanced supersonic aircraft that significantly mitigate the noise perceived at ground level. To allow commercial
operation of such vehicles, bans on commercial supersonic flight must be
replaced with a noise-based certification standard. In the development of
this standard, variability in the dose-response model needs to be identified.
Some of this variability is due to differing sound transmission characteristics
of homes both within the same community and among different communities. A tool to predict the outdoor-to-indoor low-frequency noise transmission into homes has been developed at Virginia Polytechnic Institute and
State University, which was used in the present study to assess the indoor
exposure in two communities representative of the northern and southern
United States climate zones. Sensitivity of the indoor noise level to house
geometry and material properties will be discussed. Future plans to model
the noise exposure variation among communities within the United States
will also be discussed.
2:00
3pNS5. Evaluation of the effect of aircraft size on indoor annoyance
caused by sonic booms. Alexandra Loubeau (Structural Acoust. Branch,
NASA Langley Res. Ctr., MS 463, Hampton, VA 23681, a.loubeau@nasa.
gov)
Sonic booms from recently proposed supersonic aircraft designs developed with advanced tools are predicted to be quieter than those from previous designs. The possibility of developing a low-boom flight demonstration
vehicle for conducting community response studies has attracted international interest. These studies would provide data to guide development of a
preliminary noise certification standard for commercial supersonic aircraft.
An affordable approach to conducting these studies suggests the use of a sub-scale experimental aircraft. Due to the smaller size and weight of the
sub-scale vehicle, the resulting sonic boom is expected to contain spectral
characteristics that differ from that of a full-scale vehicle. To determine
the relevance of using a sub-scale aircraft for community annoyance studies, a laboratory study was conducted to verify that these spectral differences do not significantly affect human response. Indoor annoyance was
evaluated for a variety of sonic booms predicted for several different sizes
of vehicles. Previously reported results compared indoor annoyance for the
different sizes using the metric Perceived Level (PL) at the exterior of the
structure. Updated results include analyses with other candidate noise metrics, nonlinear regression, and specific boom duration effects.
2:15
3pNS6. Effects of secondary rattle noises and vibration on indoor
annoyance caused by sonic booms. Jonathan Rathsam (NASA Langley
Res. Ctr., MS 463, Hampton, VA 23681, jonathan.rathsam@nasa.gov)
For the past 40 years, commercial aircraft have been banned from overland supersonic flight due to the annoyance caused by sonic booms. However,
advanced aircraft designs and sonic boom prediction tools suggest that significantly quieter sonic booms may be achievable. Additionally, aircraft noise
regulators have indicated a willingness to consider replacing the ban with a
noise-based certification standard. The outdoor noise metric used in the certification standard must be strongly correlated with indoor annoyance. However, predicting indoor annoyance is complicated by many factors including
variations in outdoor-to-indoor sound transmission and secondary indoor rattle noises. Furthermore, direct contact with vibrating indoor surfaces may also
affect annoyance. A laboratory study was recently conducted to investigate
candidate noise metrics for the certification standard. Regression analyses
were conducted for metrics based on the outdoor and transmitted indoor sonic
boom waveforms both with and without rattle noise, and included measured
floor vibration. Results indicate that effects of vibration are significant and independent of sound level. Also, the presence or absence of rattle sounds in a
transmitted sonic boom signal generally changes the regression coefficients
for annoyance models calculated from the outdoor sound field, but may not
for models calculated from the indoor sound field.
2:30
3pNS7. Artificial viscosity in smoothed particle hydrodynamics simulation of sound interference. Xu Li, Tao Zhang, YongOu Zhang (School of
Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol.,
Wuhan, Hubei Province 430074, China, lixu199123@gmail.com), Huajiang
Ouyang (School of Eng., Univ. of Liverpool, Liverpool, United Kingdom),
and GuoQing Liu (School of Naval Architecture and Ocean Eng., Huazhong
Univ. of Sci. and Technol., Wuhan, Hubei Province, China)
Artificial viscosity has been widely used to reduce unphysical oscillations in Smoothed Particle Hydrodynamics (SPH) simulations. However, the effects of artificial viscosity on the SPH simulation of sound interference have not been discussed in the existing literature. This paper analyzes these effects and gives some suggestions on the choice of the computational parameters of the artificial viscosity in sound interference simulation. First, a standard SPH code for simulating sound interference in the time domain is built by solving the linearized acoustic wave equations. Second, the Monaghan-type artificial viscosity is used to optimize the SPH simulation. Then, the SPH codes with and without the artificial viscosity are both used to simulate the sound interference, and the numerical solutions are compared with the theoretical results. Finally, different values of the computational parameters of the artificial viscosity are used in the simulation in order to determine appropriate values. It turns out that the numerical solutions of the SPH simulation of sound interference agree well with the theoretical results. The artificial viscosity can improve the accuracy of the sound interference simulation. Appropriate values of the computational parameters of the artificial viscosity are recommended in this paper.
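For reference, the Monaghan-type artificial viscosity referred to above adds a pairwise dissipative pressure term only for approaching particles. A minimal sketch of that term, with the usual tunable parameters alpha, beta, and eps (the specific values used by the authors are not given in the abstract):

```python
import numpy as np

def monaghan_viscosity(r_ab, v_ab, rho_a, rho_b, c, h,
                       alpha=1.0, beta=2.0, eps=0.01):
    """Monaghan artificial-viscosity term Pi_ab for a particle pair.

    r_ab, v_ab : relative position and velocity vectors (a minus b)
    rho_a, rho_b : particle densities; c : sound speed; h : smoothing length
    Dissipation is applied only when the pair approaches (v_ab . r_ab < 0).
    """
    r_ab = np.asarray(r_ab, dtype=float)
    v_ab = np.asarray(v_ab, dtype=float)
    vr = float(np.dot(v_ab, r_ab))
    if vr >= 0.0:            # receding pair: no artificial viscosity
        return 0.0
    mu = h * vr / (float(np.dot(r_ab, r_ab)) + eps * h ** 2)
    rho_bar = 0.5 * (rho_a + rho_b)
    return (-alpha * c * mu + beta * mu ** 2) / rho_bar
```

Larger alpha damps spurious oscillations more strongly but also smears the interference pattern, which is the trade-off a parameter study such as the one above must balance.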
2:45
3pNS8. Smoothed particle hydrodynamics simulation of sound reflection and transmission. YongOu Zhang (School of Naval Architecture and
Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan 430074, China,
zhangyo1989@gmail.com), Tao Zhang (School of Naval Architecture and
Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, Hubei Province,
China), Huajiang Ouyang (School of Eng., Univ. of Liverpool, Liverpool,
United Kingdom), and TianYun Li (School of Naval Architecture and
Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, China)
Mesh-based methods are widely used in acoustic simulations nowadays. However, acoustic problems with complicated domain topologies and multiphase systems are difficult to describe with these methods. In contrast, Smoothed Particle Hydrodynamics (SPH), as a Lagrangian method, handles such problems with little difficulty. The present paper aims to simulate the reflection and transmission of sound waves with the SPH method in the time domain. First, the linearized acoustic equations are represented in SPH form by using the particle approximation. Then, one-dimensional sound reflection and transmission are simulated with the SPH method, and the solutions are compared with the theoretical results. Finally, the effects of the smoothing length and the number of neighboring particles on the computation are discussed. The errors in sound pressure, particle velocity, and change of density show that the SPH method is feasible for simulating the reflection and transmission of sound waves. Meanwhile, the relationship between the characteristic impedance and the reflected waves obtained by the SPH simulation is consistent with the theoretical result.
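The theoretical result referred to above is the classical plane-wave relation at normal incidence: the pressure reflection and transmission coefficients depend only on the two characteristic impedances. A short sketch for checking an SPH (or any time-domain) solution against it:

```python
def reflection_transmission(z1, z2):
    """Pressure reflection and transmission coefficients for a plane
    wave at normal incidence, travelling from impedance z1 into z2."""
    r = (z2 - z1) / (z2 + z1)
    t = 2.0 * z2 / (z2 + z1)
    return r, t
```

For example, a water-to-air interface gives r close to -1 (near-total, inverted reflection), equal impedances give r = 0 and t = 1, and pressure continuity requires t = 1 + r in all cases.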
3:00
3pNS9. A high-order Cartesian-grid finite-volume method for aeroacoustics simulations. Mehrdad H. Farahani (Head and Neck Surgery,
UCLA, 31-24 Rehab Ctr., UCLA School of Medicine, 1000 Veteran Ave.,
Los Angeles, CA 90095, mh.farahani@gmail.com), John Mousel (Mech.
and Industrial Eng., The Univ. of Iowa, Iowa City, IA), and Sarah Vigmostad (Biomedical Eng., The Univ. of Iowa, Iowa City, IA)
A moving-least-square based finite-volume method is developed to simulate acoustic wave propagation and scattering from complicated solid geometries. This hybrid method solves the linearized perturbed compressible equations as the governing equations of the acoustic field. The solid boundaries are embedded in a uniform Cartesian grid and represented using level set fields; thus, the current approach avoids unstructured grid generation for irregular geometries. The desired boundary conditions are imposed sharply on the immersed boundaries using a ghost fluid method. The moving-least-square approach is used in the current solver in three ways: reconstruction of the field variables on cell faces for high-order flux construction, population of the ghost cells based on the desired boundary condition, and filtering of the high-wave-number modes near the immersed boundaries. The computational stencils away from the boundaries are identical; hence, only one moving-least-square shape function is computed and stored, with its underlying grid pattern, for all the interior cells. This feature significantly reduces the memory requirement of the acoustic solver compared to a similar finite-volume method on an irregular unstructured mesh. The acoustic solver is validated against several benchmark problems.
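The moving-least-square reconstruction at the heart of the method above fits a local weighted polynomial around each evaluation point. A one-dimensional sketch of the idea (Gaussian weight and quadratic basis are illustrative choices; the authors' solver is multi-dimensional and more elaborate):

```python
import numpy as np

def mls_reconstruct(x_nodes, f_nodes, x_eval, h, degree=2):
    """Moving-least-squares value of f at x_eval from scattered 1-D data.

    A polynomial of the given degree is fitted to (x_nodes, f_nodes)
    by weighted least squares, with a Gaussian weight of scale h
    centered at x_eval; the fit is then evaluated at x_eval.
    """
    x_nodes = np.asarray(x_nodes, dtype=float)
    f_nodes = np.asarray(f_nodes, dtype=float)
    w = np.exp(-((x_nodes - x_eval) / h) ** 2)       # Gaussian weights
    # polynomial basis centered at the evaluation point
    A = np.vander(x_nodes - x_eval, degree + 1, increasing=True)
    AtW = A.T * w                                    # apply weights
    coeffs = np.linalg.solve(AtW @ A, AtW @ f_nodes) # normal equations
    return coeffs[0]      # constant term = value at x_eval
```

Because the basis has degree 2, the reconstruction reproduces any quadratic field exactly, which is the consistency property behind the high-order flux reconstruction.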
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
MARRIOTT 9/10, 1:00 P.M. TO 3:25 P.M.
Session 3pUW
Underwater Acoustics: Shallow Water Reverberation I
Dajun Tang, Chair
Applied Physics Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105
Chair’s Introduction—1:00
Invited Papers
1:05
3pUW1. Overview of reverberation measurements in Target and Reverberation Experiment 2013. Jie Yang, Dajun Tang, Brian T.
Hefner, Kevin L. Williams (Appl. Phys. Lab, Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, jieyang@apl.washington.
edu), and John R. Preston (Appl. Res. Lab., Penn State Univ., State College, PA)
The Target and REverberation EXperiment 2013 (TREX13) was carried out off the coast of Panama City, Florida, from 22 April to
16 May, 2013. Two fixed-source/fixed-receiver acoustic systems were used to measure reverberation over time under diverse environmental conditions, allowing study of reverberation level (RL) dependence on bottom composition, sea surface conditions, and water column properties. Beamformed RL data are categorized to facilitate studies emphasizing (1) bottom reverberation; (2) sea surface impact;
(3) biological impact; and (4) target echo. This presentation is an overview of RL over the entire experiment, summarizing major observations and providing a road map and suitable data sets for follow-up efforts on model/data comparisons. Emphasis will be placed on
the dependence of RL on local geoacoustic properties and sea surface conditions. [Work supported by ONR.]
1:25
3pUW2. Non-stationary reverberation observations from the shallow water TREX13 reverberation experiments using the
FORA triplet array. John R. Preston (ARL, Pennsylvania State Univ., P. O. Box 30, MS3510, State College, PA 16804, jrp7@arl.psu.
edu), Douglas A. Abraham (CausaSci LLC, Ellicott City, MD), and Jie Yang (APL, Univ. of Washington, Seattle, WA)
A large experimental effort called TREX13 was conducted in April-May 2013 off Panama City, Florida. As part of this effort, reverberation and clutter measurements were taken in a fixed-fixed configuration in very shallow water (~20 m) over a 22 day period. Results
are presented characterizing reverberation, clutter and noise in the 1800-5000 Hz band. The received data are taken from the triplet subaperture of the Five Octave Research Array (FORA). The array was fixed 2 m off the sea floor and data were passed to a nearby moored
ship (the R/V Sharp). An ITC 2015 source transducer was fixed 1.1 m off the seafloor nearby. Pulses comprising gated CWs and
LFMs were used in this study. Matched filtered polar plots of the reverberation and clutter are presented using the FORA triplet beamformer. There are clear indications of biologic scattering. Some of the nearby shipwrecks are clearly visible in the clutter, as are reflections from a DRDC air-filled hose. The noise data show a surprising amount of time-dependent anisotropy. Some statistical
characterization of these various components of the reverberation are presented using K-distribution based algorithms to note differences
in the estimated shape parameter. Help from the Applied Physics Laboratory at the University of Washington was crucial to this effort.
[Work supported by ONR code 322OA.]
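The K-distribution shape-parameter estimation mentioned above can be sketched with a standard method-of-moments estimator for intensity data (a generic sketch, not the authors' specific algorithm; the simulated shape parameter below is an arbitrary choice). For K-distributed intensity I, the moment ratio satisfies E[I^2]/E[I]^2 = 2(1 + 1/nu), which can be inverted for the shape parameter nu.

```python
import numpy as np

# Method-of-moments estimate of the K-distribution shape parameter from
# intensity samples, commonly used to characterize non-Rayleigh reverberation.
def k_shape_mom(intensity):
    m = np.mean(intensity**2) / np.mean(intensity) ** 2
    return 1.0 / (m / 2.0 - 1.0)

# Synthesize K-distributed intensity as gamma texture times exponential speckle.
rng = np.random.default_rng(0)
nu_true = 2.0
texture = rng.gamma(shape=nu_true, scale=1.0 / nu_true, size=500_000)  # mean 1
speckle = rng.exponential(scale=1.0, size=500_000)                     # Rayleigh power
nu_hat = k_shape_mom(texture * speckle)
```

Smaller estimated shape parameters indicate heavier-tailed, more clutter-like reverberation; large nu recovers the Rayleigh limit.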
Contributed Paper
1:45
3pUW3. Propagation measurement using source tow and moored vertical line arrays during TREX13. William S. Hodgkiss, David Ensberg
(Marine Physical Lab, Scripps Inst. of Oceanogr., La Jolla, CA), and Dajun
Tang (Appl. Phys. Lab, Univ of Washington, 1013 NE 40th St., Seattle, WA
98105, djtang@apl.washington.edu)
The objective of TREX13 (Target and Reverberation EXperiment 2013)
is to investigate shallow water reverberation by concurrently measuring
propagation, local backscatter, and reverberation, as well as sufficient environmental parameters needed to achieve unambiguous model/data
comparison. During TREX13 the Marine Physical Laboratory (MPL) conducted propagation and forward scatter measurements. The MPL effort during TREX13 included deploying three 32-element (0.2 m element spacing)
vertical line arrays along the Main Reverberation Track at a bearing of
~128° and ranges ~2.4 km, ~4.2 km, and ~0.5 km from the R/V Sharp,
where reverberation measurements were being made. In addition, MPL carried out repeated source tows in the band of 2–9 kHz along the Main Reverberation Track, using tonal and LFM waveforms. The experimental
procedure is described and the resulting source-tow data is examined in the
context of Transmission Loss and its implications for reverberation.
Invited Papers
2:00
3pUW4. Comparison of signal coherence for continuous active and pulsed active sonar measurements in littoral waters. Paul C.
Hines (Dept. of Elec. and Comput. Eng., Dalhousie Univ., PO Box 15000, Halifax, NS B3H 4R2, Canada, phines50@gmail.com), Stefan
M. Murphy (Defence R&D Canada, Dartmouth, NS, Canada), and Keaton T. Hicks (Dept. of Mech. Eng., Dalhousie Univ., Halifax, NS,
Canada)
Military sonars must detect, localize, classify, and track submarine threats from distances safely outside their circle of attack. However, conventional pulsed active sonars (PAS) have duty cycles on the order of 1% which means that 99% of the time, the track is out of
date. In contrast, continuous active sonars (CAS) have a 100% duty cycle, which enables continuous updates to the track. This should
significantly improve tracking performance. However, one would typically want to maintain the same bandwidth for a CAS system as
for the PAS system it might replace. This will provide a significant increase in the time-bandwidth product, but may not produce the
increase in gain anticipated if there are coherence limitations associated with the acoustic channel. To examine the impact of the acoustic
channel on the gain for the two pulse types, an experiment was conducted as part of the Target and Reverberation Experiment (TREX)
in May 2013 using a moored active sonar and three passive acoustic targets, moored at ranges from 2 to 6 km away from the sonar. In
this paper, preliminary results from the experiment will be presented. [Work supported by the U.S. Office of Naval Research.]
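The duty-cycle and time-bandwidth reasoning in this abstract can be made concrete with a small calculation. The pulse length, bandwidth, and repetition interval below are hypothetical round numbers chosen only to illustrate the argument, not TREX parameters.

```python
import math

# Ideal coherent matched-filter processing gain grows as 10*log10(T*B),
# which is why a CAS waveform of the same bandwidth but much longer duration
# promises extra gain -- provided the channel stays coherent that long.
def processing_gain_db(T, B):
    return 10.0 * math.log10(T * B)

B = 1000.0    # Hz, same bandwidth for both systems
pas_T = 0.5   # s, a 0.5-s pulse at ~1% duty cycle (50-s repetition interval)
cas_T = 50.0  # s, CAS transmits continuously over the same 50-s interval

pas_gain = processing_gain_db(pas_T, B)
cas_gain = processing_gain_db(cas_T, B)
extra_gain = cas_gain - pas_gain  # ideal CAS advantage in dB
```

With these numbers the ideal CAS advantage is 20 dB; channel coherence limitations, as the abstract notes, may prevent this full gain from being realized.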
2:20
3pUW5. Reverberation and biological clutter in continental shelf waveguides. Ankita D. Jain, Anamaria Ignisca (Mech. Eng.,
Massachusetts Inst. of Technol., Rm. 5-229, 77 Massachusetts Ave., Cambridge, MA 02139, ankitadj@mit.edu), Mark Andrews, Zheng
Gong (Elec. & Comput. Eng., Northeastern Univ., Boston, MA), Dong Hoon Yi (Mech. Eng., Massachusetts Inst. of Technol.,
Cambridge, MA), Purnima Ratilal (Elec. & Comput. Eng., Northeastern Univ., Boston, MA), and Nicholas C. Makris (Mech. Eng.,
Massachusetts Inst. of Technol., Cambridge, MA)
Seafloor reverberation in continental shelf waveguides is the primary limiting factor in active sensing of biological clutter in the
ocean for noise unlimited scenarios. The detection range of clutter is determined by the ratio of the intensity of scattered returns from
clutter versus the seafloor in a resolution cell of an active sensing system. We have developed a Rayleigh-Born volume scattering model
for seafloor scattering in an ocean waveguide. The model has been tested with data collected from a number of Ocean Acoustic Waveguide Remote Sensing (OAWRS) experiments in distinct US Northeast coast continental shelf environments, and has been shown to provide
accurate estimates of seafloor reverberation over wide areas for various source frequencies. We estimate scattered returns from fish clutter by combining ocean-acoustic waveguide propagation modeling that has been calibrated in a variety of continental shelf environments
for OAWRS applications with a model for fish target strength. Our modeling of seafloor reverberation and scattered returns from fish
clutter is able to explain and elucidate OAWRS measurements along the US Northeast coast.
Contributed Papers
2:40
3pUW6. Transmission loss and reverberation variability during TREX13. Sean Pecknold (DRDC Atlantic Res. Ctr., PO Box 1012, Dartmouth, NS B2Y 3Z7, Canada, sean.pecknold@drdc-rddc.gc.ca), Diana McCammon (McCammon Acoust. Consulting, Waterville, NS, Canada), and Dajun Tang (Ocean Acoust., Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
The ONR-funded Target and Reverberation Experiment 2013 (TREX13) took place in the Northeastern Gulf of Mexico near Panama City, Florida, during April and May of 2013. During this trial, which took place in a shallow water (20 m deep) environment, several sets of one-way and two-way acoustic transmission loss and reverberation data were acquired. Closed-form expressions are derived to trace the uncertainty in the inputs to a Gaussian beam propagation model through the model to obtain an estimate of the uncertainty in the output, both for transmission loss and for reverberation. The measured variability of the TREX environment is used to compute an estimate of the expected transmission loss and reverberation variability. These estimates are then compared to the measured acoustic data from the trial.
2:55
3pUW7. Transmission loss and direction of arrival observations from a source in shallow water. David R. Dall’Osto (Appl. Phys. Lab., Univ. of Washington, 1013 N 40th St., Seattle, WA 98105, dallosto@apl.washington.edu) and Peter H. Dahl (Appl. Phys. Lab. and Mech. Eng. Dept., Univ. of Washington, Seattle, WA)
Signals generated by the source used in the reverberation studies of the Targets and Reverberation Experiment (TREX) were recorded by a receiving array located 4.7 km downrange. The bathymetry over this range is relatively flat, with water depth 20 m. The receiving system consists of a 7-channel vertical line array, a 4-channel horizontal line array oriented perpendicular to the propagation direction, and a 4-channel vector sensor (3-component vector and one pressure), with all channels recorded coherently. Transmissions were made once every 30 seconds, and over a two-hour recording period changes in the frequency content, amplitude, and direction were observed. As both the source and receiving array are at a fixed position in the water column, these observations are assumed to be due to changes in the environment. Interpretation of the data is given in terms of the evolving sea-surface conditions, the presence of nearby scatterers such as fish, and reflection/refraction due to the sloping shoreline.
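The uncertainty-tracing approach described in 3pUW6, propagating input uncertainties through a propagation model to its output, can be illustrated with a first-order (delta-method) sketch. The toy spreading-plus-absorption transmission-loss model and all parameter values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Toy transmission-loss model: spherical spreading plus linear absorption.
def tl_db(r_m, alpha_db_per_km):
    return 20.0 * np.log10(r_m) + alpha_db_per_km * r_m / 1000.0

# First-order uncertainty propagation:
# var(TL) ~ sum_i (dTL/dx_i)^2 var(x_i), with central-difference sensitivities.
def tl_uncertainty(r_m, sigma_r, alpha, sigma_alpha, h=1e-4):
    dTL_dr = (tl_db(r_m * (1 + h), alpha) - tl_db(r_m * (1 - h), alpha)) / (2 * h * r_m)
    dTL_da = (tl_db(r_m, alpha + h) - tl_db(r_m, alpha - h)) / (2 * h)
    return np.sqrt((dTL_dr * sigma_r) ** 2 + (dTL_da * sigma_alpha) ** 2)

# Hypothetical inputs: 4700 m range known to +/-50 m, 0.5 dB/km absorption
# known to +/-0.1 dB/km.
sigma_tl = tl_uncertainty(r_m=4700.0, sigma_r=50.0, alpha=0.5, sigma_alpha=0.1)
```

The same sensitivity-weighted sum applies when the model is a Gaussian beam code; closed-form expressions replace the finite differences when the partial derivatives can be written analytically.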
3:10
3pUW8. Effect of a roughened sea surface on shallow water propagation
with emphasis on reactive intensity obtained with a vector sensor. David
R. Dall’Osto (Appl. Phys. Lab., Univ. Washington, 1013 N 40th St., Seattle,
WA 98105, dallosto@apl.washington.edu) and Peter H. Dahl (Appl. Phys.
Lab. and Mechanical Eng. Dept., Univ. of Washington, Seattle, WA)
In this study, sea-surface conditions during the Targets and Reverberation Experiment (TREX) are analyzed. The sea-surface directional spectrum
was experimentally measured up to 0.6 Hz with two wave buoys separated
by 5 km. The analysis presented here focuses on propagation relating to
three canonical sea-surfaces observed during the experiment: calm conditions, and rough conditions with waves either perpendicular or parallel to
the primary propagation direction. Acoustic data collected during calm and
rough conditions show a significant difference in the amount of out-of-plane
scattering. Interference due to this out-of-plane scattering is observed in the
component of reactive intensity perpendicular to the propagation direction.
These observations are compared with those generated using a model of the
sea-surface scattering based on a combination of buoy-measured and modeled directional spectrum. Simulated sea-surfaces are also constructed for
this numerical study. A model for wind waves is used to obtain surface
wavenumbers greater than those measured by the wave buoys (~1.5 rad/m).
Importantly, the spectral peak and its direction are well measured by the
buoys and no assumptions on fetch are required, resulting in a more realistic
wave spectrum and description of sea-surface conditions for acoustic
modeling.
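The construction of simulated sea surfaces from a wave spectrum, as in the study above, is commonly done by random-phase synthesis. The sketch below uses a Pierson-Moskowitz spectral form and a 10 m/s wind speed purely as illustrative assumptions; the study instead combines buoy-measured spectra with a wind-wave model.

```python
import numpy as np

# Pierson-Moskowitz frequency spectrum (illustrative choice of wave model).
def pierson_moskowitz(omega, u=10.0, g=9.81):
    alpha, beta = 8.1e-3, 0.74
    return alpha * g**2 / omega**5 * np.exp(-beta * (g / (u * omega)) ** 4)

# Random-phase synthesis: each spectral bin contributes a cosine whose
# amplitude carries the bin's variance and whose phase is uniform random.
def synthesize_surface(t, n_modes=512, rng=None):
    rng = rng or np.random.default_rng(0)
    omega = np.linspace(0.2, 4.0, n_modes)   # rad/s
    d_omega = omega[1] - omega[0]
    amp = np.sqrt(2.0 * pierson_moskowitz(omega) * d_omega)
    phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    return np.sum(amp[:, None] * np.cos(omega[:, None] * t[None, :] + phase[:, None]), axis=0)

t = np.arange(0.0, 600.0, 0.5)               # 10 minutes sampled at 2 Hz
eta = synthesize_surface(t)                  # surface elevation time series, m
```

The variance of the synthesized surface approximates the integral of the spectrum over the modeled band; the same machinery extends to directional spectra by summing over wavenumber vectors rather than frequencies alone.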
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
MARRIOTT 5, 3:30 P.M. TO 4:30 P.M.
Plenary Session, Annual Meeting, and Awards Ceremony
Judy R. Dubno, President
Acoustical Society of America
Annual Meeting of the Acoustical Society of America
Presentation of Certificates to New Fellows
Mingsian Bai – for contributions to nearfield acoustic holography
David S. Burnett – for contributions to computational acoustics
James E. Phillips – for contributions to vibration and noise control and for service to the Society
Bonnie Schnitta – for the invention and application of noise mitigation systems
David R. Schwind – for contributions to the acoustical design of theaters, concert halls, and film studios
Neil T. Shade – for contributions to education and to the integration of electroacoustics in architectural acoustics
Joseph A. Turner – for contributions to theoretical and experimental ultrasonics
Announcements and Presentation of Awards
Presentation to Leo L. Beranek on the occasion of his 100th Birthday
Rossing Prize in Acoustics Education to Colin H. Hansen
Pioneers of Underwater Acoustics Medal to Michael B. Porter
Silver Medal in Speech Communication to Sheila E. Blumstein
Wallace Clement Sabine Medal to Ning Xiang
WEDNESDAY EVENING, 29 OCTOBER 2014
7:30 P.M. TO 9:00 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Tuesday, Wednesday, and Thursday
evenings. On Tuesday the meetings will begin at 8:00 p.m., except for Engineering Acoustics which will hold its meeting starting at
4:30 p.m. On Wednesday evening, the Technical Committee on Biomedical Acoustics will meet starting at 7:30 p.m. On Thursday evening, the meetings will begin at 7:30 p.m.
Biomedical Acoustics Indiana A/B
These are working, collegial meetings. Much of the work of the society is accomplished by actions that originate and are taken in
these meetings, including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially
invited to attend these meetings and to participate actively in the discussion.
ACOUSTICAL SOCIETY OF AMERICA
PIONEERS OF UNDERWATER ACOUSTICS MEDAL
Michael B. Porter
2014
The Pioneers of Underwater Acoustics Medal is presented to an individual, irrespective of nationality, age, or society affiliation, who has made an outstanding contribution to the science of underwater acoustics, as evidenced by publication of research in professional journals or by other accomplishments in the field. The award was named in honor of five pioneers in the field: H. J. W. Fay, R. A. Fessenden, H. C. Hayes, G. W. Pierce, and P. Langevin.
PREVIOUS RECIPIENTS
Harvey C. Hayes 1959
Albert B. Wood 1961
J. Warren Horton 1963
Frederick V. Hunt 1965
Harold L. Saxton 1970
Carl Eckart 1973
Claude W. Horton, Sr. 1980
Arthur O. Williams 1982
Fred N. Spiess 1985
Robert J. Urick 1988
Ivan Tolstoy 1990
Homer P. Bucker 1993
William A. Kuperman 1995
Darrell R. Jackson 2000
Frederick D. Tappert 2002
Henrik Schmidt 2005
William M. Carey 2007
George V. Frisk 2010
CITATION FOR MICHAEL B. PORTER
. . . for contributions to underwater acoustic modeling
INDIANAPOLIS, INDIANA • 29 OCTOBER 2014
Michael B. Porter describes his college days as “ending up” at Caltech, living in an
eleven-student communal environment, sleeping on a floor-mattress, working in various
odd-jobs and mastering the culinary skill of baking beans, all in anticipation of his immediate future that would be dominated by his student loan. Aside from what to him were his
major undergraduate accomplishments, meeting his significant other, Laurel Henderson,
and developing crowd pleasing culinary talents, he also apparently learned some math and
physics. At Northwestern, he received his Ph.D. in Applied Math working for Ed Reiss; among other things, he developed numerical algorithms that were to become standard methods in the Underwater Acoustics (UW) community. Probably, of the four most
often used models in all of the UW community in the last quarter century or so, Michael
B. Porter is the originator of two of them: The KRAKEN normal mode models and BELLHOP, a ray-Gaussian beam model.
However, these major contributions were only a few along the diverse research trail
that Michael pioneered. His research venues were also as diverse in that his 35 years of
research were conducted while in research, professor, and management positions at more
or less every type of research organization—government, academic, and private sector. His
research community impact is pervasive in that he is a coauthor of Computational Ocean
Acoustics, recently revised in a second edition, and he has also created and maintains the
Ocean Acoustics Library (OALIB), a site where anyone can download his MATLAB versions of all the major underwater acoustic propagation models. These latter activities alone
probably make Mike Porter a “household name” for the whole international community in
UW—but these are only part of the story.
Mike’s first pioneering acoustic contribution made not too long after he was born in
Quebec City in 1958 was when he developed an innovative glue-based repair procedure
for woofer-speakers that produced flying bits of speaker cone when directly plugged into a
wall outlet. While this experience probably motivated him to explore numerical methods,
he did spend some time later working on transducers with George Benthien at the Naval
Ocean Systems Center (NOSC).
His groundbreaking Ph.D. thesis and seminal paper in 1984 in the Journal of the Acoustical Society of America (JASA) on an unconditionally stable approach to normal mode
computation laid the foundation for KRAKEN/KRAKEN C and the SACLANTCEN
SNAP normal mode (NM) models, probably the two most used NM models in the world.
After his Ph.D. he was fortunate enough to collaborate with Homer Bucker at NOSC on
Gaussian beams from which BELLHOP would be an outgrowth. So, almost immediately
after his Ph.D. he was a leader in the UW modeling community.
I first met Mike when he came to the Naval Research Laboratory (NRL) in 1985 to work
in Orest Diachok’s Branch on Arctic (and other) acoustics. There he further organized his
models into community-usable tools while also significantly contributing to the new area of
Matched Field Processing (MFP). We also established a close working relationship in that
area and in particular worked on a rapid method to do three-dimensional modal propagation, which he later enhanced to include more oceanographic as well as global propagation
phenomena. We have remained close friends and colleagues since those NRL days, including coauthoring Computational Ocean Acoustics with Finn Jensen and Henrik Schmidt.
In 1987 Mike joined Finn Jensen’s modeling group at SACLANTCEN, and it was
there that he worked with the rest of the coauthors (all from or at SACLANTCEN) on
Computational Ocean Acoustics. There he also developed ongoing research partnerships
with U. S. oceanographer Steve Piacsek as well as other European scientists. Much of
his SACLANTCEN research concerned range-dependent modeling, including a seminal
contribution to energy conservation of one-way equations, coupled mode modeling, and
chaotic effects in multipath environments.
He returned to the U. S. in 1991 to a faculty position at the New Jersey Institute of
Technology’s (NJIT) math department with David Stickler, Daljit Ahluwahlia, and Greg
Kriegsmann, and was rather quickly elevated to being one of its youngest full professors.
There he worked in the area of MFP, extending it to some complicated broadband scenarios (with Zoi-Heleni Michalopoulou) as well as further optimizing his models. While at
NJIT, he also did a sabbatical at the University of Algarve with his former SACLANTCEN
colleague Sergio Jesus and with the Portuguese and French hydrographers Yann Stephan
and Emanuel Coelho to study the acoustic effects of internal tides.
In 1999 he accepted a position at Science Applications International Corporation
(SAIC) as Assistant Vice President/Chief Scientist in its Ocean Science Division headed
by Peter Mikhalevsky. It was there that he began close collaborations with Paul Hursky,
Ahmad Abawi, Martin Siderius, and Keyko McDonald (from SPAWAR). At SAIC he also
completed his transition to heavy-duty experimental activity, which probably originated from his being misled at SACLANTCEN into thinking that at-sea experiments were associated with Michelin-rated dining. His subsequent growth as an at-sea scientist is evidenced by his role
as chief scientist on a series of multi-institutional acoustic communications (Acomms) sea
trials. Mike was uniquely qualified for this Acomms role in that the only practical model
to describe the Acomms channel was BELLHOP. So, as the experiment chief scientist he
was also the expert on the theoretical aspects of the project. He had progressed to a level
that made him one of a very few scientists in our community capable of leading the theory,
simulation, and experimental aspects of a large UW project. This was all happening while
he was also working in MFP and inverse methods at SAIC.
Ever restless and seeking new experiences, he founded a new company in 2004, Heat
Light and Sound, Inc. (HLS), taking with him Abawi, Hursky, and Siderius. At HLS he
has continued his research, lately being involved in ocean soundscapes, marine mammal
acoustics, and other environmentally-related areas as well as continuing on in his established fields of research. During this latter period he was also a coauthor of the seminal
paper in JASA (2006) on the passive fathometer with Martin Siderius and Chris Harrison.
Most important to me is that Mike has been my very good friend over these many years,
and it has been a pleasure to watch him share his friendship and his knowledge with a very
broad segment of the UW community. He is an author of acoustic models and a book that
are central to the acoustic community and has established the Ocean Acoustics Library, the
latter probably being the most important instrument in disseminating models to students as
well as seasoned researchers. Recognized early in his career with the A. B. Wood Medal,
Michael B. Porter’s career trajectory in Underwater Acoustics has truly been a pioneering
adventure. The ASA Pioneers of Underwater Acoustics Medal is a fitting recognition of
his many achievements.
WILLIAM A. KUPERMAN
ACOUSTICAL SOCIETY OF AMERICA
Silver Medal in
Speech Communication
Sheila E. Blumstein
2014
The Silver Medal is presented to individuals, without age limitation, for contributions to the advancement of science,
engineering, or human welfare through the application of acoustic principles, or through research accomplishment in
acoustics.
PREVIOUS RECIPIENTS
Franklin S. Cooper 1975
Gunnar Fant 1980
Kenneth N. Stevens 1983
Dennis H. Klatt 1987
Arthur S. House 1991
Peter Ladefoged 1994
Patricia K. Kuhl 1997
Katherine S. Harris 2005
Ingo R. Titze 2007
Winifred Strange 2008
David B. Pisoni 2010
CITATION FOR SHEILA E. BLUMSTEIN
. . . for contributions to understanding how acoustic signals are transformed into linguistic
representations
INDIANAPOLIS, INDIANA • 29 OCTOBER 2014
Sheila Blumstein was born in New York City, obtained a B.A. in Linguistics from
the University of Rochester, and a Ph.D. in Linguistics from Harvard University, under
the guidance of the legendary Roman Jakobson. Sheila’s dissertation, A Phonological
Investigation of Aphasic Speech, published as a book by Mouton in 1973, already clearly
indicated the focus of her research: the representation of speech and language in the brain.
Today, as the Albert D. Mead Professor of Cognitive and Linguistic Sciences at Brown
University, Sheila pursues this research agenda as vigorously as when she started there on
the faculty in 1970.
Sheila Blumstein has contributed immeasurably to our knowledge of the acoustics and
perception of speech. Specifically, her research addresses how the continuous acoustic
signal is transformed by perceptual and neural mechanisms into linguistically relevant
representations. Among her many significant contributions to the field of Speech Communication, the following two have had a profound impact. First, through detailed
analysis of speech sounds, Sheila showed that the mapping between acoustic properties
and perceived phonetic categories is richer, and more consistent and invariant, than previously thought, a finding which necessitated a new conception of the relation between the
production and perception of speech. Second, Sheila’s finding that subtle yet systematic
acoustic differences can affect activation of word candidates in the mental lexicon indicated that acoustic information not directly relevant for phoneme identification is not discarded but is retained and plays a critical role in word comprehension, providing a crucial
piece of evidence in the ongoing debate about the structure of the mental lexicon.
At the time that Sheila started investigating the speech signal in the 1970s, the prevalent scientific opinion was that there was no simple mapping between acoustic signal and
perceived phonemes because the speech signal was too variable. Acoustic properties were
strongly affected by contextual factors such as variation in speaker, speaking rate, and
phonetic environment. Careful consideration of Gunnar Fant’s acoustic theory of speech
production led Sheila to the hypothesis that invariant acoustic properties could be found
in the speech signal. In contrast to previous research that was dependent on the speech
spectrograph, Sheila focused more on global acoustic properties such as the overall shape
of the spectrum at the release of the stop consonant. Through careful and detailed acoustic
analysis and subsequent perceptual verification, Sheila uncovered stable invariant acoustic
properties that consistently signaled important linguistic features such as place and manner
of articulation. Sheila supported these claims by investigating a variety of speech sound
classes (including stop consonants, fricatives, and approximants) in a variety of languages
because she fully appreciated that conclusions drawn on the basis of one language can
be misleading and universal generalizations can only be made after crosslinguistic comparisons. Sheila’s work on acoustic features resulted in a series of seminal publications
(1978-1987) in the Journal of the Acoustical Society of America, co-authored with Kenneth
Stevens and others.
By the late 1980s, research on speech perception had moved beyond the identification
of individual consonants and vowels to the comprehension of words and to the new field of
“auditory word recognition.” While there was a general consensus that word recognition
involves a process whereby information extracted from the speech signal is matched with
a stored representation in the mental lexicon, it was not clear whether all available acoustic information in the signal played a role in this matching process. In her seminal paper
“The effect of subphonetic differences on lexical access” (Cognition, 1994), Sheila and her
students showed that subtle acoustic variations which do not affect the categorization of a
phoneme nevertheless do affect word recognition. This was a very elegant demonstration
that subtle subphonemic acoustic information is not discarded before the lexicon is accessed
but instead plays a role in the comprehension of words. This was a very important finding
and necessitated reconsideration of the then dominant view that lexical access proceeds
on the basis of categorical phonemes rather than more fine-grained continuous acoustic
information.
Sheila is co-founder of Brown University’s Barus Speech Lab where she has taught,
supervised, and mentored hundreds of undergraduates, graduates, and postdocs. This lab is
one of the world’s leading research centers for the study of speech at all levels: acoustics,
psycholinguistics, and neurolinguistics. In addition to her speech research, Sheila is equally
known for her research on aphasia, focusing again on speech production and perception.
Just as Sheila was able to make use of technological advances to view the speech signal from a different perspective, she also capitalized on new brain imaging techniques to
augment her understanding of the brain that was based on behavioral data collected from
aphasic patients. Sheila’s most recent acoustic research also uses fMRI to investigate cortical regions involved in the perception of phonetic category invariance as well as neural
systems underlying lexical competition.
A quick glance at Sheila’s resume shows that she has garnered just about every honor
possible. She has been a Guggenheim Fellow, and a recipient of the Claude Pepper (Javits
Neuroscience) Investigator Award. She is a Fellow of the Acoustical Society of America,
the American Association for the Advancement of Science, the American Academy of Arts
and Sciences, the Linguistic Society of America, and the American Philosophical Society.
In addition, Sheila has served Brown University in many capacities, including Dean of the
College, Interim Provost, and Interim President. In all of the positions she has held, Sheila
has earned the admiration and respect of all constituencies. Her warm, supportive, patient
style renders an incisive critique into a constructive suggestion, reflecting her enviable
supervisory and administrative skills.
It is simply not possible to undertake work in acoustic phonetics, phonology, neuroimaging, or aphasia without referring to Sheila’s work. Sheila’s research has been continuously funded through federal research grants since the 1970s. Her research is not only
influential and pivotal, it is also incredibly inspiring. Her students have secured prestigious
positions and continue to conduct innovative research. The field would not be what it is
today without Sheila’s many seminal contributions spanning five decades.
ALLARD JONGMAN
JOAN SERENO
SHARI BAUM
ADITI LAHIRI
WALLACE CLEMENT SABINE AWARD
OF THE
ACOUSTICAL SOCIETY OF AMERICA
Ning Xiang
2014
The Wallace Clement Sabine Award is presented to an individual of any nationality who has furthered the knowledge of
architectural acoustics, as evidenced by contributions to professional journals and periodicals or by other accomplishments
in the field of architectural acoustics.
PREVIOUS RECIPIENTS
Vern O. Knudsen 1957
Floyd R. Watson 1959
Leo L. Beranek 1961
Erwin Meyer 1964
Hale J. Sabine 1968
Lothar W. Cremer 1974
Cyril M. Harris 1979
Thomas D. Northwood 1982
Richard V. Waterhouse 1990
A. Harold Marshall 1995
Russell Johnson 1997
Alfred C. C. Warnock 2002
William J. Cavanaugh 2006
John S. Bradley 2008
J. Christopher Jaffe 2011
SILVER MEDAL IN
ARCHITECTURAL ACOUSTICS
The Silver Medal is presented to individuals, without age limitation, for contributions to the advancement of science, engineering,
or human welfare through the application of acoustic principles, or through research accomplishment in acoustics.
PREVIOUS RECIPIENT
Theodore J. Schultz 1976
CITATION FOR NING XIANG
. . . for contributions to measurements and analysis techniques, and numerical simulation
of sound fields in coupled rooms
INDIANAPOLIS, INDIANA • 29 OCTOBER 2014
Ning Xiang, 16th recipient of the Society’s Wallace Clement Sabine Medal, is well
known to members of the Society and the worldwide acoustics community for his work
in binaural scale-model measurement, theory and practice of maximum-length sequences,
and Bayesian signal processing. A consummate theoretician and experimentalist, his work
reflects the growing importance of computational modeling and model-based signal processing across the broader field of acoustics, but is unique for making significant general
contributions while maintaining a strong and specific focus on architectural acoustics.
Ning formally began his career in acoustics in 1984, arriving as a young student from
China at the office of his doctoral supervisor Jens Blauert. Though he was more or less inexperienced in the field and hardly able to communicate in German, his mentors and colleagues of that time well remember his fierce determination and commitment. This earnest
enthusiasm for the work would serve him well over his professional career, becoming one
of the key attributes he sought to instill in the many graduate students he would come to
supervise.
Earning a Master's degree (Diplom-Ingenieur) in 1986 from Ruhr-University Bochum,
Ning went on to earn a Ph.D. in 1990 for his development of a binaural acoustical modeling system. This work, which involved design and fabrication of novel scale-model transducers and a miniature (1/10 scale) binaural artificial head, set early the high standard his
future experimental work would demonstrate. At the same time, his doctoral work firmly
established him as a theorist and signal processor for his research and development of
measurement algorithms and software based on maximum-length sequences. This included
a new and effective factorization method required for application of Fast Hadamard Transforms and development of fast test methods for long maximum-length sequences through
identification of the similarity with Morse-Thue sequences [Signal Processing (1992)].
It was through these important findings in maximum-length sequences that Ning began
a long and fruitful collaborative relationship with Manfred Schroeder [Journal of the
Acoustical Society of America (JASA) (2003)].
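The measurement principle behind this work can be illustrated with a minimal sketch. This is not Xiang's actual algorithm: it uses plain FFT-based circular correlation rather than the Fast Hadamard Transform his factorization method enables, but it shows why maximum-length sequences work — an m-sequence from a linear-feedback shift register has a nearly ideal circular autocorrelation, so cross-correlating the measured response with the excitation recovers the impulse response.

```python
import numpy as np

def mls(n_bits=10, taps=(10, 7)):
    """Generate a maximum-length sequence of length 2**n_bits - 1 as +/-1 values.
    The tap set (10, 7) corresponds to the primitive polynomial x^10 + x^7 + 1."""
    state = [1] * n_bits
    seq = []
    for _ in range(2**n_bits - 1):
        seq.append(state[-1])          # output bit
        fb = 0
        for t in taps:                 # feedback = XOR of tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]      # shift, insert feedback at the front
    return 1.0 - 2.0 * np.array(seq)   # map bits {0,1} -> {+1,-1}

def measure_ir(excitation, response):
    """Estimate the periodic impulse response by circular cross-correlation,
    exploiting the MLS autocorrelation (N at lag 0, -1 at all other lags)."""
    N = len(excitation)
    S = np.fft.rfft(excitation)
    Y = np.fft.rfft(response)
    return np.fft.irfft(Y * np.conj(S), n=N) / (N + 1)
```

Feeding the loudspeaker an MLS and correlating the microphone signal against it in this way yields the room impulse response up to a small DC offset of sum(h)/(N+1), which the near-ideal two-valued autocorrelation of the m-sequence leaves behind.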
After the completion of his doctoral degree, Ning joined the technical staff of HEAD acoustics in Herzogenrath, Germany, as a research scientist/engineer. Here too he continued to bring together theory and practice, pairing experimental work with signal processing,
forming an on-going and fruitful professional relationship with founder Klaus Genuit that
led to a number of important papers [JASA (1995), ACUSTICA - acta acustica (1996)].
This was followed by an appointment in 1997 as a research scientist at the Fraunhofer
Institute for Building Physics in Stuttgart, Germany. Here the application of binaural
measurement technology to performance spaces remained his focus. While this was to be
Ning’s last appointment in Germany, his many professional relationships remain strong
and he is well remembered by his colleagues for his ability to appealingly distill in his lectures and talks the rigor of his analytical thinking into well-organized, clearly articulated
concepts without sacrificing substance or detail.
In 1998 Ning accepted a position as a Research Scientist and Research Associate Professor with the National Center for Physical Acoustics and the Department of Electrical
Engineering of the University of Mississippi. His work on acoustic/seismic coupling for
buried mine detection, conducted in collaboration with James Sabatier and Paul Goggans,
was a departure in domain from his prior work in room and building acoustics. But, characteristically, it became for Ning an opportunity for fertile cross-pollination between subdisciplines. Advances he had made in measurement by maximum-length sequence transitioned into acoustic/seismic measurement while advances in Bayesian signal processing
for mine detection provided him with an important new approach for parameter estimation from single- and multiple-slope Schroeder decay curves of noisy impulse responses
[JASA (2001, 2003)].
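The Schroeder decay curves referred to here are obtained by backward integration of the squared impulse response; a minimal sketch follows (the Bayesian decay-order and parameter estimation itself is beyond this illustration):

```python
import numpy as np

def schroeder_decay_db(ir):
    """Backward-integrated energy decay curve in dB:
    E(t) = integral from t to T of h(tau)^2 dtau, normalized to E(0)."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]   # reverse-cumulative sum of h^2
    return 10.0 * np.log10(energy / energy[0])
```

For a single exponential decay the curve is a straight line in dB; in coupled rooms, multiple slopes appear as distinct straight segments whose decay rates and turning points are the parameters to be estimated.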
In 2003, Ning was appointed Associate Professor at the Rensselaer Polytechnic Institute
(RPI). Returning his full attention to the field of architectural acoustics, Ning expanded
his on-going work in maximum-length sequences and Bayesian estimation. His work in
parameter and model estimation for systems of acoustically coupled rooms led him naturally
into development of new computational diffusion-equation models for simulation of acoustically coupled rooms and detailed scale-model measurements to validate these models.
This work, much of it carried out with his Master's and Ph.D. students and conducted with a worldwide group of collaborators, has grown prodigiously and now encompasses modeling, measurement, and simulation of scattering, material impedance, and mode distribution, in addition to binaural measurement and multiple-slope decay curve analysis.
It is especially fitting that Ning should receive this award directly after J. Christopher
Jaffe, as Ning has been instrumental in bringing to full fruition the work begun by Dr.
Jaffe in founding the Graduate Program in Architectural Acoustics at RPI in 1999 and the
research in coupled rooms that was the initial focus of that program [JASA (2005, 2006,
2008, 2009, 2011, 2013)]. The flourishing and growth of the RPI program directed by Ning
since 2005 is, along with his other scholarly and professional accomplishments, one of his
enduring contributions to the field of architectural acoustics.
This educational and mentoring role cannot be overemphasized; in the close community of architectural and room acoustics Ning’s direct role in training a new generation of
acousticians has been felt across academia, government, and industry in both the U.S. and
abroad. Whether in consulting practice in Turkey, as a Fulbright fellow in Finland, or as a
professor in the United States, a community of young acousticians is daily reaping benefits
of having been educated to pursue the field of architectural acoustics with scientific and
engineering rigor coupled with a bold willingness to investigate new ideas and an openness
to the worldwide acoustics community. Doubtless Ning recalls his own enthusiasm as a
young graduate student in Bochum and seeks to cultivate that in his own students.
For the reasons cited here and those which space does not allow us to mention but are
well known to his colleagues, students, and the Society as a whole, we are pleased and
privileged to present Dr. Ning Xiang with the Wallace Clement Sabine Medal.
JASON E. SUMMERS
JENS BLAUERT
THURSDAY MORNING, 30 OCTOBER 2014
MARRIOTT 7/8, 8:40 A.M. TO 11:40 A.M.
Session 4aAAa
Architectural Acoustics, Speech Communication, and Noise: Room Acoustics Effects on Speech
Comprehension and Recall I
Lily M. Wang, Cochair
Durham School of Architectural Engineering and Construction, University of Nebraska - Lincoln, PKI 101A, 1110 S. 67th St.,
Omaha, NE 68182-0816
David H. Griesinger, Cochair
Research, David Griesinger Acoustics, 221 Mt Auburn St #504, Cambridge, MA 02138
Chair’s Introduction—8:40
Invited Papers
8:45
4aAAa1. Speech recognition in adverse conditions. Ann Bradlow (Linguist, Northwestern Univ., 2016 Sheridan Rd., Evanston, IL,
abradlow@northwestern.edu)
Speech recognition is highly sensitive to adverse conditions at all stages of the speech chain, i.e., the sequence of events that transmits a message from the mind/brain of a speaker through the acoustic medium to the mind/brain of a listener. Adverse conditions can
originate from source degradations (e.g., disordered or foreign-accented speech), environmental disturbances (e.g., background sounds
with or without energetic masking), and/or receiver (i.e., listener) limitations (e.g., impaired or incomplete language models, peripheral
deficiencies, or tasks with high cognitive load). (For more on this classification system, see Mattys, Davis, Bradlow, & Scott, 2012, Language and Cognitive Processes, 27). This talk will present a series of studies focused on linguistic aspects of these various possible sources of adverse conditions for speech recognition. In particular, we will demonstrate separate and combined influences of the talker’s
language background (a possible source degradation), the presence of a background speech masker in either the same or a different language from that of the target speech (a possible environmental degradation), and the listener’s experience with the language of the target
and/or masking speech (a possible receiver limitation). Together, these studies demonstrate strong influences of language and linguistic
experience on speech recognition in adverse conditions.
9:05
4aAAa2. Speech intelligibility and sentence recognition memory in noise. Rajka Smiljanic (Linguist, Univ. of Texas at Austin,
Calhoun Hall 407, 1 University Station B5100, Austin, TX 78712-0198, rajka@mail.utexas.edu)
Much of daily communication occurs in adverse conditions that negatively impact various levels of speech processing. These adverse
conditions may originate in talker- (fast, reduced speech), signal- (noise or degraded target signal), and listener- (impeded access or
decoding of the target speech signal) oriented limitations, and may have consequences for perceptual processes, representations, attention, and memory functions (see Mattys et al., 2012 for a review). In this talk, I first discuss a set of experiments that explore the extent
to which listener-oriented clear speech and speech produced in response to noise (noise-adapted speech) by children, young adults and
older adults contribute to enhanced word recognition in challenging listening conditions. Next, I discuss whether intelligibility-enhancing speaking style modifications impact speech processing beyond word recognition, namely recognition memory for sentences. The
results show that effortful speech processing in challenging listening environments can be improved by speaking style adaptations on
the part of the talker. In addition to enhanced intelligibility, a substantial improvement in sentence recognition memory can be achieved
through speaker adaptations to the environment and to the listener when in adverse conditions. These results have implications for the
quality of speech communication in a variety of environments, such as classrooms and hospitals.
9:25
4aAAa3. Reducing cognitive demands on listeners by speaking clearly in noisy places. Kristin Van Engen (Psych., Washington
Univ. in St. Louis, One Brookings Dr., Campus Box 1125, Saint Louis, MO 63130-4899, kvanengen@wustl.edu)
Listeners have more difficulty identifying spoken words in noisy environments when those words have many phonological neighbors
(i.e., similar-sounding words in the lexicon) than when they have few phonological neighbors. This difficulty appears to be exacerbated
in old age, where reductions in inhibitory control presumably make it more difficult to cope with competition from similar-sounding
words. Fortunately, word recognition in noise can generally be improved for a wide range of listeners (e.g., younger and older adults,
individuals with and without hearing impairment) when speakers adopt a clear speaking style. This study investigated whether clear
speech, in addition to generally increasing speech intelligibility, also reduces the inhibitory demands associated with identifying lexically difficult words in noise for younger and older adults. The results show that, indeed, the difference between rates of identification
for words with many versus few neighbors was eliminated when those words were produced in clear speech. Data on the roles of individual differences (e.g., hearing, working memory, and inhibitory control) that may contribute to word identification in noise will also be
presented.
9:45
4aAAa4. Improved speech understanding and amplitude modulation sensitivity in rooms: Wait a second! Pavel Zahorik (Div. of
Communicative Disord., Dept. of Surgery, Univ. of Louisville School of Medicine, Psychol. and Brain Sci., Life Sci. Bldg. 317, Louisville, KY 40292, pavel.zahorik@louisville.edu), Paul W. Anderson (Dept. of Psychol. and Brain Sci., Univ. of Louisville, Louisville,
KY), Eugene Brandewie (Dept. of Psych., Univ. of Minnesota, Minneapolis, MN), and Nirmal K. Srinivasan (National Ctr. for Rehabilitative Auditory Res., Portland VA Medical Ctr., Portland, OR)
Sound transmission between source and receiver can be profoundly affected by room acoustics, yet under many circumstances, these
acoustical effects have relatively minor perceptual consequences. This may be explained, in part, by listener adaptation to the acoustics
of the listening environment. Here, evidence that room adaptation improves speech understanding is summarized. The adaptation is
rapid (around 1 s), and observable for a variety of speech materials. It also appears to depend critically on the amplitude modulation
characteristic of the signal reaching the ear, and as a result, similar room adaptation effects have been observed for measurements of amplitude modulation sensitivity. A better understanding of room adaptation effects will hopefully contribute to improved methods for
speech transmission in rooms for both normally hearing and hearing-impaired listeners. [Work supported by NIDCD.]
10:05–10:20 Break
10:20
4aAAa5. The importance of attention, localization, and source separation to speech cognition and recall. David H. Griesinger
(Res., David Griesinger Acoust., 221 Mt Auburn St #504, Cambridge, MA 02138, dgriesinger@verizon.net)
Acoustic standards for speech are based on word recognition. But for successful communication, sound must be detected and separated from noise and other streams; phones, syllables, and words must be recognized and parsed into sentences; meaning must be found by relating the sentences to previous knowledge; and finally information must be stored in long-term memory. All of these tasks require time and working memory. Acoustical conditions that increase the difficulty of any part of the task reduce recall. But attention is possibly the most important factor in successful communication. There is compelling anecdotal evidence that sound profoundly and involuntarily influences attention. Humans detect in fractions of a second whether a sound source is close, independent of its loudness and
frequency content. When sound is perceived as close it demands a degree of attention that distant sound does not. The mechanism of
detection relies on the phase relationships between harmonics of complex tones in the vocal formant range, properties of sound that also
ease word recognition and source separation. We will present the physics of this process and the acoustic properties that enable it. Our
goal is to increase attention and recall in venues of all types.
10:40
4aAAa6. Release from masking in simulated reverberant environments. Nirmal Kumar Srinivasan, Frederick J. Gallun, Sean D.
Kampel, Kasey M. Jakien, Samuel Gordon, and Megan Stansell (National Ctr. for Rehabilitative Auditory Res., 3710 SW US Veterans
Hospital Rd., Portland, OR 97239, nirmal.srinivasan@va.gov)
It is well documented that older listeners have more difficulty in understanding speech in complex listening environments. In two
separate experiments, speech intelligibility enhancement due to prior exposure to listening environment and spatial release from masking
(SRM) for small spatial separations were measured in simulated reverberant listening environments. Release from masking was measured by comparing threshold target-to-masker ratios (TMR) obtained with a speech target presented directly ahead of the listener and
two speech maskers presented from the same location or in symmetrically displaced spatial configurations in an anechoic chamber. The
results indicated that older listeners required much higher TMR at threshold and obtained decreased benefit from prior exposure to listening environments compared to younger listeners. For the small separation experiment, speech stimuli were presented over headphones and virtual acoustic techniques were used to simulate very small spatial separations (approx. 2 degrees) between target and
maskers. Results reveal, for the first time, the minimum separation required between target and masker to achieve release from speechon-speech masking in anechoic and reverberant conditions. The advantages of including small separations for understanding the functions relating spatial separation to release from masking will be discussed, as well as the value of including older listeners. [Work
supported by NIH R01 DC011828.]
11:00
4aAAa7. Speech-on-speech masking for children and adults. Lauren Calandruccio, Lori J. Leibold (Allied Health Sci., Univ. of North
Carolina, 301 S. Columbia St., Chapel Hill, NC 27599, Lauren_Calandruccio@med.unc.edu), and Emily Buss (Otolaryngology/Head
and Neck Surgery, Univ. of North Carolina at Chapel Hill, Chapel Hill, NC)
Children experience greater difficulty understanding speech in noise compared to adults. This age effect is pronounced when the
noise causes both energetic and informational masking, for example, when listening to speech while other people are talking. As children acquire speech and language, they are faced with multi-speech environments all the time, for example, in the classroom. For adults,
speech perception tends to be worse when the target and masker are matched in terms of talker sex and language, with mismatches
improving performance. It is unknown, however, whether children are able to benefit from these (sex or language) target/masker mismatches. The goal of this project is to further our understanding of the speech-on-speech masking deficit children demonstrate throughout childhood, while specifically investigating whether children’s speech recognition improves when the target and masker are spoken
by talkers of the opposite sex, or when the target and masker speech are spoken in different languages. Normal-hearing children and
adults were tested on word identification and sentence recognition tasks. Differences in SNR needed to equate performance between the
two groups will be reported, as well as data reporting whether children are able to benefit from these target/masker mismatch cues.
11:20
4aAAa8. The neural basis of informational and energetic masking effects in the perception and production of speech. Samuel
Evans (Inst. of Cognit. Neurosci., Univ. College London, 17 Queen Square, London, London WC1N 3AR, United Kingdom, samuel.
evans@ucl.ac.uk), Carolyn McGettigan (Dept. of Psych., Royal Holloway, Egham, United Kingdom), Zarinah Agnew (Dept. of Otolaryngol., Univ. of California, San Francisco, San Francisco, CA), Stuart Rosen (Dept. of Speech, Hearing and Phonetic Sci., Univ. College
London, London, United Kingdom), Lima Cesar (Ctr. for Psych., Univ. of Porto, Porto, Portugal), Dana Boebinger, Markus Ostarek,
Sinead H. Chen, Angela Richards, Sophie Meekings, and Sophie K. Scott (Inst. of Cognit. Neurosci., Univ. College London, London,
United Kingdom)
When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked
by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing
demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies
in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will
demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these
effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed
from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
THURSDAY MORNING, 30 OCTOBER 2014
SANTA FE, 10:35 A.M. TO 12:05 P.M.
Session 4aAAb
Architectural Acoustics: Uses, Measurements, and Advancements in the Use of Diffusion and Scattering
Devices
David T. Bradley, Chair
Physics Astronomy, Vassar College, Poughkeepsie, NY 12604
Chair’s Introduction—10:35
Invited Papers
10:40
4aAAb1. Effect of installed diffusers on sound field diffusivity in a real-world classroom. Ariana Sharma, David T. Bradley, and
Mohammed Abdelaziz (Phys. + Astronomy, Vassar College, 124 Raymond Ave, Poughkeepsie, NY 12604, arsharma@vassar.edu)
An ideal diffuse sound field is both homogeneous (acoustic quantities are independent of position) and isotropic (acoustic quantities
are invariant with respect to direction). Predicting and characterizing sound field diffusivity is essential to acousticians when designing
and using acoustically sensitive spaces. Surfaces with a non-planar geometry, referred to as diffusers, can be installed in these spaces as
a means of increasing and/or controlling the field diffusivity. Although some theoretical and computational modeling work has been carried out to better understand the relationship between these installed diffusers and the resulting field diffusivity, the current state of the art does not include a systematic understanding of this relationship. Furthermore, very little work has been done to characterize this relationship in full scale and in the real world. In the current project, the effect of diffusers on field diffusivity has been studied in a full
scale, real-world classroom. Field diffusivity has been measured for various configurations of the diffusers using two measurement techniques. The first technique uses a three-dimensional grid of receivers to characterize the field homogeneity. To characterize field isotropy, a spherical microphone array has also been used. Results and analysis will be presented and discussed.
11:00
4aAAb2. Effect of measurement conditions on sound scattered from a pyramid diffuser in a free field. Kimberly A. Riegel, David
T. Bradley, Mallory Morgan, and Ian Kowalok (Phys. + Astronomy, Vassar College, 124 Raymond Ave., Poughkeepsie, NY 12604,
kiriegel@vassar.edu)
A surface with a non-planar geometry, referred to as a diffuser, can be used in acoustically sensitive spaces to help control or eliminate unwanted effects from strong reflections by scattering the reflected sound. The scattering behavior of a diffuser can be measured in
a free field, according to the standard ISO 17497-2. Many of the measurement conditions discussed in this standard can have an effect
on the measured data; however, these conditions are often not well-specified and/or have not been substantiated. In the current study, a
simple pyramid diffuser has been measured while varying several measurement conditions: surface material, orientation of the surface
geometry, perimeter shape of the surface, and mounting depth of the surface. Reflected polar response and diffusion coefficient data
have been collected and compared for each condition. Data have also been contrasted with those obtained by numerical simulation using
boundary element method (BEM) techniques for an idealized pyramid diffuser. Results and analysis will be presented and discussed.
Contributed Papers
11:20
4aAAb3. Sound field diffusion by number of peak by continuous wavelet
transform. Yongwon Cha, Muhammad Imran, and Jin Yong Jeon (Dept. of
Architectural Eng., Hanyang Univ., Hanyang University, Seoul 133-791,
South Korea, chadyongwoncha@gmail.com)
The number of peaks (Np) in impulse responses (IRs) captured in a real hall has been investigated and measured using the continuous wavelet transform (CWT). Np is related to perceptual diffusion as an objective characteristic that is influenced by wall scattering elements. In addition, when measuring diffuse sound fields, the CWT coefficients are used to detect diffusive sound. Based on the absolute coefficient values calculated from the CWT analysis, a practical method of counting reflections is considered. These reflections are classified as diffusive or specular based on their similarity to the mother wavelet. Temporal and spatial representations of the absolute CWT values are presented. Auditory experiments using a paired-comparison method were conducted to gauge the relationship between Np and perceptual sound field diffusion. It is revealed that a dominant factor influencing subjective preference in the hall was the Np, which varied with different wall surface treatments.
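The peak-counting idea can be sketched as follows; the Ricker wavelet, the width set, and the simple threshold rule here are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet with width parameter a,
    sampled at `points` positions centered on zero."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    return (1.0 - x**2) * np.exp(-(x**2) / 2.0)

def count_peaks(ir, widths=(2, 4, 8), threshold_ratio=0.2):
    """Np-style count: local maxima of |CWT coefficients| exceeding a fraction
    of the per-scale maximum, summed over the chosen wavelet widths."""
    total = 0
    for a in widths:
        w = ricker(10 * a + 1, a)            # odd length keeps the peak on a sample
        c = np.abs(np.convolve(ir, w, mode="same"))
        thr = threshold_ratio * c.max()
        is_peak = (c[1:-1] > c[:-2]) & (c[1:-1] > c[2:]) & (c[1:-1] > thr)
        total += int(is_peak.sum())
    return total
```

In this sketch each strong, wavelet-like reflection in the IR contributes peaks at every scale, so an IR with more discrete reflections produces a larger count.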
11:35
4aAAb4. In praise of smooth surfaces: Promoting a balance between
specular and diffuse surfaces in performance space design. Gregory A.
Miller and Scott D. Pfeiffer (Threshold Acoust., LLC, 53 W. Jackson Boulevard, Ste. 815, Chicago, IL 60604, gmiller@thresholdacoustics.com)
Diffusive surfaces are often presented as a panacea for achieving desirable listening conditions in performance spaces. While diffusive surfaces are
a valuable and necessary part of the finish palette in any theater or concert
hall, a significant number of specular surfaces are crucial to the success of
many such spaces. Case studies will be presented in which excessive use of
diffusion has resulted in losses of clarity and loudness, including comparisons to the results following the introduction of specular surfaces, either flat
or gently curved. Aural examples will be presented to demonstrate the perceptual differences when specular surfaces are employed as compared to
highly diffusive surfaces at key locations in spaces for music and drama.
11:50
4aAAb5. Scattershot: A look at designing, integrating, and measuring
diffusion. Shane J. Kanter, John Strong, Carl Giegold, and Scott Pfeiffer
(Threshold Acoust., 53 W. Jackson Blvd., Ste. 815, Chicago, IL 60604,
skanter@thresholdacoustics.com)
A primary goal of the small-scale performance venue is to provide the
audience with supportive, well-timed reflections and to energize the space
adequately without overpowering the room volume. The judicious use of
sound-diffusive elements in such venues can lend a pleasing sense of body
and space while avoiding undesirable reflections that disrupt the listener experience. However, while working with architects to develop a space that is
pleasing to both the ear and the eye, it is often necessary to reconcile these
needs with each other. Diffusive elements must integrate seamlessly within
the space visually as well as architecturally. While developing interior room
acoustics for three small spaces for performance/worship, with audience
size ranging from 150 to 299, an exploration of diffusive elements was conducted. As each project required a different method and frequency range of
diffusion, scale models were constructed and tested under varied conditions,
using sometimes unorthodox methods to determine the acoustic effect.
These efforts were focused on limiting coloration caused by the “picket
fence effect,” reducing harsh reflections without rendering a space excessively sound-absorptive, and maintaining coherent reflections from discrete
sections of a prominent wall while leaving other sections diffusive. Methods, experiences, and results will be presented.
THURSDAY MORNING, 30 OCTOBER 2014
LINCOLN, 8:00 A.M. TO 12:00 NOON
Session 4aAB
Animal Bioacoustics and Acoustical Oceanography: Use of Passive Acoustics for Estimation of Animal
Population Density I
Tina M. Yack, Cochair
Bio-Waves, Inc., 364 2nd Street, Suite #3, Encinitas, CA 92024
Danielle Harris, Cochair
Centre for Research into Ecological and Environmental Modelling, University of St. Andrews, The Observatory, Buchanan
Gardens, St. Andrews KY16 9LZ, United Kingdom
Chair’s Introduction—8:00
Invited Papers
8:05
4aAB1. Estimating density from passive acoustics: Are we there yet? Tiago A. Marques, Danielle Harris, and Len Thomas (Ctr. for
Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, The Observatory, Buchannan Gardens, St. Andrews, Fife KY16 9
LZ, United Kingdom, tiago.marques@st-andrews.ac.uk)
In the last few years, there have been a considerable number of papers describing methods or case studies involving passive acoustic
density estimation. While this might be interpreted as evidence that density estimates might now be easily and routinely implemented,
the truth is that so far these methods and applications have been essentially proof-of-concept in nature, based on areas and/or species particularly suited to the methods, and have often involved assumptions that are hard to evaluate. We briefly review some of the existing work in this area, concentrating on a few aspects we believe are key for the implementation of density estimation from passive acoustics in a broader context. These are (1) the development of fundamental research addressing the problem of sound production rate, fundamental because it allows estimates of the density of sounds to be converted into the density of animals, and (2) the development of hardware capable of providing cheap deployable units capable of ranging, allowing straightforward implementations of distance-sampling-based approaches. The perfect density estimate is out there waiting to happen, but we have not found it yet.
8:25
4aAB2. Use of passive acoustics for estimation of cetacean population density: Realizing the potential. Jay Barlow and Shannon
Rankin (Marine Mammal and Turtle Div., NOAA-SWFSC, 8901 La Jolla Shores Dr., La Jolla, CA 92037, jay.barlow@noaa.gov)
The potential of passive acoustic methods to estimate cetacean population density has seldom been realized. These methods have been most successfully applied to species that consistently use echolocation during foraging, have very distinctive echolocation signals, and forage a large fraction of the time, notably sperm whales, porpoises, and beaked whales. Research is needed to eliminate some of the impediments to applying acoustics to estimate the density of other species. For baleen whales, one of the greatest uncertainties is the lack of information on call rates. For delphinids, the greatest uncertainties are in estimating group size and in species recognition. For all species, there is a need to develop inexpensive recorders that can be distributed in large numbers at random locations in a study area. For towed hydrophone surveys, there is a need to better localize species in their 3-D environment and to instantaneously localize animals from a single signal received on multiple hydrophones. While improvements can be made, we may need to recognize that some of the impediments cannot be overcome with any reasonable research budget. In these cases, efforts should be concentrated on improving acoustic methods to aid visual-based transect methods.
8:45
4aAB3. Acoustic capture-recapture methods for animal density estimation. David Borchers (Dept. of Mathematics & Statistics,
Univ. of St. Andrews, CREEM, Buchannan Gdns, St. Andrews, Fife KY16 9LZ, United Kingdom, dlb@st-andrews.ac.uk)
Capture-recapture methods are one of the two most widely-used methods of estimating wildlife density and abundance. They can be
used with passive acoustic detectors, in which case acoustic detection on a detector constitutes a "capture" and detection on other detectors and/or at other times constitutes a "recapture." Unbiased estimation of animal density from any capture-recapture survey requires that the effective area of the detectors be estimated, and information on detected animals' locations is essential for this. While locations are not observed, acoustic data contain information on location in a variety of guises, including time-difference-of-arrival, signal strength,
and sometimes directional information. This talk gives an overview of the use of such data with spatially explicit capture-recapture
(SECR) methods, including consideration of some of the particular challenges that acoustic data present for SECR methods, ways of
dealing with these, and an outline of some unresolved issues.
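The effective-area idea at the heart of SECR can be illustrated with a small numerical sketch. The half-normal detection function, the values of g0 and sigma, and the detector layout below are all illustrative assumptions (in a real SECR analysis, g0 and sigma are estimated jointly with density inside the likelihood); the sketch integrates the probability of capture on at least one detector over a grid to obtain the effective sampling area:

```python
import numpy as np

def effective_area_km2(detectors_m, g0=0.7, sigma_m=1500.0,
                       half_width_m=10000.0, step_m=100.0):
    """Effective sampling area of a detector array under a half-normal
    detection function g(d) = g0 * exp(-d^2 / (2 sigma^2)): integrate
    P(detected on at least one detector) over a grid around the array."""
    xs = np.arange(-half_width_m, half_width_m, step_m)
    X, Y = np.meshgrid(xs, xs)
    p_miss = np.ones_like(X)
    for dx, dy in detectors_m:
        d2 = (X - dx) ** 2 + (Y - dy) ** 2
        p_miss *= 1.0 - g0 * np.exp(-d2 / (2.0 * sigma_m ** 2))
    return np.sum(1.0 - p_miss) * step_m * step_m / 1e6   # m^2 -> km^2

# Hypothetical 2 x 2 hydrophone grid with 2-km spacing:
dets = [(0.0, 0.0), (2000.0, 0.0), (0.0, 2000.0), (2000.0, 2000.0)]
a = effective_area_km2(dets)
density = 120 / a   # e.g., 120 distinct detected animals -> animals per km^2
```

With overlapping detectors, the effective area is less than the sum of the single-detector areas (here each contributes roughly g0 * 2 * pi * sigma^2 in isolation), which is exactly why a joint, spatially explicit treatment is needed.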
2245
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2245
9:05
4aAB4. U.S. Navy application and interest in passive acoustics for estimation of marine mammal population density. Anu Kumar
(Living Marine Resources, NAVFAC EXWC, 1000 23rd Ave., Code EV, Port Hueneme, CA 93043, anurag.kumar@navy.mil), Chip
Johnson (Environ. Readiness, Command Pacific Fleet, Coronado, CA), Julie Rivers (Environ. Readiness, Command Pacific Fleet, Pearl
Harbor, HI), Jene Nissen (Environ. Readiness, U.S. Fleet Forces, Norfolk, VA), and Joel Bell (Marine Resources, NAVFAC Atlantic,
Norfolk, VA)
Marine species population density estimation from passive acoustic monitoring is an emergent topic of interest to the U.S. Navy.
Density estimates are used by the Navy and other Federal partners in effects modeling for environmental compliance documentation.
Traditional methods of marine mammal density estimation via visual line-transect surveys require expensive ship time and long days at sea for an experienced crew, yet yield only limited spatial and temporal coverage. While visual surveys remain an effective means of deriving density estimates, passive-acoustic-based density estimation methods have the unique ability to improve on visual density estimates for some key species by: (a) expanding spatial and temporal density coverage, (b) providing coverage in areas too remote or difficult for traditional visual surveys, (c) reducing the statistical uncertainty of a given density estimate, and (d) providing estimates for
species that are difficult to survey visually (e.g., minke and beaked whales). The U.S. Navy has invested in research for the development,
refinement, and scientific validation of passive acoustic methods for cost effective density estimates in the future. The value, importance,
and current development in passive acoustic-based density estimation methods for Navy applications will be discussed.
9:25
4aAB5. Towing the line: Line-transect based density estimation of whales using towed hydrophone arrays. Thomas F. Norris and
Tina M. Yack (Bio-Waves Inc., 364 2nd St., Ste. #3, Encinitas, CA 92024, thomas.f.norris@bio-waves.net)
Towed hydrophone arrays have been used to monitor marine mammals from research vessels since the 1980s. Although towed hydrophone arrays have now become a standard part of line-transect surveys of cetaceans, density estimation using passive acoustics alone has only been attempted for a few species. We use examples from four acoustic line-transect surveys that we conducted in the North Pacific Ocean to illustrate the steps involved in, and the issues inherent to, using data from towed hydrophone arrays to estimate densities of cetaceans. We will focus on two species of cetaceans, sperm whales and minke whales, with examples from beaked whales and other species as needed. Issues related to survey design, data collection, and data analysis and interpretation will be discussed using examples from these studies. We provide recommendations to improve survey design, data-collection methods, and analyses. We also suggest areas where additional research and methodological development are required in order to produce robust density estimates from acoustic-based data.
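The core line-transect calculation these surveys rely on can be sketched in a few lines. Under an assumed half-normal detection function (a common but not the only choice; the distances and effort below are hypothetical), the maximum-likelihood scale gives an effective strip half-width (ESW), and density follows as D = n / (2 * ESW * L):

```python
import math

def half_normal_esw_m(perp_distances_m):
    """MLE of the half-normal scale sigma from perpendicular detection
    distances (sigma^2 = mean of squared distances), and the resulting
    effective strip half-width ESW = sigma * sqrt(pi / 2)."""
    n = len(perp_distances_m)
    sigma2 = sum(x * x for x in perp_distances_m) / n
    return math.sqrt(sigma2) * math.sqrt(math.pi / 2.0)

def density_per_km2(n_detections, esw_m, effort_km):
    """Conventional line-transect estimator D = n / (2 * ESW * L)."""
    return n_detections / (2.0 * (esw_m / 1000.0) * effort_km)

# Hypothetical perpendicular distances (m), e.g., from target-motion analysis:
dists = [120, 450, 800, 300, 1500, 650, 90, 1100]
esw = half_normal_esw_m(dists)
D = density_per_km2(len(dists), esw, effort_km=6304.0)
```

A full analysis would also model group size, availability, and detection at zero distance (g(0)); this sketch shows only the geometric core of the estimator.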
Contributed Papers
9:45
4aAB6. From clicks to counts: Applying line-transect methods to passive acoustic monitoring of sperm whales in the Gulf of Alaska. Tina M. Yack, Thomas F. Norris, Elizabeth Ferguson (Bio-Waves Inc., 364 2nd St., Ste. #3, Encinitas, CA 92024, tina.yack@bio-waves.net), Brenda K. Rone (Cascadia Res. Collective, Seattle, WA), and Alexandre N. Zerbini (Alaska Fisheries Sci. Ctr., Seattle, WA)
A visual and acoustic line-transect survey of marine mammals was conducted in the central Gulf of Alaska (GoA) during the summer of 2013. The survey area was divided into four sub-strata reflecting four distinct habitats: "inshore," "slope," "offshore," and "seamount." Passive acoustic monitoring was conducted using a towed-hydrophone array system. One of the main objectives of the acoustic survey was to obtain an acoustic-based density estimate for sperm whales. A total of 241 acoustic encounters of sperm whales during 6,304 km of effort were obtained, compared to 19 visual encounters during 4,155 km of effort. Line-transect analytical methods were used to estimate the abundance of sperm whales. To estimate the detection function, target motion analysis was used to obtain perpendicular distances to individual sperm whales. An acoustic-based density and abundance estimate was obtained for each stratum (offshore: N = 78, CV = 0.36; seamount: N = 16, CV = 0.55; slope: N = 121, CV = 0.18) and for the entire survey area (N = 215; D = 0.0013; CV = 0.18). These results will be compared to visual-based estimates. The advantages and disadvantages of acoustic-based density estimates, as well as the application of these methods to other species (e.g., beaked whales) and areas, will be discussed.
10:00–10:15 Break
10:15
4aAB7. Studying the biosonar activities of deep diving odontocetes in Hawaii and other western Pacific locations. Whitlow W. Au (Hawaii Inst. of Marine Biology, Univ. of Hawaii, 46-007 Lilipuna Rd., Kaneohe, HI 96744, wau@hawaii.edu) and Giacomo Giorli (Oceanogr. Dept., Univ. of Hawaii, Honolulu, HI)
Ecological acoustic recorders (EARs) have been deployed at several locations in Hawaii and other western Pacific locations to study the foraging behavior of deep-diving odontocetes. EARs have been deployed at depths greater than 400 m at five locations around the island of Kauai, one at Ni'ihau, two around the island of Okinawa, and four in the Marianas (two close to the island of Guam, one close to the island of Saipan, and another close to the island of Tinian). The four groups of deep-diving odontocetes were blackfish (mainly pilot whales and false killer whales), sperm whales, beaked whales (Cuvier's and Blainville's beaked whales), and Risso's dolphins. In all locations, the biosonar signals of blackfish were detected most often, followed by either sperm or beaked whales depending on the location, with Risso's dolphins detected the least. There was a strong tendency for these animals to forage at night in all locations. The detection rates indicate that the populations of these four groups of odontocetes around Okinawa and in the Marianas are lower than off Kauai in the main Hawaiian island chain by a factor of about 4–5.
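The target-motion analysis used in 4aAB6 to convert array bearings into perpendicular distances reduces, in its simplest form, to a bearings-only fix: each detection gives a line from the (moving) array position toward the animal, and the least-squares intersection of those lines locates it. The geometry below is hypothetical:

```python
import math

def cross_fix(positions, bearings_deg):
    """Least-squares intersection of bearing lines from a moving array.
    Line i passes through (xi, yi) with direction (sin b, cos b) (bearings
    measured clockwise from north); its normal is n = (cos b, -sin b), so the
    fix p satisfies n . p = n . p_i for each line (solved via 2x2 normal eqns)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), b in zip(positions, bearings_deg):
        br = math.radians(b)
        nx, ny = math.cos(br), -math.sin(br)
        c = nx * xi + ny * yi
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 += nx * c; b2 += ny * c
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Hypothetical: ship steaming north along x = 0; two bearings to one whale.
positions = [(0.0, 0.0), (0.0, 1000.0)]
bearings = [63.435, 116.565]
x, y = cross_fix(positions, bearings)
perp = abs(x)   # perpendicular distance off the trackline, for the detection function
```

Real target-motion analysis must also handle bearing noise, left-right ambiguity of a linear array, and animal movement between pings; this sketch shows only the crossing-bearing core.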
10:30
4aAB8. Fin whale vocalization classification and abundance estimation.
Wei Huang, Delin Wang (Elec. and Comput. Eng., Northeastern Univ., 006
Hayden Hall, 370 Huntington Ave., Boston, MA 02115, huang.wei1@
husky.neu.edu), Nicholas C. Makris (Mech. Eng., Massachusetts Inst. of
Technol., Cambridge, MA), and Purnima Ratilal (Elec. and Comput. Eng.,
Northeastern Univ., Boston, MA)
Several thousand fin whale vocalizations from multiple individuals were passively recorded by a high-resolution coherent hydrophone array system in the Gulf of Maine in fall 2006. The recorded vocalizations have relatively short durations of roughly 0.4 s and frequencies ranging
10:45
4aAB9. Neglect of bandwidth of odontocete echolocation clicks biases propagation loss and single-hydrophone population estimates. Michael
A. Ainslie, Alexander M. von Benda-Beckmann (Acoust. and Sonar, TNO,
P.O. Box 96864, The Hague 2509JG, Netherlands, michael.ainslie@tno.nl),
Len Thomas (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of
St. Andrews, St Andrews, United Kingdom), and Peter L. Tyack (Sea
Mammal Res. Unit, Scottish Oceans Inst., Univ. of St. Andrews, St.
Andrews, United Kingdom)
Passive acoustic monitoring with a single hydrophone has been suggested as a cost-effective method to monitor population density of echolocating marine mammals, by estimating the distance at which the
hydrophone is able to distinguish the echolocation clicks from the background. To avoid a bias in the estimated population density, this method
relies on an unbiased estimate of the propagation loss (PL). It is common
practice to estimate PL at the center frequency of a broadband echolocation
click and to assume this narrowband PL applies also to the broadband click.
For a typical situation this narrowband approximation overestimates PL,
underestimates the detection range and consequently overestimates the population density by an amount that for fixed center frequency increases with
increasing pulse bandwidth and sonar figure of merit. We investigate the
detection process for different marine mammal species and assess the magnitude of error on the estimated density due to various simplifying assumptions. Our main purposes are to quantify and, where possible and needed,
correct the bias in the population density estimate for selected species and
detectors due to use of the narrowband approximation, and to understand
the factors affecting the magnitude of this bias to enable extrapolation to
other species and detectors.
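The direction of the bias described above can be reproduced numerically. The sketch below uses an illustrative f^2 absorption law and a flat click spectrum (both simplifying assumptions; a real analysis would use a full seawater absorption model and measured click spectra): because absorption grows with frequency, intensity-averaging across the band is dominated by the low-frequency edge, so the effective broadband PL is smaller than the PL computed at the center frequency:

```python
import math

def alpha_db_per_km(f_khz):
    # Illustrative f^2 absorption law, of roughly the right magnitude for
    # seawater near tens of kHz (a stand-in for Thorp / Francois-Garrison).
    return 0.006 * f_khz ** 2

def pl_db(f_khz, r_km):
    """Narrowband propagation loss: spherical spreading plus absorption."""
    return 20.0 * math.log10(r_km * 1000.0) + alpha_db_per_km(f_khz) * r_km

def broadband_pl_db(f_lo_khz, f_hi_khz, r_km, n=400):
    """Effective PL for a flat-spectrum click: average the received
    *intensity* across the band, then convert back to decibels."""
    acc = 0.0
    for i in range(n):
        f = f_lo_khz + (i + 0.5) * (f_hi_khz - f_lo_khz) / n
        acc += 10.0 ** (-pl_db(f, r_km) / 10.0)
    return -10.0 * math.log10(acc / n)

fc, bw, r = 40.0, 40.0, 4.0            # kHz center, kHz bandwidth, km (hypothetical)
narrow = pl_db(fc, r)                  # narrowband approximation at fc
broad = broadband_pl_db(fc - bw / 2, fc + bw / 2, r)
bias_db = narrow - broad               # > 0: center-frequency PL overestimates loss
```

An overestimated PL shrinks the inferred detection range and effective survey area, which inflates the density estimate — the chain of effects the abstract quantifies.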
11:00
4aAB10. Instantaneous acoustical response of marine mammals to abrupt changes in ambient noise. John E. Joseph, Tetyana Margolina (Oceanogr., Naval Postgrad. School, 833 Dyer Rd, Monterey, CA, jejoseph@
nps.edu), and Ming-Jer Huang (National Kaohsiung Univ. of Appl. Sci.,
Kaohsiung, Taiwan)
Four months of passive acoustic data recorded at Thirtymile Bank in offshore southern California have been analyzed to describe instantaneous
vocal response of marine mammals to abrupt changes in ambient noise.
Main contributors to the distinctive regional soundscape are heavy commercial shipping, military activities in the naval training range, diverse marine
life and natural sources including wind and tectonic activity. Many of these
sources produce intense, irregular and short-term events shaped by local
oceanographic conditions, bathymetry and bottom structure (Thirtymile
Bank blind thrust). We seek to attribute detected changes in cetacean vocal
behavior (loudness, calling rate, and pattern) to these events and differentiate the reaction by noise source, its intensity, frequency and/or duration.
Main target species are blue and fin whales. Initial hypotheses formulated
after data scanning are tested statistically (2D histograms and PCA). To
quantify the vocal behavior variations, an innovative detection approach based on pattern recognition is applied, which allows extraction of individual calls with low false-alarm rates and high detection success, comparable to those of a human analyst. The results relate cetacean acoustic behavior
to ambient noise variability and thus help refine existing cue-based formulae
for estimation of whale population density from PAM data.
11:15
4aAB11. Measuring whale and dolphin call rates as a function of behavioral, social, and environmental context. Stacy L. DeRuiter, Catriona M.
Harris (School of Mathematics & Statistics, Univ. of St. Andrews, CREEM,
St. Andrews KY169LZ, United Kingdom, sldr@st-andrews.ac.uk), Nicola J.
Quick (Duke University Marine Lab, Duke Univ., Beaufort, NC), Dina
Sadykova, Lindesay A. Scott-Hayward (School of Mathematics & Statistics,
Univ. of St. Andrews, St. Andrews, United Kingdom), Alison K. Stimpert
(Moss Landing Marine Lab., California State Univ., Moss Landing, CA),
Brandon L. Southall (Southall Environ. Assoc., Inc., Aptos, CA), Len
Thomas (School of Mathematics & Statistics, Univ. of St. Andrews, St
Andrews, United Kingdom), and Fleur Visser (Kelp Marine Res., Hoorn,
Netherlands)
Cetacean sound-production rates are highly variable and patchy in time,
depending upon individual behavior, social context, and environmental context. Better quantification of the drivers of this variability should allow more
realistic estimates of expected call rates, improving our ability to convert
between call counts and animal density, and also facilitating detection of
sound-production changes due to acoustic disturbance. Here, we analyze
digital acoustic tag (DTAG) records and visual observations collected during behavioral response studies (BRSs), which aim to assess normal cetacean behavior and measure changes in response to acoustic disturbance;
data sources include SOCAL BRS, the 3S project, and Bahamas BRS, with statistical contributions from the MOCHA project (http://www.creem.st-and.ac.uk/mocha/links). We illustrate use of generalized linear models (and
their extensions) as a flexible framework for sound-production-rate analysis.
In the context of acoustic disturbance, we also detail use of two-dimensional
spatially adaptive surfaces to jointly model effects of sound-source proximity and sound intensity. Specifically, we quantify variability in pilot whale
group sound production rates in relation to behavior and environment, and
individual fin whale call rates in relation to social and environmental context
and dive behavior, with and without acoustic disturbance.
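The generalized-linear-model framework described above can be sketched with a minimal Poisson regression for call counts, using the log of observation time as an offset so the model is on the rate scale. Everything here is synthetic and illustrative (a single binary "disturbed" covariate stands in for the richer behavioral, social, and exposure covariates the abstract describes); the fit is plain Newton-Raphson rather than a statistics package:

```python
import numpy as np

def poisson_glm(X, y, offset, n_iter=50):
    """Poisson GLM with log link fitted by Newton-Raphson:
    E[y] = exp(X @ beta + offset), offset = log(observation time)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta + offset)
        H = X.T @ (X * mu[:, None])      # Fisher information
        g = X.T @ (y - mu)               # score
        beta = beta + np.linalg.solve(H, g)
    return beta

# Synthetic tag-record segments: durations, a disturbance flag, and call counts.
rng = np.random.default_rng(0)
hours = rng.uniform(0.5, 2.0, 200)
disturbed = rng.integers(0, 2, 200)
true_rate = 10.0 * np.exp(-0.7 * disturbed)        # calls per hour
y = rng.poisson(true_rate * hours)
X = np.column_stack([np.ones(200), disturbed.astype(float)])
beta = poisson_glm(X, y, np.log(hours))
# exp(beta[0]) ~ baseline call rate; exp(beta[1]) ~ rate ratio under disturbance.
```

Overdispersion, random effects per individual, and the two-dimensional exposure surfaces mentioned in the abstract are the natural extensions of this skeleton.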
11:30
4aAB12. Estimating relative abundance of singing humpback whales in
Los Cabos, Mexico, using diffuse ambient noise. Kerri Seger, Aaron M.
Thode (Scripps Inst. of Oceanogr., Univ. of California, San Diego, 8880 Biological Grade, MESOM 161, La Jolla, CA 92093-0206, kseger@ucsd.edu),
Diana C. López Arzate, and Jorge Urban (Laboratorio de Mamíferos Marinos, Universidad Autónoma de Baja California Sur, La Paz, BCS, Mexico)
Previous research has speculated that diffuse ambient noise levels can
be used to estimate relative cetacean abundance in certain locations when
baleen whale vocal activity dominates the soundscape (Au et al., 2000; Mellinger et al., 2009). During the 2013 and 2014 humpback whale breeding
seasons off Los Cabos, Mexico, visual point and line transects were conducted alongside two bottom-mounted acoustic deployments. As theorized, preliminary analysis shows that ambient noise between 100 and 1,000 Hz is dominated by humpback whale song. The noise also displays a diel cycle similar to that found in the West Indies, Australia, and Hawai'i, whereby peak levels occur near midnight and troughs occur soon after sunrise (Au et al., 2000; McCauley et al., 1996). Depending upon site and year, the median band-integrated levels fluctuated between 7 and 16 dB re 1 μPa when sampled in one-hour
increments. This presentation uses analytical models of wind-generated
noise in an ocean waveguide to analyze potential relationships between
singing whale density and diffuse ambient noise levels. It explores whether
various diel cycle strengths (peak-to-peak measurements and Fourier analysis) correspond with trends observed from concurrent visual censuses.
[Work sponsored by the Ocean Foundation.]
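The "diel cycle strength" metric mentioned above can be computed directly as the Fourier coefficient of an hourly noise-level series at one cycle per day. The series below is synthetic (a 3-dB-amplitude, midnight-peaking cosine on a flat floor), purely to show the mechanics:

```python
import math

def diel_cycle_strength(hourly_levels_db):
    """Amplitude (dB) of the 24-h component of an hourly noise-level series,
    via the Fourier coefficient at one cycle/day. Assumes the series length
    is a whole number of days of hourly samples."""
    n = len(hourly_levels_db)
    re = sum(L * math.cos(2 * math.pi * i / 24.0)
             for i, L in enumerate(hourly_levels_db))
    im = sum(L * math.sin(2 * math.pi * i / 24.0)
             for i, L in enumerate(hourly_levels_db))
    return 2.0 * math.hypot(re, im) / n

# Synthetic 10-day series: diel cosine (peak at hour 0, i.e., midnight) + floor.
series = [100.0 + 3.0 * math.cos(2 * math.pi * i / 24.0) for i in range(24 * 10)]
strength = diel_cycle_strength(series)   # ~3 dB amplitude, ~6 dB peak-to-peak
```

Comparing this amplitude (or the simpler peak-to-peak measure) across deployments is one way to test whether diel cycle strength tracks the visually estimated number of singers.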
from 15 to 40 Hz. Here we classify the fin whale vocalizations and apply the results to estimate the minimum number of vocalizing fin whale individuals detected by our hydrophone array. The horizontal azimuth, or bearing, of each fin whale vocalization is first determined by beamforming. Each beamformed vocalization spectrogram is next characterized by several features, such as center frequency, upper and lower frequency limits, and amplitude-weighted mean frequency. The vocalizations are then classified into several distinct vocal types using k-means clustering. The clustering result is then combined with the bearing-time trajectory information for a consecutive sequence of vocalizations to provide an estimate of the minimum number of vocalizing fin whale individuals detected.
11:45
SAMBAH (Static Acoustic Monitoring of the Baltic Sea Harbor Porpoise) is an EU LIFE+-funded project with the primary goal of estimating the abundance and distribution of the critically endangered Baltic Sea harbor porpoise. From May 2011 to April 2013, project members in all EU countries around the Baltic Sea undertook a static acoustic survey using 304 porpoise detectors distributed in a randomly positioned systematic grid in waters 5–80 m deep. In the recorded data, click trains originating from porpoises have been identified automatically using an algorithm developed specifically for Baltic conditions. To determine the C-POD click-train detection
function, a series of experiments has been carried out, including acoustic tracking of wild, free-ranging porpoises using hydrophone arrays in an area
with moored C-PODs and playbacks of porpoise-like signals at SAMBAH
C-PODs during various hydrological conditions. Porpoise abundance has
been estimated by counting the number of individuals detected in short time-interval windows (snapshots) and then accounting for false-positive detections, the probability of animals being silent, and the probability of detecting non-silent animals within a specified maximum range. We describe the
method in detail, and how the auxiliary experiments have enabled us to estimate the required quantities.
4aAB13. Large-scale static acoustic survey of a low-density population—Estimating the abundance of the Baltic Sea harbor porpoise. Jens
C. Koblitz (German Oceanogr. Museum, Katharinenberg 14-20, Stralsund
18439, Germany, Jens.Koblitz@meeresmuseum.de), Mats Amundin (Kolmården Wildlife Park, Kolmården, Sweden), Julia Carlström (AquaBiota Water
Res., Stockholm, Sweden), Len Thomas (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, St. Andrews, United Kingdom), Ida
Carlén (AquaBiota Water Res., Stockholm, Sweden), Jonas Teilmann (Dept.
of BioSci., Aarhus Univ., Roskilde, Denmark), Nick Tregenza (Chelonia Ltd.,
Long Rock, United Kingdom), Daniel Wennerberg (Kolmården Wildlife
Park, Kolmarden, Sweden), Line Kyhn, Signe Svegaard (Dept. of BioSci.,
Aarhus Univ., Roskilde, Denmark), Radek Koza, Monika Kosecka, Iwona
Pawliczka (Univ. of Gdansk, Gdansk, Poland), Cinthia Tiberi Ljungqvist
(Kolmården Wildlife Park, Kolmården, Sweden), Katharina Brundiers (German Oceanogr. Museum, Stralsund, Germany), Andrew Wright (George
Mason Univ., Fairfax, VA), Lonnie Mikkelsen, Jakob Tougaard (Dept. of
BioSci., Aarhus Univ., Roskilde, Denmark), Olli Loisa (Turku Univ. of Appl.
Sci., Turku, Finland), Anders Galatius (Dept. of BioSci., Aarhus Univ., Roskilde, Denmark), Ivar Jüssi (ProMare NPO, Harjumaa, Estonia), and Harald Benke (German Oceanogr. Museum, Stralsund, Germany)
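The snapshot estimator described in 4aAB13 has a simple skeleton: correct the raw count for false positives, then divide by the total area effectively surveyed across all snapshots, discounted for silent animals and imperfect detection. All numerical values below are illustrative placeholders, not SAMBAH's fitted quantities:

```python
import math

def snapshot_density(n_detections, n_snapshots, false_pos_rate,
                     p_silent, p_detect, max_range_m):
    """Snapshot-based density estimator in the spirit of the SAMBAH analysis:
    (corrected detections) / (snapshots * monitored area * availability * detectability).
    Returns animals per km^2."""
    true_detections = n_detections * (1.0 - false_pos_rate)
    area_km2 = math.pi * (max_range_m / 1000.0) ** 2        # disc out to max range
    effective_area = n_snapshots * area_km2 * p_detect * (1.0 - p_silent)
    return true_detections / effective_area

# Hypothetical inputs: detections summed over all detectors and snapshots;
# n_snapshots is (number of detectors) x (snapshots per detector).
D = snapshot_density(n_detections=5000, n_snapshots=2.0e8,
                     false_pos_rate=0.1, p_silent=0.2,
                     p_detect=0.05, max_range_m=400.0)
abundance = D * 140000.0   # hypothetical survey-area size in km^2
```

The auxiliary experiments in the abstract (tracking of wild porpoises, playbacks at deployed C-PODs) exist precisely to pin down p_detect and the false-positive rate, which this estimator needs as inputs.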
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA A/B, 7:55 A.M. TO 12:00 NOON
Session 4aBA
Biomedical Acoustics: Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and
Clinical Applications I
Vera A. Khokhlova, Cochair
University of Washington, 1013 NE 40th Street, Seattle, WA 98105
Jeffrey B. Fowlkes, Cochair
Univ. of Michigan Health System, 3226C Medical Sciences Building I, 1301 Catherine Street, Ann Arbor, MI 48109-5667
Chair’s Introduction—7:55
Invited Papers
8:00
4aBA1. Histotripsy: An overview. Charles A. Cain (Biomedical Eng., Univ. of Michigan, 2200 Bonisteel Blvd., 2121 Gerstacker, Ann
Arbor, MI 48105, cain@umich.edu)
Histotripsy produces non-thermal lesions by generating dense, highly confined, energetic bubble clouds that mechanically fractionate tissue. This nonlinear thresholding phenomenon has useful consequences. If only the tip of the waveform (the peak negative pressure, P-) exceeds the intrinsic threshold, small lesions below the diffraction limit can be generated; this is called microtripsy (see other presentations in this session). Moreover, side lobes from distorting aberrations can be "thresholded out," wherein only part of the main lobe exceeds the intrinsic threshold, producing a clean bubble cloud (and lesion) and conferring significant immunity to aberrations. If a high-frequency probe (imaging) waveform intersects a low-frequency pump waveform, the compounded waveform can momentarily exceed the intrinsic threshold, producing a lesion with an imaging transducer. Multi-beam histotripsy (other presentations in this session) allows flexible placement of both pump and probe transducers. Very broadband P- "monopolar" pulses, ideal for histotripsy, can be synthesized in a generalization of the multi-beam case, wherein very short pulses from transducer elements of many different frequencies are added at the focus of what is called a frequency-compounding transducer (other presentations in this session). Ultrasound image guidance works well with histotripsy: bubble clouds are easily seen, simplifying both lesion targeting and continuous validation of the ongoing process, and hypoechoic homogenized tissue allows real-time quantification of lesion formation.
8:20
4aBA2. Boiling histotripsy: A noninvasive method for mechanical tissue disintegration. Adam D. Maxwell (Dept. of Urology,
Univ. of Washington School of Medicine, 1013 NE 40th St., Seattle, WA 98105, amax38@u.washington.edu), Tatiana D. Khokhlova
(Dept. of Gastroenterology, Univ. of Washington, Seattle, WA), George R. Schade (Dept. of Urology, Univ. of Washington School of
Medicine, Seattle, WA), Yak-Nam Wang, Wayne Kreider (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Petr Yuldashev (Phys. Faculty, Moscow State Univ., Moscow, Russian Federation), Julianna C. Simon (Ctr. for
Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Oleg A. Sapozhnikov (Phys. Faculty, Moscow
State Univ., Moscow, Russian Federation), Navid Farr (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Ari Partanen (Clinical Sci., Philips Healthcare, Cleveland, OH), Michael R. Bailey (Ctr. for Industrial and Medical
Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Joo Ha Hwang (Dept. of Gastroenterology, Univ. of Washington,
Seattle, WA), Lawrence A. Crum (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), and
Vera A. Khokhlova (Phys. Faculty, Moscow State Univ., Moscow, Russian Federation)
Boiling histotripsy is an experimental noninvasive focused ultrasound therapy that applies millisecond-length pulses containing shocks to achieve mechanical disintegration of targeted tissue. Localized delivery of high-amplitude shocks causes rapid heating, resulting in boiling of the
tissue. The interaction of incident shocks with the boiling bubble results in tissue disruption and liquefaction without significant thermal
injury. Simulations are utilized to design and characterize therapy sources, predicting focal waveforms, shock amplitudes, and boiling
times. Transducers have been developed to generate focal shock amplitudes >70 MPa and achieve rapid boiling at depth in tissue. Therapy systems including ultrasound-guided single-element sources and clinical MRI-guided phased arrays have been successfully used to
create ex vivo and in vivo lesions at ultrasound frequencies in the 1–3 MHz range. Histological and biochemical analyses show mechanical disruption of tissue architecture with minimal thermal effect, similar to cavitation-based histotripsy. Atomization as observed with
acoustic fountains has been proposed as an underlying mechanism of tissue disintegration. This promising technology is being explored
for several applications in tissue ablation, as well as new areas such as tissue engineering and biomarker detection. [Work supported by
NIH 2T32DK007779-11A1, R01EB007643-05, 1K01EB015745, and NSBRI through NASA NCC 9-58.]
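The millisecond boiling times quoted above follow from weak-shock heating. For a sawtooth wave with shock amplitude A, the heating rate is H = beta * f * A^3 / (6 * rho^2 * c^4), and the time to reach boiling is roughly rho * C * dT / H. The sketch below uses generic water-like tissue parameters (illustrative values, not a specific tissue model):

```python
def time_to_boil_ms(shock_mpa, f_mhz, body_temp_c=37.0,
                    rho=1000.0, c=1500.0, beta=4.5, heat_cap=4000.0):
    """Weak-shock estimate of time to boiling for a sawtooth waveform:
    heating rate H = beta * f * A^3 / (6 rho^2 c^4)  [W/m^3],
    t_boil = rho * C * dT / H, with dT the rise to 100 C."""
    A = shock_mpa * 1e6          # shock amplitude, Pa
    f = f_mhz * 1e6              # frequency, Hz
    H = beta * f * A ** 3 / (6.0 * rho ** 2 * c ** 4)
    dT = 100.0 - body_temp_c
    return 1e3 * rho * heat_cap * dT / H   # milliseconds

t = time_to_boil_ms(70.0, 2.0)   # ~70 MPa shocks at ~2 MHz
```

With the >70 MPa shock amplitudes the abstract cites, this estimate lands in the low milliseconds — consistent with the pulse lengths used, and with the A^3 dependence explaining why boiling is confined tightly to the focus.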
8:40
4aBA3. Bubbles in tissue: Yes or No? Charles C. Church (NCPA, Univ. of MS, 1 Coliseum Dr., University, MS 38677, cchurch@olemiss.edu)
The question of whether bubbles exist in most or all biological tissues rather than being restricted to only a few well-known examples remains a mystery. When Apfel and Holland developed the theoretical background for the mechanical index (MI), they first
assumed that such bubbles did exist and further assumed that some of those bubbles were of a size that would undergo inertial cavitation
at the lowest possible rarefactional pressure. Comparison of cavitation thresholds determined experimentally in various mammalian tissues in vivo with the results of computational studies seems to provide a definitive answer to that question. No, optimally sized bubbles
do not pre-exist in tissue, although very small bubbles, with radii on the order of nanometers, may be present. However, this answer is inextricably tied to the accuracy of the theory used to study the question, in this case a form of the Keller-Miksis equation modified to include
the viscoelastic properties of tissue. Previous analysis has focused on elasticity, assuming that viscosity is constant, but is it? Blood is
known to be shear-thinning, and some soft tissues appear to be as well. The effect of shear rate on cavitation thresholds and implications
for bubble populations in tissue will be discussed.
9:00
4a THU. AM
4aBA4. Benefits and challenges of employing elevated acoustic output in diagnostic imaging. Kathryn Nightingale (Biomedical
Eng., Duke Univ., PO Box 90281, Durham, NC 27708-0281, kathy.nightingale@duke.edu) and Charles C. Church (National Ctr. for
Acoust., Univ. of MS, University, MS)
The acoustic output levels used in diagnostic ultrasonic imaging in the US have been subject to a de facto limitation by guidelines
established by the USFDA in 1976, for which no known bioeffects had been reported. These track-3 guidelines link the Mechanical
Index (MI) and the Thermal Index (TI) to the maximum outputs as of May 28, 1976, through a linear derating process. Subsequently,
new imaging technologies have been developed that employ unique beam sequences (e.g., harmonic imaging and ARFI imaging) which
were not well developed when the current regulatory scheme was put in place, so neither the MI nor the TI takes them into account in an
optimal manner. Additionally, there appears to be a large separation between the maxima in the track-3 guidelines and the acoustic output levels for which cavitation-based bioeffects are observed in tissues not known to contain gas bodies. In this presentation, we summarize the history of and the scientific basis for the MI, define an output regime and specify clinical applications under consideration for
conditionally increased output (CIO), review the potential risks of CIO in this regime based upon existing scientific evidence, and summarize the evidence for the potential clinical benefits of CIO.
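The MI discussed above has a simple standard form on the output display: the derated peak rarefactional pressure (in MPa) divided by the square root of the center frequency (in MHz), with the track-3 maximum at MI = 1.9. A minimal sketch:

```python
import math

def mechanical_index(p_neg_mpa_derated, f_c_mhz):
    """MI as used for diagnostic output display: derated peak rarefactional
    pressure (MPa) divided by sqrt(center frequency in MHz)."""
    return p_neg_mpa_derated / math.sqrt(f_c_mhz)

# Example: 2.4 MPa derated rarefactional pressure at 3 MHz.
mi = mechanical_index(2.4, 3.0)   # below the track-3 maximum of 1.9
```

The abstract's point is that this single number neither captures the beam sequences of newer modes (harmonic imaging, ARFI) nor reflects how far below observed cavitation thresholds the 1.9 ceiling sits in gas-body-free tissue.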
9:20
4aBA5. Standards for characterizing highly nonlinear acoustic output from therapeutic ultrasound devices: Current methods
and future challenges. Thomas L. Szabo (Biomedical Dept., Boston Univ., 44 Cummington Mall, Boston, MA 02215, tlszabo@bu.edu)
One of the major challenges of characterizing the acoustic fields and power from diagnostic and high-intensity or high-pressure therapeutic devices is addressing the impact of amplitude-dependent nonlinear propagation effects. The destructive capabilities of high-intensity therapeutic ultrasound (HITU) devices make acoustic output measurements with the conventional fragile sensors used for diagnostic ultrasound
difficult. Different approaches involving more robust measurement devices, scaling and simulation are described in two recent IEC
documents, IEC TS 62556 for the specification and measurement of HITU fields and IEC 62555 for the measurement of acoustic power
from HITU devices. Existing and proposed applications include even higher pressure levels and use of cavitation effects. Promising
hybrid approaches involve a combination of measurement and simulation. In order to meet the challenges of design, verification, and
measurement, standards and consensus are needed to couple the measurements to the prediction of acoustic output in realistic tissue
models as well as associated effects such as acoustic radiation force and temperature elevation.
9:40
4aBA6. Uncertainties in characterization of high-intensity, nonlinear pressure fields for therapeutic applications. Wayne Kreider
(CIMU, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, wkreider@uw.edu), Petr V. Yuldashev (Phys.
Faculty, Moscow State Univ., Moscow, Russian Federation), Adam D. Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA),
Tatiana D. Khokhlova (CIMU, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Sergey A. Tsysar (Phys. Faculty, Moscow State
Univ., Moscow, Russian Federation), Michael R. Bailey (CIMU, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Oleg A. Sapozhnikov, and Vera A. Khokhlova (Phys. Faculty, Moscow State Univ., Moscow, Russian Federation)
A fundamental aspect of developing therapeutic ultrasound applications is the need to quantitatively characterize the acoustic fields
delivered by transducers. A typical approach is to make direct pressure measurements in water. With very high intensities and potentially
shocks, executing this approach is problematic because of the strict requirements imposed on hydrophone bandwidth, robustness, and
size. To overcome these issues, a method has been proposed that relies on acoustic holography and simulations of nonlinear propagation
based on the 3D Westervelt model. This approach has been applied to several therapy transducers including a multi-element phased
array. Uncertainties in the approach can be evaluated for both model boundary conditions determined from linear holography and the
nonlinear focusing gain achieved at high power levels. Neglecting hydrophone calibration uncertainties, errors associated with the holography technique remain less than about 10% in practice. To assess the accuracy of nonlinear simulations, results were compared to independent measurements of focal waveforms using a fiber optic probe hydrophone (FOPH). When relative calibration uncertainties
between the capsule hydrophone and FOPH are mitigated, simulations and FOPH measurements agree within about 15% for peak pressures at the focus. [Work supported by NIH grants EB016118, EB007643, T32 DK007779, DK43881, and NSBRI through NASA NCC
9-58.]
10:00–10:20 Break
10:20
4aBA7. Cavitation characteristics in High Intensity Focused Ultrasound lesions. Gail ter Haar and Ian Rivens (Phys., Inst. of Cancer
Res., Phys. Dept., Royal Marsden Hospital, Sutton, Surrey SM2 5PT, United Kingdom, gail.terhaar@icr.ac.uk)
The acoustic emissions recorded during HIFU lesion formation fall into three broad categories: those associated with non-inertial cavitation,
those associated with inertial cavitation, and those linked with tissue water boiling. These three mechanisms can be linked with different
lesion shapes, and with characteristic histological appearance. By careful choice of acoustic driving parameters, these effects may be
studied individually.
10:40
4aBA8. The role of tissue mechanical properties in histotripsy tissue fractionation. Eli Vlaisavljevich, Charles Cain, and Zhen Xu
(Univ. of Michigan, 1111 Nielsen Ct. Apt. 1, Ann Arbor, MI 48105, evlaisav@umich.edu)
Histotripsy is a therapeutic ultrasound technique that controls cavitation to fractionate tissue using short, high-pressure ultrasound
pulses. Histotripsy has been demonstrated to successfully fractionate many different tissues, though stiffer tissues such as cartilage or
tendon (Young’s moduli >1 MPa) are more resistant to histotripsy-induced damage than softer tissues such as liver (Young’s moduli ~9
kPa). In this work, we investigate the effects of tissue mechanical properties on various aspects of the histotripsy process including the
pressure threshold required to generate a cavitation cloud, the bubble dynamics, and the stress–strain applied to tissue structures. Ultrasound pulses of 1–2 acoustic cycles at varying frequencies (345 kHz, 500 kHz, 1.5 MHz, and 3 MHz) were applied to agarose tissue
phantoms and ex vivo bovine tissues with varying mechanical properties. Results demonstrate that the intrinsic threshold to initiate a
cavitation cloud is independent of tissue stiffness and frequency. The bubble expansion is suppressed in stiffer tissues, leading to a
decrease in strain to surrounding tissue and an increase in damage resistance. Finally, we investigate strategies to optimize histotripsy
therapy for the treatment of tissues with specific mechanical properties. Overall, this work improves our understanding of how tissue
properties affect histotripsy and will guide parameter optimization for histotripsy tissue fractionation.
11:00
4aBA9. Technical advances for histotripsy: Strategic ultrasound pulsing methods for precise histotripsy lesion formation. Kuang-Wei Lin, Timothy L. Hall, Zhen Xu, and Charles A. Cain (Univ. of Michigan, 2200 Bonisteel Blvd., Gerstacker, Rm. 1107, Ann Arbor,
MI 48109, kwlin@umich.edu)
Conventional histotripsy uses ultrasound pulses longer than three cycles, wherein bubble cloud formation relies on the pressure-release scattering of positive shock fronts from sparsely distributed single cavitation bubbles, making the cavitation event unpredictable and sometimes chaotic. Recently, we have developed three new strategic histotripsy pulsing techniques to further increase the
precision of cavitation cloud and lesion formation. (1) Microtripsy: When applying histotripsy pulses shorter than three cycles, the formation of a dense bubble cloud only depends on the applied peak negative pressure (P-) exceeding an intrinsic threshold of the medium.
With a P- not significantly higher than this, very precise sub-focal-volume lesions can be generated. (2) Dual-beam histotripsy: A sub-threshold high-frequency pulse (perhaps from an imaging transducer) is enabled by a sub-threshold low-frequency pump pulse to exceed
the intrinsic threshold and produces very precise lesions. (3) Frequency compounding: a near monopolar pulse can be synthesized using
a frequency-compounding transducer (an array transducer consisting of elements with various resonant frequencies). By adjusting time
delays for individual frequency components and allowing their principal negative peaks to arrive at the focus concurrently, a near
monopolar pulse with a dominant negative phase can be generated (no complicating high peak positive shock fronts).
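The time-delay alignment described for frequency compounding can be sketched numerically. The sketch below is illustrative only: the element frequencies, amplitudes, and Gaussian envelope are assumptions, not the parameters of the actual frequency-compounding transducer; it simply shows that summing bursts whose principal negative peaks coincide yields a pulse with a dominant negative phase.

```python
import numpy as np

# Illustrative synthesis of a near-monopolar pulse: sum single-cycle bursts
# of different center frequencies, time-aligned so their principal negative
# peaks coincide at t = 0 (the focus). All parameters are assumed.
fs = 100e6                                     # sample rate, Hz
t = np.arange(-20e-6, 20e-6, 1 / fs)           # time axis, s
freqs = [0.5e6, 1.0e6, 1.5e6, 2.0e6, 3.0e6]    # assumed element frequencies, Hz

def burst(f, t, t0=0.0):
    """Roughly single-cycle burst whose negative peak arrives at t = t0."""
    env = np.exp(-(np.pi * f * (t - t0)) ** 2 / 2)   # Gaussian envelope
    return -env * np.cos(2 * np.pi * f * (t - t0))   # minimum exactly at t0

pulse = sum(burst(f, t) for f in freqs)

# The compounded waveform has a dominant negative phase: its negative peak
# greatly exceeds any positive excursion, since the positive side lobes of
# the different frequency components fall at different times.
peak_neg = -pulse.min()
peak_pos = pulse.max()
```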
2250
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2250
11:20
4aBA10. Histotripsy: Urologic applications and translational progress. William W. Roberts (Urology, Univ. of Michigan, 3879
Taubman Ctr., 1500 East Medical Ctr. Dr., Ann Arbor, MI 48109-5330, willrobe@umich.edu), Charles A. Cain (Biomedical Eng., Univ.
of Michigan, Ann Arbor, MI), J. B. Fowlkes (Radiology, Univ. of Michigan, Ann Arbor, MI), Zhen Xu, and Timothy L. Hall (Biomedical Eng., Univ. of Michigan, Ann Arbor, MI)
Histotripsy is an extracorporeal ablative technology based on initiation and control of acoustic cavitation within a target volume.
This mechanical form of tissue homogenization differs from the ablative processes employed by conventional thermoablative modalities
and exhibits a number of unique features (non-thermal, high precision, real-time monitoring/feedback, and tissue liquefaction), which
are potentially advantageous characteristics for ablative applications in a variety of organs and disease processes. Histotripsy has been
applied to the prostate in canine models for tissue debulking as a therapy for benign prostatic hyperplasia and for ablation of ACE-1
tumors, a canine prostate cancer model. Homogenization of normal renal tissue as well as implanted VX-2 renal tumors has been demonstrated with histotripsy. Initial studies assessing tumor metastases in this model did not reveal metastatic potentiation following mechanical homogenization by histotripsy. Enhanced understanding of cavitation and methods for acoustic control of the target volume are
being refined in tank studies for treatment of urinary calculi. Development of novel acoustic pulsing strategies, refinement of technology,
and enhanced understanding of cavitational bioeffects are driving pre-clinical translation of histotripsy for a number of applications. A
human pilot trial is underway to assess the safety of histotripsy as a treatment for benign prostatic hyperplasia.
11:40
4aBA11. Boiling histotripsy of the kidney: Preliminary studies and predictors of treatment effectiveness. George R. Schade, Adam
D. Maxwell (Dept. of Urology, Univ. of Washington, 5555 14th Ave. NW, Apt 342, Seattle, WA 98107, grschade@uw.edu), Tatiana
Khokhlova (Dept. of Gastroenterology, Univ. of Washington, Seattle, WA), Yak-Nam Wang, Oleg Sapozhnikov, Michael R. Bailey, and
Vera Khokhlova (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab, Univ. of Washington, Seattle, WA)
Boiling histotripsy (BH), an ultrasound technique to mechanically homogenize tissue, has been described in ex vivo liver and myocardium. As a noninvasive, non-thermal approach, BH may have advantages over clinically available thermal ablative technologies for renal masses. We aimed to characterize BH exposures in human and porcine ex vivo kidneys using a 7-element 1 MHz
transducer (duty factor 1–3%, 5–10 ms pulses, 98 MPa in situ shock amplitude, 17 MPa peak negative). Lesions were successfully created in both species, demonstrating focally homogenized tissue above treatment thresholds (pulse number) with a stark transition between
treated and untreated cells on histologic assessment. Human tissue generally required more pulses than porcine tissue to produce a similar effect.
Similarly, kidneys displayed tissue-specific resistance to BH, with resistance increasing from cortex to medulla to the collecting
system. Tissue properties that predict resistance to renal BH were evaluated, demonstrating a correlation between tissue collagen content
and tissue resistance. Subsequently, the impact of intervening abdominal wall and ribs on lesion generation ex vivo was evaluated:
"transabdominal" and "transcostal" treatment required approximately 5- and 20-fold greater acoustic power, respectively, to elicit boiling vs. no intervening tissue. [Work supported by NIH T32DK007779, R01EB007643, K01EB015745 and NSBRI through NASA NCC
9-58.]
THURSDAY MORNING, 30 OCTOBER 2014
MARRIOTT 9/10, 8:30 A.M. TO 11:15 A.M.
4a THU. AM
Session 4aEA
Engineering Acoustics: Acoustic Transduction: Theory and Practice I
Richard D. Costley, Chair
Geotechnical and Structures Lab., U.S. Army Engineer Research & Development Center, 3909 Halls Ferry Rd,
Vicksburg, MS 39180
Contributed Papers
8:30
4aEA1. Historic transducers: Balanced armature receiver (BAR). Jont
B. Allen (ECE, Univ. of Illinois, Urbana-Champaign, Urbana, IL) and Noori
Kim (ECE, Univ. of Illinois, Urbana-Champaign, 1085 Baytowner df 11,
Champaign, IL 61822, nkim13@illinois.edu)
The oldest telephone receiver is the Balanced Armature Receiver (BAR)
type, and it is still in use. The original technology goes back to the invention
of the telephone receiver by A. G. Bell in 1876. Attraction and release of the armature are controlled by the current in the coils, which generates electromagnetic fields [Hunt (1954), Chapter 7; Beranek and Mellow (2014)].
As the electrical current enters the electric terminal of the receiver, it
generates an AC magnetic field whose direction is perpendicular to the current. Due to the interaction between the permanent (DC) magnetic field and the
generated AC magnetic field, the armature (which sits within the core of the
coil and the magnet) experiences a force. The basic principle explaining
this movement is the gyrator, a fifth circuit element introduced by Tellegen in
1948 alongside the inductor, capacitor, resistor, and transformer. This
component represents the anti-reciprocal characteristic of the system. This
study starts by comparing the BAR-type receiver to the moving-coil
loudspeaker. We believe that this work will provide fundamental and
clear insight into this type of BAR system.
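The anti-reciprocity that the gyrator contributes can be stated compactly. This is the standard two-port relation from circuit theory (with gyration resistance r), added here for reference rather than taken from the talk:

```latex
% Gyrator two-port: the transfer impedances have opposite signs, so the
% element is anti-reciprocal, unlike the transformer (z_{12} = z_{21}).
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=
\begin{pmatrix} 0 & -r \\ r & 0 \end{pmatrix}
\begin{pmatrix} i_1 \\ i_2 \end{pmatrix},
\qquad z_{12} = -z_{21}.
```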
8:45
4aEA2. Radiation from wedges of a power law profile. Marcel C. Remillieux, Brian E. Anderson, Timothy J. Ulrich, and Pierre-Yves Le Bas (Geophys. Group (EES-17), Los Alamos National Lab., MS D446, Los Alamos,
NM 87545, mcr1@lanl.gov)
The large impedance contrast between bulk piezoelectric disks and air
does not allow for efficient coupling of sound radiation from the piezoelectric into air. Here, we present the idea of using wedges of power law profiles
to more efficiently radiate sound into air. Wedges of power law profiles
have been used to provide absorption of vibrational energy in plates, but
their efficient radiation of sound into air has not been demonstrated. We
present numerical modeling and experimental results to demonstrate the
concept. The wedge shape provides a gradual impedance contrast as the
wave travels down the tapering of the wedge, while the wave speed also
continually slows down. For an ideal wedge that tapers down to zero thickness, the waves become trapped at the tip and the vibrational energy can
only radiate into the surrounding air. [This work was supported by the Laboratory Directed Research and Development (LDRD) program at Los Alamos National Laboratory.]
9:00
4aEA3. The self-sustained oscillator as an underwater low frequency
projector: Progress report. Andrew A. Acquaviva and Stephen C. Thompson (Graduate Program in Acoust., The Penn State Univ., c/o Steve Thompson, N-249 Millennium Sci. Complex, University Park, PA, acquavaa@
gmail.com)
Wind musical instruments are examples of pressure operated self-sustained oscillators that act as acoustic projectors. Recent studies have shown
that this type of self-sustained oscillator can also be implemented underwater as a low frequency projector. However, the results of the early feasibility studies were complicated by the existence of cavitation in the high
pressure region of the resonator. A redesign has eliminated the cavitation
and allows better comparison with analytical calculations.
9:15
4aEA4. Design and testing of an underwater acoustic Fresnel zone plate
diffractive lens. David C. Calvo, Abel L. Thangawng, Michael Nicholas,
and Christopher N. Layman, Jr. (Acoust. Div., Naval Res. Lab., 4555 Overlook Ave., SW, Washington, DC 20375, david.calvo@nrl.navy.mil)
Fresnel zone plate (FZP) lenses offer a means of focusing sound based
on diffraction in cases where the thickness of conventional lenses may be
impractical. A binary-profile FZP for underwater use featuring a center
acoustically opaque disk with alternating transparent and opaque annular
regions was fabricated to operate nominally at 200 kHz. The lens had an overall diameter of 13 in. and consisted of 13 opaque annuli. The opaque
regions were 3 mm thick and made from silicone rubber with a high concentration of gas voids. These regions were bonded to an acoustically transparent silicone rubber substrate film that was 1 mm thick. The FZP was
situated in a frame and tested in a 5 × 4 × 4 ft ultrasonic tank using a piston source for insonification. The measured focal distance of 12.5 cm for normal incidence agreed with finite-element predictions that took into account
the wavefront curvature of the incident field, which had to be included given
the finite dimensions of the tank. The focal gain was measured to be 20 dB. The
radius to the first null at the focal plane was approximately 4 mm, which
agreed with theoretical resolution predictions. [Work sponsored by the
Office of Naval Research.]
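The zone-plate layout arithmetic behind such a lens can be sketched as follows. The sound speed and the use of the measured 12.5-cm focal distance as the design focal length are assumptions for illustration; the abstract does not state how the annuli were laid out.

```python
import math

# Sketch of flat Fresnel zone-plate design arithmetic (assumed values).
c = 1482.0        # speed of sound in water, m/s (assumed)
f0 = 200e3        # nominal operating frequency, Hz
lam = c / f0      # wavelength, ~7.4 mm
F = 0.125         # focal length, m (measured value, used here as the design)

def zone_radius(n):
    """Radius of the n-th zone boundary of a flat zone plate (exact form)."""
    return math.sqrt(n * lam * F + (n * lam / 2) ** 2)

# Zone radii grow roughly as sqrt(n); the first boundary is near 3 cm here.
radii_cm = [100 * zone_radius(n) for n in range(1, 6)]
```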
9:30
4aEA5. Acoustical transduction in two-dimensional piezoelectric array.
Ola Nusierat, Lucien Cremaldi (Phys. and Astronomy, Univ. of MS, Oxford,
MS), and Igor Ostrovskii (Phys. and Astronomy, Univ. of MS, Lewis Hall,
Rm. 108, University, MS 38677, iostrov@phy.olemiss.edu)
The acoustical transduction in an array of ferroelectric domains with
alternating piezoelectric coefficients is characterized by multi-frequency
resonances, which occur at the boundary of the acoustic Brillouin zone
(ABZ). The resonances correspond to two successive domain excitations in
the first and second ABZ, respectively, where the speed of ultrasound is
somewhat different. An important parameter for acoustical transduction is
the electric impedance Z. The results of the theoretical and experimental
investigations of Z in a periodically poled LiNbO3 are presented. The magnitude and phase of Z depend on the array parameters including domain resonance frequency and domain number; Z of arrays consisting of up to 88
0.45-mm-long domains in the zx-cut crystal are investigated. The strong
changes in Z-magnitude and phase are observed in the range of 3–4 MHz.
The two resonance zones are within 3.33 ± 0.05 MHz and 3.67 ± 0.05
MHz. The change in domain number influences Z and its phase. By varying
the number of inversely poled domains and resonance frequencies, one can
significantly control/change the electrical impedance of the multidomain
array. The findings may be used for developing new acoustic sensors and
transducers.
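The Brillouin-zone resonance condition underlying these multi-frequency resonances can be sketched with one line of arithmetic. The effective ultrasound speed below is an assumption chosen for illustration, not a value from the abstract:

```python
# In a periodically poled array with domain length a, resonances occur near
# the acoustic Brillouin zone boundary, f_n ~ n * v / (2 * a).
a = 0.45e-3          # domain length, m (from the abstract)
v = 3000.0           # assumed effective ultrasound speed, m/s
f1 = v / (2 * a)     # ~3.3 MHz, within the reported 3-4 MHz resonance range
```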
9:45
4aEA6. A non-conventional acoustic transduction method using fluidic
laminar proportional amplifiers. Michael V. Scanlon (RDRL-SES-P,
Army Res. Lab., 2800 Powder Mill Rd., Adelphi, MD 20783-1197, michael.
v.scanlon2.civ@mail.mil)
Pressure sensing using fluidic laminar proportional amplifiers (LPAs)
was developed at Harry Diamond Laboratories in the late 1970s and was
applied to acoustic detection and amplification. LPAs use a partially constrained laminar jet of low-pressure air as the sensing medium, which is
deflected by the incoming acoustic signal. LPA geometries enable pressure
gain by focusing incoming pressure fluctuations at the jet’s nozzle exit,
thereby applying leverage to create jet deflection over its short transit toward a splitter. With no input signal, the jet is not deflected and downstream
pressures on both sides of the splitter are equal. A differential input signal
of magnitude one, referenced to ambient pressure balancing the opposite
side of the jet, produces a differential output signal of magnitude ten. This
amplified signal can be differentially fed into the inputs on both sides of the
next LPA for additional gain; by cascading LPAs, very small signals can be amplified substantially. Originally DC pressure amplifiers,
LPAs have an exceptional infrasound response and excellent sensitivity, since
there is no mass or stiffness associated with a diaphragm and the jet is matched to
the environment. Standard microphones at the output ports can take advantage of the increased sensitivity and gain.
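The cascaded-gain arithmetic compounds multiplicatively: each stage's differential gain of ~10 (20 dB) multiplies the previous stage's output. A minimal sketch (the three-stage figure is illustrative, not from the abstract):

```python
import math

# Each LPA stage provides a differential pressure gain of ~10 (20 dB),
# so n cascaded stages give a gain of 10**n, i.e., 20*n dB.
def cascade_gain(stages, per_stage=10.0):
    """Total linear gain of a chain of identical amplifier stages."""
    return per_stage ** stages

gain_3 = cascade_gain(3)               # three stages: 1000x
gain_3_db = 20 * math.log10(gain_3)    # 60 dB
```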
10:00–10:15 Break
10:15
4aEA7. Investigation of piezoelectric bimorph bender transducers to
generate and receive shear waves. Andrew R. McNeese, Kevin M. Lee,
Megan S. Ballard, Thomas G. Muir (Appl. Res. Labs., The Univ. of Texas
at Austin, 10000 Burnet Rd., Austin, TX 78758, mcneese@arlut.utexas.
edu), and R. Daniel Costley (U.S. Army Engineer Res. and Development
Ctr., Vicksburg, MS)
This paper further demonstrates the ability of piezoceramic bimorph
bender elements to preferentially generate and receive near-surface shear
waves for in situ sediment characterization measurements, in terrestrial as
well as marine clay soils. The bimorph elements are housed in probe transducers that can manually be inserted into the sediment and are based on the
work of Shirley [J. Acoust. Soc. Am. 63(5), 1643–1645 (1978)] and of
Richardson et al. [Geo.—Marine Letts. 196–203 (1997)]. The transducers
can discretely generate and receive horizontally polarized shear waves,
within their bimorph directivity patterns. The use of multiple probes allows
one to measure the shear wave velocity and attenuation parameters in the
sediment of interest. Measured shear wave data on a hard clay terrestrial
soil, as well as on soft marine sediments, are presented. These parameters
along with density and compressional wave velocity define the elastic moduli (Poisson’s ratio, shear modulus, and bulk modulus) of the sediment,
which are of interest in various areas of geophysics, underwater acoustics,
and geotechnical engineering. Discussion will focus on use of the probes in
both terrestrial and marine sediment environments. [Work supported by
ARL:UT Austin.]
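The moduli mentioned above follow from the measured wave speeds and density via the standard isotropic-elasticity relations. The numbers below are assumed, illustrative values for a soft marine sediment, not data from the probes:

```python
# Standard isotropic relations between wave speeds, density, and moduli.
rho = 1800.0            # bulk density, kg/m^3 (assumed)
vp, vs = 1600.0, 80.0   # compressional and shear speeds, m/s (assumed)

G = rho * vs**2                                    # shear modulus, Pa
K = rho * (vp**2 - 4.0 * vs**2 / 3.0)              # bulk modulus, Pa
nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))   # Poisson's ratio

# Soft sediments with very low shear speed have nu approaching 0.5.
```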
10:30
4aEA8. Multi-mode seismic source for underground application. Abderrhamane Ounadjela (Sonic, Schlumberger, 2-2-1 Fuchinobe,
Sagamihara, Kanagawa 252-0206, Japan, ounadjela1@slb.com), Henri Pierre Valero, Jean-Christophe Auchere (Sonic, Schlumberger, Sagamihara-Shi, Japan), and Olivier Moyal (Sonic, Schlumberger, Clamart, France)
A new multi-mode downhole acoustic source has been designed to fulfill the
requirements of the oil business. Three acoustic modes of radiation, i.e., monopole, dipole, and quadrupole, are considered to assess the
properties of the oil reservoir. Because of the geometry of the well, it is challenging to design an efficient, effective, and powerful device. This new
source uses an apparatus to convert the axial motion of four motors distributed in azimuth into a radial one. To make this conversion
effective, the axial-to-radial transformation is performed
by a rod rolling on a cone; this conversion minimizes friction losses and is very effective. The conversion apparatus is also exploited to
match the acoustic impedance of the surrounding medium. This paper describes the new design along with the intensive modeling that allowed
optimization of this multi-mode source device. Experimental data are in good agreement with numerical modeling.
10:45
4aEA9. Sound characteristics of the caxirola when used by different
uninstructed users. Talita Pozzer and Stephan Paul (UFSM, Tuiuti, 925.
Apto 21, Santa Maria, RS 97015661, Brazil, talita.pozzer@eac.ufsm.br)
While originally developed to be the official musical instrument of the
2014 Soccer World Cup, the caxirola was banned from the stadiums because it could
be thrown onto the field by angry spectators. Nevertheless, the caxirola was still used outside the stadiums, so an investigation
already begun into the acoustics of the caxirola was concluded. At a previous ASA meeting, we presented the sound power level (SWL) of the caxirola only for the two most
typical ways of use. Here we present data on the sound pressure level close
to the user's ears (SPLcue) and the SWL, both measured in a reverberation
room, from 30 subjects who used the caxirola according to their own understanding.
The total SPLcue was found to vary from 78 dB(A) up to 95 dB(A), and
the global SWL of the caxirola varies from 72 dB to 84 dB. The distribution is not normal; the SWL has a median of 79 dB(A), very similar
to the result obtained in the previous study. The SPLcue and the SPL
measured for calculating the SWL differ, probably due to variation in the
distance between the source and the user's ear, which sometimes creates near-field
conditions.
11:00
4aEA10. A micro-machined hydrophone using the piezoelectric-gate-of-field-effect-transistor for low frequency sounds. Min Sung, Kumjae Shin
(Dept. of Mech. Eng., Pohang Univ. of Sci. and Technology (POSTECH),
PIRO 416, POSTECH, San31, Hyoja-dong, Nam-gu, Pohang, Kyungbuk
790784, South Korea, smmath2@postech.ac.kr), Cheeyoung Joh (Underwater Sensor Lab., Agency for Defense Development, Changwon, Kyungnam, South Korea), and Wonkyu Moon (Dept. of Mech. Eng., Pohang
Univ. of Sci. and Technology (POSTECH), Pohang, Kyungbuk, South
Korea)
A micro-sized piezoelectric body for a miniaturized hydrophone is
known to have limitations at low frequencies due to its high impedance and
low sensitivity. In this study, a new transduction mechanism named
PiGoFET (piezoelectric gate of field effect transistor) is devised so that its
application to a miniaturized hydrophone can overcome the limits of
the micro-sized piezoelectric body. PiGoFET transduction is realized by combining a field effect transistor with a small piezoelectric body on
its gate. A micro-machined silicon membrane of 2 mm diameter was connected to the small piezoelectric body so that acoustic pressure can apply
appropriate forces to the body on the FET gate. The electric field from the
deformed piezoelectric body modulates the FET channel current directly;
the transduction thus transfers the sound pressure to the source–drain
current effectively at very low frequencies with a micro-sized piezoelectric body. Based on this concept, a hydrophone was fabricated by
micro-machining and calibrated at low frequencies using the comparison method to investigate its performance. [Research funded by MRCnD.]
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA C/D, 8:00 A.M. TO 10:20 A.M.
Session 4aPAa
Physical Acoustics, Underwater Acoustics, Signal Processing in Acoustics, Structural Acoustics and
Vibration, and Noise: Borehole Acoustic Logging and Micro-Seismics for Hydrocarbon Reservoir
Characterization
Said Assous, Cochair
Geoscience, Weatherford, East Leake, Loughborough LE126JX, United Kingdom
David Eccles, Cochair
Weatherford, Geoscience, Loughborough, United Kingdom
Chair’s Introduction—8:00
Invited Papers
8:05
4aPAa1. Generalized collar waves and their characteristics. Xiuming Wang, Xiao He, and Xiumei Zhang (State Key Lab. of
Acoust., Inst. of Acoust., 21 4th Northwestern Ring Rd., Haidian District, Beijing 100190, China, wangxm@mail.ioa.ac.cn)
A good acoustic logging while drilling (ALWD) tool is difficult to design because of collar waves that propagate along the
tool; such waves are always present in ALWD. Collar wave arrivals can strongly interfere with formation compressional
waves when picking wave slowness. In past years, considerable research has been devoted to suppressing collar waves in order
to accurately pick P- and S-wave slowness, yet the accuracy of the obtained slowness values remains a problem. In this work, numerical and physical experiments are conducted to tackle collar wave propagation problems, the physics of collar wave propagation is elaborated, and a generalized collar wave concept is proposed. It is shown that collar waves are more complex than commonly assumed, consisting of two
kinds: direct collar waves and indirect collar waves. Both make ALWD data difficult to process for formation wave slowness picking. Because of the drill string structure, the complicated collar waves cannot be
effectively suppressed with a groove isolator alone.
8:20
4aPAa2. Characterizing the nonlinear interaction of S (shear) and P (longitudinal) waves in reservoir rocks. Thomas L. Szabo
(Biomedical Dept., Boston Univ., 44 Cummington Mall, Boston, MA 02215, tlszabo@bu.edu), Thomas Gallot (Sci. Inst., Univ. of the
Republic, Montevideo, Uruguay), Alison Malcolm, Stephen Brown, Dan Burns, and Michael Fehler (Earth Resources Lab., Massachusetts Inst. of Technol., Cambridge, MA)
The nonlinear elastic response of rocks is known to be caused by internal microstructure, particularly cracks and fluids. In order to
quantify this nonlinearity, this paper presents a method for characterizing the interaction of two nonresonant traveling waves: a low-amplitude P-wave probe and a high-amplitude, lower-frequency S-wave pump with their particle motions aligned. We measure changes in
the arrival time of the P-wave probe as it passes through the perturbation created by a traveling S-wave pump in a sample of room-dry
Berea sandstone (15 × 15 × 3 cm). The velocity measurements are made at times too short for the shear wave to reflect back from the
bottom of the sample and interfere with the measurement. The S-wave pump induces strains of 0.3–2.2 × 10⁻⁶, and we observe
changes in the P-wave probe arrival time of up to 100 ns, corresponding to a change in elastic properties of 0.2%. By changing the relative time delay between the probe and pump signals, we record the measured changes in travel time of the P-wave probe to recover the
nonlinear parameters β ~ 10² and δ ~ 10⁹ at room temperature. This work significantly broadens the applicability of dynamic acoustoelastic testing by utilizing both S and P traveling waves.
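The conversion from arrival-time shift to relative velocity change is a one-line calculation. In the sketch below, the path length matches the 15-cm sample, but the P-wave speed is an assumed, typical value for room-dry Berea sandstone; the abstract does not state it.

```python
# Back-of-envelope conversion of a measured arrival-time shift to a
# relative velocity change, dt/t0 = -dc/c (assumed values noted above).
L = 0.15               # probe propagation path, m (sample is 15 cm long)
vp = 2600.0            # assumed P-wave speed for room-dry Berea, m/s
t0 = L / vp            # unperturbed travel time, ~58 microseconds
dt = 100e-9            # maximum observed arrival-time change, s
dc_over_c = -dt / t0   # relative velocity change, ~ -0.17%
```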
8:35
4aPAa3. A case study of multipole acoustic logging in heavy oil sand reservoirs. Peng Liu, Wenxiao Qiao, Xiaohua Che, Ruijia
Wang, Xiaodong Ju, and Junqiang Lu (State Key Lab. of Petroleum Resources and Prospecting, China Univ. of Petroleum (Beijing),
No. 18, Fuxue Rd., Changping District, Beijing, Beijing 102249, China, liupeng198712@126.com)
The multipole acoustic logging tool (MPAL) was tested in the heavy oil sand reservoirs of Canada. Compared with nearby shales, the
P-wave slowness of heavy oil sands does not change appreciably, remaining at about 125 μs/ft, while the dipole shear slowness decreases
significantly to 275 μs/ft. The heavy oil sands have a Vp/Vs value of less than 2.4. The slowness and amplitude of the dipole shear wave are
good lithology discriminators, differing greatly between heavy oil sands and shales. The heavy oil sand reservoirs are anisotropic: the crossover in the fast and slow dipole shear wave dispersion curves indicates that the anisotropy is induced by
unbalanced horizontal stress in the region.
8:50
4aPAa4. Borehole sonic imaging applications. Jennifer Market (Weatherford, 19819 Hampton Wood Dr, Tomball, TX 77377, jennifer.market@weatherford.com)
The advent of azimuthal logging-while-drilling (LWD) sonic tools has opened up a wealth of near-real-time applications. Azimuthal
images of compressional and shear velocities allow for geosteering, fracture identification, stress profiling, production enhancement, and
3D wellbore stability analysis. Combining borehole sonic images with electrical, gamma ray, and density images yields a detailed picture of the near- and far-wellbore nature of the stress field and resultant fracturing. A brief review of the physics of azimuthal sonic logging will be presented, paying particular attention to azimuthal resolution and depth of investigation. Examples of combined
interpretations of sonic, density, and electrical images will be shown to illustrate fracture characterization, unconventional reservoir
completion planning, and geosteering. Finally, recommendations for the optimized acquisition of borehole sonic images will be
discussed.
Contributed Papers
9:05
9:35
4aPAa5. Numerical simulations of an electromagnetic actuator in a lowfrequency range for dipole acoustic wave logging. Yinqiu Zhou, Penglai
Xin, and Xiuming Wang (Inst. of Acoust., Chinese Acad. of Sci., 21 North
4th Ring Rd., Haidian District, Beijing 100190, China, zhouyinqiu@mail.
ioa.ac.cn)
4aPAa7. Borehole acoustic array processing methods: A review. Said
Assous and Peter Elkington (GeoSci., Weatherford, East Leake, Loughborough LE126JX, United Kingdom, said.assous@eu.weatherford.com)
9:20
4aPAa6. Phase moveout method for extracting flexural mode dispersion
and borehole properties. Said Assous, David Eccles, and Peter Elkington
(GeoSci., Weatherford, Weatherford, East Leake, Loughborough, United
Kingdom, david.eccles@eu.weatherford.com)
Among the dispersive modes encountered in acoustic well logging applications is the flexural mode associated with dipole source excitations whose
low frequency asymptote provides the only reliable means of determining
shear velocity in slow rock formations. We have developed a phase moveout
method for extracting flexural mode dispersion curves from with excellent
velocity resolution; the method is entirely data-driven, but in combination
with a forward model able to generate theoretical dispersion curves, we are
able to address the inverse problem and extract formation and borehole
properties in addition to the rock shear velocity. The concept is demonstrated using data from isotropic and anisotropic formations.
9:50
4aPAa8. Classifying and removing monopole mode propagating
through drill collar. Naoki Sakiyama (Schlumberger K.K., 2-18-3-406,
Bessho, Hachio-ji 192-0363, Japan, NSakiyama@slb.com), Alain Dumont
(Schlumberger K.K., Kawasaki, Japan), Wataru Izuhara (Schlumberger
K.K., Inagi, Japan), Hiroaki Yamamoto (Schlumberger K.K., Kamakura, Japan), Makito Katayama (Schlumberger K.K., Yamato, Japan), and Takeshi
Fukushima (Schlumberger K.K., Hachio-ji, Japan)
Understanding characteristics of the acoustic wave propagating through
drill collars is important for formation evaluation with logging-while-drilling (LWD) sonic tools. Knowing the frequency-slowness information of
different types of the wave propagating through the collar, we can minimize
the unwanted wave propagating through the collar by processing and
robustly identify formation compressional and shear arrivals. Extensional
modes of the steel drill collar are generally dispersive and range from 180
ls/m to 400 ls/m depending on the frequency band. A fundamental torsional mode of the drill collar is nondispersive, but its slowness is sensitive
to the geometry of the drill collar. Depending on the geometry and shear
modulus of the material, the slowness of the torsional mode can be slower
than 330 ls/m. For identifying slowness of the formation arrivals, those different slownesses of the wave propagating through the collar need to be
identified separately from those of the wave propagating through formations. Examining various types of the acoustic wave propagating through a
drill collar, we determined that the waves can be properly muted by processing for the semblance of waveforms acquired with LWD sonic tools.
10:05–10:20 Panel Discussion
2255
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2255
4a THU. AM
In dipole acoustic logging, transducers are required to work in a low frequency range, such as 0.5–5 kHz, to measure shear wave velocities so as to
accurately analyze the anisotropy parameters of formations. In this paper, an
electromagnetic actuator is designed for more effective low-frequency excitations than conventional piezoelectric bender-bar transducers. A numerical
model has been set up to simulate electromagnetic actuators to generate
flexural waves. The Finite Element Method (FEM) has been applied to simulating the radiation modes and harmonic responses of the actuator in a
fluid, such as air and water. In the frequency range of 0–5 kHz, the first ten
vibration modes are simulated and analyzed. The simulation results of 3-D
harmonic responses of the sound field, such as the deformation, acoustic
sound pressure, and directivity pattern, have been conducted to evaluate the
radiation performance. From the simulation results, it is concluded that the
second asymmetric mode at 670 Hz could be excited more easily than the
others. This oscillated-vibration mode is useful to be applied in a dipole
source. The frequency response curve is broad and flat and the electromagnetic actuator is beneficial to generate the wideband signal in a required low
frequency range, especially below 1 kHz.
In this talk, we review the different borehole acoustic array methods and
compare their effectiveness with simulated and real waveform examples.
Starting from the slowness-time coherence (STC) method and the weighted
semblance (WSS) method, we cover many other common dispersive processing
approaches, including Prony’s method, maximum entropy (ARMA) methods,
predictive array processing, and the matrix pencil technique. We also
discuss phase-based methods, including phase minimization and coherency
maximization.
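As context for the comparison above, the core of the STC method is a semblance scan over candidate slowness S and arrival time T; a common textbook form is sketched below. The notation (M receiver waveforms x_m at inter-receiver spacing d, time window of length T_w) is introduced here for illustration and is not taken from the abstract:

```latex
\rho(S,T) \;=\;
\frac{\displaystyle \int_{T}^{T+T_w} \Bigl|\sum_{m=0}^{M-1} x_m\bigl(t + S\,m\,d\bigr)\Bigr|^{2}\, dt}
     {\;M \displaystyle \int_{T}^{T+T_w} \sum_{m=0}^{M-1} \bigl|x_m\bigl(t + S\,m\,d\bigr)\bigr|^{2}\, dt}
```

The semblance ρ approaches 1 when the trial slowness S matches the moveout of a coherent arrival across the array; peaks in the (S, T) plane then give the arrival slownesses.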
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA C/D, 10:30 A.M. TO 12:00 NOON
Session 4aPAb
Physical Acoustics: Topics in Physical Acoustics I
Josh R. Gladden, Cochair
Physics & NCPA, University of Mississippi, 108 Lewis Hall, University, MS 38677
William Slaton, Cochair
Physics & Astronomy, The University of Central Arkansas, 201 Donaghey Ave, Conway, AR 72034
Contributed Papers
10:30
4aPAb1. Faraday waves on a two-dimensional periodic substrate. C. T.
Maki (Phys., Hampden-Sydney College, 1 College Rd., Hampden-Sydney,
VA 23943, MakiC15@hsc.edu), Peter Rodriguez, Purity Dele-Oni, Pei-Chuan Fu, and R. Glynn Holt (Mech. Eng., Boston Univ., Boston, MA)
A vertically oscillating body of liquid will exhibit Faraday waves when
forced above a threshold interface acceleration amplitude. The patterns and
their wavelengths at driving frequencies of order 100 Hz are well known in
the literature. However, wave interactions influenced by periodic structures
on a driving substrate are less well-studied. We report results of a Faraday
experiment with a specific periodically structured substrate in the strong
coupling regime where the liquid depth is of the order of the structure
height. We observe patterns and pattern wavelengths versus driving frequency over the range of 50–350 Hz. These observations may be of interest
in situations where Faraday waves appear or are applied.
10:45
4aPAb2. Substrate interaction in ranged photoacoustic spectroscopy of
layered samples. Logan S. Marcus, Ellen L. Holthoff, and Paul M. Pellegrino (U.S. Army Res. Lab., 2800 Powder Mill Rd., RDRL-SEE-E, Adelphi,
MD 20783, loganmarcus@gmail.com)
Photoacoustic spectroscopy (PAS) is a useful monitoring technique that
is well suited for ranged detection of condensed materials. Ranged PAS has
been demonstrated using an interferometer as the sensor. Interferometric
measurement of photoacoustic phenomena focuses on the measurement of
changes in path length of a probe laser beam. That probe beam measures,
without discrimination, the acoustic, thermal, and physical changes to the
excited sample and the layer of gas adjacent to the surface of the solid sample. For layered samples, the photoacoustic response of the system is influenced by the physical properties of the substrate as well as the sample under
investigation. We will discuss the effect that substrate absorption of the excitation source has on the spectra collected in PAS. We also discuss the role
that the vibrational modes of the substrate have in photoacoustic signal
generation.
11:00
4aPAb3. Difference frequency scattered waves from nonlinear interactions of a solid sphere. Chrisna Nguon (Univ. of Massachusetts Lowell, 63
Hemlock St., Dracut, MA 01826, chrisna_Nguon@student.uml.edu), Max
Denis (Mayo Clinic, Rochester, MN), Kavitha Chandra, and Charles
Thompson (Univ. of Massachusetts Lowell, Lowell, MA)
In this work, the generation of difference frequency waves arising from
the interaction of dual-incident beams on a solid sphere is considered. The
high-frequency incident beams induce a radiation force on the fluid-saturated sphere, causing the scatterer to vibrate. The relative contributions of the difference-frequency sound and the radiation-force pressure are of particular interest. The scattered pressures due to the two primary waves are
obtained as solutions to the Kirchhoff–Helmholtz integral equation for the
fluid–solid boundary. Due to the contrasting material properties between the
host fluid and solid sphere, high-order approximations are used to evaluate
the integral equation.
11:15
4aPAb4. Effect of surface irregularities on the stability of a Stokes boundary layer. Katherine Aho, Jenny Au, Charles Thompson, and Kavitha Chandra
(Elec. and Comput. Eng., Univ. of Massachusetts Lowell, 1 University Ave,
Lowell, MA 01854, katherine_aho@student.uml.edu)
In this work, we examine the impact that wall surface roughness has
on the stability of an oscillatory Stokes boundary layer. The temporal
growth of three-dimensional disturbances excited by wall height variations
is of particular interest. Floquet theory is used to identify the linearly unstable
regions in parameter space. It is shown that disturbances become unstable at a
critical value of the Taylor number for a given surface curvature. The case
of oscillatory flow in a two-dimensional rigid walled channel is considered
in detail.
11:30
4aPAb5. Novel optoacoustic source for arbitrarily shaped acoustic
wavefronts. Weiwei Chan, Yuanxiang Yang, Manish Arora, and Claus-Dieter Ohl (Phys. and Appl. Phys., Nanyang Technolog. Univ., Nanyang
Link 21 School of Physical and Mathematical Sci. Nanyang Technolog.
University, Singapore 637371, Singapore, chan0700@e.ntu.edu.sg)
We present a novel approach to generate arbitrary acoustic wavefronts
using the optoacoustic effect on custom-designed PDMS substrates. PDMS
blocks are cast into the desired shape with a 3D-printed mold and coated
with a layer of an optical absorber. An acoustic wavefront corresponding to the
geometry of the coated surface is generated by exposing this structure to a nanosecond laser pulse (Nd:YAG, λ = 532 nm). For a spherical shell design, pressure pulses with amplitudes up to 6.1 bar peak to peak and frequencies >30 MHz
could be generated. By utilizing other geometries, we focus the acoustic
waves from different sections of the transmitter onto a single focal point at
different time delays, thus permitting generation of a double-peak acoustic
pulse from a single laser pulse. Further modification of the structure permits
the design of multi-foci, multi-peak acoustic pulses from a single optical
pulse.
11:45
4aPAb6. Accuracy of local Kramers–Kronig relations between material
damping and dynamic elastic properties. Tamas Pritz (Budapest Univ. of
Technol. and Economics, Apostol u 23, Budapest 1025, Hungary, tampri@
eik.bme.hu)
The local Kramers–Kronig (KK) relations are the differential form
approximations of the general KK integral equations linking the damping
properties (loss modulus or loss factor) and dynamic modulus of elasticity
(shear, bulk, etc.) of linear solid viscoelastic materials. The local KK
relations are not exact; their accuracy is known to depend on the rate of frequency variation of the material’s dynamic properties. The accuracy of
the local KK relations is investigated in this paper under the assumption that
the frequency dependence of the loss modulus obeys a simple power law. It
is shown by analytic calculations that the accuracy of prediction of the local
KK relations is better than 10% if the exponent in the loss modulus-
frequency function is smaller than 0.35. This conclusion supports the result
of an earlier numerical study. Some experimental data verifying the theoretical results will be presented. The conclusions drawn in the paper can easily
be extended to acoustic wave propagation, namely to the accuracy of local
KK relations between attenuation and dispersion of phase velocity.
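For reference, the local KK approximation discussed above is commonly written, for a dynamic modulus M(ω) = M′(ω) + iM″(ω), in the standard textbook form below (this form is not quoted from the paper):

```latex
M''(\omega) \;\approx\; \frac{\pi}{2}\,\frac{\mathrm{d}M'(\omega)}{\mathrm{d}\ln\omega}
```

If the loss modulus M″(ω) follows a power law ω^n, the abstract’s conclusion states that the prediction error of this differential form stays below 10% for exponents n < 0.35.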
THURSDAY MORNING, 30 OCTOBER 2014
MARRIOTT 1/2, 8:30 A.M. TO 12:00 NOON
Session 4aPP
Psychological and Physiological Acoustics: Physiological and Psychological Aspects of Central Auditory
Processing Dysfunction I
Frederick J. Gallun, Cochair
National Center for Rehabilitative Auditory Research, Portland VA Medical Center, 3710 SW US Veterans Hospital Rd.,
Portland, OR 97239
Adrian KC Lee, Cochair
Box 357988, University of Washington, Seattle, WA 98195
Chair’s Introduction—8:30
Invited Papers
8:35
4aPP1. Auditory processing disorder: Clinical and international perspective. David R. Moore (Commun. Sci. Res. Ctr., Cincinnati
Children’s Hospital, 240 Albert Sabin Way, Rm. S1.603, Cincinnati, OH 45229, david.moore2@cchmc.org)
APD may be considered a developmental, hearing, or neurological disorder, depending on etiology, but in all cases it is a listening
difficulty without an abnormality of pure tone sensitivity. It has been variously defined as a disorder of the central auditory system associated with impaired spatial hearing, auditory discrimination, temporal processing, and performance with competing or degraded sounds.
Clinical testing typically examines perception, intelligibility and ordering of both speech and non-speech sounds. While deficits in
higher-order cognitive, communicative, and language functions are excluded in some definitions, recent consensus accepts that these
functions may be inseparable from active listening. Some believe that APD in children is predominantly or exclusively cognitive in origin, while others insist that true APD has its origins within the auditory brainstem. However, children or their carers presenting at clinics
typically complain of difficulty hearing speech in noise, remembering or understanding instructions, and attending to sounds. APD usually occurs alongside other developmental disorders (e.g., language impairment) and may be indistinguishable from them. Consequently,
clinicians are uncertain how to diagnose or manage APD; both test procedures and interventions vary widely, even within a single clinic.
Effective remediation primarily consists of improving the listening environment and providing communication devices.
9:05
4aPP2. Caught in the middle: The central problem in diagnosing auditory-processing disorders in adults. Larry E. Humes (Indiana
Univ., Dept. Speech & Hearing Sci., Bloomington, IN 47405-7002, humes@indiana.edu)
It is challenging to establish the existence of higher-level auditory-processing disorders in military veterans with mild Traumatic
Brain Injury (TBI). Yet, mild TBI appears to be a highly prevalent disorder among U.S. veterans returning from recent military conflicts
in Iraq and Afghanistan. Recent prevalence estimates for mild TBI among these military veterans, for example, suggest a rate of 7–9% [Carlson, K. F. et al. (2011), “Prevalence, assessment and treatment of mild Traumatic Brain Injury and Posttraumatic Stress Disorder:
a systematic review of the evidence,” J. Head Trauma Rehabil., 26, 103–115]. A key factor in diagnosing central components for auditory-processing disorders may lie in the potentially confounding influences of concomitant peripheral auditory and cognitive dysfunction
in many veterans with TBI. This situation is strikingly similar to that observed in many older adults. Many older adults, for example, exhibit peripheral hearing loss and typical cognitive-processing deficits often associated with healthy aging. These concomitant problems
make the diagnosis of centrally located auditory-processing problems in older adults extremely difficult. After building a case for many
similarities between young veterans with mild TBI and older adults with presbycusis, this presentation will focus on several of the lessons learned from research with older adults. [Work supported, in part, by research grant R01 AG008293 from the NIA.]
9:35
4aPP3. Lack of a coherent theory limits the diagnosis and prognostic value of the central auditory processing disorder. Anthony
T. Cacace (Commun. Sci. & Disord., Wayne State Univ., 207 Rackham, 60 Farnsworth, Detroit, MI 48202, cacacea@wayne.edu) and
Dennis J. McFarland (Lab. of Neural Injury and Repair, Wadsworth Labs, NYS Health Dept., Albany, NY)
Spanning almost six decades, CAPD, defined as a modality-specific perceptual dysfunction not due to peripheral hearing loss, still
remains controversial and requires further development if it is to become a useful clinical entity. Early attempts to quantify the effects of
central auditory nervous system lesions based on the use of filtered-speech material, dichotic presentation of digits, and various nonspeech tests have generally been abandoned due to lack of success. Site-of-lesion approaches have given way to functional considerations, whereby attempts to understand underlying processes, improve specificity of diagnosis, and delineate modality-specific (auditory)
disorders from “non-specific supramodal dysfunctions” like those related to attention and memory have begun to fill the gap. Furthermore, because previous work was generally limited to auditory tasks alone, functional dissociations could not be established; consequently, the ability to demonstrate the modality-specific nature of the observed deficits has been compromised, further limiting progress in this
area. When viewed as a whole, including information from consensus conferences, organizational guidelines, representative studies,
etc., what is conspicuously absent is a well-defined theory that permeates all areas of this domain, including the neural substrates of auditory processing. We will discuss the implications of this shortcoming and propose ways to move forward in a meaningful manner.
10:05–10:30 Break
10:30
4aPP4. Cochlear synaptopathy and neurodegeneration in noise and aging: Peripheral contributions to auditory dysfunction with
normal thresholds. Sharon G. Kujawa (Dept. of Otology and Laryngology, Harvard Med. School and Massachusetts Eye and Ear Infirmary, Massachusetts Eye and Ear Infirmary, 243 Charles St., Boston, MA 02114, sharon_kujawa@meei.harvard.edu)
Declining auditory performance in listeners with normal audiometric thresholds is often attributed to changes in central circuits,
based on the widespread view that normal thresholds indicate a lack of cochlear involvement. Recent work in animal models of noise
and aging, however, demonstrates that there can be functionally important loss of sensory inner hair cell–afferent fiber communications
that go undetected by conventional threshold metrics. We have described a progressive cochlear synaptopathy that leads to proportional
neural loss with age, well before loss of hair cells or age-related changes in threshold sensitivity. Similar synaptic and neural losses occur
after noise, even when thresholds return to normal. Since the IHC-afferent fiber synapse is the primary conduit for information to flow
from the cochlea to the brain, and since each of these cochlear nerve fibers makes synaptic contact with one inner hair cell only, these
losses should have significant perceptual consequences, even if thresholds are preserved. The prevalence of such pathology in the human
is likely to be high, underscoring the importance of considering peripheral status when studying central contributions to auditory performance declines. [Research supported by R01 DC 008577 and P30 DC 05029.]
11:00
4aPP5. Quantifying supra-threshold sensory deficits in listeners with normal hearing thresholds. Barbara Shinn-Cunningham (Biomedical Eng., Boston Univ., 677 Beacon St., Boston, MA 02215-3201, shinn@bu.edu), Hari Bharadwaj, Inyong Choi, Hannah Goldberg
(Ctr. for Computational Neurosci. and Neural Technol., Boston Univ., Boston, MA), Salwa Masud, and Golbarg Mehraei (Speech and
Hearing BioSci. and Technol., Harvard/MIT, Boston, MA)
There is growing suspicion that some listeners with normal-hearing thresholds may be suffering from a specific form of sensory deficit—a loss of afferent auditory nerve fibers. We believe such deficits manifest behaviorally in conditions where perception depends
upon precise spectro-temporal coding of supra-threshold sound. In our lab, we find striking inter-subject differences in perceptual ability
even among listeners with normal hearing thresholds who have no complaints of hearing difficulty and have never sought clinical intervention. Among such ordinary listeners, those who perform relatively poorly on selective attention tasks (requiring the listener to focus
on one sound stream presented amidst competing sound streams) also exhibit relatively weak temporal coding in subcortical responses
and have poor thresholds for detecting fine temporal cues in supra-threshold sound. Here, we review the evidence for supra-threshold
hearing deficits and describe measures that reveal this sensory loss. Given our findings in ordinary adult listeners, it stands to reason that
at least a portion of the listeners who are diagnosed with central auditory processing dysfunction may suffer from similar sensory deficits, explaining why they have trouble communicating in many everyday social settings.
11:30
4aPP6. Neural correlates of central auditory processing deficits in the auditory midbrain in an animal model of age-related hearing loss. Joseph P. Walton (Commun. Sci. and Disord., Univ. of South Florida, 4202 Fowler Ave., PCD 1017, Tampa, FL 33620, jwalton1@usf.edu)
Age-related hearing loss (ARHL), clinically referred to as presbycusis, affects over 10 million Americans and is considered to be the
most common communication disorder in the elderly. Presbycusis can be associated with at least two underlying etiologies, a decline in
cochlear function resulting in sensorineural hearing loss, and deficits in auditory processing within the central auditory system. Previous
psychoacoustic studies have revealed that aged human listeners display deficits in temporal acuity that worsen with the addition of background noise. Spectral and temporal acuity is essential for following the rapid changes in frequency and intensity that comprise most natural sounds including speech. The perceptual analysis of complex sounds depends to a large extent on the ability of the auditory system
to follow and even sharpen neural encoding of rapidly changing acoustic signals, and the inferior colliculus (IC) is a key auditory nucleus involved in temporal and spectral processing. In this talk, I will review neural correlates of temporal and signal-in-noise processing
at the level of the auditory midbrain in an animal model of ARHL. Understanding the neural substrate of these perceptual deficits will
assist in its diagnosis and rehabilitation, and be crucial to further advances in the design of hearing aids and therapeutic interventions.
THURSDAY MORNING, 30 OCTOBER 2014
SANTA FE, 8:00 A.M. TO 10:20 A.M.
Session 4aSCa
Speech Communication: Subglottal Resonances in Speech Production and Perception
Abeer Alwan, Cochair
Dept. of Electrical Eng., UCLA, 405 Hilgard Ave., Los Angeles, CA 90095
Steven M. Lulich, Cochair
Speech and Hearing Sciences, Indiana University, 4789 N White River Drive, Bloomington, IN 47404
Mitchell Sommers, Cochair
Psychology, Washington University, Campus Box 1125, 1 Brookings Drive, Saint Louis, MO 63130
Chair’s Introduction—8:00
Invited Papers
8:05
4aSCa1. The role of subglottal acoustics in speech production and perception. Mitchell Sommers (Psych., Washington Univ., Saint Louis, MO), Abeer Alwan (Elec. Eng., UCLA, Los Angeles, CA), and Steven Lulich (Dept. of Speech and Hearing Sci., Indiana University, Bloomington, IN, slulich@indiana.edu)
In this talk, we present an overview of subglottal acoustics, with emphasis on the significant anatomical structures that define subglottal resonances, and we present results from our experiments incorporating subglottal resonances into automatic speaker normalization and speech recognition technologies. Speech samples used in the modeling and perception studies were obtained from a new speech
corpus (the UCLA-WashU subglottal database) of simultaneous microphone and (subglottal) accelerometer recordings of 50 adult
speakers of American English (AE). We will discuss new findings about the Young’s Modulus of tracheal soft tissue, the viscosity of tracheal cartilage, and the effect of going from a circular cross-section to a rectangular cross-section in the conus elasticus. We also present
results from studies demonstrating a small, but significant, role of subglottal resonances in discriminating speaker height and of the interaction between subglottal resonances and formants in height discrimination.
8:25
4aSCa2. The effect of subglottal acoustics on vocal fold vibration. Ingo R. Titze (National Ctr. for Voice and Speech, 136 South
Main St., Ste. 320, Salt Lake City, UT 84101-3306, ingo.titze@ncvs2.org) and Ingo R. Titze (Dept. of Commun. Sci. and Disord., Univ.
of Iowa, Iowa City, IA)
Acoustic pressures above and below the vocal folds produce a push-pull action on the vocal folds which can either help or hinder
vocal fold vibration. The key variable is acoustic reactance, the energy-storage part of the complex acoustic impedance. For the subglottal airway, inertive (positive) reactance does not help vocal fold vibration, but helps to skew the glottal airflow waveform for high frequency harmonic excitation. Compliant (negative) reactance, on the contrary, helps vocal fold vibration but does not skew the
waveform. Thus, the benefit of subglottal reactance is mixed. For supraglottal reactance, the benefit is additive. Inertive supraglottal reactance helps vocal fold vibration and skews the waveform, whereas compliant supraglottal reactance does neither. The effects will be
demonstrated with source-filter interactive simulation.
8:45
4aSCa3. Impact of subglottal resonances on bifurcations and register changes in laboratory models of phonation. David Berry,
Juergen Neubauer, and Zhaoyan Zhang (Surgery, UCLA, 31-24 Rehab, Los Angeles, CA 90095-1794, daberry@ucla.edu)
Many laboratory studies of phonation have failed to fully specify the subglottal system employed during research. Many of these
same studies have reported a variety of nonlinear phenomena, such as bifurcations and vocal register changes. While such phenomena
are often presumed to result from changes in the biomechanical properties of the larynx, such phenomena may also be a manifestation of
coupling between the voice source and the subglottal tract. Using laboratory models of phonation, a variety of examples will be given of
nonlinear phenomena induced by both laryngeal and subglottal mechanisms. Moreover, using tracheal tube lengths commonly reported
in the literature, it will be shown that most of the nonlinear phenomena commonly reported in voice production may be replicated solely
on the basis of the acoustical resonances of the subglottal system. Finally, recommendations will be given regarding the design of laboratory experiments that may allow laryngeally induced bifurcations to be distinguished from subglottally induced bifurcations.
9:05–9:25 Break
9:25
4aSCa4. Subglottal ambulatory monitoring of vocal function to improve voice disorder assessment. Robert E. Hillman, Daryush
Mehta, Jarrad H. Van Stan (Ctr. for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, One Bowdoin Square,
11th Fl., Boston, MA 02114, daryush.mehta@alum.mit.edu), Matías Zañartu (Dept. of Electron. Eng., Universidad Técnica Federico
Santa María, Valparaíso, Chile), Marzyeh Ghassemi, and John V. Guttag (Comput. Sci. and Artificial Intelligence Lab., Massachusetts
Inst. of Technol., Cambridge, MA)
Many common voice disorders are chronic or recurring conditions that are likely to result from inefficient and/or abusive patterns of
vocal behavior, referred to as vocal hyperfunction. The clinical management of hyperfunctional disorders would be greatly enhanced by
the ability to monitor and quantify detrimental vocal behaviors during an individual’s activities of daily life. This presentation will provide an update about ongoing work that is using a miniature accelerometer on the subglottal neck surface to collect a large set of ambulatory data on patients with hyperfunctional voice disorders (before and after treatment) and matched control subjects. Three types of
analysis approaches are being employed in an effort to identify the best set of measures for differentiating among hyperfunctional and
normal patterns of vocal behavior: (1) previously developed ambulatory measures of vocal function that include vocal dosages; (2)
measures based on estimates of glottal airflow that are extracted from the accelerometer signal using a vocal-system model; and (3) classification based on machine learning approaches that have been used successfully in analyzing long-term recordings of other physiologic
signals (e.g., electrocardiograms).
9:45
4aSCa5. Do subglottal resonances lead to quantal effects resulting in the features [back] and [low]?: A review. Helen Hanson
(ECE Dept., Union College, 807 Union St., Schenectady, NY 12308, helen.hanson@alum.mit.edu) and Stefanie Shattuck-Hufnagel
(Speech Commun. Group, Res. Lab. of Electronics, Massachusetts Inst. of Technol., Cambridge, MA)
A question of general interest is why languages have the sound categories that they do. K. N. Stevens proposed the Quantal Theory
of phonological contrasts, suggesting that regions of discontinuity in the articulatory-acoustic mapping serve as category boundaries. H.
M. Hanson and K. N. Stevens [Proc. ICPhS, 182–185, 1995] modeled the interaction of subglottal resonances with the vocal-tract filter,
showing that when a changing supraglottal formant strays into the territory of a stationary tracheal formant, a discontinuity in supraglottal formant frequency and attenuation of the formant peak occurs. They suggested that vowel space and quality could thus be affected.
K. N. Stevens [Acoustic Phonetics, MIT Press, 1998] went further, musing that because the first and second subglottal resonances lead
to instabilities in supraglottal formant frequency and amplitude, vowel systems would benefit by avoiding vowels with formants at these
frequencies. Avoiding the first subglottal resonance would naturally lead to the division of vowels into those with a low vs. non-low
tongue body; avoiding the second would lead to the division of vowels into those having a back vs. front tongue body. We will review
subsequent research that offers substantial support for this hypothesis, justifying inclusion of the effects of subglottal resonances in phonological models.
Contributed Paper
10:05
4aSCa6. Relationship between lung volumes and subglottal resonances.
Natalie E. Duvanenko (Speech and Hearing Sci., Indiana Univ., 2416 Cibuta
Court, West Lafayette, IN 47906, nduvanen@umail.iu.edu) and Steven M.
Lulich (Speech and Hearing Sci., Indiana Univ., Bloomington, IN)
Subglottal resonances are dependent on the anatomical structure of the
lungs, but efforts to detect changes in subglottal resonances throughout an
utterance have failed to show any effect of lung volume. In this study, we
present the results of an experiment investigating the relationship between
lung volumes and subglottal resonances. The pulmonary subdivisions for
several speakers were established using a whole-body plethysmograph. Subsequently, lung volume and subglottal resonances were recorded simultaneously using a spirometer and an accelerometer while the speakers produced
long sustained vowels.
THURSDAY MORNING, 30 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 12:00 NOON
Session 4aSCb
Speech Communication: Learning and Acquisition of Speech (Poster Session)
Maria V. Kondaurova, Chair
Otolaryngology – Head & Neck Surgery, Indiana University School of Medicine, 699 Riley Hospital Drive – RR044,
Indianapolis, IN 46202
All posters will be on display from 8:00 a.m. to 12:00 noon. To allow contributors the opportunity to see other posters, the contributors
of odd-numbered papers will be at their posters from 8:00 a.m. to 10:00 a.m. and contributors of even-numbered papers will be at their
posters from 10:00 a.m. to 12:00 noon.
Contributed Papers
4aSCb1. Labels facilitate the learning of competing abstract perceptual
mappings. Shannon L. Heald, Nina Bartram, Brendan Colson, and Howard
C. Nusbaum (Psych., Univ. of Chicago, 5848 S. University Ave., B406, Chicago, IL 60637, smbowdre@uchicago.edu)
4aSCb3. A comparison of acoustic and perceptual changes in children’s
productions of American English /r/. Sarah Hamilton (Commun. Sci. and
Disord., Univ. of Cincinnati, Cincinnati, OH), Casey Keck (Commun. Sci.
and Disord., Univ. of Cincinnati, 408 Glengarry Way, Fort Wright, KY
41011, stewarce@mail.uc.edu), and Suzanne Boyce (Commun. Sci. and
Disord., Univ. of Cincinnati, Cincinnati, OH)
Listeners are able to quickly adapt to synthetic speech, even though it
contains misleading and degraded acoustic information. Previous research
has shown that testing and training on a given synthesizer using only novel
words leads listeners to form abstract or generalized knowledge for how that
particular synthesizer maps different acoustic patterns onto their pre-existing phonological categories. Prior to consolidation, this knowledge has been
shown to be susceptible to interference. Given that labels have been argued
to stabilize abstract ideas in working memory and to help learners form category representations that are robust against interference, we examined how
learning for a given synthesizer is affected by labeled or unlabeled immediate training on an additional synthesizer, which uses a different acoustic to
phonetic mapping. We demonstrated that the learning of an additional synthesizer interferes with the retention of a previously learned synthesizer but
that this is ameliorated if the additional synthesizer is labeled. Our findings
indicate that labeling may be important in facilitating daytime learning for
competing abstract perceptual mappings prior to consolidation and suggest
that speech perception may be best understood through the lens of perceptual categorization.
Speech-language pathologists rely primarily on their perceptual judgments when evaluating whether children have made progress in speech
sound therapy. Speech sound perception in normal listeners has been characterized as largely categorical, such that slight articulatory changes may go
unnoticed unless they reach a specific acoustic signature assigned to a different category. While perception may be categorical, acoustic phenomena
are largely measured in continuous units, meaning that there is a potential
mismatch between the two methods of recording change. Clinicians, using
perceptual categorization, commonly report that some children make no
progress in therapy, yet acoustically, the children’s productions may be
shifting toward acceptable acoustic characteristics. Tracking subtle changes in
the acoustic signal during therapy could therefore prevent these clients
from being discharged due to a perceived lack of progress. This poster evaluates acoustic changes compared to perceptual changes in children’s productions of the American English phoneme /r/ after receiving speech
therapy using ultrasound supplemented with telepractice home practice. Preliminary data indicate that there are significant differences between participants’ acoustic values of /r/ and perceptual ratings by clinicians.
4aSCb2. When more is not better: Variable input in the formation of
robust word representations. Andrea K. Davis (Linguist, Univ. of Arizona, 1076 Palomino Rd., Cloverdale, CA 95425, davisak@email.arizona.
edu) and LouAnn Gerken (Linguist, Univ. of Arizona, Tucson, AZ)
4aSCb4. Perceptual categorization of /r/ for children with residual
sound errors. Sarah M. Hamilton, Suzanne Boyce, and Lindsay Mullins
(Commun. Sci. and Disord., Univ. of Cincinnati, 3433 Clifton Ave., Cincinnati, OH 45220, hamilsm@mail.uc.edu)
A number of studies with infants and with young children suggest that
hearing words produced by multiple talkers helps learners to develop more
robust word representations (Richtsmeier et al., 2009; Rost & McMurray,
2009, 2010). Native adult learners, however, do not seem to derive the same
benefit from multiple talkers. A word-learning study with native adults was
conducted, and a second study with second language learners will have been
completed by this fall. Native-speaking participants learned four new English-like minimal-pair words either from a single talker or from multiple talkers. They were then tested with (a) a perceptual task, in which they
saw the two pictures corresponding to a minimal pair, heard one of the pair,
and had to choose the picture corresponding to the word they heard; (b) a
speeded production task, in which they had to repeat the words they had just
learned as quickly as possible. Unlike infants, the two groups did not differ
significantly in perceptual accuracy. However, the single talker group had
significantly higher variance in the speeded production task. It is hypothesized that this greater variance is due to individual differences in learning
strategies, which are masked when learning from multiple talkers.
2261
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Many studies have found that children with resistant speech sound errors
(RSSD) show (1) atypical category boundaries, and (2) difficulty identifying
whether their own productions are correct or misarticulated. Historically,
perceptual category discrimination tests use synthesized speech representing
incremental change along an acoustic continuum, while tests of a child’s
self-perception are confined to categorical correct vs. error choices. Thus, it
has not been possible to explore the boundaries of RSSD children’s categorical self-perception in any detail or to customize perceptual training for therapeutic purposes. Following an observation of Hagiwara (1995), who noted
that typical speakers show F3 values for /r/ between 80% and 60% of their
average vowel F3, Hamilton et al. (2014) found that this threshold largely
replicates adult listener judgments, such that productions above and below
the 80% threshold sounded consistently “incorrect” or “correct,” but that
productions closest to the 80% threshold were given more ambiguous judgments. In this study, we apply this notion of an F3 threshold to investigate
whether children with RSSD respond like adult listeners when presented with
natural-speech stimuli along a continuum of correct and incorrect /r/. Preliminary results indicate that children with RSSD do not make adult-like decisions when categorizing /r/ productions.
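The F3-based threshold described above can be sketched as a simple classifier. The function name, the 5% ambiguity margin, and the exact decision rule are illustrative assumptions, not the authors' procedure:

```python
def classify_r(f3_r_hz, mean_vowel_f3_hz, threshold=0.80, margin=0.05):
    """Judge an /r/ production from its F3 relative to the speaker's
    average vowel F3. Following Hagiwara's (1995) observation,
    productions whose F3 falls well below ~80% of the average vowel F3
    tend to be heard as "correct," those above as "incorrect," and
    those near the threshold draw ambiguous judgments."""
    ratio = f3_r_hz / mean_vowel_f3_hz
    if abs(ratio - threshold) < margin:
        return "ambiguous"
    return "correct" if ratio < threshold else "incorrect"
```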
168th Meeting: Acoustical Society of America
2261
4a THU. AM
8:00
4aSCb5. A child-specific compensatory mechanism in the acquisition of
English /s/. Hye-Young Bang, Meghan Clayards, and Heather Goad (Linguist, McGill Univ., 1085 Dr. Penfield, Montreal, QC H3A 1A7, Canada,
hye-young.bang@mail.mcgill.ca)
This study examines corpus data involving word-initial [sV] productions
from 79 children aged 2–5 (Edwards & Beckman 2008) in comparison with
a corpus of word-initial [sV] syllables produced by 13 adults. We quantified
target-like /s/ production using spectral moment analysis on the frication
portion (high center of gravity, low SD, and low skewness). In adults, we
found that higher vowels (low F1 after normalization) were associated with
more target-like /s/ productions, likely reflecting a tighter constriction. In
children, older subjects produced more target-like outputs overall. However,
unlike adults, children’s outputs before low vowels were more target-like,
regardless of age. This is unexpected given the articulatory challenges of
producing /s/ in low vowel contexts. Further investigation found that high
F1 (low vowels) was associated with louder /s/ (relative to V) and more
encroachment of sibilant noise on the following vowel (high harmonics-to-noise ratio). This finding suggests that young children may be increasing airflow during /s/ production to compensate for a less tight constriction when
the jaw must lower for the following vowel. Thus, children may adopt a
more accessible mechanism, different from adults, to compensate for their
immature lingual gestures, possibly in an attempt to maximize phonological
contrasts in word-initial position.
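The spectral moment analysis referred to above (center of gravity, SD, and skewness of the frication spectrum) amounts to treating the magnitude spectrum as a distribution over frequency; this is a generic sketch, not the study's exact analysis settings:

```python
import numpy as np

def spectral_moments(magnitude, freqs_hz):
    """First three spectral moments of a frication spectrum.
    Target-like /s/ shows a high center of gravity (CoG), low SD,
    and low skewness."""
    p = np.asarray(magnitude, dtype=float)
    p = p / p.sum()                                  # normalize to a distribution
    f = np.asarray(freqs_hz, dtype=float)
    cog = np.sum(f * p)                              # 1st moment: center of gravity
    sd = np.sqrt(np.sum((f - cog) ** 2 * p))         # 2nd moment: spread
    skew = np.sum((f - cog) ** 3 * p) / sd ** 3      # 3rd moment, normalized
    return cog, sd, skew
```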
4aSCb6. Moving targets and unsteady states: “Shifting” productions of
sibilant fricatives by young children. Patrick Reidy (Dept. of Linguist,
The Ohio State Univ., 24A Oxley Hall, 1712 Neil Ave., Columbus, OH
43210, patrick.francis.reidy@gmail.com)
The English voiceless sibilant /s/–/ʃ/ contrast is one that many children
do not acquire until their adolescent years. This protracted acquisition may
be due to the high level of articulatory control necessary for the successful production of an adult-like sibilant, which involves the coordination
of lingual, mandibular, and pulmonic gestures. Poor coordination among
these gestures can result in the acoustic properties of the noise source or the
vocal tract filter changing throughout the timecourse of the frication, to the
extent that the phonetic percept of the frication noise changes across its duration. The present study examined such “shifting” productions of sibilant
fricatives by native English-acquiring two- through five-year-old children,
which were identified from the Paidologos corpus as those productions
where the interval of frication was transcribed phonetically as a sequence of
fricative sounds. There were two types of shift in frication quality: (1) a
gradual change in the resonant frequencies in the spectrogram, suggesting a
repositioning of the oral constriction; and (2) an abrupt change in the level
of the frication, suggesting a switch in the noise source. Work is underway
to develop measures that differentiate these two types of shift, and that suggest their underlying articulatory causes.
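The two shift types might be separated along the lines of the following sketch; the frame-level input tracks, thresholds, and decision rule are illustrative assumptions, not the measures under development:

```python
import numpy as np

def classify_shift(peak_freq_track_hz, level_track_db,
                   jump_db=10.0, drift_hz=500.0):
    """Heuristic separating the two shift types described above.
    An abrupt frame-to-frame jump in frication level suggests a
    switch in the noise source; a large drift in the peak resonant
    frequency suggests repositioning of the oral constriction."""
    if np.max(np.abs(np.diff(level_track_db))) > jump_db:
        return "abrupt"   # level jump: likely noise-source switch
    if abs(peak_freq_track_hz[-1] - peak_freq_track_hz[0]) > drift_hz:
        return "gradual"  # resonance drift: constriction repositioning
    return "none"
```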
4aSCb7. Effects of spectral smearing on sentence recognition by adults
and children. Joanna H. Lowenstein (Otolaryngology-Head & Neck Surgery, Ohio State Univ., 915 Olentangy River Rd., Ste. 4000, Columbus, OH
43212, lowenstein.6@osu.edu), Eric Tarr (Audio Eng. Technol., Belmont
Univ., Nashville, TN), and Susan Nittrouer (Otolaryngology-Head & Neck
Surgery, Ohio State Univ., Columbus, OH)
Children’s speech perception depends on dynamic formant patterns
more than adults’ does. Spectral smearing of formants, as found with the
broadened auditory filters associated with hearing loss, should disproportionately affect children because of this greater dependence on formant patterns. Making formants more prominent, on the other hand, may result in
improved recognition. Adults (40) and children age 5 and 7 (20 of each age)
listened to 75 four-word syntactically correct, semantically anomalous sentences processed so that excursions around the mean spectral slope were
sharpened by 50% (making individual formants more prominent), flattened
by 50% (smearing individual formants), or left unchanged. These sentences
were presented to children and to half of the adults in speech-shaped noise
at 0 dB SNR. The rest of the adults listened to the sentences at -3 dB SNR.
Results indicate that all listeners did more poorly with the smeared formants, with 5-year-olds showing the largest decrement in performance at 0
dB SNR. However, adults at -3 dB SNR showed an even greater decrement
in performance. Making formants more prominent did not improve recognition, perhaps due to harmonic-formant mismatches. Thus, there is reason to
explore processing strategies that might enhance formant prominence for
listeners with hearing loss.
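The spectral manipulation described above (scaling excursions around the mean spectral slope by ±50%) can be sketched as follows; the linear-trend fit standing in for the mean spectral slope is an assumption, not the authors' processing chain:

```python
import numpy as np

def scale_spectral_excursions(log_spectrum_db, factor):
    """Scale excursions of a log-magnitude spectrum around its mean
    spectral slope: factor=1.5 sharpens formant peaks by 50%,
    factor=0.5 flattens (smears) them, and factor=1.0 leaves the
    spectrum unchanged."""
    s = np.asarray(log_spectrum_db, dtype=float)
    x = np.arange(len(s))
    trend = np.polyval(np.polyfit(x, s, 1), x)   # mean spectral slope
    return trend + factor * (s - trend)
```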
4aSCb8. Acoustic-phonetic characteristics of older children’s spontaneous speech in interactions in conversational and clear speaking styles.
Valerie Hazan, Michèle Pettinato, Outi Tuomainen, and Sonia Granlund
(Speech, Hearing and Phonetic Sci., UCL, Chandler House, 2, Wakefield
St., London WC1N 1PF, United Kingdom, v.hazan@ucl.ac.uk)
This study investigated (a) the acoustic-phonetic characteristics of spontaneous speech produced by talkers aged 9–14 years in an interactive (diapix) task with an interlocutor of the same age and gender (NB condition)
and (b) the adaptations these talkers made to clarify their speech when
speech intelligibility was artificially degraded for their interlocutor (VOC
condition). Recordings were made for 96 child talkers (50 F, 46 M); the
adult reference values came from the LUCID corpus recorded under the
same conditions [Baker and Hazan, J. Acoust. Soc. Am. 130, 2139–2152
(2011)]. Articulation rate, pause frequency, fundamental frequency, vowel
area, and mean intensity (1–3 kHz range) were analyzed to establish
whether they had reached adult-like values and whether young talkers
showed similar clear speech strategies as adults in difficult communicative
situations. In the NB condition, children (including the 13–14 year group)
differed from adults in terms of their articulation rate, vowel area, median
F0, and intensity. Child talkers made adaptations to their speech in the VOC
condition, but adults and children differed in their use of F0 range, vowel
hyperarticulation, and pause frequency as clear speech strategies. This suggests that further developments in speech production take place during later
adolescence. [Work supported by ESRC.]
4aSCb9. Acoustic characteristics of infant-directed speech to normal-hearing and hearing-impaired twins with hearing aids and cochlear
implants: A case study. Maria V. Kondaurova, Tonya R. Bergeson-Dana
(Otolaryngol. – Head & Neck Surgery, Indiana Univ. School of Medicine,
699 Riley Hospital Dr. – RR044, Indianapolis, IN 46202, mkondaur@iupui.
edu), and Neil A. Wright (The Richard and Roxelyn Pepper Dept. of Commun. Sci. and Disord., Northwestern Univ., Evanston, IL)
The study examined acoustic characteristics of maternal speech to normal-hearing (NH) and hearing-impaired (HI) twins who received hearing
aids (HAs) or a unilateral cochlear implant (CI). A mother of female-male
NH twins (NH-NH; age 15.8 months), a mother of two male twins, one NH
and another HI with HAs (NH-HA; age 11.8 months) and a mother of a NH
female twin and a HI male twin with a CI (NH-CI; age 14.8 months) were
recorded playing with their infants during three sessions across a 12-month
period. We measured pitch characteristics (normalized F0 mean, F0 range,
and F0 SD), utterance and pause duration, syllable number, and speaking
rate. ANOVAs demonstrated that speech to NH-NH twins was characterized
by lower, more variable pitch with greater pitch range as compared to
speech to NH-HA and NH-CI pairs. Mothers produced more syllables, had
faster speaking rate and longer utterance duration in speech to NH-NH than
the other pairs. The results suggest that the pediatric hearing loss in one sibling affects maternal speech properties to both NH and HI infants in the
same pair. Future research will investigate vowel space and lexical properties of IDS to three twin pairs as well as their language outcome measures.
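The pitch and rate measures analyzed above can be sketched as simple summaries of an F0 track; the function, its input format (NaN = unvoiced frame), and the omission of the study's normalization are assumptions:

```python
import numpy as np

def pitch_and_rate(f0_track_hz, utterance_dur_s, n_syllables):
    """F0 mean, F0 range, and F0 SD over voiced frames, plus
    speaking rate in syllables per second."""
    f0 = np.asarray(f0_track_hz, dtype=float)
    voiced = f0[~np.isnan(f0)]          # drop unvoiced (NaN) frames
    return {
        "f0_mean": voiced.mean(),
        "f0_range": voiced.max() - voiced.min(),
        "f0_sd": voiced.std(),
        "speaking_rate": n_syllables / utterance_dur_s,
    }
```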
4aSCb10. Effects of vowel position and place of articulation on voice
onset time in children: Longitudinal data. Elaine R. Hitchcock (Dept. of
Commun. Sci. and Disord., Montclair State Univ., 1515 BRd. St., Bloomfield, NJ 07444, hitchcocke@mail.montclair.edu) and Laura L. Koenig
(Dept. of Commun. Sci. and Disord., Long Island Univ., Queens, NY)
Voice onset time (VOT) has been found to vary according to phonetic
context, but past studies report varying magnitudes of effect, and no past
work has evaluated the degree to which such effects are consistent over time
for a single speaker. This study explores the relationships between vowel
position, consonant place of articulation [POA], and voice onset time
(VOT) in children, comparing the results to past adult work. VOT in CV/
CVC words was measured in nine children ages 5;3–7;6 every two-four
4aSCb11. Longitudinal data on the production of content versus function words in children’s spontaneous speech. Jeffrey Kallay and Melissa
A. Redford (Linguist, Univ. of Oregon, 1455 Moss St., Apt. 215, Eugene,
Oregon 97403, jkallay@uoregon.edu)
Allen and Hawkins (1978; 1980) were among the first to note rhythmic
differences in the speech of children and adults. Sirsa and Redford (2011)
found that rhythmic differences between younger and older children’s
speech was best accounted for by age-related differences in function word
production. In other on-going work (Redford, Kallay & Dilley) we found an
effect of age on the perceived prominence of function words in children’s
speech, but no effect on content words. The current longitudinal study investigated the effect of word class (content versus function words) on the development of reduction in terms of syllable duration and pitch range (a
correlate of accenting). Spontaneous speech was elicited for 3 years from 36
children aged 5;2–6;11 at time of first recording. There were effects of
word class (content > function) and of time on median duration, but no
interaction between these factors. The median duration decreased 13% in
function words from the 1st to 3rd year; a similar decrease (15%) was found
for content words. Pitch range only varied systematically with word class.
Other spectral measures are being collected to further investigate the development of reduction in children’s speech. [Work supported by NICHD.]
4aSCb12. Audiovisual speech integration development at varying levels
of perceptual processing. Kaylah Lalonde (Speech and Hearing Sci., Indiana Univ., 200 South Jordan Ave., Bloomington, IN 47405, klalonde@indiana.edu) and Rachael Frush Holt (Speech and Hearing Sci., Ohio State
Univ., Columbus, OH)
There are multiple mechanisms of audiovisual (AV) speech integration
with independent maturational time courses. This study investigated development of both basic perceptual and speech-specific mechanisms of AV
speech integration by examining AV speech integration development across
three levels of perceptual processing. Twenty-two adults and 24 6- to 8-year-old children completed three auditory-only and AV yes/no tasks varying only in the level of perceptual processing required to complete them:
detection, discrimination, and recognition. Both groups demonstrated benefits from matched AV speech and interference from mismatched AV speech
relative to auditory-only conditions. Adults, but not children, demonstrated
greater integration effects at higher levels of perceptual processing (i.e., recognition). Adults seem to rely on both general perceptual mechanisms of
speech integration that apply to all levels of perceptual processing and
speech-specific mechanisms of integration that apply when making phonetic
decisions and/or accessing the lexicon; 6- to 8-year-old children seem to
rely only on general perceptual mechanisms of AV speech integration. The
general perceptual mechanism allows children to attain the same degree of
AV benefit to detection and discrimination as adults, but the lack of a
speech-specific mechanism in children might explain why they attain less
AV recognition benefit than adults.
4aSCb13. Developmental and linguistic factors of audiovisual speech
perception across different masker types. Rachel Reetzke, Boji Lam,
Zilong Xie, Li Sheng, and Bharath Chandrasekaran (Commun. Sci. and Disord., Univ. of Texas at Austin, The University of Texas at Austin, 2504A
Whitis Ave., Austin, TX 78751, rreetzke@gmail.com)
Developmental and linguistic factors have been found to influence listeners’ ability to recognize speech-in-noise. However, there is a paucity of
evidence exploring how these factors modulate speech perception in
everyday listening situations, such as multisensory environments and backgrounds with informational maskers. This study assessed sentence recognition for 30 children (14 monolingual, 16 simultaneous bilingual; ages 6–10)
and 31 adults (21 monolingual, 10 simultaneous bilingual; ages 18–22).
Our experimental design included three within-subject variables: (a) masker
type: pink noise or two-talker babble, (b) modality: audio-only and audiovisual, and (c) signal-to-noise ratio (SNR): 0 to -16 dB. Results revealed that
across both modalities and noise types, adults performed better than children, and simultaneous bilinguals performed similarly to monolinguals. The
age effect was largest at the lowest SNRs of -12 and -16 dB in the audiovisual two-talker babble condition. These findings suggest that children experience greater difficulty in segregation of target speech in informational
maskers relative to adults, even with audiovisual cues. This may provide
evidence for children’s less developed higher-level cognitive strategies in
dealing with speech-in-noise (e.g., selective attention). Findings from the
second analysis suggest that despite two competing lexicons, simultaneous
bilinguals do not experience a speech perception-in-noise deficit relative to
monolinguals.
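Constructing speech-in-noise stimuli at a fixed SNR, as in the design above, amounts to scaling the masker against the target's power; this generic sketch is not the study's stimulus-generation code:

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale a masker (e.g., pink noise or two-talker babble) so the
    target-to-masker power ratio equals snr_db, then mix. Negative
    SNRs (e.g., -16 dB) make the masker stronger than the target."""
    pt = np.mean(np.square(target))
    pm = np.mean(np.square(masker))
    gain = np.sqrt(pt / (pm * 10.0 ** (snr_db / 10.0)))
    return np.asarray(target) + gain * np.asarray(masker)
```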
4aSCb14. Experience-independent effects of matching and non-matching visual information on speech perception. D. Kyle Danielson, Alison J.
Greuel, Padmapriya Kandhadai, and Janet F. Werker (Psych., Univ. of Br.
Columbia, 2136 West Mall, Vancouver, BC V6T 1Z4, Canada, kdanielson@psych.ubc.ca)
Infants are sensitive to the correspondence between visual and auditory
speech. Infants exhibit the McGurk effect, and matching audiovisual information may facilitate discrimination of similar consonant sounds in an
infant’s native language (e.g., Teinonen et al., 2008). However, because
most existing research in audiovisual speech perception has been conducted
using native speech sounds with infants in their first year of life, little work
has explored whether this link between the auditory and visual modalities of
speech perception arises due to experience with the native language. In the
present set of studies, English-learning six- and ten-month-old infants are
tested for discrimination of a non-English speech contrast following familiarization with matching and mismatching audiovisual speech. Furthermore,
the looking fixation behaviors of the two age groups are compared between
the two conditions. Although it has been demonstrated that infants in the
younger age range attend preferentially to the eye region when viewing
matched audiovisual speech and that infants in the older age range temporarily attend to the mouth region (Lewkowicz & Hansen-Tift, 2012), here
deviations in this behavior for matching and mismatching non-native speech
are examined (a link that has only been previously explored in the native
language (Tomalski et al., 2013)).
4aSCb15. Switched-dominance bilingual speech production: Continuous usage versus early exposure. Michael Blasingame and Ann R.
Bradlow (Linguist, Northwestern Univ., 2016 Sheridan Rd., Evanston, IL
60208, mblasingame@u.northwestern.edu)
Switched dominance bilinguals (i.e., “heritage speakers,” HS, with L2
rather than L1 dominance) have exhibited native-like heritage language (L1)
sound perception (e.g., Korean three-way VOT contrast discrimination by Korean HS; Oh, Jun, Knightly, & Au, 2003) and sound production (e.g., Spanish
VOT productions by Spanish HS; Au, Knightly, Jun, & Oh, 2002), but far
from native-like proficiency in other aspects of L1 function, including morphosyntax (Montrul, 2010). We investigated whether native-like L1 sound
production proficiency extended to heritage language sentence-in-noise intelligibility. We recorded English and Spanish sentences by Spanish HS (SHS)
and monolingual English controls (English only). Native listeners of each language transcribed these recordings under easy (-4 dB SNR) and hard (-8 dB
SNR) conditions. In easy conditions, SHS English and Spanish intelligibility
were not significantly different, yet in hard conditions, SHS English intelligibility was significantly higher than SHS Spanish intelligibility. Furthermore,
we observed no differences between SHS English and English-control intelligibility in either condition. These results suggest that, for SHS, while early Spanish
exposure provided some resistance to heritage language/L1 intelligibility degradation, the absence of continuous Spanish usage impacted intelligibility in
severely degraded conditions. In contrast, the absence of early English exposure was entirely overcome by later English dominance.
weeks for 10 months, for a total of 18 sessions yielding approximately
18,000 tokens for analysis. Bilabial and velar cognate pairs targeted a front-back vowel difference (/i/-/u/, /e/-/o/), while alveolar cognate pairs targeted
a mid high-low vowel difference (/o/-/ɑ/). VOT variability over time was
also evaluated. Preliminary results suggest that POA yields a robust pattern
of bilabial < alveolar < velar, but vowel effects are less clear. Vowel height
shows the most obvious effect with consistently longer VOT values
observed for mid high vowels. Front-back vowel comparisons yielded no
obvious differences. On the whole, contextual variations based on POA and
vowel context do not show clear correlations with overall VOT variation.
4aSCb16. Genetic variation in catechol-O-methyl transferase activity
impacts speech category learning. Han-Gyol Yi (Commun. Sci. and Disord., The Univ. of Texas at Austin, 2504 Whitis Ave., A1100, Austin, TX
78712, gyol@utexas.edu), W. T. Maddox (Psych., The Univ. of Texas at
Austin, Austin, TX), Valerie S. Knopik (Behavioral Genetics, Rhode Island
Hospital, Providence, RI), John E. McGeary (Providence Veterans Affairs
Medical Ctr. , Providence, RI), and Bharath Chandrasekaran (Commun. Sci.
and Disord., The Univ. of Texas at Austin, Austin, TX)
Learning non-native speech categories is a challenging task. Little is
known about the neurobiology underlying speech category learning. In
vision, two dopaminergic neurobiological learning systems have been identified: a rule-based reflective learning system mediated by the prefrontal cortex, wherein processing is under deliberative control, and an implicit
reflexive learning system mediated by the striatum. During speech learning,
successful learners initially use simple reflective rules but eventually
transition to a multidimensional reflexive strategy during later learning. We
use a neurocognitive-genetic approach to identify intermediate phenotypes
that modulate reflective brain function and examine their effects on speech
learning. We focus on the COMT Val158Met polymorphism, which is
linked to altered prefrontal function. The COMT-Val variant catabolizes dopamine more rapidly and is linked to poorer performance on prefrontally mediated tasks. Adults (Met-Met = 40; Met-Val = 75; Val-Val = 54) learned
to categorize non-native Mandarin tones over five blocks of feedback-based
training. Learning rates were the highest for the Met-Met genotype; the Val-Val genotype was associated with poorer overall learning. Poorer learning
indicates increased perseveration of reflective unidimensional rule use,
thereby preventing the transition to the reflexive system. We conclude that
genetic variation is an important source of individual differences in complex
phenotypes such as speech learning.
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA G, 9:00 A.M. TO 10:00 A.M.
Session 4aSPa
Signal Processing in Acoustics: Imaging and Classification
Grace A. Clark, Chair
Grace Clark Signal Sciences, 532 Alden Lane, Livermore, CA 94550
Contributed Papers
9:00
9:15
4aSPa1. Optimal smoothing splines improve efficiency of entropy imaging for detection of therapeutic benefit in muscular dystrophy. Michael
Hughes (Int. Med./Cardiology, Washington Univ. School of Medicine, 1632
Ridge Bend Dr., St Louis, MO 63108, mshatctrain@gmail.com), John
McCarthy (Dept. of Mathematics, Washington Univ., St. Louis, MO), Jon
Marsh (Int. Med./Cardiology, Washington Univ. School of Medicine, Saint
Louis, MO), and Samuel Wickline (Dept. of Mathematics, Washington
Univ., Saint Louis, MO)
4aSPa2. Waveform processing using entropy instead of energy: A quantitative comparison based on the heat equation. Michael Hughes (Int.
Med./Cardiology, Washington Univ. School of Medicine, 1632 Ridge Bend
Dr., St Louis, MO 63108, mshatctrain@gmail.com), John McCarthy (Mathematics, Washington Univ., St Louis, MO), Jon Marsh (Int. Med./Cardiology, Washington Univ. School of Medicine, Saint Louis, MO), and Samuel
Wickline (Mathematics, Washington Univ., Saint Louis, MO)
We have reported previously on sensitivity comparisons of signal energy
and several entropies to changes in skeletal muscle architecture in experimental muscular dystrophy before and after pharmacological therapeutic intervention [M. S. Hughes, IEEE Trans. UFFC. 54, 2291–2299 (2007)]. The study
was based on a moving window analysis of simple cubic splines that were fit
to the backscattered ultrasound and required that the radio frequency ultrasound (RF) be highly oversampled. The current study employs optimal
smoothing splines instead to determine the effect of analyzing the same data
with increasing levels of decimation. The RF data were obtained from
selected skeletal muscles of muscular dystrophy mice (mdx: dystrophin -/-)
that were randomly blocked into two groups: 4 receiving steroid treatment
over 2 weeks, and 4 untreated positive controls. Ultrasonic imaging was performed on day 15. All mice were anesthetized then each forelimb was imaged
in transverse cross sections using a Vevo-660 with a single-element 40 MHz
wobbler-transducer (model RMV-704, Visualsonics). The result of each scan
was a three-dimensional data set of 384 × 8192 × (number of frames) samples. We find the
equivalent sensitivity of this new approach for detecting treatment benefits as
before (p<0.03), but now at a decimated sampling rate slightly below the
Nyquist frequency. This implies that optimal smoothing splines are useful for
analysis of data acquired from point of care imaging devices where hardware
cost and power consumption must be minimized.
Virtually all modern imaging devices function by collecting electromagnetic or acoustic backscattered waves and using the energy carried by these
waves to determine pixel values that build up what is basically an “energy”
picture. However, waves also carry “information” that also may be used to
compute the pixel values in an image. We have employed several measures
of information, most sensitive being the “joint entropy” of the backscattered
wave and a reference signal. Numerous published studies have demonstrated
the advantages of “information imaging,” over conventional methods for
materials characterization and medical imaging. A typical study is comprised of repeated acquisition of backscattered waves from a specimen that
is changing slowly with acquisition time or location. The sensitivity of
repeated experimental observations of such a slowly changing quantity may
be defined as the mean variation (i.e., observed change) divided by mean
variance (i.e., observed noise). Assuming the noise is Gaussian and using
Wiener integration to compute the required mean values and variances, solutions to the heat equation may be used to express the sensitivity for joint
entropy and signal energy measurements. There always exists a reference
such that joint entropy has larger variation and smaller variance than the
corresponding quantities for signal energy, matching observations of several
studies. A general prescription for finding an “optimal” reference for the
joint entropy emerges, which has been validated in several studies.
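The joint entropy of a backscattered wave and a reference signal, central to the "information imaging" described above, can be estimated crudely from a joint amplitude histogram; this binning-based sketch is an assumption and stands in for the Wiener-integration machinery the authors use:

```python
import numpy as np

def joint_entropy(wave, reference, bins=16):
    """Joint entropy (in bits) of a backscattered waveform and a
    reference signal, estimated from a 2-D amplitude histogram."""
    hist, _, _ = np.histogram2d(wave, reference, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 log 0 = 0)
    return -np.sum(p * np.log2(p))
```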
9:30
9:45
4aSPa3. The classification of underwater acoustic target signals based
on wave structure and support vector machine. Qingxin Meng, Shie
Yang, and Shengchun Piao (Sci. and Technol. on Underwater Acoust. Lab.,
Harbin Eng. Univ., No. 145, Nantong St., Nangang District, Harbin City, Heilongjiang Province 150001, China, mengqingxin005@hrbeu.edu.cn)
4aSPa4. Determination of Room Impulse Response for synthetic data
acquisition and ASR testing. Philippe Moquin (Microsoft, One Microsoft
Way, Redmond, WA 98052, pmoquin@microsoft.com), Kevin Venalainen
(Univ. of Br. Columbia, Vancouver, BC, Canada), and Dinei A. Florêncio
(Microsoft, Redmond, WA)
Propeller sound is a distinctive feature of ship-radiated noise; its loudness and timbre are commonly used to identify ship types. Because loudness and timbre are reflected in the wave structure of the time series, wave-structure features can be extracted to classify underwater acoustic targets. This paper studies a method of feature-vector extraction for underwater acoustic signals based on wave structure. A nine-dimensional feature vector is constructed from signal statistics of zero-crossing wavelength, peak-to-peak amplitude, zero-crossing wavelength difference, and wave-train areas. A support vector machine (SVM) with a radial basis function (RBF) kernel is then applied as a classifier for two kinds of underwater acoustic target signals. With the penalty factor and RBF parameter properly set, the recognition rate exceeds 89.5%. Sea-test
data demonstrate the target-recognition ability of the method.
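Wave-structure statistics of the kind named above can be sketched for a zero-mean time series as follows; the paper's nine-dimensional feature definitions are not given here, so these specific statistics are assumptions:

```python
import numpy as np

def wave_structure_features(x):
    """Zero-crossing wavelength statistics and peak-to-peak amplitude
    of a zero-mean time series (a subset of wave-structure features)."""
    x = np.asarray(x, dtype=float)
    zc = np.where(np.diff(np.signbit(x)))[0]   # indices before sign changes
    wl = np.diff(zc)                           # samples between crossings
    return {
        "zc_wavelength_mean": float(wl.mean()),
        "zc_wavelength_sd": float(wl.std()),
        "peak_to_peak": float(x.max() - x.min()),
    }
```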
Automatic Speech Recognition (ASR) works best when the incoming speech signal matches the signals used for training. Training, however, may require
thousands of hours of speech, and it is impractical to directly acquire them
in a realistic scenario. Some improvement can be obtained by incorporating
typical building acoustics measurement parameters such as RT, Cx, LF,
etc., with limited gain. Instead, we estimate Room Impulse Responses
(RIRs), and convolve speech and noise signals with the estimated RIRs.
This produces realistic signals, which can then be processed by the audio
pipeline, and used for ASR training. In our research, we use rooms with
variable acoustics and repeatable source-receiver positions. The receivers
are microphone arrays making the relative phase and magnitude critical. A
standard mouth simulator for voice signals at various positions in the room
is under robot control. A limited corpus of speech data as well as noise sources is recorded, and the RIR at these 27 positions is determined using a variety of methods (chirp, MLS, impulse, and noise). The RIR convolved with
the “clean speech” is compared to the actual measurements. Test methods
used, differences from the measurements, and the difficulty of determining
the unique RIR will be presented.
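The synthetic-data step described above amounts to convolving clean speech with an estimated RIR; this minimal sketch assumes single-channel signals and truncation to the input length, whereas the actual pipeline uses microphone-array RIRs:

```python
import numpy as np

def synthesize_far_field(clean_speech, rir, noise=None):
    """Convolve "clean" close-talk speech with an estimated room
    impulse response (RIR) to synthesize realistic far-field signals
    for ASR training; optionally add a noise signal from the same
    room."""
    wet = np.convolve(clean_speech, rir)[: len(clean_speech)]
    if noise is not None:
        wet = wet + np.asarray(noise)[: len(wet)]
    return wet
```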
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA G, 10:15 A.M. TO 12:00 NOON
Session 4aSPb
Signal Processing in Acoustics: Beamforming, Spectral Estimation, and Sonar Design
Brian E. Anderson, Cochair
Geophysics Group, Los Alamos National Laboratory, MS D443, Los Alamos, NM 87545
R. Lee Culver, Cochair
ARL, Penn State University, PO Box 30, State College, PA 16804
Contributed Papers
10:15
10:30
4aSPb1. Quantifying the depth profile of time reversal focusing in elastic media. Brian E. Anderson, Marcel C. Remillieux, Timothy J. Ulrich,
and Pierre-Yves Le Bas (Geophys. Group (EES-17), Los Alamos National
Lab., MS D446, Los Alamos, NM 87545, bea@lanl.gov)
4aSPb2. Competitive algorithm blending for enhanced source separation of convolutive speech mixtures. Keith Gilbert (Elec. and Comput.
Eng., Univ. of Massachusetts Dartmouth, 36 Walnut St., Berlin, MA 01503,
kgilbert@umassd.edu), Karen Payton (Elec. and Comput. Eng., Univ. of
Massachusetts Dartmouth, N. Dartmouth, MA), Richard Goldhor, and Joel
MacAuslan (Speech Technol. & Appl. Res., Corp., Bedford, MA)
A focus of elastic energy on the surface of a solid sample can be useful
to nondestructively evaluate whether the surface or the near-surficial region
is damaged. Time reversal techniques allow one to focus energy in this manner. In order to quantify the degree to which a time reversal focus can probe
near-surficial features, the depth profile of a time reversal focus must be
quantified. This presentation will discuss numerical modeling and experimental results used to quantify the depth profile. [This work was supported
by the U.S. Dept. of Energy, Fuel Cycle R&D, Used Fuel Disposition (Storage) Campaign.]
This work investigates an adaptive filter network in which multiple blind
source separation methods are run in parallel, and the individual outputs are
combined to produce estimates of acoustic sources. Each individual algorithm makes assumptions about the environment (dimensions of enclosure,
reflections, reverberation, etc.) and the sources (speech, interfering noise,
position, etc.), which constitutes an individual hypothesis about the
observed microphone outputs. The goal of this competitive algorithm blending (CAB) approach is to achieve the performance of the “true” method,
i.e., the method that has full knowledge of the environment’s and the sources’ characteristics a priori, without any prior information. Results are given
for time-invariant, critically- and over- determined, convolutive mixtures of
168th Meeting: Acoustical Society of America
2265
speech and interfering noise sources, and the performance of the CAB
method is compared with the “true” method in both the transient adaptation
phase and in steady state.
10:45
4aSPb3. Structural infrasound signals in an urban environment. Sarah
McComas, Henry Diaz-Alvarez, Mike Pace, and Mihan McKenna (US
Army Engineer Res. and Development Ctr., 3909 Halls Ferry Rd., Vicksburg, MS 39180, sarah.mccomas@usace.army.mil)
Historically, infrasound arrays have been deployed in rural environments, where anthropogenic noise sources are limited. As interest in monitoring sources at local distances grows in the infrasound community, it will be vital to understand how to monitor infrasound sources in an urban environment. Arrays deployed in urban centers have to overcome decreased signal-to-noise ratio and the reduced amount of real estate available to deploy an array. To advance the understanding of monitoring infrasound sources in urban environments, we deployed local and regional infrasound arrays on building rooftops of the campus of Southern Methodist University (SMU) and collected data for one seasonal cycle. The data were evaluated for structural source signals (continuous-wave packets), and when a signal was identified, the back azimuth to the source was determined through frequency-wavenumber analysis. This information was used to identify hypothesized structural sources; these sources were verified through direct measurement, structural numerical modeling, and/or full waveform propagation modeling. Permission to publish was granted by Director, Geotechnical & Structures Laboratory.
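The back-azimuth estimation described above can be illustrated with a time-domain delay-and-sum analogue of frequency-wavenumber analysis: scan candidate arrival directions and pick the one whose aligned channel stack has maximum power. This is my hedged sketch, not the authors' processing chain; the array geometry, sound speed, and test pulse are all invented for the demo.

```python
import numpy as np

# Estimate the back azimuth of a plane wave crossing a small array by
# scanning candidate directions and stacking the aligned channels.
c = 343.0                                        # sound speed, m/s (assumed)
sensors = np.array([[0.0, 0.0], [50.0, 0.0],
                    [0.0, 50.0], [50.0, 50.0]])  # east/north positions, m
true_az = np.deg2rad(60.0)                       # back azimuth of the source

def prop_delays(az):
    """Relative arrival delays for a plane wave from back azimuth az."""
    u = -np.array([np.sin(az), np.cos(az)])      # propagation direction
    return sensors @ u / c

fs = 200.0
t = np.arange(0.0, 2.0, 1.0 / fs)
pulse = lambda tt: np.sin(2 * np.pi * 4.0 * tt) * np.exp(-((tt - 1.0) ** 2) / 0.05)
data = np.array([pulse(t - d) for d in prop_delays(true_az)])

# the stack aligned to the correct azimuth has maximum power
az_grid = np.deg2rad(np.arange(0.0, 360.0, 1.0))
power = [np.sum(sum(np.interp(t, t - d, ch)
                    for d, ch in zip(prop_delays(az), data)) ** 2)
         for az in az_grid]
best_az_deg = float(np.rad2deg(az_grid[int(np.argmax(power))]))
```

The scan recovers the 60° arrival direction because only the correct delay pattern brings all four channels into phase before summation.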
11:00
4aSPb4. Design of a speaker array system based on adaptive time reversal method. Gee-Pinn J. Too, Yi-Tong Chen, and Shen-Jer Lin (Dept. of
Systems and Naval Mechatronic Eng., National Cheng Kung Univ., No. 1
University Rd., Tainan 701, Taiwan, z8008070@email.ncku.edu.tw)
A system for focusing sound around desired locations using a speaker array of controlled sources is proposed. The main objective of this study is to increase the acoustic signal in certain locations, where the user is, while reducing it in certain other locations, by controlling the source signals. Based on adaptive time reversal theory, input weighting coefficients are evaluated for the speaker sources. Experiments and simulations with a speaker array of controlled sources are established in order to observe the distribution of the sound field under different boundary and control conditions. The results indicate that, based on the current algorithm, the difference in sound pressure level between the bright point and the dark point can be as high as 12 dB with an eight-speaker array system.
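The core of time-reversal weighting (drive each speaker with the phase conjugate of its transfer function to the bright point) can be sketched in a free-field toy model. This is my illustration, not the authors' adaptive algorithm: the geometry, frequency, and simple monopole Green's functions are invented, and the resulting contrast will differ from the 12 dB reported above.

```python
import numpy as np

# Free-field sketch: 8 monopole speakers, time-reversal weights toward
# a "bright" point; a "dark" point elsewhere receives much less energy.
c, f = 343.0, 1000.0
k = 2.0 * np.pi * f / c

speakers = np.column_stack([np.linspace(-0.7, 0.7, 8), np.zeros(8)])
bright = np.array([0.0, 2.0])
dark = np.array([0.8, 2.0])

def green(src, r):
    """Free-space monopole Green's function exp(ikd)/d."""
    d = np.linalg.norm(src - r, axis=-1)
    return np.exp(1j * k * d) / d

w = np.conj(green(speakers, bright))     # phase-conjugate (time-reversal) weights

def level_db(r):
    """Sound pressure level (dB, arbitrary reference) at point r."""
    return 20.0 * np.log10(np.abs(np.sum(w * green(speakers, r))))

contrast_db = level_db(bright) - level_db(dark)
```

At the bright point every weighted contribution arrives in phase, while at the dark point the phases are effectively scrambled, which is what produces the bright/dark level contrast.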
11:15
4aSPb5. Focusing the acoustic signal of a maneuvering rotorcraft. Geoffrey H. Goldman (U.S. Army Res. Lab., 2800 Powder Mill Rd., Adelphi,
MD 20783-1197, geoffrey.h.goldman.civ@mail.mil)
An algorithm was developed and tested to blindly focus the acoustic spectra of a rotorcraft that were blurred by time-varying Doppler shifts and other effects such as atmospheric distortion. First, the fundamental frequency generated by the main rotor blades of the rotorcraft was tracked using a fixed-lag smoother. Then, the frequency estimates were used to resample the data in time using interpolation. Next, the motion-compensated data were further focused using a technique based upon the phase gradient autofocus algorithm. The performance of the focusing algorithm was evaluated by analyzing the increase in the amplitude of the harmonics. For most of the data, the algorithm focused the harmonics between approximately 10 and 90 Hz to within 1–2 dB of an estimated upper bound obtained from conservation of energy and estimates of the Doppler shift. In addition, the algorithm was able to separate two closely spaced frequencies in the spectra of the rotorcraft. The algorithm developed can be used to preprocess data for classification, nulling, and tracking algorithms.
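The resampling step (warping time so the tracked, wobbling fundamental maps onto a constant tone) can be sketched as follows. This is my reconstruction in NumPy, not the author's code; the sample rate, nominal 100 Hz fundamental, and sinusoidal Doppler wobble are invented for the demo.

```python
import numpy as np

def doppler_resample(x, f_track, fs, f0):
    """Resample x at the instants where the accumulated cycles of the
    tracked fundamental f_track form a uniform grid for a constant f0."""
    t = np.arange(len(x)) / fs
    cycles = np.cumsum(f_track) / fs               # accumulated cycles
    uniform = np.arange(cycles[0], cycles[-1], f0 / fs)
    t_new = np.interp(uniform, cycles, t)          # when each cycle occurs
    return np.interp(t_new, t, x)

fs = 2000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
f_inst = 100.0 * (1.0 + 0.02 * np.sin(2 * np.pi * 0.5 * t))  # wobbling tone
x = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
y = doppler_resample(x, f_inst, fs, 100.0)

# after warping, the spectral peak sits at the nominal 100 Hz
win = np.hanning(len(y))
freqs = np.fft.rfftfreq(len(y), 1.0 / fs)
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(y * win)))]
```

Before warping, the tone's energy is smeared over roughly 98–102 Hz; after warping, it concentrates at a single bin, which is the "focusing" the abstract describes.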
11:30
4aSPb6. Representing the structure of underwater acoustic communication data using probabilistic graphical models. Atulya Yellepeddi (Elec. Engineering/Appl. Ocean Phys. and Eng., Massachusetts Inst. of Technology/Woods Hole Oceanographic Inst., 77 Massachusetts Ave., Bldg. 36-683, Cambridge, MA 02139, atulya@mit.edu) and James C. Preisig (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Woods Hole, MA)
Exploiting the structure in the output of the underwater acoustic communication channel in order to improve the performance of the communication
system is a problem that has received much recent interest. Methods such as
physical constraints and sparsity have been used to represent such structure
in the past. In this work, we consider representing the structure of the
received signal using probabilistic graphical models (more specifically Markov random fields), which capture the conditional dependencies amongst a
collection of random variables. In the frequency domain, the inverse covariance matrix of the received signal is shown to have a sparse structure. Under
the assumption that the signal may be modeled as a multivariate Gaussian
random variable, this corresponds to a Markov random field. It is argued
that the underlying cause of the structure is the cyclostationary nature of the
signal. In practice, the received signal is not exactly cyclostationary, but data from the SPACE08 acoustic communication experiment are used to demonstrate that field data exhibit exploitable structure. Finally, techniques
to exploit graphical model structure to improve the performance of wireless
underwater acoustic communication are briefly considered.
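The key property invoked above (zeros in the inverse covariance of a multivariate Gaussian encode conditional independence, i.e., a Markov random field) can be shown with a toy example. The chain structure and numbers below are my illustration, not data from the talk.

```python
import numpy as np

# Toy Gaussian Markov random field on a chain of 5 variables: the
# precision (inverse covariance) matrix J is tridiagonal, so each
# variable is conditionally independent of all non-neighbours given
# its neighbours, even though the covariance itself is dense.
n = 5
J = 2.0 * np.eye(n)
for i in range(n - 1):
    J[i, i + 1] = J[i + 1, i] = -0.8     # couple adjacent variables only

cov = np.linalg.inv(J)                   # dense: everything is correlated

# conditional (partial) correlations are read directly off J:
# rho_ij|rest = -J_ij / sqrt(J_ii * J_jj), zero wherever J_ij is zero
d = np.sqrt(np.diag(J))
partial_corr = -J / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
```

The marginal covariance `cov[0, 2]` is nonzero, yet the partial correlation between variables 0 and 2 given the rest vanishes exactly, mirroring the sparse-precision structure the abstract attributes to the received signal in the frequency domain.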
11:45
4aSPb7. Choice of acoustics signals family in multi-users environment.
Benjamin Ollivier, Frederic Maussang, and Rene Garello (ITI, Institut
Mines-Telecom / Telecom Bretagne - Lab-STICC, 655 Ave. du Technopole,
Plouzane 29200, France, benjamin.ollivier@telecom-bretagne.eu)
Our application concerns a system immersed in an underwater acoustic context, with Nt transmitters and Nr slowly moving receivers. The objective is for all receivers to detect the transmitted signals, in order to estimate the time of arrival (TOA) and thereby facilitate localization when several TOAs (more than 3) are available. We must choose a method to generate a number Ns of broad-band signals for Code Division Multiple Access (CDMA) modulation, which is especially well suited to our problem. This work is devoted to selecting Nt signals among the Ns available; the aim is to choose the most distinctly detectable ones. First, in a Doppler-free context, the signal selection criterion is based on the ratio between the maxima of the auto-correlation and the cross-correlation. Second, in the presence of Doppler, we rely on the ambiguity function, which represents the correlation function over several Doppler frequency shifts; the choice of Nt signals is then based on the ratio between the maxima of the auto-ambiguity and cross-ambiguity functions. In this paper, we highlight the relevance of these criteria (correlation, ambiguity function) for choosing the most appropriate signals as a function of the multi-user context.
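The Doppler-free selection criterion can be sketched concretely: score each candidate by its auto-correlation peak divided by its worst-case cross-correlation peak against the other candidates, then keep the best scorers. The greedy scoring rule and the random binary signals below are my illustrative assumptions, not the authors' actual signal family or method.

```python
import numpy as np

def max_xcorr(a, b):
    """Peak magnitude of the full cross-correlation of two sequences."""
    return float(np.max(np.abs(np.correlate(a, b, mode="full"))))

def select_signals(signals, n_keep):
    """Keep the n_keep signals with the largest ratio of
    auto-correlation peak to worst-case cross-correlation peak."""
    n = len(signals)
    auto = np.array([max_xcorr(s, s) for s in signals])
    worst_cross = np.array([
        max(max_xcorr(signals[i], signals[j]) for j in range(n) if j != i)
        for i in range(n)
    ])
    score = auto / worst_cross
    return sorted(np.argsort(score)[::-1][:n_keep].tolist())

rng = np.random.default_rng(0)
signals = [rng.choice([-1.0, 1.0], size=127) for _ in range(8)]  # Ns = 8
chosen = select_signals(signals, 4)                              # Nt = 4
```

With Doppler, the abstract replaces the correlation peaks by the corresponding auto- and cross-ambiguity peaks; the selection logic itself is unchanged.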
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA F, 8:00 A.M. TO 11:05 A.M.
Session 4aUW
Underwater Acoustics: Shallow Water Reverberation II
Brian T. Hefner, Chair
Applied Physics Laboratory, University of Washington, 1013 NE 40th Street, Seattle, WA 98105
Chair’s Introduction—8:00
Contributed Paper
8:05

4aUW1. SONAR Equation perspective on TREX13 measurements. Dajun Tang and Brian T. Hefner (Appl. Phys. Lab, Univ of Washington, 1013 NE 40th St., Seattle, WA 98105, djtang@apl.washington.edu)

Modeling shallow water reverberation is a problem that can be approximated as two-way propagation (including multiple forward scatter) and a single backward scatter. This can be effectively expressed in terms of the SONAR equation: RL = SL − 2 × TL + SS, where RL is the reverberation level, SL is the source level, TL is the one-way transmission loss, and SS is the integrated scattering strength. In order to understand the reverberation problem at the basic research level, both propagation and scattering physics need to be properly addressed. A major goal of TREX13 (Target and Reverberation EXperiment 2013) is to quantitatively investigate reverberation with sufficient environmental measurement to support full modeling of reverberation data. Along a particular reverberation track at the TREX13 site, TL and direct-path backscatter were separately measured. Environmental data were extensively collected along this track. This talk will bring together all the components of the SONAR equation measured separately at the TREX13 site to provide an assessment of the reverberation process along with environmental factors impacting each of the components.
Invited Papers
8:20
4aUW2. Environmental measurements collected during TREX13 to support acoustic modeling. Brian T. Hefner and Dajun Tang (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, hefner@apl.washington.edu)
The major goal of TREX13 (Target and Reverberation EXperiment 2013) was to quantitatively investigate reverberation with sufficient environmental measurements to support full modeling of reverberation data. The collection of environmental data to support reverberation modeling is usually limited by the large ranges (tens of km) involved, the temporal and spatial variability of the environment, and the time variation of towed source/receiver locations within this environment. In order to overcome these difficulties, TREX13 was
carried out in a 20 m deep shelf environment using horizontal line arrays mounted on the seafloor. The water depth and well controlled
array geometry allowed environmental characterization to be focused on the main beam of the array, i.e., along a track roughly 5 km
long and 500 m wide. This talk presents an overview of the efforts made to characterize the sea surface, water column, seafloor, and subbottom along this track to support the modeling of acoustic data collected over the course of the experiment. [Work supported by ONR
Ocean Acoustics.]
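The sonar-equation bookkeeping underlying these reverberation studies, RL = SL − 2 × TL + SS as introduced in 4aUW1 above, can be made concrete with a small numerical sketch. The levels and the 15 log r spreading law below are illustrative assumptions of mine, not measured TREX13 values.

```python
import math

def reverberation_level(SL, TL, SS):
    """RL = SL - 2*TL + SS (all in dB): two-way propagation to the
    scattering patch plus its integrated scattering strength."""
    return SL - 2.0 * TL + SS

# illustrative numbers only (not TREX13 data)
SL, SS = 210.0, -35.0                       # source level, scattering strength (dB)
for r_km in (1.0, 5.0, 10.0):
    TL = 15.0 * math.log10(r_km * 1000.0)   # assumed intermediate spreading law (dB)
    RL = reverberation_level(SL, TL, SS)
    print(f"{r_km:5.1f} km: RL = {RL:6.1f} dB")
```

The factor of 2 on TL is the two-way path to the scattering patch; the talk's point is that each term on the right-hand side was measured independently at the TREX13 site.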
8:40
4aUW3. Persistence of sharp acoustic backscatter transitions observed in repeat 400 kHz multibeam echosounder surveys offshore Panama City, Florida, over 1 and 24 months. Christian de Moustier (10dBx LLC, PO Box 81777, San Diego, CA 92138,
cpm@ieee.org) and Barbara J. Kraft (10dBx LLC, Barrington, New Hampshire)
The Target and Reverberation Experiment 2013 (TREX13), conducted offshore Panama City, FL, from April to June 2013, sought
to determine which environmental parameters contribute the most to acoustic reverberation and control sonar performance prediction
modeling for acoustic frequencies between 1 kHz and 10 kHz. In that context, a multibeam echosounder operated at 400 kHz was used
to map the seafloor relief and its high-frequency acoustic backscatter characteristics along the acoustic propagation path of the reverberation experiment. Repeat surveys were conducted a month apart, before and after the main reverberation experiment. In addition, repeat
surveys were conducted at 200 kHz in April 2014. Similar mapping work was also conducted in April 2011 during a pilot experiment
(GulfEx11) near the site chosen for TREX13. Both experiments revealed a persistent occurrence of sharp transitions from high to low
acoustic backscatter at the bottom of swales. Hypotheses are presented for observable differences in bathymetry and acoustic backscatter
in the overlap region between the GulfEx11 survey and the TREX13 surveys conducted 2 y apart. [Work supported by ONR 322 OA.]
Contributed Papers
9:00

4aUW4. Roughness measurement by laser profiler and acoustic scattering strength of a sandy bottom. Nicholas P. Chotiros, Marcia J. Isakson, Oscar E. Siliceo, and Paul M. Abkowitz (Appl. Res. Labs., Univ. of Texas at Austin, PO Box 8029, Austin, TX 78713-8029, chotiros@arlut.utexas.edu)

The roughness of a sandy seabed off Panama City, FL, was measured with a laser profiler. This was the site of the target and reverberation experiment of 2013 (TREX13), in which propagation loss and reverberation strength were measured. The area may be characterized as having small-scale roughness due to bioturbation overlaying larger sand ripples due to current activity. The area was largely composed of sand with shell hash, crossed by ribbons of softer sediment at regular intervals. The roughness measurements were concentrated in the areas where the ribbons intersected the designated sound propagation track. Laser lines projected on the sand were imaged by a high-definition video recorder. The video images were processed to yield bottom profiles in three dimensions. Finally, the roughness data are used to estimate acoustic bottom scattering strength. [Work supported by the Office of Naval Research, Ocean Acoustics Program.]

9:15

4aUW5. Seafloor sub-bottom imaging along the TREX reverberation track. Joseph L. Lopes, Rodolf Arrieta, Iris Paustian, Nick Pineda (NSWC PCD, 110 Vernon Ave., Panama City, FL 32407-7001, joseph.l.lopes@navy.mil), and Kevin Williams (Appl. Phys. Lab. / Univ. of Washington, Seattle, WA)

The Buried Object Scanning Sonar (BOSS) integrated into a Bluefin12 autonomous underwater vehicle was used to collect seafloor sub-bottom data along the TREX reverberation track. BOSS is a downward-looking sonar and employs an omni-directional source to transmit a 3 to 20 kHz linear frequency modulated (LFM) pulse. Backscattered signals are received by two 20-channel linear hydrophone arrays. The BOSS survey was carried out to support long-range reverberation measurements at 3 kHz. The data were beamformed in three dimensions and processed into 10 cm × 10 cm × 10 cm voxel maps of backscattering to a depth of 1 m. Comparison of the BOSS imagery with 400 kHz multibeam sonar imagery of the seafloor allows muddy regions to be identified and shows differences rationalized by the differences in sediment penetration of the two frequency ranges utilized. Processed BOSS data are consistent with observations from diver cores and the reverberation data collected by stationary arrays deployed on the seafloor. Specifically, stronger and deeper backscattering from muddy regions is observed (relative to nearby sandy regions). This correlates well with the large amounts of detritus (e.g., shell fragments) and complicated vertical layering within cores, and the enhanced reverberation, from those regions. [Work supported by ONR.]

9:30

4aUW6. Seabed characterisation using a low cost digital thin line array: Results from the Target and Reverberation Experiments 2013. Unnikrishnan K. Chandrika, Venugopalan Pallayil (Acoust. Res. Lab, TMSI, National Univ. of Singapore, 18 Kent Ridge Rd., Singapore 119227, Singapore, venu@arl.nus.edu.sg), Nicholas Chotiros (Appl. Res. Lab, Univ. of Texas, Austin, TX), and Marcia Isakson (Appl. Res. Lab, Univ. of Texas, Austin, TX)

During the TREX13 experiments in the Gulf of Mexico in May 2013, the use of a low-cost digital thin line array (DTLA) developed at the Acoustic Research Lab, National University of Singapore, was explored for seabed characterisation. The array, developed for use from AUV platforms, was hosted on a Sea-eye ROV from UT Austin and towed using R/V Smith, as no AUV platform was available during the course of the experiment. The ROV also hosted a wide-band acoustic source sending out chirp waveforms in the frequency range of 3 to 15 kHz. It was observed that, despite the complexity of the set-up used, the array dynamics could be maintained well during the tow test, and the data collected were useful for estimating the bottom type from reflection coefficient measurements and comparing with available models. Our analysis, based on matched filtering the received data and estimating the bottom reflection coefficient, showed that the bottom type at the experimental site was sandy silt, which compared fairly well with observations made by other means. Details of the experiments performed and the results of the data analysis will be presented at the meeting. Some suggestions for improvements for future experiments will also be discussed.
9:45
4aUW7. Wide-angle reflection measurements (TREX13): Evidence of
strong seabed lateral heterogeneity at two scales. Charles W. Holland,
Chad Smith (Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA 16804, cwh10@psu.edu), Paul Hines (Elec. and Comput. Eng.,
Dalhousie Univ., Dalhousie, NS, Canada), Jan Dettmer, Stan Dosso (School
of Earth and Ocean Sci., Univ. of Victoria, Victoria, BC, Canada), and
Samuel Pinson (Appl. Res. Lab., The Penn State Univ., State College,
PA)
Broadband wide-angle reflection data possess high information content, yielding both depth and frequency dependence of sediment wave velocities, attenuations, and density. Measurements at two locations off Panama City, FL (TREX13), however, presented a surprise: over the measurement aperture (a few tens of meters) the sediment was strongly laterally variable. This prevented the usual analysis in terms of depth-dependent geoacoustic properties; only rough estimates could be made. On the other hand, the data provide clear evidence of lateral heterogeneity at the O(10⁰–10¹) m scale. The two sites were separated by ~6 km, one on a ridge (lateral dimension 10² m) and one in a swale of comparable dimension; the respective sound speeds are roughly 1680 m/s and 1585 m/s. The lateral variability, especially at the 1–10 m scale, is expected to impact both propagation and reverberation. Characteristics of the reflection data and their attendant “surprise” suggest the possibility of objectively separating the intermingled angle and range dependence; this would open the door to detailed geoacoustic estimation in areas of strong lateral variability. [Research supported by ONR Ocean Acoustics.]
Invited Papers
10:00
4aUW8. Modeling reverberation in a complex environment with the finite element method. Marcia J. Isakson and Nicholas P. Chotiros (Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78713, misakson@arlut.utexas.edu)
Acoustic finite element models solve the Helmholtz equation exactly and are customizable to the scale of the discretization of the
environment. This makes them an ideal candidate for reverberation studies in complex environments. In this study, reverberation is calculated for a realistic shallow water waveguide. The environmental parameters are taken from the extensive characterization completed
for the Target and Reverberation Experiment (TREX) conducted off the coast of the Florida panhandle in 2013. Measured sound speed
profiles, sea surface roughness, bathymetry, and measured ocean bottom roughness are included in the model. Measurements of the normal incidence bottom loss are used as a proxy for range dependent sediment density. Results are compared with a closed form solution
for reverberation. [Work sponsored by ONR, Ocean Acoustics.]
10:20–10:35 Break
Contributed Papers
10:35

4aUW9. Normal incidence reflection measurements (TREX13): Inferences for lateral heterogeneity over a range of scales. Charles W. Holland, Chad Smith (Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA 16804, cwh10@psu.edu), and Paul Hines (Elec. and Comput. Eng., Dalhousie Univ., Dalhousie, NS, Canada)

Normal incidence seabed reflection data suffer from a variety of ambiguities that make quantitative interpretation difficult. The reflection coefficient has an inseparable ambiguity between bulk density and compressional sound speed. Even more serious, reflection data are a function of other sediment characteristics including interface roughness, volume heterogeneities, and local bathymetry. Seafloor interface curvature is especially important and can lead to focusing/defocusing of the reflected field. An attempt is made with ancillary data, including bathymetry, 400 kHz backscatter, and wide-angle seabed reflection data, to separate some of the mechanisms. The resulting analysis of 1–12 kHz reflection data suggests: (1) strong lateral sediment heterogeneity exists on scales of 10–100 m; (2) there are distinct geoacoustic regimes on the lee and stoss sides of the ridge crest, and also between the crest and the swale; and (3) the ridge crest geoacoustic properties are similar across distances of 6 km along two perpendicular transects (1 correlation). [Research supported by ONR Ocean Acoustics.]

10:50

4aUW10. Acoustic measurements on mid-shelf sediments with cobble: Implications for reverberation. Charles W. Holland (Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA 16804, cwh10@psu.edu), Gavin Steininger, Jan Dettmer, Stan Dosso (School of Earth and Ocean Sci., Univ. of Victoria, Victoria, BC, Canada), and Allen Lowrie (Picayune, MS)

The vast majority of sediment acoustics research has focused on rather homogeneous sandy sediments. Measurements for sediments containing cobbles (grain size greater than 6 cm) are rare. Here, measurements are presented for mid-shelf sediments containing pebbles/cobbles mixed with other grain sizes spanning 7 orders of magnitude, including silty clay, sand, and shell hash. The 2 kHz sediment sound speed in two distinct layers with cobble is 1531±5 m/s and 1800±20 m/s at the 95% credibility interval. The dispersion over the 400–2000 Hz band was relatively weak, 2 and 7 m/s, respectively. The objectives are to (1) present results for a sediment type for which little is known, (2) motivate development of theoretical wave propagation models for wide grain size distributions, and (3) speculate on the possibility of cobble as a scattering mechanism for mid-shelf reverberation. The presence of cobbles from 1 to 3 m (possibly extending to 6 m) sub-bottom suggests they are the dominant scattering mechanism at this site. Though sediments with cobbles might be considered unusual, especially on the mid-shelf, they may be more common than the paucity of measurements would suggest, since typical direct sampling techniques (e.g., cores and grab samples) have fundamental sampling limitations. [Research supported by ONR Ocean Acoustics.]
THURSDAY AFTERNOON, 30 OCTOBER 2014
INDIANA G, 1:10 P.M. TO 5:45 P.M.
Session 4pAAa
Architectural Acoustics and Speech Communication: Acoustic Trick-or-Treat: Eerie Noises, Spooky
Speech, and Creative Masking
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Eric J. Hunter, Cochair
Department of Communicative Sci., Michigan State University, 1026 Red Cedar Road, East Lansing, MI 48824
Chair’s Introduction—1:10
Invited Papers
1:15
4pAAa1. Auditory illusions of supernatural spirits: Archaeological evidence and experimental results. Steven J. Waller (Rock Art
Acoust., 5415 Lake Murray Blvd. #8, La Mesa, CA 91942, wallersj@yahoo.com) and Miriam A. Kolar (Amherst College, Amherst,
MA 01002)
Sound reflection, reverberation, ricochets, and interference patterns were perceived in the past as eerie sounds attributable to invisible echo spirits, thunder gods, ghosts, and sound-absorbing bodies. These beliefs in the supernatural were recorded in ancient myths, and
expressed in tangible archaeological evidence including canyon petroglyphs, cave paintings, and megalithic stone circles including
Stonehenge. Disembodied voices echoing throughout canyons gave the impression of echo spirits calling out from the rocks. Thunderous
reverberation filling deep caves gave the impression of the same thundering stampedes of invisible hoofed animals that were believed to
accompany thunder gods in stormy skies. If you did not know about sound wave reflection, would the inexplicable noise of a ricochet in
a large room have given you the impression of a ghost moaning “BOOoo” over your shoulder? Mysterious silent zones in an open field
gave the impression of a ring of large phantom objects blocking pipers’ music. Complex behaviors of sound waves such as reflection
and interference (which scientists today dismiss as acoustical artifacts) can experimentally give rise to psychoacoustic misperceptions in
which such unseen sonic phenomena are attributed to the supernatural. See https://sites.google.com/site/rockartacoustics/ for further
details.
1:35
4pAAa2. Pututus, resonance and beats: Acoustic wave interference effects at Ancient Chavín de Huántar, Perú. Miriam A. Kolar (Program in Architectural Studies and Dept. of Music, Amherst College, Barrett Hall, 21 Barrett Hill Dr., AC# 2255, PO Box 5000, Amherst, MA 01002, mkolar@amherst.edu)
Acoustic wave interference produces audible effects observed and measured in archaeoacoustic research at the 3,000-year-old Andean Formative site at Chavín de Huántar, Perú. The ceremonial center’s highly coupled network of labyrinthine interior spaces is riddled with resonances excited by the lower-frequency range of site-excavated conch shell horns. These pututus, when played together in near-unison tones, produce a distinct “beat” effect heard as the result of the amplitude variation that characterizes this linear interaction. Despite the straightforward acoustic explanation for this architecturally enhanced instrumental sound effect, the performative act reveals an intriguing perceptual complication. While playing pututus inside Chavín’s substantially intact stone-and-earthen-mortar buildings, pututu performers have reported an experience of having their instruments’ tones “guided” or “pulled” into tune with the dominant spatial resonances of particular locations. In an ancient ritual context, the recognition and understanding of such a sensory component would relate to a particular worldview beyond the reach of present-day investigators. Despite our temporal distance, an examination of the intertwined acoustic phenomena operative to this architectural–instrumental–experiential puzzle enriches the interdisciplinary research perspective, and substantiates perceptual claims.
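The near-unison “beat” effect has a compact mathematical core: two tones at f1 and f2 sum to a carrier at the mean frequency whose amplitude envelope fluctuates at the difference frequency. A minimal sketch follows; the pitches are illustrative, not measured pututu tones.

```python
import numpy as np

# Two near-unison tones: their linear superposition has a slow
# amplitude envelope that "beats" at the difference frequency.
fs = 8000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
f1, f2 = 220.0, 223.0                    # illustrative pitches, 3 Hz apart
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# identity: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2),
# so the envelope is |2 cos(pi (f2 - f1) t)|
envelope = np.abs(2.0 * np.cos(np.pi * (f2 - f1) * t))
beat_rate = f2 - f1                      # beats per second a listener hears
```

The beating is entirely a linear superposition effect, which is the abstract's point: the perceptual "pulling into tune" reported by performers goes beyond this straightforward physics.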
1:55
4pAAa3. Tapping into the theatre of the mind: Creating the eerie scene through sound. Jonathon Whiting (Media and Information,
Michigan State Univ., College of Commun. Arts and Sci., 404 Wilson Rd., Rm. 409, East Lansing, MI 48824, whitin26@msu.edu)
Jaws. Psycho. Halloween. Halo. Movies and video games depend on music and acoustics to evoke certain emotional states in the
audience or game player. But what is the recipe for creating a haunting scene? A creaky door, a scream, a minor chord on a piano. How
and why are certain emotions pulled out of a listener in response to sound? From sound environments to mental expectations, the media
industry uses a variety of techniques to elicit responses from an audience. This presentation will discuss and present examples of the
principles behind the sound of fright.
2:15
4pAAa4. Disquiet: Epistemological bogeymen and other exploits in audition. Ean White (unaffiliated, 1 Westinghouse Plaza C-216,
Boston, MA 02136-2079, ean@eanwhite.org)
Beginning with an interest in “physiological musics,” Ean White’s sound art exploits interstices in our sensory apparatus with
unnerving results. He will recount a series of audio experiments with effects ranging from involuntary muscle contractions to the creation of sounds eerily unique to each listener. The presentation will include discussion of his techniques and how they inform his artistic
practice.
2:35
4pAAa5. Removing the mask in multitrack music mixing. Alexander U. Case (Sound Recording Technol., Univ. of Massachusetts
Lowell, 35 Wilder St., Ste. 3, Lowell, MA 01854, alex@fermata.biz)
The sound recording heard via stereo loudspeakers and headphones is made up of many dozens—sometimes more than 100—discrete tracks of musical elements. Multiple individual performances across a variety of instruments are fused into the final, two-channel
recording—left and right—that is released to consumers. Achieving sonic success in this many-into-two challenge requires strategic,
creative release from masking. Part of the artistry of multitrack mixing includes finding innovative signal processing approaches that
enable the full arrangement and the associated interaction among the multitrack components of the music to be heard and enjoyed.
Masking among tracks clutters and obscures the music. But audio engineers are not afraid. They want you to hear what’s behind the mask. Hear how. Happy Halloween.
2:55
4pAAa6. Documenting and identifying things that go bump in the night. Eric L. Reuter (Reuter Assoc., LLC, 10 Vaughan Mall, Ste.
201A, Portsmouth, NH 03801, ereuter@reuterassociates.com)
Acoustical consultants are occasionally asked to help diagnose mysterious noises in buildings, and it can be difficult to be present and ready to make measurements when such noises occur. This paper will present some of the tools and methods the author uses for recording and analyzing these events. These include the use of tablet-based measurement devices and high-speed playback of long-term recordings.
3:15–3:30 Break
3:30
4pAAa7. Inaudible information, disappearing declamations, misattributed locations, and other spooky ways your brain fools
you—every day. Barbara Shinn-Cunningham (Biomedical Eng., Boston Univ., 677 Beacon St., Boston, MA 02215-3201, shinn@bu.
edu)
We bumble through life convinced that our senses provide reliable, faithful information about the world. Yet on closer inspection,
our brains constantly misinform us, creepily convincing us of “truths” that are just plain false. We hear information that is not really
there. We are oblivious to sounds that are perfectly audible. For sounds that we do hear, we cannot tell when they actually occurred. We
completely overlook changes that even a simple acoustic analysis would detect with 100% accuracy. In short, we misinterpret the sounds
reaching our ears all the time, and do not even realize it. This talk will review the evidence for how unreliable and biased we are in interpreting the world—and why the chilling failures of our perceptual machinery may be excusable, or even useful, as we navigate the complex world in which we live.
3:50
4pAAa8. The mysterious case of the singing toilets and other nerve wracking tales of unwanted sound. David S. Woolworth
(Oxford Acoust., 356 CR 102, Oxford, MS 38655, dave@oxfordacoustics.com)
Lightweight construction nightmares, devilish designs that never see acoustic review, improper purposing of spaces, and other stories
involving the relentless torture of building occupants. Will they survive?
4:10
4pAAa9. Sound effects with AUditory syntaX—A high-level scripting language for sound processing. Bomjun J. Kwon (Hearing,
Speech and Lang., Gallaudet University, 800 Florida Ave NE, Washington, DC 20002, bomjun.kwon@gallaudet.edu)
AUditory syntaX (AUX) is a high-level scripting language specifically crafted for the generation and processing of auditory signals (Kwon, 2012; Behav Rev 44, 361–373). AUX does not require prior knowledge or experience of computer programming.
Rather, AUX provides an intuitive and descriptive environment in which users focus on the perceptual components of a sound, free of tedious tasks unrelated to perception, such as the memory management or array handling often required in other languages popular in auditory science, such as
C++ or MATLAB. This presentation provides a demonstration of AUX for the generation and
processing of various sound effects, particularly “fun” or “spooky” sounds. Processing methods for sound effects widely used in the arts,
film, and other media, such as reverberation, echoes, modulation, pitch shift, and flanger/phaser, will be reviewed, and the AUX code to
generate these effects, along with the resulting sounds, will be demonstrated.
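For readers outside AUX, the delay-and-add echo, one of the effects listed above, can be sketched in a few lines of NumPy. This is an illustrative comparison only, not AUX syntax, and the delay and gain values are arbitrary choices:

```python
import numpy as np

def add_echo(signal, fs, delay_s=0.25, gain=0.5):
    """Mix a delayed, attenuated copy of the signal back into itself."""
    d = int(round(delay_s * fs))
    out = np.concatenate([signal, np.zeros(d)])  # room for the echo tail
    out[d:] += gain * signal                     # delayed copy at reduced gain
    return out

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)  # a 1-s, 440-Hz test tone
echoed = add_echo(tone, fs)         # 1.25 s: original plus echo tail
```

Languages such as AUX aim to hide exactly this kind of buffer bookkeeping behind a descriptive syntax.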
2271
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2271
4:30
4pAAa10. Eerie voices: Odd combinations, extremes, and irregularities. Brad H. Story (Speech, Lang., and Hearing Sci., Univ. of
Arizona, 1131 E. 2nd St., P.O. Box 210071, Tucson, AZ 85721, bstory@email.arizona.edu)
The human voice can project an eerie quality when certain characteristics are present in a particular context. Some types of eerie voices may be derived from physiological scaling of the speech production system that is either humanly impossible or nearly so. Combining previous work on adult speech with current research on speech development, the purpose of this study was to simulate
vocalizations and speech based on unusual configurations of the vocal tract and vocal folds, and to impose irregularities on movement
and vibration. The resulting sound contains qualities that are human-like, but not typical, and hence may give the perceptual impression
of eeriness. [Supported in part by NIH R01-DC011275.]
4:50
4pAAa11. Segregation of ambiguous pulse-echo streams and suppression of clutter masking in FM bat sonar by anticorrelation
signal processing. James A. Simmons (Neurosci., Brown Univ., 185 Meeting St., Box GL-N, Providence, RI 02912, james_simmons@
brown.edu)
Big brown bats often fly in conditions where the density and spatial extent of clutter require a high rate of pulse emissions. Echoes
from one broadcast still are arriving when the next broadcast is sent out, creating ambiguity about matching echoes to corresponding
broadcasts. Biosonar sounds are widely beamed and impinge on the entire surrounding scene. Numerous clutter echoes typically are
received from different directions at similar times. The multitude of overlapping echoes and the occurrence of pulse-to-echo ambiguity
compromise the bat’s ability to peer into the upcoming path and determine whether it is free of collision hazards. Bats have to associate
echoes with their corresponding broadcasts to prevent ambiguity, and off-side clutter echoes have to be segregated from on-axis echoes
that inform the bat about its immediate forward path. In general, auditory streaming to resolve elements of an auditory scene depends on
differences in pitch and temporal pattern. Bats use a combination of temporal and spectral pitch to assign echoes to “target” and “clutter”
categories within the scene, which prevents clutter masking, and they associate incoming echoes with the corresponding broadcast by
treating the mismatch of echoes with the wrong broadcast as a type of clutter. [Supported by ONR.]
5:10
4pAAa12. Are you hearing voices in the high frequencies of human speech and voice? Brian B. Monson (Pediatric Newborn Medicine, Brigham and Women’s Hospital, Harvard Med. School, 75 Francis St., Boston, MA 02115, bmonson@research.bwh.harvard.edu)
The human voice produces acoustic energy at frequencies above 6 kHz. Energy in this high-frequency region has long been known
to affect perception of speech and voice quality, but also provides non-qualitative information about a speech signal. This presentation
will demonstrate how much useful information can be gleaned from the high frequencies with a report on studies where listeners were
presented with only high-frequency energy extracted from speech and singing. Come to test your own abilities and decide if you can
hear strange voices or just chirps and whistles in the high frequencies of human speech and voice.
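The stimulus described above, speech stripped of everything below the high-frequency band, can be approximated with a simple FFT-based high-pass filter. The 6-kHz cutoff follows the abstract, but the filtering method below is an illustrative assumption, not the authors' procedure:

```python
import numpy as np

def extract_high_frequencies(signal, fs, cutoff_hz=6000.0):
    """Zero all spectral content below the cutoff and resynthesize."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 44100
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 500 * t)    # below the cutoff: removed
high = np.sin(2 * np.pi * 9000 * t)  # above the cutoff: retained
filtered = extract_high_frequencies(low + high, fs)
```

Listening to speech processed this way conveys the chirp-and-whistle quality the abstract alludes to, while intelligibility cues surprisingly survive.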
Contributed Paper
5:30
4pAAa13. Measuring the impact of room acoustics on emotional
responses to music using functional neuroimaging: A pilot study. Martin
S. Lawless and Michelle C. Vigeant (Graduate Program in Acoust., The
Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802,
msl224@psu.edu)
Past cognitive neuroscience studies have established links between
music and an individual’s emotional response. Specifically, music can
induce activations in brain regions most commonly associated with reward
and pleasure (Blood/Zatorre PNAS 2001). To further develop concert hall
design criteria, functional magnetic resonance imaging (fMRI) techniques
can be used to investigate the emotional preferences of room acoustics
stimuli. Auralizations were created under various settings ranging from
anechoic to extremely reverberant. These stimuli were presented to five participants in an MRI machine, and the subjects were prompted to rate the
stimuli in terms of preference. Noise stimuli that matched the acoustic stimuli temporally and spectrally were also presented to the participants for the
analysis of main contrasts of interest. In addition, the participants were first
tested in a mock scanner to acclimatize the subjects to the environment and
later validate the results of the study. Voxel-wise region of interest analysis
was used to locate the emotion and reward epicenters of the brain that were
activated when the subjects enjoyed a hall’s acoustics. The activation levels
of these regions, which are associated with positive-valence emotions, were
examined to determine if the activations correlate with preference ratings.
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 7/8, 1:15 P.M. TO 3:20 P.M.
Session 4pAAb
Architectural Acoustics, Speech Communication, and Noise: Room Acoustics Effects on Speech
Comprehension and Recall II
Lily M. Wang, Cochair
Durham School of Architectural Engineering and Construction, University of Nebraska - Lincoln, PKI 101A, 1110 S. 67th St.,
Omaha, NE 68182-0816
David H. Griesinger, Cochair
Research, David Griesinger Acoustics, 221 Mt Auburn St #504, Cambridge, MA 02138
Invited Papers
1:15
4pAAb1. Challenges for second-language learners in difficult acoustic environments. Catherine L. Rogers (Dept. of Commun. Sci.
and Disord., Univ. of South Florida, USF, 4202 E. Fowler Ave., PCD1017, Tampa, FL 33620, crogers2@usf.edu)
Almost anyone who has lived in a foreign country for any length of time knows that even everyday tasks can become tiring and frustrating when one must accomplish them while navigating a seemingly endless maze of unfamiliar social customs, vocabulary, and speech
that seems far removed from one’s language laboratory experience. Add to these challenges noise, reverberation, and/or cognitive
demand (e.g., learning calculus, or responding to multiple customer and co-worker demands), and even experienced learners may begin to
question their proficiency. This presentation will provide an overview of the speech perception and production challenges faced by
second-language learners in the difficult acoustic environments we may encounter every day, such as large lecture halls or retail and customer service settings. Past and current research investigating the effects of various environmental challenges on both relatively
early and later learners of a second language will be considered, as well as strategies that may mitigate challenges for both speakers and
listeners in some of these conditions.
1:35
4pAAb2. Development of speech perception under adverse listening conditions. Tessa Bent (Dept. of Speech and Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN 47405, tbent@indiana.edu)
Speech communication success is dependent on interactions among the talker, listener, and listening environment. One such important interaction is between the listener’s age and the noise and reverberation in the environment. Previous work has demonstrated that
children have greater difficulty than adults in noisy and highly reverberant environments, such as those frequently found in classrooms. I
will review research that considers how a talker’s production patterns also contribute to speech comprehension, focusing on nonnative
talkers. Studies from my lab have demonstrated that children have more difficulty than adults perceiving speech that deviates from
native language norms, even in quiet listening conditions in which adults are highly accurate. When a nonnative talker’s voice was combined with noise, children’s word recognition was particularly poor. Therefore, similar to the developmental trajectory for speech perception in noise or reverberation, the ability to accurately perceive speech produced by nonnative talkers continues to develop well into
childhood. Metrics to quantify speech intelligibility in specific rooms must consider listener characteristics, talker characteristics,
and their interaction. Future research should investigate how children’s speech comprehension is influenced by the interaction between
specific types of background noise and reverberation and talker production characteristics. [Work supported by NIH-R21DC010027.]
1:55
4pAAb3. Measurement and prediction of speech intelligibility in noise and reverberation for different sentence materials, speakers, and languages. Anna Warzybok, Sabine Hochmuth (Cluster of Excellence Hearing4All, Medical Phys. Group, Universität Oldenburg, Oldenburg D-26111, Germany, a.warzybok@uni-oldenburg.de), Jan Rennies (Cluster of Excellence Hearing4All, Project Group
Hearing, Speech and Audio Technol., Fraunhofer Inst. for Digital Media Technol. IDMT, Oldenburg, Germany), Thomas Brand, and
Birger Kollmeier (Cluster of Excellence Hearing4All, Medical Phys. Group, Universität Oldenburg, Oldenburg, Germany)
The present study investigates the role of the speech material type, speaker, and language for speech intelligibility in noise and reverberation. The experimental data are compared to predictions of the speech transmission index. First, the effect of noise only, reverberation only, and the combination of noise and reverberation was systematically investigated for two types of sentence tests. The hypothesis
to be tested was that speech intelligibility is more affected by reverberation when using an open-set speech material consisting of everyday sentences than when using a closed-set test with syntactically fixed and semantically unpredictable sentences. In order to distinguish
between the effect of speaker and language on speech intelligibility in noise and reverberation, the closed-set speech material was
recorded using bilingual speakers of German-Spanish and German-Russian. The experimental data confirmed that the effect of
reverberation was stronger for an open-set test than for a closed-set test. However, this cannot be predicted by the speech transmission
index. Furthermore, the inter-language differences in speech reception thresholds were on average up to 5 dB, whereas inter-talker differences were about 3 dB. The Spanish language suffered more under reverberation than German and Russian, which again challenged
the predictions of the speech transmission index.
2:15
4pAAb4. Speech comprehension in realistic classrooms: Effects of room acoustics and foreign accent. Zhao Peng, Brenna N.
Boyd, Kristin E. Hanna, and Lily M. Wang (Durham School of Architectural Eng. and Construction, Univ. of Nebraska-Lincoln, 1110
S. 67th St., Omaha, NE 68182, zpeng@huskers.unl.edu)
The current classroom acoustics standard (ANSI S12.60) recommends that core learning spaces not exceed a reverberation time
(RT) of 0.6 s and a background noise level (BNL) of 35 dBA, based on speech intelligibility performance mainly by the native English-speaking population. This paper presents two studies on the effects of RT and BNL on more realistic classroom learning experiences. How do native and non-native English-speaking listeners perform on speech comprehension tasks under adverse acoustic
conditions, if the English speech is produced by talkers whose native language is English (Study 1) versus Mandarin Chinese (Study 2)?
Speech comprehension materials were played back in a listening chamber to individual listeners: native and non-native English-speaking
in Study 1; native English, native Mandarin Chinese, and other non-native English-speaking in Study 2. Each listener was screened for
baseline English proficiency for use as a covariate in the statistical analysis. Participants completed dual tasks simultaneously (speech
comprehension and adaptive dot-tracing) under 15 different acoustic conditions, comprising three BNL conditions (RC-30, 40, and 50)
and five RT scenarios (0.4–1.2 s). Results do show distinct differences between the listening groups. [Work supported by a UNL Durham
School Seed Grant and the Paul S. Veneklasen Research Foundation.]
Contributed Papers
2:35
4pAAb5. Speech clarity in lively theatres. Gregory A. Miller and Carl
Giegold (Threshold Acoust., LLC, 53 W. Jackson Boulevard, Ste. 815, Chicago, IL 60604, gmiller@thresholdacoustics.com)
By their very nature, theatres must be “lively” acoustic spaces. The audience must hear one another, so that laughter and applause can ripple around the
room, and they must have the aural sensation of being in a large space, which
heightens the excitement of being at a live performance. Similarly, the theatre must reflect sound back to the actors in a way that helps them to gauge
how well their voices are filling the room, and to gauge audience response
throughout the performance. And yet this liveliness runs counter to much of
conventional wisdom regarding the acoustic conditions to support speech
clarity. This paper will describe ways in which the acoustic response of a
room can be built up to support both speech clarity and liveliness, with a
particular emphasis on theatre spaces in which the actors are placed in the
same volume as the audience (thrust and surround stages).
2:50
4pAAb6. Speech communication in noise to validate a virtual sound capturing system. Hyung Suk Jang, Seongmin Oh, and Jin Yong Jeon (Dept.
of Architectural Eng., Hanyang Univ., Seoul 133-791, South Korea, janghyungs@gmail.com)
Several microphone systems were designed to capture a real sound field
for the creation of a remote virtual coexistence space: an omnidirectional
microphone, a binaural dummy head, a linear microphone array, and a spherical
microphone array. The captured signals were synthesized into binaural signals; the binaural cues were generated using head-related transfer
functions (HRTFs) and presented through headphones. For validation, sentence
recognition tests were carried out to quantify speech perception ability, using sentence lists with normal-hearing listeners. In addition, readability
and naturalness ratings were used to assess the quality of the synthesized
sounds. Different noise environments were applied at different signal-to-noise ratios, and an efficient sound capturing system was identified by
comparing the results of the sentence recognition tests.
3:05
4pAAb7. Quantifying a measure and exploring the effect of varying
reflection densities from realistic room impulse responses. Hyun Hong
and Lily M. Wang (Durham School of Architectural Eng. and Construction,
Univ. of Nebraska-Lincoln, 1110 S. 67th St., Omaha, NE 68182-0816,
hhong@huskers.unl.edu)
Perceptual studies using objective acoustic metrics calculated from
room impulse responses, such as reverberation time and clarity index, are
common. Less work has been conducted looking explicitly at the reflection
density, or the number of reflections per second. The reflection density,
though, may well have its own perceptual influence when reverberation
time and source-receiver distances are controlled, particularly in relation to
room size perception. This paper presents first an investigation into quantifying the reflection density from realistic room impulse responses that may
be measured or simulated. The resolution of the sampling frequency, time
window applied, and cut-off level for including a reflection in the count are
considered. The quantification method is subsequently applied to select a
range of realistic RIRs for use in a perceptual study on determining the maximum audible reflection density by humans, using both speech and clapping
signals. Results from this study are compared to those from similar previous
work by the authors which used artificially simulated impulse responses
with constant reflection densities over time.
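The counting step described above can be operationalized in many ways; the sketch below is one illustrative choice (the window length, cutoff level, and local-maximum criterion are assumptions for illustration, not the authors' chosen parameters):

```python
import numpy as np

def reflection_density(rir, fs, window_s=0.05, cutoff_db=-30.0):
    """Count local amplitude peaks above a cutoff (dB re: the strongest
    arrival) within an early time window, as reflections per second."""
    n = int(window_s * fs)
    seg = np.abs(np.asarray(rir[:n], dtype=float))
    threshold = seg.max() * 10.0 ** (cutoff_db / 20.0)
    mid = seg[1:-1]
    # a sample counts as a reflection if it exceeds the cutoff and its neighbors
    is_peak = (mid > threshold) & (mid >= seg[:-2]) & (mid > seg[2:])
    return is_peak.sum() / window_s

# synthetic check: three discrete reflections inside a 50-ms window
fs = 48000
rir = np.zeros(fs // 10)
rir[100], rir[500], rir[900] = 1.0, 0.7, 0.5
print(reflection_density(rir, fs))  # 60.0 reflections/s
```

Real measured RIRs smear reflections over several samples, which is exactly why the sampling resolution and cutoff choices studied in the paper matter.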
THURSDAY AFTERNOON, 30 OCTOBER 2014
LINCOLN, 1:15 P.M. TO 5:15 P.M.
Session 4pAB
Animal Bioacoustics and Acoustical Oceanography: Use of Passive Acoustics for Estimation of Animal
Population Density II
Tina M. Yack, Cochair
Bio-Waves, Inc., 364 2nd Street, Suite #3, Encinitas, CA 92024
Danielle Harris, Cochair
Centre for Research into Ecological and Environmental Modelling, University of St. Andrews, The Observatory, Buchanan
Gardens, St. Andrews KY16 9LZ, United Kingdom
Chair’s Introduction—1:15
Invited Papers
1:20
4pAB1. Estimating singing fin whale population density using frequency band energy. David K. Mellinger (Cooperative Inst. for
Marine Resources Studies, Oregon State Univ., 2030 SE Marine Sci. Dr., Newport, OR 97365, David.Mellinger@oregonstate.edu),
Elizabeth T. Küsel (NW Electromagnetics and Acoust. Res. Lab., Portland State Univ., Portland, OR), Danielle Harris, Len Thomas
(Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, St. Andrews, United Kingdom), and Luis Matias (Instituto
Dom Luiz, Faculdade de Ciências, Universidade de Lisboa, Lisbon, Portugal)
Fin whale (Balaenoptera physalus) song occurs in a narrow frequency band between approximately 15 and 25 Hz. During the breeding season, the sound from many distant fin whales in tropical and subtropical parts of the world may be seen as a “hump” in this band
of the ocean acoustic spectrum. Since a higher density of singing whales leads to more energy in the band, the size of this hump—the
total received acoustic energy in this frequency band—may be used to estimate the population density of singing fin whales in the vicinity of a sensor. To estimate density, a fixed density of singing whales is simulated; using acoustic propagation modeling, the energy they
emit is propagated to the sensor, and the received level calculated. Since received energy in the fin whale band increases proportionally
with the density of whales, the density of whales may then be estimated from the measured received energy. This method is applied to a
case study of sound recorded on ocean-bottom recorders southwest of Portugal; issues covered include variance due to acoustic propagation modeling, reception area, variation in whale song acoustic level and frequency, and elimination of interfering sounds. [Funding
from ONR.]
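The rescaling step at the heart of this method is linear, so it reduces to a one-line ratio once the propagation model has been run for a reference density. The numbers below are invented purely for illustration:

```python
def density_from_band_energy(measured_energy, ref_density, ref_energy):
    """Received band energy grows linearly with singer density, so the
    measured-to-modeled energy ratio rescales the simulated reference density."""
    return ref_density * (measured_energy / ref_energy)

# Hypothetical values: a simulated density of 0.001 singers/km^2 predicts
# 2.0 (linear) energy units at the sensor; the sensor measures 5.0 units.
d = density_from_band_energy(5.0, ref_density=0.001, ref_energy=2.0)
print(d)  # 0.0025 singers/km^2
```

The hard work, of course, sits inside the propagation modeling that produces the reference energy, and in removing interfering sounds from the measured band energy.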
1:40
4pAB2. Large-scale passive-acoustics-based population estimation of African forest elephants. Yu Shiu, Sara Keen, Peter H.
Wrege, and Elizabeth Rowland (BioAcoust. Res. Program, Cornell Univ., 159 Sapsucker Woods Rd, Ithaca, NY 14850, atoultaro@
gmail.com)
African forest elephants (Loxodonta cyclotis) live in tropical rainforests in Central Africa and often use low-frequency vocalizations
for long-distance communication and coordination of group activities. There is great interest in monitoring population size in this species; however, the dense rainforest canopy severely limits visibility, making it difficult to estimate abundance using traditional methods
such as aerial surveys. Passive acoustic monitoring offers an alternative approach to estimating abundance in a low-visibility environment. The work we present here can be divided into three steps. First, we apply an automatic elephant call detector, which enables the
processing of large-scale acoustic data in a reasonable amount of time. Second, we apply a density estimation method designed
for a single microphone, because microphones are often positioned far apart to cover a large area of rainforest, so
the same call will not produce multiple arrivals on different recording units. Lastly, we examine results from our historic data across five
years in six locations in central Africa, which includes over 1000 days of sound stream. We will address the feasibility of long-term population monitoring and also the potential impact of human activity on elephant calling behavior.
2:00
4pAB3. A generalized random encounter model for estimating animal density with remote sensor data. Elizabeth Moorcroft, Tim
C. D. Lucas (Ctr. for Mathematics, Phys. and Eng. in the Life Sci. and Experimental Biology, UCL, CoMPLEX, University College
London, Gower St., London WC1E 6BT, United Kingdom, e.moorcroft@ucl.ac.uk), Robin Freeman, Marcus J. Rowcliffe (Inst. of Zoology, Zoological Society of London, London, United Kingdom), and Kate E. Jones (Ctr. for Biodiversity and Environment Res., UCL,
London, United Kingdom)
Acoustic detectors are commonly used to monitor wildlife. Current estimators of abundance or density require recognition of
individuals or the distance of the animal from the sensor, which is often difficult. The random encounter model (REM) has been successfully applied to count data without these requirements. However, count data from acoustic detectors do not fit the assumptions of the
REM due to the directionality of animal signals. We developed a generalized REM (gREM), to estimate animal density from count data,
derived for different combinations of sensor detection widths and animal signal widths. We tested the accuracy and precision of this
model using simulations for different combinations of sensor detection and animal signal widths, number of captures, and animal movement models. The gREM produces accurate estimates of absolute animal density. However, larger sensor detection and animal signal
widths, and larger numbers of captures, give more precise estimates. Different animal movement models had no effect on the gREM. We
conclude that the gREM provides an effective method to estimate animal densities in both marine and terrestrial environments. As
acoustic detectors become more ubiquitous, the gREM will be increasingly useful for monitoring animal populations across broad spatial, temporal, and taxonomic scales.
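The gREM's derivations are not reproduced in the abstract, but the flavor of the calculation can be seen in the original camera-trap REM (Rowcliffe et al., 2008), which the gREM generalizes: the encounter rate y/t equals D·v·r·(2 + θ)/π for detection radius r and sector angle θ, solved here for density D. The numerical values are invented for illustration:

```python
import math

def rem_density(detections, effort_time, speed, radius, angle):
    """Solve the REM encounter-rate equation y/t = D*v*r*(2+theta)/pi for D."""
    encounter_rate = detections / effort_time
    return encounter_rate * math.pi / (speed * radius * (2.0 + angle))

# Invented example: 100 detections over 50 days, animals moving 2 km/day,
# detection radius 0.01 km, detection sector 0.2 rad
d = rem_density(100, 50.0, speed=2.0, radius=0.01, angle=0.2)
```

The gREM replaces the single sector angle with combinations of sensor detection width and animal signal width, which is what makes it applicable to directional acoustic cues.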
2:20
4pAB4. Using sound propagation modeling to estimate the number of calling fish in an aggregation from single-hydrophone
sound recordings. Mark W. Sprague (Phys., East Carolina Univ., M.S. 563, Greenville, NC 27858, spraguem@ecu.edu) and Joseph J.
Luczkovich (Biology, East Carolina Univ., Greenville, NC)
Many fishes make sounds during spawning events that can be used to estimate abundance. Spawning stock size is a measure of fish
population size that is used by fishery biologists to manage harvest levels. It is desirable that such an estimate be assessed easily and
remotely using passive acoustics. Passive acoustic techniques (hydrophones) can be used to identify sound-producing species, but it is
difficult to count individual sound sources in the sea, where it is dark and background noise levels can be high, even though species can be identified
by their sounds. We have developed a method that can estimate the density of calling fish in an aggregation from single-hydrophone
recordings. Our method requires a sound propagation model for the area in which the aggregation is located. We generate a library of
modeled sounds of virtual Monte-Carlo generated distributions of fish to determine the range of fish population densities that match the
characteristics of a single-hydrophone sound recording. Such a model could be used from a fixed station (e.g., an observatory) to estimate
the population size of the sound producers. In this presentation, we will present some calculations made using this method and will
examine the benefits and limitations of the technique.
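The library-matching idea can be sketched as follows. The `modeled_level` function is a toy stand-in for the authors' propagation model (its form and every numeric value are assumptions for illustration); the matching step simply keeps the Monte-Carlo densities whose modeled level agrees with the recording:

```python
import numpy as np

rng = np.random.default_rng(0)

def modeled_level(density):
    """Toy stand-in for the propagation model: received level (dB) as a
    function of calling-fish density. Purely illustrative."""
    return 10.0 * np.log10(density) + 120.0

def matching_densities(measured_level, candidates, tol_db=1.0):
    """Keep the simulated densities whose modeled received level falls
    within tol_db of the single-hydrophone measurement."""
    levels = modeled_level(candidates)
    return candidates[np.abs(levels - measured_level) <= tol_db]

# Monte-Carlo library of candidate densities (fish per unit area)
candidates = rng.uniform(1.0, 20.0, size=1000)
matches = matching_densities(130.0, candidates)  # densities consistent with 130 dB
```

The spread of `matches` then expresses the uncertainty in the density estimate, which is one of the limitations the authors examine.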
Contributed Papers
2:40

4pAB5. An experimental evaluation of the performance of acoustic recording systems for estimating avian species richness and abundance. Antonio Celis Murillo (Natural Resources and Environmental Sci., Univ. of Illinois at Urbana-Champaign, 1704 Harrington Dr., Champaign, IL 61821, celismu1@illinois.edu), Jill Deppe (Biological Sci., Eastern Illinois Univ., Champaign, IL), Jason Riddle (Natural Resources, Univ. of Wisconsin at Stevens Point, Stevens Point, WI), Michael P. Ward (Natural Resources and Environmental Sci., Univ. of Illinois at Urbana-Champaign, Champaign, IL), and Theodore Simons (USGS Cooperative Fish and Wildlife Res. Unit, North Carolina State Univ., Raleigh, NC)

Comparisons between field observers and acoustic recording systems have shown great promise for sampling birds using acoustic methods. Such comparisons provide information about the performance of recording systems and field observers but do not provide a robust validation of their true sampling performance—i.e., precision and accuracy relative to known population size and richness. We used a 35-speaker bird song simulation system to experimentally test the accuracy and precision of two stereo (Telinga and SS1) and one quadraphonic recording system (SRS) for estimating species richness, abundance, and total abundance (across all species) of vocalizing birds. We simulated 25 bird communities under natural field conditions by placing speakers in a wooded area at 4–119 m from the center of the survey at differing heights and orientations. We assigned recordings randomly to one of eight skilled observers. We found a significant difference among microphones in their ability to accurately estimate richness (p = 0.0019) and total bird abundance (p < 0.0001). Our study demonstrates that acoustic recording systems can potentially estimate bird abundance and species richness accurately; however, their performance is likely to vary with their technical characteristics (recording pattern, microphone arrangement, etc.).

2:55–3:15 Break

3:15

4pAB6. Spatial variation of the underwater soundscape over coral reefs in the Northwestern Hawaiian Islands. Simon E. Freeman (Marine Physical Lab., Scripps Inst. of Oceanogr., 7038 Old Brentford Rd., Alexandria, VA 22310, simon.freeman@gmail.com), Lauren A. Freeman (Marine Physical Lab., Scripps Inst. of Oceanogr., La Jolla, CA), Marc O. Lammers (Oceanwide Sci. Inst., Honolulu, HI), and Michael J. Buckingham (Marine Physical Lab., Scripps Inst. of Oceanogr., La Jolla, CA)

Coral reefs create a complex acoustic environment, dominated by sounds produced by benthic creatures such as crustaceans and echinoderms. While there is growing interest in the use of ambient underwater biological sound as a gauge of ecological state, extracting meaningful information from recordings is a challenging task. Single-hydrophone (omnidirectional) recorders can provide summary time and frequency information, but as the spatial distribution of reef creatures is heterogeneous, the properties of reef sound arriving at the receiver vary with position and arrival angle. Consequently, the locations and acoustic characteristics of individual sound producers remain unknown. An L-shaped hydrophone array, providing direction-and-range sensing capability, can be used to reveal the spatial variability of reef sounds. Comparisons can then be made between sound sources and other spatially referenced information such as photographic data. During the summer of 2012, such an array was deployed near four different benthic ecosystems in the Northwestern Hawaiian Islands, ranging from high-latitude coral reefs to communities dominated by algal turf. Using conventional and adaptive acoustic focusing (equivalent to curved-wavefront beamforming), time-varying maps of sound production from benthic organisms were created. Comparisons with the distribution of nearby sea floor features, and the makeup of benthic communities, will be discussed.
4pAB7. Density estimates of odontocetes in an active military base using
passive acoustic monitoring. Bethany L. Roberts (School of Biology,
Univ. of St. Andrews, Sea Mammal Res. Unit, St. Andrews, Fife KY16
8LB, United Kingdom, blr2@st-andrews.ac.uk), Zach Swaim, and Andrew
J. Read (Duke Marine Lab, Duke Univ., Beaufort, NC)
We deployed passive acoustic monitoring devices at Camp Lejeune,
North Carolina, USA, to estimate density of odontocete populations. Four
C-PODs (echolocation click detectors) were deployed in water depths ranging from 13 to 21 meters from 30 November 2012 to 13 November 2013.
Two species of odontocetes are known to inhabit the survey area: bottlenose
dolphins and Atlantic spotted dolphins. The density estimation methods incorporate (i) the
rate at which the animals produce echolocation cues, (ii) the probability of
detecting cues, and (iii) the false positive rate of detections. To determine
the cue rate of bottlenose dolphins, we attached DTAGs to 14 bottlenose
dolphins during 2012 and 2013 in Sarasota, Florida. To determine cue rate
of spotted dolphins, we used six recordings of focal follows from 2001-2003
in an area adjacent to C-POD deployment locations. Echolocation playbacks
to C-PODs were used to obtain false positive rate and detection radius of
each C-POD. Furthermore, we obtained proportions of bottlenose and spotted dolphins in the survey area from concurrent line transect surveys. Preliminary results indicate that dolphins were detected on all four C-PODs
during every month of the survey period. Future studies in areas where multiple species are present could potentially use the methods described here.
3:45
4pAB8. Preliminary calculation of individual echolocation signal emission
rate of Franciscana dolphins (Pontoporia blainvillei). Artur Andriolo (Zoology Dept., Federal Univ. of Juiz de Fora, Universidade Federal de Juiz de
Fora, Rua José Lourenço Kelmer, s/n - Campus Universitário, Bairro São
Pedro, Juiz de Fora, Minas Gerais 36036-900, Brazil, artur.andriolo@ufjf.edu.
br), Federico Sucunza (Ecology Graduate Program, Federal Univ. of Juiz de
Fora, Juiz de Fora, Brazil), Alexandre N. Zerbini (Ecology, Instituto Aqualie,
Juiz de Fora, Brazil), Daniel Danilewicz (Zoology Graduate Program, State
Univ. of Santa Cruz, Ilheus, Brazil), Marta J. Cremer (Biological Sci., Univ. of
Joinville Region, Joinville, Brazil), and Annelise C. Holz (Graduate Program
in Health and Environment, Univ. of Joinville Region, Joinville, Brazil)
Calculation of the echolocation signal emission rate is necessary to estimate how many individuals are vocalizing, especially if passive acoustic
density estimation methods are to be implemented. We calculated the individual emission rate of echolocation signals of the franciscana dolphin. Fieldwork was conducted between 22 and 31 January 2014 at Babitonga Bay, Brazil.
Acoustic data and group size were registered when animals were within visual range at a maximum distance of 50 meters. We used a Cetacean
Research™ hydrophone. The sound was digitized by an IOtech analog/digital converter, stored as wav files, and analyzed with Raven software. A band-limited energy detector was set to automatically extract echolocation signals.
The emission rate was calculated by dividing the clicks registered in each file
by the file duration and by the number of individuals in the group. We analyzed 240 min of sound from 36 groups. A total of 29,164 clicks were detected.
The median individual click rate was 0.290 clicks/s (10th percentile = 0.036;
90th percentile = 1.166). The result is a general approximation of the individual echolocation signal emission rate. Sound production rates are potentially dependent on a number of factors, such as season, group size, sex, or even
density itself. [This study was supported by IWC/Australia, Petrobras,
Fundo de Apoio a Pesquisa/UNIVILLE.]
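The per-individual rate described above is a simple ratio of click count to file duration and group size. A minimal sketch, with invented numbers rather than the study's data:

```python
def individual_click_rate(n_clicks, duration_s, group_size):
    """Clicks per second per individual: clicks / (file duration x group size)."""
    return n_clicks / (duration_s * group_size)

# Invented example: 870 clicks in a 600-s file of a five-dolphin group
rate = individual_click_rate(870, 600.0, 5)
print(rate)  # 0.29
```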
4:00
4pAB9. Investigating the potential of a wave glider for cetacean density
estimation—A Scottish study. Danielle Harris (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, The Observatory, Buchanan Gardens, St. Andrews KY16 9LZ, United Kingdom, dh17@st-andrews.ac.uk) and Douglas Gillespie (Sea Mammal Res. Unit, Univ. of St.
Andrews, St. Andrews, United Kingdom)
A major advantage of autonomous vehicles is their ability to provide both
spatial and temporal coverage of an area during a survey. However, there is a
need to assess whether these technologies are suitable for monitoring cetacean
population densities. Data are presented from a Wave Glider deployed off the
2277
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
east coast of Scotland between March and April 2014. Key areas of survey
design, data collection, and analysis were investigated. First, the ability of the
glider to complete a designed line transect survey was assessed. Second, the
encounter rates of all detected species were estimated. Harbour porpoise (Phocoena phocoena) was the most commonly encountered species and became
the focal species in this study. Using the harbour porpoise encounter rate, the amount of survey effort required to estimate density with a suitable level of uncertainty was estimated. A separate experiment was designed to estimate the average probability of harbour porpoise detection by the glider. The glider was deployed near an array of nine C-PODs (odontocete detection instruments), and the same harbour porpoise click events were matched across instruments. Such matches can be analyzed using spatially explicit capture-recapture methods, which allow the detection efficiency of the glider to be estimated.
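The effort question raised above is often answered with the standard line-transect approximation L = b / (CV² · n₀/L₀), where n₀/L₀ is a pilot-survey encounter rate and b ≈ 3 a dispersion factor. This is a generic illustration, not the authors' calculation, and every number below is invented:

```python
def required_effort_km(pilot_detections, pilot_effort_km, target_cv, b=3.0):
    """Line-transect effort (km) needed for a target CV: L = b / (CV^2 * n0/L0)."""
    encounter_rate = pilot_detections / pilot_effort_km   # detections per km
    return b / (target_cv ** 2 * encounter_rate)

# Invented pilot data: 40 detections over 200 km of glider track, target CV = 0.2
print(round(required_effort_km(40, 200.0, 0.2)))  # 375
```

The inverse dependence on encounter rate is why a commonly encountered species (here, harbour porpoise) is the natural focal species for sizing future surveys.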
4:15
4pAB10. Toward acoustically derived population estimates in marine
conservation: An application of the spatially-explicit capture-recapture
methodology for North Atlantic right whales. Danielle Cholewiak, Steven
Brady, Peter Corkeron, Genevieve Davis, and Sofie Van Parijs (Protected
Species Branch, NOAA Northeast Fisheries Sci. Ctr., 166 Water St., Woods
Hole, MA 02543, danielle.cholewiak@noaa.gov)
Passive acoustics provides a flexible tool for developing an understanding of the ecology and behavior of vocalizing marine animals. Yet despite a robust capacity for detecting species presence, our ability to estimate population abundance from acoustics remains poor. Critically, abundance estimates
are precisely what conservation practitioners and policymakers often
require. In the current study, we explored the application of acoustic data in
the spatially-explicit capture-recapture (SECR) methodology, to evaluate
whether acoustics can be used to infer abundance in the endangered North
Atlantic right whale. We sub-sampled a year-long acoustic dataset from
archival recorders deployed in Massachusetts Bay. Multichannel data were
reviewed for the presence of up-calls. A total of 1659 unique up-calls were
detected. Estimates of up-call density ranged from zero to 608 (±70 SE) up-calls/hour. Estimates of daily abundance, when corrected for average calling rate, ranged from 0 to 69 (±21 SE) individuals per day. These results
qualitatively compare well with patterns in right whale occurrence reported
from aerial-based visual surveys. Since acoustic abundance calculations are
affected by variation in calling behavior, estimates should be interpreted
cautiously; however, these results indicate that passive acoustics has the
potential to directly inform conservation and management strategies.
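The calling-rate correction mentioned above reduces, at its simplest, to dividing call density by an average per-animal cue rate. A hedged sketch (the function name and numbers are illustrative; the study's actual SECR fitting is not reproduced here):

```python
def animals_from_calls(call_density_per_hour, calls_per_animal_per_hour):
    """Animal density from call density, assuming an average per-animal cue rate."""
    return call_density_per_hour / calls_per_animal_per_hour

# Invented example: 120 detected up-calls/hour, assumed 2 up-calls/animal/hour
print(animals_from_calls(120.0, 2.0))  # 60.0
```

Because the estimate scales directly with the assumed cue rate, variation in calling behavior propagates straight into the abundance figure, which is why the abstract urges cautious interpretation.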
4:30
4pAB11. Statistical mechanics techniques applied to the analysis of
humpback whale inter-call intervals. Gerald L. D’Spain (Scripps Inst. of
Oceanogr., Univ. of California, San Diego, 291 Rosecrans St., San Diego,
CA 92106, gdspain@ucsd.edu), Tyler A. Helble (SPAWAR SSC Pacific,
San Diego, CA), Heidi A. Batchelor, and Dennis Rimington (Scripps Inst.
of Oceanogr., Univ. of California, San Diego, San Diego, CA)
Techniques developed in statistical mechanics recently have been applied
to the analysis of the topology of complex human communication networks.
These methods examine the network’s macroscopic statistical properties
rather than the details of individual interactions. Here, these methods are
applied to the analysis of the time intervals between humpback whale calls
detected in passive acoustic monitoring data collected by the bottom-mounted
hydrophones on the Pacific Missile Range Facility (PMRF) west of Kauai,
Hawaii. Recently developed localization and tracking algorithms for use with
PMRF data have been applied to separate the calls of an individual animal
from those of a collection of animals. As with the distributions of time intervals between human communications, the distributions of time intervals
between humpback whale call detections are distinctly different from those expected for a purely independent, random (Poisson) process. This conclusion holds both for time intervals between calls from individual animals and for those from the collection of animals vocalizing simultaneously, although significant differences in these probability distributions occur. A model based on the migration of clusters of animals is developed to fit the distributions. Possible
mechanisms giving rise to aspects of the distributions are discussed. [Work
supported by the Office of Naval Research, Code 322-MMB.]
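The Poisson comparison described above can be illustrated with a toy experiment: for a Poisson process, inter-event intervals are exponential, so an empirically heavier tail indicates clustered (non-Poisson) calling. A minimal sketch using synthetic, bursty intervals (not whale data):

```python
import math
import random

random.seed(1)

# Synthetic "bursty" intervals: mostly short within-bout gaps (mean 2 s),
# occasionally long between-bout gaps (mean 60 s)
intervals = [random.expovariate(1 / 2.0) if random.random() < 0.8
             else random.expovariate(1 / 60.0) for _ in range(5000)]
mean = sum(intervals) / len(intervals)

# For an exponential (Poisson) process, P(interval > 5 * mean) = exp(-5) ~ 0.0067
tail_obs = sum(i > 5 * mean for i in intervals) / len(intervals)
tail_exp = math.exp(-5)
print(tail_obs > 2 * tail_exp)  # True: far heavier tail than a Poisson process predicts
```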
4:45–5:15 Panel Discussion
168th Meeting: Acoustical Society of America
2277
4p THU. PM
THURSDAY AFTERNOON, 30 OCTOBER 2014
INDIANA A/B, 1:30 P.M. TO 5:15 P.M.
Session 4pBA
Biomedical Acoustics: Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and
Clinical Applications II
Vera A. Khokhlova, Cochair
University of Washington, 1013 NE 40th Street, Seattle, WA 98105
Jeffrey B. Fowlkes, Cochair
Univ. of Michigan Health System, 3226C Medical Sciences Building I, 1301 Catherine Street, Ann Arbor, MI 48109-5667
Invited Papers
1:30
4pBA1. High intensity focused ultrasound-induced bubbles stimulate the release of nucleic acid cancer biomarkers. Tatiana
Khokhlova (Medicine, Univ. of Washington, Harborview Medical Ctr., 325 9th Ave. Box 359634, Seattle, WA 98104, tdk7@uw.edu),
John R. Chevillet (Inst. for Systems Biology, Seattle, WA), George R. Schade (Urology, Univ. of Washington, Seattle, WA), Maria D.
Giraldez (Medicine, Univ. of Michigan, Ann Arbor, MI), Yak-Nam Wang (Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Joo
Ha Hwang (Medicine, Univ. of Washington, Seattle, WA), and Muneesh Tewari (Medicine, Univ. of Michigan, Ann Arbor, MI)
Recently, several nucleic acid cancer biomarkers, e.g., microRNA and mutant DNA, have been identified and shown promise for
improving cancer diagnostics. However, the abundance of these biomarker classes in the circulation is low, impeding reliable detection
and adoption into clinical practice. Here, the ability of HIFU-induced bubbles to stimulate the release of cancer-associated microRNAs by tissue fractionation or permeabilization was investigated in a heterotopic syngeneic rat prostate cancer model. A 1.5 MHz HIFU transducer was used either to mechanically fractionate subcutaneous tumor with boiling histotripsy (BH) (~20 kW/cm², 10 ms pulses, duty factor 0.01) or to permeabilize tumor tissue with inertial cavitation activity (p− = 16 MPa, 1 ms pulses, duty factor 0.001). Blood was collected immediately prior to and serially up to 24 h after treatments. Plasma concentrations of microRNAs were measured by quantitative RT-PCR. Both exposures resulted in a rapid (within 15 min), short (3 h), and dramatic (over ten-fold) increase in relative plasma concentrations of tumor-associated microRNAs. Histologic examination of excised tumor confirmed complete fractionation of
targeted tumor by BH and localized areas of intraparenchymal hemorrhage and tissue disruption by cavitation-based treatment. These
data suggest a clinically useful application of HIFU-induced bubbles for non-invasive molecular biopsy. [Grant support: NIH
1K01EB015745, R01CA154451, R01DK085714.]
1:50
4pBA2. Tissue decellularization with boiling histotripsy and the potential in regenerative medicine. Yak-Nam Wang (APL, CIMU,
Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, ynwang@u.washington.edu), Tatiana Khokhlova (Dept. of Medicine, Univ.
of Washington, Seattle, WA), Adam Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA), Wayne Kreider (APL, CIMU,
Univ. of Washington, Seattle, WA), Ari Partanen (Clinical Sci. MR Therapy, Philips Healthcare, Andover, MA), Navid Farr
(Dept. of BioEng., Univ. of Washington, Seattle, WA), George Schade (Dept. of Urology, Univ. of Washington, Seattle, WA), Michael
Bailey (APL, CIMU, Univ. of Washington, Seattle, WA), and Vera Khokhlova (Dept. of Acoust., Phys. Faculty, Moscow State Univ.,
Moscow, Russian Federation)
There have been major advances in the development of replacement organs by tissue engineering (TE); however, one of the holy grails remains the development of biomimetic structures that replicate the complex 3-D vasculature. Creation of bioartificial organs by decellularization shows greater promise of reaching the clinic than TE. However, current decellularization techniques require
the use of chemical and biological agents, often in combination with physical force, which could result in damage to the matrix. Here
we evaluate the use of boiling histotripsy (BH) to selectively decellularize large volumes of tissue. BH lesions (10–20 mm diameter)
were produced in bovine liver with a clinical 1.2 MHz MR-HIFU system (Sonalleve, Philips, Finland), using thirty 10 ms pulses, and
pulse repetition frequencies of 1–10 Hz. Peak acoustic powers corresponding to an estimated in situ shock front amplitude of 65 MPa
were used. Macroscopic and histological evaluation revealed treatment conditions that produced decellularized lesions in which major fibrous structures such as stroma and vasculature remained intact while parenchymal cells were mostly lysed. With further tailoring of the
pulsing scheme parameters, this treatment modality could potentially be optimized for organ decellularization. [Work supported by NIH
EB007643, K01-EB-015745-01, T32-DK007779, and NSBRI NASA-NCC 9-58.]
2:10
4pBA3. Destruction of microorganisms by high-energy pulsed focused ultrasound. Timothy A. Bigelow (Elec. and Comput. Eng.,
Mech. Eng., Iowa State Univ., 2113 Coover Hall, Ames, IA 50011, bigelow@iastate.edu)
The use of high-energy ultrasound pulses to generate and excite clouds of microbubbles has shown great potential to mechanically
destroy soft tissue in a wide range of clinical applications. In our work, we have focused on extending the application of cavitation-based histotripsy to the destruction of microorganisms such as bacterial biofilms and microalgae. Bacterial biofilms pose a significant problem when treating infections on medical implants, while efficient fractionation of microalgae could lower the production cost of biofuels. In the past, we have shown a 4.4-log10 reduction of viable Escherichia coli bacteria capable of forming a colony in a biofilm following a high-energy pulsed focused ultrasound exposure. We have also shown complete removal of Pseudomonas aeruginosa biofilms from a pyrolytic graphite substrate based on fluorescence imaging following live/dead staining. We also showed minimal
temperature increase when the appropriate ultrasound pulse parameters were utilized. Recently, we have shown that high-energy pulsed
ultrasound at 1.1 MHz can fractionate the microalgae model system Chlamydomonas reinhardtii for lipid extraction/biofuel production
in both flow and stationary exposure systems with improved efficiency over traditional sonicators. In these studies, the fractionation of
the cells was quantified by protein and chlorophyll release following exposure.
Contributed Papers
2:30

4pBA4. Dependence of ablative ability of high-intensity focused ultrasound cavitation-based histotripsy on mechanical properties of agar. Jin Xu (Eng., John Brown Univ., Siloam Springs, AR), Timothy Bigelow (Elec. and Comput. Eng., Iowa State Univ., 2113 Coover Hall, Ames, IA 50011, bigelow@iastate.edu), Gabriel Davis, Alex Avendano, Pranav Shrotriya, Kevin Bergler (Mech. Eng., Iowa State Univ., Ames, IA), and Zhong Hu (Elec. and Comput. Eng., Iowa State Univ., Ames, IA)

Cavitation-based histotripsy uses high-intensity focused ultrasound (HIFU) at low duty factor to create bubble clouds inside tissue to liquefy a region, and it provides good fidelity to planned lesion coordinates together with the ability to perform real-time monitoring. The goal of this study was to identify the mechanical properties most important for predicting lesion dimensions, among three candidates: Young's modulus, bending strength, and fracture toughness. Lesions were generated inside tissue-mimicking agar, and correlations were examined between the mechanical properties and the lesion dimensions, quantified by lesion volume and by the width and length of the equivalent bubble cluster. Histotripsy was applied to agar samples with varied properties. A cuboid of 4.5 mm width (lateral to the focal plane) and 6 mm depth (along the beam axis) was scanned in a raster pattern with respective step sizes of 0.75 mm and 3 mm. The exposure at each treatment location was 15 s, 30 s, or 60 s long. Results showed that only Young's modulus influenced histotripsy's ablative ability and was significantly correlated with lesion volume and bubble cluster dimensions. The other two properties had negligible effects on lesion formation. Also, exposure time differentially affected the width and depth of the bubble cluster volume.

2:45

4pBA5. Shear waves induced by Lorentz force in soft tissues. Stefan Catheline, Pol Grasland-Mongrain, Ali Zorgani, Remi Souchon, Cyril Lafon, and Jean-Yves Chapelon (LabTAU, INSERM, Univ. of Lyon, 151 cours Albert Thomas, Lyon 69003, France, stefan.catheline@inserm.fr)

This study presents the observation of elastic shear waves generated in soft solids using a dynamic electromagnetic field. The first and second experiments of this study show that the Lorentz force can induce a displacement in a soft phantom and that this displacement is detectable by an ultrasound scanner using speckle-tracking algorithms. For a 100 mT magnetic field and a 10 ms, 100 mA peak-to-peak electrical burst, the displacement reached a magnitude of 1 μm. In the third experiment, we show that the Lorentz force can induce shear waves in a phantom. A physical model using electromagnetic and elasticity equations is proposed, and computer simulations are in good agreement with experimental results. The shear waves induced by the Lorentz force are used in the last experiment to estimate the elasticity of a swine liver sample.

3:00–3:15 Break

3:15

4pBA6. Acoustic field characterization of the Waterlase 2: Acoustic characterization and high-speed photomicrography of a clinical laser-generated shock wave therapy device for the treatment of periodontal biofilms in orthodontics and periodontics. Camilo Perez, Yak-Nam Wang (BioEng. and Ctr. for Industrial and Medical Ultrasound, CIMU, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105-6698, camipiri@uw.edu), Alina Sivriver, Dmitri Boutoussov, Vladimir Netchitailo (Biolase Inc., Irvine, CA), and Thomas J. Matula (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA)

Recent applications in endodontics and periodontics use erbium solid-state lasers with fiber delivery to effectively kill bacteria and biofilms. In this paper, the acoustic field together with the bubble dynamics of a clinical portable Er,Cr:YSGG laser-generating device (Waterlase 2) was characterized. Field mapping with a calibrated PVDF hydrophone together with high-speed imaging was performed in water for two different tip geometries (flat or tapered), three different tip diameters (200, 300, or 400 μm), and two different laser pulse durations (60 or 700 μs) at several laser pulse energy settings (5–400 mJ) for individual pulses and at different pulse repetition frequencies (5, 20, and 100 Hz). Peak positive pressures 5–50 mm away from the tip ranged from 0.1 to 2 MPa, while peak negative pressures ranged from 0.1 to 1.2 MPa. There was a strong correlation between the acoustic emissions generated by the bubble and the high-speed imaging dynamics of the bubble. The initial thermoelastic response, the initial bubble collapse, and further rebounds were analyzed individually and compared across the different test parameters. For the initial thermoelastic (laser-generated) pulse, rise times ranged from 40 to 200 ns. Differences between flat and tapered tips will be discussed.

3:30

4pBA7. Simulations of focused shear shock waves in soft solids and the brain. Bruno Giammarinaro, François Coulouvrat, and Gianmarco Pinton (Institut Jean le Rond d'Alembert UMR 7190, CNRS, Université Pierre et Marie Curie, case 162, 4 Pl. Jussieu, Paris cedex 05 75252, France, bruno.giam@hotmail.fr)

Because of their very low wave speed, shear waves in soft solids are extremely nonlinear, with nonlinearities four orders of magnitude larger than in classical solids. Consequently, these nonlinear shear waves can transition from a smooth to a shock profile in less than one wavelength. We hypothesize that traumatic brain injuries (TBI) could be caused by the sharp gradients resulting from shear shock waves. However, shear shock waves are not currently modeled by simulations of TBI. The objective of this paper is to describe shear shock wave propagation in soft solids within the brain, with source geometry determined by the skull. A 2D nonlinear paraxial equation with cubic nonlinearity is used as a starting point. We present a numerical scheme based on second-order operator splitting, which allows the application of optimized numerical methods for each term. We then validate the scheme with Guiraud's nonlinear self-similarity law applied to cusped caustics. Once validated, the numerical scheme is applied to a blast wave problem. A CT measurement of the human skull is used to determine the initial conditions, and shear shock wave simulations are presented to demonstrate the focusing effects of the skull geometry.
3:45
4pBA8. Tissue damage produced by cavitation: The role of viscoelasticity. Eric Johnsen (Mech. Eng., Univ. of Michigan, 1231 Beal Ave., Ann
Arbor, MI 48104, ejohnsen@umich.edu) and Matthew Warnez (Eng. Phys.,
Univ. of Michigan, Ann Arbor, MI)
Cavitation may cause damage at the cellular level in a variety of medical
applications, e.g., therapeutic and diagnostic ultrasound. While cavitation
damage to bodies in water has been studied for over a century, the dynamics of bubbles in soft tissue remain largely unexplored. One difficulty lies in the
viscoelasticity of tissue, which introduces additional physics and time
scales. We developed a numerical model to investigate acoustic cavitation
in soft tissue, which accounts for liquid compressibility, full thermal effects,
and viscoelasticity (including nonlinear relaxation and elasticity). The bubble dynamics are represented by a Keller-Miksis formulation and a spectral
collocation method is used to solve for the stresses in the surrounding medium. Our numerical studies of a gas bubble exposed to a relevant waveform
indicate that under inertial conditions high pressures and velocities are generated at collapse, though they are lower than those observed in water due to
the elasticity and viscosity of the medium. We further find that significant
deviatoric stresses and increased heating in tissue are attributable to viscoelasticity, due to material properties and different bubble responses compared
to water.
4:00
4pBA9. Comparison of Gilmore-Akulichev’s, Keller-Miksis’s and Rayleigh-Plesset’s equations on therapeutic ultrasound bubble cavitation.
Zhong Hu (Elec. and Comput. Eng., Mech. Eng., Iowa State Univ., 2201
Coover Hall, Ames, IA 50011, zhonghu@iastate.edu), Jin Xu (Eng., John
Brown Univ., Siloam Springs, AR), and Timothy A. Bigelow (Elec. and
Comput. Eng., Mech. Eng., Iowa State Univ., Ames, IA)
Many models have been utilized to simulate inertial cavitation for ultrasound therapies such as histotripsy. The models range from the very simple
Rayleigh-Plesset model to the complex Gilmore-Akulichev model. The
computational time increases with the complexity of the model, so it is important to know when the results from the simpler models are sufficient. In
this paper the simulation performance of the widely used Rayleigh-Plesset
model, Keller-Miksis model, and Gilmore-Akulichev model both with and
without gas diffusion is compared by calculating the bubble radius response and bubble wall velocity as a function of the ultrasonic pressure and frequency. The bubble oscillates similarly with the three models within the first collapse for small pressures (<3 MPa), but the Keller-Miksis model diverges at higher pressures. In contrast, the maximum expansion radius of the bubble is similar at all pressures with the Rayleigh-Plesset and Gilmore-Akulichev models, although the collapse velocity is unrealistically high with the Rayleigh-Plesset model. After multiple cycles, the Rayleigh-Plesset model starts to behave disparately in both the expansion and collapse stages. The inclusion of rectified gas diffusion lengthens the collapse time and increases the expansion radius. However, for frequencies below 1 MHz, the impact
of gas diffusion is not significant.
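As a point of reference for the simplest model compared above, a fixed-step integration of the incompressible Rayleigh-Plesset equation is sketched below. This is not the authors' code; the parameter values (a 1 μm bubble driven at 50 kPa and 1 MHz) are illustrative and deliberately gentle so that a fixed-step scheme stays stable — the violent collapses discussed in the abstract are exactly where such simple integration breaks down.

```python
import math

# Water-like medium and an illustrative 1-um air bubble (SI units)
rho, p0, sigma, mu = 998.0, 101325.0, 0.072, 1.0e-3
R0, kappa = 1.0e-6, 1.4          # rest radius, polytropic exponent
pa, f = 50e3, 1.0e6              # 50 kPa drive at 1 MHz (gentle, sub-inertial)

def accel(t, R, Rdot):
    """Bubble-wall acceleration from the incompressible Rayleigh-Plesset equation."""
    pgas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    dp = pgas - 2 * sigma / R - 4 * mu * Rdot / R - (p0 - pa * math.sin(2 * math.pi * f * t))
    return (dp / rho - 1.5 * Rdot ** 2) / R

def rk4_step(t, R, Rdot, dt):
    """One classical fourth-order Runge-Kutta step for the state (R, Rdot)."""
    k1r, k1v = Rdot, accel(t, R, Rdot)
    k2r, k2v = Rdot + 0.5*dt*k1v, accel(t + 0.5*dt, R + 0.5*dt*k1r, Rdot + 0.5*dt*k1v)
    k3r, k3v = Rdot + 0.5*dt*k2v, accel(t + 0.5*dt, R + 0.5*dt*k2r, Rdot + 0.5*dt*k2v)
    k4r, k4v = Rdot + dt*k3v, accel(t + dt, R + dt*k3r, Rdot + dt*k3v)
    return R + dt*(k1r + 2*k2r + 2*k3r + k4r)/6, Rdot + dt*(k1v + 2*k2v + 2*k3v + k4v)/6

t, R, Rdot, dt = 0.0, R0, 0.0, 1.0e-10   # 0.1 ns steps
Rmax = R0
for _ in range(20000):                    # two acoustic cycles
    R, Rdot = rk4_step(t, R, Rdot, dt)
    t += dt
    if R <= 0:                            # guard against numerical blow-up
        break
    Rmax = max(Rmax, R)
print(Rmax > R0)  # True: the bubble expands past its rest radius during rarefaction
```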
4:15
4pBA10. Removal of residual bubble nuclei to enhance histotripsy soft
tissue fractionation at high rate. Alexander P. Duryea, Charles A. Cain
(Biomedical Eng., Univ. of Michigan, 2131 Gerstacker Bldg., 2200 Bonisteel Blvd., Ann Arbor, MI 48109, duryalex@umich.edu), William W. Roberts (Urology, Univ. of Michigan, Ann Arbor, MI), and Timothy L. Hall
(Biomedical Eng., Univ. of Michigan, Ann Arbor, MI)
Previous work has shown that the efficacy of histotripsy soft tissue fractionation is dependent on pulse repetition frequency, with histotripsy delivered at low rates producing more efficient homogenization of the target
volume in comparison to histotripsy delivered at high rates. This is attributed to the cavitation memory effect: microscopic residual cavitation nuclei
that persist for hundreds of milliseconds following bubble cloud collapse
can seed the repetitive nucleation of cavitation at a discrete set of sites
within the target volume, producing heterogeneous lesion development. To
mitigate this effect, we have developed low amplitude (MI<1) acoustic
pulses to actively remove residual nuclei from the field. These bubble removal pulses utilize the Bjerknes forces to stimulate the aggregation and
subsequent coalescence of remnant nuclei, consolidating the population
from a very large number to a countably small number of remnant bubbles
within several milliseconds. The effect is attainable in soft tissue mimicking
phantoms following a very minimal degree of fractionation (within the first
ten histotripsy pulses). Incorporation of this bubble removal scheme in histotripsy tissue phantom treatments at high rate (100 pulses/second) resulted
in highly homogeneous lesions that closely approximated those achieved
using an equal number of pulses applied at low rate (1 pulse/second); lesions
generated at high rate without bubble removal had heterogeneous structure
with increased collateral damage.
4:30
4pBA11. Two-dimensional speckle tracking using zero phase crossing
with Riesz transform. Mohamed Khaled Almekkawy (Elec. Eng., Western
New England Univ., 2056 Knapp St., Saint Paul, MN 55108, alme0078@umn.edu), Yasaman Adibi, Fei Zheng (Elec. Eng., Univ. of Minnesota, Minneapolis, MN), Mohan Chirala (Samsung Res. America, Richardson, TX), and
Emad S. Ebbini (Elec. Eng., Univ. of Minnesota, Minneapolis, MN)
Ultrasound speckle tracking provides robust estimates of fine tissue displacements along the beam direction due to the analytic nature of echo data.
We introduce a new multi-dimensional ST method (MDST) with subsample
accuracy in all dimensions. The algorithm is based on the gradient of the magnitude and the zero-phase crossing of the 2D complex correlation of the generalized analytic signal. The generalization utilizes the Riesz transform, which is the vector extension of the Hilbert transform. Robustness of the tracking algorithm is investigated using realistic synthetic data sequences created with Field II, for which the benchmark displacement was known. In addition, the new MDST method is used to estimate the flow and surrounding tissue motion in a human carotid artery in vivo. The data were collected using a linear array probe of a Sonix RP ultrasound scanner at 325 fps. The vessel diameter was calculated from the displacement of the upper and lower vessel walls, and clearly shows a blood-pressure-wave-like pattern. The results obtained show that using the Riesz transform produces
more robust estimation of the true displacement of the simulated model
compared to previously published results. This could have significant impact
on strain calculations near vessel walls.
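The zero-phase-crossing idea above can be illustrated in one dimension (the authors' method is 2D and Riesz-transform based; this toy uses the ordinary Hilbert transform and an invented pure-tone signal): the phase of the complex correlation of analytic signals encodes a subsample delay.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the frequency-domain Hilbert transform (even length)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0
    h[len(x) // 2] = 1.0
    return np.fft.ifft(X * h)

fs, f0, true_delay = 100.0, 5.0, 0.30      # sample rate, tone frequency, shift (samples)
k = np.arange(500)                         # 500 samples = exactly 25 tone periods
pre = np.cos(2 * np.pi * f0 * k / fs)
post = np.cos(2 * np.pi * f0 * (k - true_delay) / fs)   # delayed copy

a, b = analytic(pre), analytic(post)
r = np.vdot(b, a)                          # complex correlation at zero lag
est = np.angle(r) / (2 * np.pi * f0 / fs)  # phase zero-lag crossing -> delay in samples
print(round(float(est), 2))  # 0.3
```

The phase estimate recovers the 0.3-sample shift without any upsampling, which is the appeal of phase-based over peak-interpolation speckle tracking.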
4:45
4pBA12. 1-MHz ultrasound stimulates in vitro production of cardiac
and cerebrovascular endothelial cell vasodilators. Azzdine Y. Ammi
(Knight Cardiovascular Inst., OHSU, 3181 SW Sam Jackson Park Rd., Portland, OR 97239, ammia@ohsu.edu), Catherine M. Davis (Dept. of Anesthesiology and Perioperative Medicine, OHSU, Portland, OR), Brian Mott (Knight Cardiovascular Inst., OHSU, Portland, OR), Nabil J. Alkayed (Dept. of Anesthesiology and Perioperative Medicine, OHSU, Portland, OR), and Sanjiv Kaul (Knight Cardiovascular Inst., OHSU, Portland,
OR)
Ultrasound exposure of the heart and brain during vessel occlusion
reduces infarct size. Our aim was to study the production of vasodilatory
compounds by endothelial cells after ultrasound stimulation. A 1.05-MHz
single element transducer was used to insonify primary mouse endothelial
cells (ECs) from heart and brain with a 50-cycle tone burst at a pulse repetition frequency of 50 Hz. Two time points were studied after ultrasound exposure: 15 and 45 minutes. In heart ECs, EETs levels increased significantly
with 0.5 MPa (139 ± 16%, p<0.05) and 0.3 MPa (137 ± 15%, p<0.05) at
15 and 45 min post stimulation, respectively. HETEs and DHETs did not
change significantly. There was a trend toward increased adenosine, with
maximum release at 0.5 MPa (332 ± 73% vs. 100% control, p<0.05). The
trend toward increased eNOS phosphorylation was greater at 15 than 45
min. In brain ECs, adenosine release was increased; however, increased eNOS phosphorylation was not significant. 11,12- and 14,15-EETs were
increased while 5- and 15-HETEs were decreased. Pulsed ultrasound at 1.05
MHz has the ability to increase adenosine, p-eNOS, and EET production by
cardiac and cerebrovascular ECs. Interestingly, in brain ECs, the vasoconstricting HETEs were decreased.
5:00
4pBA13. Ultrasound-induced fractionation of the intervertebral disk.
Delphine Elbes, Olga Boubriak, Shan Qiao, Michael Molinari (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Oxford, United Kingdom), Jocelyn Urban (Dept. of Physiol., Anatomy and Genetics, Univ. of
Oxford, Oxford, United Kingdom), Robin Cleveland, and Constantin Coussios (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Inst. of
Biomedical Eng., Old Rd. Campus Res. Bldg., Oxford, Oxfordshire, United
Kingdom, constantin.coussios@eng.ox.ac.uk)
Current surgical treatments for lower back pain, which is strongly associated with degeneration of the intervertebral disk, are highly invasive and
have low long-term success rates. The present work thus aims to develop a
novel, minimally invasive therapy for disk replacement without the need for
surgical incision. Using ex vivo bovine coccygeal spinal segments as an experimental model, two confocally aligned 0.5 MHz HIFU transducers were
positioned with their focus inside the disk and used to generate peak rarefactional pressures in the range of 1–12 MPa. Cavitation activity was monitored, characterized, and localized in real time using both a single-element
passive cavitation detector and a 2D Passive Acoustic Mapping array. The
inertial cavitation threshold in the central portion of the disk, the nucleus
pulposus (NP), was first determined both in the absence and in the presence
of externally injected cavitation nuclei. HIFU exposure parameters were
subsequently optimized to maximize sustained inertial cavitation over 10
min and achieve fractionation of the NP. Following sectioning of treated
disks, staining of live and dead cells as well as microscopy under polarized
light were used to assess the impact of the treatment on cell viability and
collagen structure within the NP, inner annulus and outer annulus.
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 9/10, 1:30 P.M. TO 4:00 P.M.
Session 4pEA
Engineering Acoustics: Acoustic Transduction: Theory and Practice II
Roger T. Richards, Chair
US Navy, 169 Payer Ln, Mystic, CT 06355
Contributed Papers
1:30

4pEA1. Vibration sensitivity measurements of silicon and acoustic-gradient microphones. Marc C. Reese (Harman Embedded Audio, Harman Int., 6602 E 75th St., Ste. 520, Indianapolis, IN 46250, marc.reese@harman.com)
Microphones are often required to record audio while in a vibration
environment. Therefore, it is important to maximize the acoustic-to-vibration sensitivity of such microphones. It has previously been shown that the
vibration sensitivity of a microphone is, to first order, proportional to the
mass per unit area of the diaphragm including the air loading effect.
Although the air loading is generally minimal for omnidirectional condenser
microphones with thick diaphragms, these measurements show that it cannot
be ignored for newer silicon-based micro-electro-mechanical-system
(MEMS) and acoustic-gradient microphones. Additionally, since microphone vibration sensitivities are typically not reported by microphone manufacturers, nor measured using standardized equipment, the setup of an
inexpensive vibration measurement apparatus and associated challenges are
discussed.
1:45
4pEA2. Non-reciprocal acoustic devices based on spatio-temporal angular-momentum modulation. Romain Fleury, Dimitrios Sounas, and Andrea
Alù (ECE Dept., The Univ. of Texas at Austin, 1 University Station C0803,
Austin, TX 78712, romain.fleury@utexas.edu)
Acoustic devices that break reciprocity, for instance acoustic isolators or
circulators, may find exciting applications in a variety of fields, including
imaging, acoustic communication systems, and noise control. Non-reciprocal acoustic propagation has typically been achieved using non-linear phenomena, which require high input power levels and introduce distorsions. In
contrast, we have recently demonstrated compact linear isolation for audible
airborne sound by means of angular momentum bias [Fleury et al., Science
343, 516 (2014)], exploiting modal splitting in a ring cavity polarized by an
internal, constantly circulating fluid, whose motion is imparted using low-noise CPU fans. We present here an improved design with no moving parts,
2281
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
which is directly scalable to ultrasonic frequencies and fully integrable.
Instead of imparting angular momentum in the form of a moving medium as
in our previous approach, we make use of spatio-temporal acoustic modulation of three coupled acoustic cavities, a strategy that can be readily implemented in integrated ultrasonic devices, for instance, using piezoelectric
effects. In this new paradigm, the required modulation frequency is orders
of magnitude lower than the signal frequency, and the modulation efficiency
is maximized. This constitutes a pivotal step towards practically realizing
compact, linear, noise-free, tunable non-reciprocal acoustic components for
full-duplex acoustic communications and isolation.
2:00
4pEA3. An analysis of multi-year acoustic and energy performance
data for bathroom and utility residential ventilation fans. Wongyu Choi,
Antonio Gomez, Michael B. Pate, and James F. Sweeney (Mech. Eng.,
Texas A&M Univ., 2401 Welsh Ave. Apt. 615, College Station, TX
77845, wongyuchoi@tamu.edu)
Loudness levels have been established as a new requirement in residential ventilation standards and codes including ASHRAE and IECC. Despite
the extensive application of various standards and codes, the control of loudness has not been a common target in past whole-house ventilation standards
and codes. In order to evaluate the appropriate loudness of ventilation fans,
especially in terms of leading standards and codes, a statistical analysis is
necessary. Therefore, this paper provides statistical data for bathroom and utility ventilation fans over a nine-year period from 2005 to 2013. Specifically, this paper presents an evaluation of changes in fan loudness over the nine-year test period and the relevance of loudness to leading standards including HVI and ASHRAE. The loudness levels of brushless DC-motor fans are also evaluated in comparison to the loudness of AC-motor fans. For AC- and DC-motor fans, relationships between loudness and efficacy were determined and then explained with regression models. Based on these observations, this paper introduces a new "loudness-to-energy ratio" coefficient, L/E, which is a
measure of the acoustic and energy performance of a fan. Relationships
between acoustic and energy performances are established by using L/E
coefficients with supporting statistics for bathroom and utility fans.
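The proposed L/E coefficient can be sketched as follows. The abstract does not give the exact definition, so this sketch assumes L/E is loudness (sones) divided by efficacy (cfm/W); the fan values are made up for illustration.

```python
# Sketch of a "loudness-to-energy ratio" coefficient, assuming
# L/E = loudness (sones) / efficacy (cfm per watt), so lower is better.
# The definition and the fan data here are illustrative assumptions.

def loudness_to_energy_ratio(loudness_sones, efficacy_cfm_per_watt):
    if efficacy_cfm_per_watt <= 0:
        raise ValueError("efficacy must be positive")
    return loudness_sones / efficacy_cfm_per_watt

# Hypothetical fans: a quiet, efficient DC-motor fan versus a louder,
# less efficient AC-motor fan.
le_dc = loudness_to_energy_ratio(0.3, 10.0)   # 0.03
le_ac = loudness_to_energy_ratio(2.0, 3.0)    # ~0.67
```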
2:15
4pEA4. Non-contact ultrasound stethoscope. Nathan Jeger, Mathias Fink,
and Ros Kiri Ing (Institut Langevin, ESPCI ParisTech, 1 rue Jussieu, Paris
75005, France, nathan.jeger@espci.fr)
Heartbeat and respiration are very important vital signs that indicate the health and psychological state of a person. Recent technologies allow both physical parameters to be detected on a human subject using different techniques, with and without contact. Noncontact systems often use electromagnetic waves, but approaches based on ultrasound waves, laser, or video processing have also been proposed. In this abstract, an alternative ultrasound system for non-contact and local measurement is presented. The system works in echographic mode, and the ultrasound signals are processed using two methods. The experimental setup uses an elliptic mirror to focus ultrasonic waves onto the skin surface. Backscattered waves are recorded by a microphone located close to the emitting transducer. Heartbeat and respiration signals are determined from the skin displacement caused by the chest-wall motion. For comparison purposes, the cross-correlation method, which uses a broadband signal, and the Doppler method, which uses a narrowband signal, are both applied to measure the skin displacement. The sensitivity and accuracy of the two methods are compared. Finally, since the measurement is local, the system can act as a noncontact stethoscope to listen to the internal sounds of the human body, even through the light clothing of the patient.
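The broadband cross-correlation method mentioned above can be sketched in a few lines: the skin displacement follows from the change in round-trip delay between successive echoes, d = c·Δτ/2. The sample rate, pulse parameters, and displacement value below are assumptions for illustration, not the authors' hardware settings.

```python
import numpy as np

# Sketch (assumed parameters): estimate skin displacement from the
# round-trip delay change between two backscattered echoes, using the
# cross-correlation (broadband) method.

FS = 1_000_000        # sample rate, Hz (assumed)
C_AIR = 343.0         # speed of sound in air, m/s

def pulse(t0, fs=FS, n=2048, f0=40_000.0):
    """Gaussian-windowed tone burst centered at t0 seconds."""
    t = np.arange(n) / fs
    return np.exp(-((t - t0) / 50e-6) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

def displacement_xcorr(echo_ref, echo_new, fs=FS, c=C_AIR):
    """Displacement from the lag of the cross-correlation peak."""
    xc = np.correlate(echo_new, echo_ref, mode="full")
    lag = int(np.argmax(xc)) - (len(echo_ref) - 1)
    return c * (lag / fs) / 2.0

true_disp = 0.5e-3                          # 0.5 mm chest-wall motion (illustrative)
e1 = pulse(1e-3)
e2 = pulse(1e-3 + 2 * true_disp / C_AIR)    # echo with extra round-trip delay
est = displacement_xcorr(e1, e2)
```

The estimate is quantized to one sample of delay; sub-sample interpolation of the correlation peak would refine it.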
2:30
4pEA5. High sensitivity imaging of resin-rich regions in graphite/epoxy
laminates using joint entropy. Michael Hughes (Int. Med./Cardiology,
Washington Univ. School of Medicine, School of Medicine Campus Box
8215, St. Louis, MO 63108, mshatctrain@gmail.com), John McCarthy
(Mathematics, Washington Univ., St. Louis, MO), Jon Marsh, and Samuel
Wickline (Int. Med./Cardiology, Washington Univ. School of Medicine,
Saint Louis, MO)
The continuing difficulty of detecting critical flaws in advanced materials requires novel approaches that enhance sensitivity to defects that might impact performance. This study compares different approaches for imaging a near-surface resin-rich defect in a thin graphite/epoxy plate using backscattered ultrasound. The specimen, having a resin-rich void immediately below the top surface ply, was scanned with a 1 in. diameter, 5 MHz center frequency, 4 in. focal length transducer. A computer-controlled apparatus comprising an x-y-z motion controller, a digitizer (LeCroy 9400A), and an ultrasonic pulser/receiver (Panametrics 5800) was used to acquire data on a 100 × 100 grid of points covering a 3 in. × 3 in. square. At each grid point, 256 512-word, 8-bit backscattered waveforms were digitized, signal averaged, and then stored on computer for off-line analysis. The same backscattered waveforms were used to produce peak-to-peak, signal energy, and entropy images. All of the entropy images exhibit better border delineation and defect contrast than either the peak-to-peak or signal energy images. The best results are obtained using the joint entropy of the backscattered waveforms with a reference function. Two different references are examined: a reflection from a stainless steel reflector, and an approximate optimum obtained from an iterative parametric search. The joint entropy images produced using the optimum reference exhibit ~3 times the contrast obtained in previous studies.
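A minimal sketch of the joint-entropy pixel value used above: estimate the joint density of waveform and reference samples with a 2-D histogram and compute H(f, g) = −Σ p·log₂p. The bin count and the signals here are illustrative assumptions, not the study's data.

```python
import numpy as np

# Sketch of a joint-entropy metric between a backscattered waveform f(t)
# and a reference g(t). Bin count and test signals are assumptions.

def joint_entropy(f, g, bins=64):
    hist, _, _ = np.histogram2d(f, g, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2048)
reference = np.sin(2 * np.pi * 5.0 * t)

# A waveform tightly locked to the reference has low joint entropy; adding
# independent fluctuations (e.g., extra scattering) raises it.
clean = 0.8 * reference
noisy = 0.8 * reference + rng.normal(0.0, 0.5, t.size)
h_clean = joint_entropy(clean, reference)
h_noisy = joint_entropy(noisy, reference)
```

The contrast between defect and background pixels then comes from this entropy difference rather than from raw amplitude.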
2:45
4pEA6. New compensation factors for the apparent propagation speed in transmission line matrix uniform grid meshes. Alexandre Brandao
(Graduate Program in Elec. and Telecommunications Eng., Universidade
Federal Fluminense, Rua Passo da Patria, 156, Sao Domingos, Niteroi, RJ
24210-240, Brazil, abrand@operamail.com), Edson Cataldo (Appl. Mathematics Dept., Universidade Federal Fluminense, Niteroi, RJ, Brazil), and
Fabiana R. Leta (Mech. Eng. Dept., Universidade Federal Fluminense,
Niteroi, RJ, Brazil)
Numerical models consisting of two-dimensional (2D) and three-dimensional (3D) uniform grid meshes for the Transmission Line Matrix Method (TLM) use sqrt(2) and sqrt(3), respectively, to compensate for the apparent
sound speed. In this work, new compensation factors are determined from a
priori simulations, performed without compensation, in 2D and 3D TLM
one-section cylindrical waveguide acoustic models. The mistuned resonance peaks obtained from these simulations are substituted into the corresponding
equations for the resonance frequencies in one-section cylindrical acoustical
waveguides to find the mesh apparent sound speed and, thus, the necessary
compensation. The TLM meshes are constructed over the voxels (Volumetric Picture Elements) of segmented MRI volumes, so that the extracted
mesh fits the segmented object. The TLM method provides a direct simulation approach rather than solving a PDE by variational methods that must rely on the plane-wave assumption to run properly. Results confirm the
improvement over the conventional compensation factors, particularly for
frequencies above 4 kHz, providing a concrete reduction of the topology-dependent numerical dispersion for both 2D and 3D TLM lattices. Since this
dispersion problem is common to all TLM applications using uniform grids,
investigators in other areas of wave propagation can also benefit from these
findings.
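The inverse step described above can be sketched as follows, assuming an open-open cylindrical waveguide with resonances f_n = n·c/(2L): recover the mesh's apparent sound speed from a mistuned resonance peak, then form the compensation factor. The tube length and peak value are illustrative, not the paper's data.

```python
import math

# Sketch: recover the apparent sound speed of an uncompensated TLM mesh
# from a mistuned resonance peak, assuming f_n = n * c / (2 L) for an
# open-open one-section waveguide. Numbers are illustrative assumptions.

C_TARGET = 343.0   # physical sound speed, m/s
L_TUBE = 0.175     # waveguide length, m

def apparent_speed(f_measured_hz, mode_n, length_m=L_TUBE):
    return 2.0 * length_m * f_measured_hz / mode_n

def compensation_factor(f_measured_hz, mode_n):
    return C_TARGET / apparent_speed(f_measured_hz, mode_n)

# An uncompensated 2-D TLM mesh propagates at c / sqrt(2), so the first
# resonance appears at (c / sqrt(2)) / (2 L) instead of c / (2 L):
f1_mistuned = (C_TARGET / math.sqrt(2.0)) / (2.0 * L_TUBE)
factor = compensation_factor(f1_mistuned, 1)   # recovers sqrt(2)
```

On a real voxel mesh the recovered factor deviates from sqrt(2) or sqrt(3), which is precisely the correction the paper proposes to exploit.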
3:00–3:15 Break
3:15
4pEA7. A low-cost alternative power supply for integrated electronic
piezoelectric transducers. Ricardo Brum, Sergio L. Aguirre, Stephan Paul, and Fernando Corrêa (Centro de Tecnologia, Universidade Federal de Santa
Maria, Rua Erly de Almeida Lima, 650, Santa Maria, RS 97105-120, Brazil,
ricardozbrum@yahoo.com.br)
Commercial hardware compatible with IEPE precision sensors is normally expensive and often coupled to proprietary, equally expensive software packages. Commercially available sound cards are a low-cost option for A/D conversion, but are incompatible with IEPE sensors. Commercial solutions exist to supply the 4 mA constant current required by IEPE transducers, and labs have also created such solutions, e.g., ITA at RWTH Aachen University. Unfortunately, commercially available circuits are still too expensive for large-scale classroom use in Brazil, and circuits created elsewhere contain parts subject to US export restrictions or require machines for circuit fabrication. Thus, based on a previous project, a new low-cost prototype was mounted on a phenolic board. The circuit was tested with an IEPE microphone connected to a commercial sound card and the ITA-Toolbox software, and compared to a commercial hardware/software package. The results were very similar in the frequency range between 20 Hz and 10 kHz. The difference below 20 Hz probably occurs due to the different high-pass filters in the A/D cards. The differences in the high-frequency range are very likely due to differences in the electrical background noise. The results suggest the device works well and is a good alternative for making measurements with IEPE sensors.
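The validation comparison described above (agreement between 20 Hz and 10 kHz) can be sketched as a band-limited level difference between two measured magnitude spectra. The spectra below are synthetic stand-ins, not the authors' measurements.

```python
import numpy as np

# Sketch: largest level difference, in dB, between two magnitude spectra
# inside the band where they are expected to agree (20 Hz - 10 kHz).
# All spectra below are synthetic, illustrative data.

def max_band_difference_db(freqs_hz, mag_a, mag_b, f_lo=20.0, f_hi=10_000.0):
    band = (freqs_hz >= f_lo) & (freqs_hz <= f_hi)
    diff_db = 20.0 * np.log10(np.asarray(mag_a)[band] / np.asarray(mag_b)[band])
    return float(np.max(np.abs(diff_db)))

freqs = np.logspace(1, 4.3, 200)   # 10 Hz .. ~20 kHz
commercial = np.ones_like(freqs)
# Hypothetical prototype: mild high-pass roll-off near 15 Hz plus a tiny
# high-frequency noise-floor contribution.
prototype = (1.0 / np.sqrt(1.0 + (15.0 / freqs) ** 2)
             * (1.0 + 0.01 * (freqs / 20_000.0)))

worst = max_band_difference_db(freqs, prototype, commercial)
```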
3:30
4pEA8. Determination of the characteristic impedance and the complex wavenumber of an absorptive material used in a dissipative silencer. Key F. Lima, Nilson Barbieri (Mech. Eng., PUCPR, Imaculada Conceição, 1155, Curitiba, Parana 80215901, Brazil, keyflima@gmail.com), and Renato Barbieri (Mech. Eng., UDESC, Joinville, Brazil)
Silencers are acoustic filters whose purpose is to reduce unwanted noise emitted by engines or equipment to acceptable levels. Vehicular silencers have large volumes and dissipative properties. Dissipative silencers contain absorptive material, typically fibrous and with good acoustic dissipation. However, few works describe the acoustic behavior of silencers with absorptive materials. The difficulty in evaluating this type of silencer lies in determining the acoustic properties of the absorptive material: the characteristic impedance and the complex wavenumber. This work shows an inverse methodology for determining the acoustic properties of the absorptive material used in silencers. First, the silencer's acoustic efficiency is found in terms of the experimental sound transmission loss. Second, the absorptive material properties are determined by parameter adjustment through a direct search optimization algorithm. In this step, the adjustment is done by applying the Finite Element Method to compute the silencer's efficiency. The final step is to verify the difference between the experimental and computational results. This work uses the acoustic efficiency of a silencer already published in the literature. The results show good agreement.
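The inverse adjustment described above can be sketched with a direct grid search, using a toy transmission-loss model as a stand-in for the paper's FEM forward model. The model form, parameter names (zc, alpha), and grids are illustrative assumptions.

```python
import numpy as np

# Sketch: fit two material parameters by direct search so that a forward
# model's transmission loss matches an "experimental" TL curve. The toy
# forward model below is NOT the paper's FEM; it only stands in for it.

freqs = np.linspace(100.0, 3000.0, 30)

def toy_tl(zc, alpha, f=freqs):
    # Illustrative model: TL grows with impedance mismatch and attenuation.
    return 10.0 * np.log10(1.0 + (zc - 1.0) ** 2 + (alpha * f / 1000.0) ** 2)

# "Experimental" target generated from known parameters plus small noise.
rng = np.random.default_rng(2)
target = toy_tl(2.5, 1.2) + rng.normal(0.0, 0.05, freqs.size)

def direct_search(zc_grid, alpha_grid):
    best, best_err = None, np.inf
    for zc in zc_grid:
        for alpha in alpha_grid:
            err = float(np.mean((toy_tl(zc, alpha) - target) ** 2))
            if err < best_err:
                best, best_err = (zc, alpha), err
    return best

zc_hat, alpha_hat = direct_search(np.linspace(1.0, 4.0, 61),
                                  np.linspace(0.1, 3.0, 59))
```

In the paper's setting, each objective evaluation would be an FEM run rather than the closed-form toy model, which is why derivative-free direct search is the natural choice.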
3:45
4pEA9. Flat, lightweight, transparent acoustic transducers based on dielectric elastomer and gel. Kun Jia (The State Key Lab. for Strength and Vib. of Mech. Structures, Xian Jiaotong Univ., South 307 Rm., 1st Teaching Bldg., West of the Xianning Rd. No. 28, Xian, Shannxi 710049, China, kunjia@mail.xjtu.edu.cn)
The advances in flat-panel displays and Super Hi-Vision with a 22.2 multichannel sound system promise an entirely new viewing and listening environment for the audience; however, flat and lightweight acoustic transducers are required to fulfill this prospect. In this paper, a flat, lightweight acoustic transducer with a rather simple structure is proposed. A polyacrylic elastomer membrane (VHB4905, 3M Corporation) with 4 mm diameter and 0.5 mm thickness is biaxially prestretched and fixed on a polyurethane ring as the vibrator; ionic gel is then painted on the center region of the membrane as electrodes; finally, conducting wires, also made of ionic gel, are attached to the edge of the electrodes for applying the AC voltage with a DC bias. The ultrahigh transmittance of the VHB4905, the gel, and the polyurethane makes the transducer totally transparent, which is of great interest in advanced media technology. The dynamic properties of the membrane are studied experimentally along with its acoustic performance. It has been found that the behavior of the dielectric elastomer membrane is quite complicated: both in-plane and out-of-plane vibration modes exist. The transducer shows better performance below 10 kHz because of the low elastic modulus of the membrane.
THURSDAY AFTERNOON, 30 OCTOBER 2014
SANTA FE, 1:00 P.M. TO 4:10 P.M.
Session 4pMU
Musical Acoustics: Assessing the Quality of Musical Instruments
Andrew C. H. Morrison, Chair
Joliet Junior College, 1215 Houbolt Rd., Natural Science Department, Joliet, IL 60431
Invited Papers
1:00
4pMU1. Bamboo musical instruments: Some physical and mechanical properties related to quality. James P. Cottingham (Phys.,
Coe College, 1220 First Ave., Cedar Rapids, IA 52402, jcotting@coe.edu)
Bamboo is one of the most widely used materials in musical instruments, including string instruments and percussion as well as
wind instruments. Bamboo pipe walls are complex, composed of a layered structure of fibers. The pipe walls exhibit non-uniformity in
radial structure and density, and there is a significant difference between the elastic moduli parallel to and perpendicular to the bamboo
fibers. This paper presents a summary of results from the literature on bamboo as a material for musical instruments. In addition, results
are presented from recent measurements of the physical and mechanical properties of materials used in some typical instruments. In particular, a case study will be presented comparing measurements made on reeds and pipes from two Southeast Asian khaen. Of the two
khaen discussed, one is a high quality khaen made by craftsmen in northeastern Thailand, while the other is an inexpensive instrument
purchased at an import shop. For this pair of instruments, analysis and comparison have been made of the material properties of the bamboo pipes and the composition and mechanical properties of the metal alloy reeds.
1:20
4pMU2. Descriptive maps to illustrate the quality of a clarinet. Whitney L. Coyle (The Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, wlc5061@psu.edu), Philippe Guillemain, Jean-Baptiste Doc, Alexis Guilloteau, and Christophe Vergez (Laboratoire de Mécanique et d'Acoustique, Marseille, France)
Generally, subjective opinions and decisions are made when judging the quality of musical instruments. In an attempt to become more objective, this research presents methods to numerically and experimentally create maps, over a range of control parameters, that describe instrument behavior for a variety of different sound features or "quality markers" (playing regime, intonation, loudness, etc.). The behavior of instruments is highly dependent on the control parameters that are adjusted by the musician. Observing this behavior as a function of a single control parameter (e.g., blowing pressure) can hide the diversity of the overall behavior: the same value of a quality marker can be obtained for a multitude of control parameter combinations. Using multidimensional maps, where quality markers are a function of two or more control parameters, can solve this problem. Numerically, in two dimensions, a regular discretization of a subspace of control parameters can be implemented while conserving a reasonable calculation time. However, in higher dimensions (if, for example, aside from the blowing pressure and the lip force, we vary the reed parameters), it is necessary to use auto-adaptive sampling methods. Experimentally, the use of an artificial mouth allows us to maintain controlled conditions while creating these maps. We can also use an instrumented mouthpiece: this allows us to measure these control parameters simultaneously and instantaneously and to create the maps "on the fly."
1:40
4pMU3. Recent works on the (psycho-)acoustics of wind instruments. Adrien Mamou-Mani (IRCAM, 1 Pl. Stravinsky, Paris 75004,
France, adrien.mamou-mani@ircam.fr)
Two experiments aiming at linking acoustical properties and perception of wind instruments will be presented. The first is a comparison between five oboes of the same model type. An original methodology is proposed, based on discrimination tests under playing conditions and defect detection using acoustical measurements. The second experiment was done on a simplified bass clarinet with an embedded active control system. A comparison of perceptual attributes, such as sound color and playability, for different acoustical configurations (frequency and damping of resonances) can thus be tested using a single system. A specific methodology and first results will be presented.
2:00
4pMU4. The importance of structural vibrations in brass instruments. Thomas R. Moore (Dept. of Phys., Rollins College, 1000
Holt Ave., Winter Park, FL 32789, tmoore@rollins.edu) and Wilfried Kausel (Inst. of Musical Acoust., Univ. of Music and Performing
Arts, Vienna, Austria)
It is often thought that the input impedance uniquely determines the quality of a brass wind instrument. However, it is known that structural vibrations can also affect the playability and perceived sound produced by these instruments. The processes by which structural vibrations affect the quality of brass instruments are not completely understood, but it is likely that vibrations of the metal couple to the lips as well as introduce small changes in the input impedance. We discuss the mechanisms by which structural vibrations can affect the quality of a brass instrument and suggest methods of incorporating these effects into an objective assessment of instrument quality.
2:20–2:40 Break
Contributed Papers
2:40
4pMU5. Investigating the colloquial description of sound by musicians and non-musicians. Jack Dostal (Phys., Wake Forest Univ., P.O. Box 7507, Winston-Salem, NC 27109, dostalja@wfu.edu)
What is meant by the words used in a subjective judgment of sound? Interpreting these words accurately allows musical descriptions of sound to be related to scientific descriptions of sound. But do musicians, scientists, instrument makers, and others mean the same things by the same words? When these groups converse about qualities of sound, they often use an expansive lexicon of terms (bright, brassy, dark, pointed, muddy, etc.). It may be inaccurate to assume that the same terms and phrases have the same meaning for these different groups of people, or even remain self-consistent for a single individual. To investigate the use of words and phrases in this lexicon, subjects with varying musical and scientific backgrounds were surveyed. The subjects were asked to listen to different pieces of recorded music and to use their own colloquial language to describe the musical qualities and differences they perceived in these pieces. In this talk, I describe some qualitative results of this survey and identify some of the more problematic terms used by these various groups to describe sound quality.
2:55
4pMU6. Chaotic behavior of the piccolo? Nicholas Giordano (Phys., Auburn Univ., College of Sci. and Mathematics, Auburn, AL 36849, njg0003@auburn.edu)
A direct numerical solution of the Navier-Stokes equations has been used to calculate the sound produced by a model of the piccolo. At low to moderate blowing speeds and at appropriate blowing angles, the sound pressure is approximately periodic with the expected frequency. As the blowing speed is increased or as the blowing angle is varied, the time dependence of the sound pressure becomes more complicated, and examination of the spectrum and the sensitivity of the sound pressure to initial conditions suggest that the behavior becomes chaotic. Similarities with the behavior found in Taylor-Couette and Rayleigh-Bénard instabilities of fluids are noted, and possible implications for the nature of the piccolo tone are discussed.
3:10
4pMU7. Modeling the low-frequency response of an acoustic guitar. Micah R. Shepherd (Appl. Res. Lab, Penn State Univ., PO Box 30, mailstop 3220B, State College, PA 16801, mrs30@psu.edu)
The low-frequency response of an acoustic guitar is strongly influenced by the combined behavior of the air cavity and the top plate. The sound hole-air cavity resonance (often referred to as the Helmholtz resonance) interacts with the first elastic mode of the top plate, creating a coupled oscillator with two resonance frequencies that are shifted away from the frequencies of the two original, uncoupled oscillators. This effect was modeled using finite elements for the top plate and boundary elements for the air cavity, with rigid sides and back and no strings. The natural frequencies of the individual and combined oscillators were then predicted and compared to measurements. The model predicts the mode shapes, natural frequencies, and damping well, thus validating the modeling procedure. The effect of changing the cavity volume was then simulated to predict the behavior for a deeper air cavity.
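The two-oscillator frequency repulsion described in 4pMU7 can be sketched with the standard coupled-resonator characteristic equation (w² − w1²)(w² − w2²) = k⁴. The uncoupled frequencies and coupling strength below are illustrative assumptions, not the paper's predicted or measured values.

```python
import math

# Sketch: coupled resonance frequencies of an air-cavity (Helmholtz-like)
# mode and a top-plate mode. With uncoupled angular frequencies w1, w2 and
# coupling term k, the coupled frequencies solve
# (w^2 - w1^2)(w^2 - w2^2) = k^4. Values below are illustrative.

def coupled_frequencies_hz(f1, f2, coupling_hz=40.0):
    w1s, w2s = (2 * math.pi * f1) ** 2, (2 * math.pi * f2) ** 2
    k4 = (2 * math.pi * coupling_hz) ** 4
    s, d = w1s + w2s, w1s - w2s
    root = math.sqrt(d * d + 4.0 * k4)
    lo = math.sqrt((s - root) / 2.0) / (2 * math.pi)
    hi = math.sqrt((s + root) / 2.0) / (2 * math.pi)
    return lo, hi

f_air, f_plate = 100.0, 180.0   # assumed uncoupled guitar-like values, Hz
f_lo, f_hi = coupled_frequencies_hz(f_air, f_plate)
```

The lower coupled resonance always falls below the uncoupled air-cavity frequency and the upper one above the plate frequency, which is the shift the abstract describes.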
3:25
4pMU8. Experiment to evaluate musical qualities of violin strings. Maxime Baelde (Acoust. & Environ. HydroAcoust. Lab, Université Libre de Bruxelles, 109 rue Barthelemy Delespaul, Lille 59000, France, maxime.baelde@centraliens-lille.org), Jessica De Saedeleer (Jessica De Saedeleer Luthier, Brussels, Belgium), and Jean-Pierre Hermand (Acoust. & Environ. HydroAcoust. Lab, Université Libre de Bruxelles, Brussels, Belgium)
Most violin strings on the market are made of different materials and sizes. They have different musical qualities: full, mellow, warm, and round, for example. Nevertheless, this description is subjective and tied to the string manufacturers. The aim of this study is to provide an experiment that gives an evaluation of the musical qualities of strings. The study is based on "musical descriptors," which give information about a musical sound, and on psychoacoustics, in order to match the musician's point of view. "Musical descriptors" are also used for music classification. We use two sets of top-end string models from two different brands. These strings are mounted on two similar violins, and the strings are excited on their normal modes with a harpsichord-like damper mechanism and other means. The sound radiated is
recorded with a microphone, and the vibration of the string is recorded with a coil-magnet device, so as to capture both intrinsic and extrinsic string properties. Some musicians tried these strings and expressed what they thought about them. These acoustical and psychoacoustical analyses will give the luthiers information about which string property allows which adjustment, in order to provide better advice beyond the string manufacturers' descriptions.
3:40
4pMU9. Vibration study of the Indian folk instrument sambal. Ratnaprabha F. Surve (Phys., Nowrosjee Wadia College, 15 Tulip, Neco Gardens, Viman Nagar, Pune 21 411001, India, rfsurve@hotmail.com), Keith Desa (Phys., Nowrosjee Wadia College, 27, Maharashtra, India), and Dilip S. Joaj (Phys., Univ. of Pune, Pune, Maharashtra, India)
The percussion instrument family, in its folk category, has many instruments such as the Dholki, Dimdi, Duff, Halagi, and Sambal. The Sambal is a folk membranophone made of wood, played mainly in western India. It is a traditional drum used in some religious functions, played by people who are believed to be servants of the goddess Mahalaxmi Devi. The instrument is made up of two approximately cylindrical wooden drums united along a common edge, with skin membranes stretched over their mouths. It is played using two wooden sticks, one of which has a curved end. The right-hand drum's pitch is higher than the left's; its membrane is excited by striking repeatedly to generate sound of a constant pitch. This paper presents a vibrational analysis of the Sambal. A study has been carried out to examine its vibrational properties, such as the modes of vibration. The study is done by spectrum analysis (Fast Fourier Transform) using a simple digital storage oscilloscope. The tonal quality of the wood used for the cylinders and of the membrane is compared.
3:55
4pMU10. Experimental investigation of crash cymbal acoustic quality. Devyn P. Curley (Mech. Eng., Tufts Univ., 200 College Ave., Medford, MA 02155), Zachary A. Hanan (Elec. Eng., Univ. of Colorado, Boulder, CO), Dan Luo (Mech. Eng., Tufts Univ., Medford, MA), Christopher W. Penny (Phys., Tufts Univ., Medford, MA), Christopher F. Rodriguez (Elec. and Comput. Eng., Tufts Univ., Medford, MA), Paul D. Lehrman (Music, Tufts Univ., Medford, MA), Chris B. Rogers, and Robert D. White (Mech. Eng., Tufts Univ., Medford, MA, r.white@tufts.edu)
A methodology to quantitatively evaluate the quality of the transmitted acoustic signature of cymbals is under development. High-speed video recordings of a percussionist striking both a Zildjian 14 in. A-custom crash cymbal and a Zildjian Gen 16 low-volume 16 in. crash cymbal were used to determine biometrically accurate crash and ride striking motions. A two-degree-of-freedom robotic arm has been developed to mimic human striking motion. The robotic arm includes a high-torque elbow joint driven in closed-loop trajectory tracking and an impedance-controlled wrist joint to approximate the variable stiffness of the stick grip. A quantitative comparison of robotic and human strikes will be made using high-speed video. Repeatable strikes will be carried out using the robotic system in an anechoic chamber for different grades of Zildjian cymbals, including low-volume Gen 16 cymbals. Acoustic features of the measured sound output will be compared to seek quantitative metrics for evaluating cymbal sound quality that compare favorably with the results of the qualitative human assessments currently in use by the industry. Preliminary results indicate noticeable differences in cymbal acoustic output, including variations in modal density, decay time, and beating phenomena.
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 3/4, 1:15 P.M. TO 4:20 P.M.
Session 4pNS
Noise: Virtual Acoustic Simulation
Stephen A. Rizzi, Cochair
NASA Langley Research Center, 2 N Dryden St, MS 463, Hampton, VA 23681
Patricia Davies, Cochair
Ray W. Herrick Labs., School of Mechanical Engineering, Purdue University, 177 South Russell Street,
West Lafayette, IN 47907-2099
Chair’s Introduction—1:15
Invited Papers
1:20
4pNS1. Recent advances in aircraft source noise synthesis. Stephen A. Rizzi (AeroAcoust. Branch, NASA Langley Res. Ctr., 2 N
Dryden St., MS 463, Hampton, VA 23681, stephen.a.rizzi@nasa.gov), Daniel L. Palumbo (Structural Acoust. Branch, NASA Langley
Res. Ctr., Hampton, VA), Jonathan R. Hardwick (Dept. of Mech. Eng., Virginia Tech, Blacksburg, VA), and Andrew Christian (National
Inst. of Aerosp., Hampton, VA)
For several decades, research and development has been conducted at the NASA Langley Research Center directed at understanding
human response to aircraft flyover noise. More recently, a technology development effort has focused on the simulation of aircraft flyover noise associated with future, large commercial transports. Because recordings of future aircraft are not available, the approach
taken utilizes source noise predictions of engine and airframe components which serve as a basis for source noise syntheses. Human subject response studies have been conducted aimed at determining the fidelity of synthesized source noise, and the annoyance and
detectability once the noise is propagated (via simulation) to the ground. Driven by various factors, human response to less common noise sources is gaining interest. Some sources have been around for a long time (rotorcraft), some have come and gone and are back again (open rotors), and some are entirely new (distributed electric-driven propeller systems). Each has unique challenges associated with source noise synthesis. Discussed in this work are some of those challenges, including source noise characterization from wind tunnel data, flight data, or prediction; factors affecting perceptual fidelity, including tonal/broadband separation and amplitude and frequency modulation; and a potentially expansive range of operating conditions.
1:40
4pNS2. An open architecture for auralization of dynamic soundscapes. Aric R. Aumann (Analytical Services & Mater., Inc., 107
Res. Dr., Hampton, VA 23666-1340, aric.r.aumann@nasa.gov), William L. Chapin (AuSIM, Inc., Mountain View, CA), and Stephen A.
Rizzi (AeroAcoust. Branch, NASA Langley Res. Ctr., Hampton, VA)
An open architecture for auralization has been developed by NASA to support research aimed at understanding human response to
sound within a complex and dynamic soundscape. The NASA Auralization Framework (NAF) supersedes an earlier auralization tool set
developed for aircraft flyover noise auralization and serves as a basis for a future auralization plug-in for the NASA Aircraft Noise Prediction Program (ANOPP2). It is structured as a set of building blocks in the form of dynamic link libraries, so that other soundscapes,
e.g., those involving ground transportation, wind turbines, etc., and other use cases, e.g., inverse problems, may easily be accommodated. The NAF allows users to access auralization capabilities in several ways. The NAF's built-in functionality may be exercised utilizing either basic (e.g., console executable) or advanced (e.g., MATLAB, LabVIEW, etc.) host environments. The NAF's capabilities can also be
extended by augmenting or replacing major activities through programming its open architecture. In this regard, it is envisioned that
third parties will develop plug-in capabilities to augment those included in the NAF.
2:00
4pNS3. Simulated sound in advanced acoustic model videos. Kenneth Plotkin (Wyle, 200 12th St. South, Ste. 900, Arlington, VA
22202, kenneth.plotkin@wyle.com)
The Advanced Acoustic Model (AAM) and other time-step aircraft noise simulation models developed by Wyle can generate video
animations of the noise environment. The animations are valuable for understanding details of noise footprints and for community outreach. Using algorithms developed by NASA, audio simulation for jet aircraft noise has recently been added to the video capability.
Input data for the simulations consist of AAM’s one-third octave band sound level time history output, plus flight path geometry and
ground properties. Working at an audio sample rate of 44.1 kHz and a sample “hop” period of 0.0116 s, a random phase narrow band
sample is shaped to match spectral amplitudes. Ground reflection and low frequency oscillation are added to the hops, which are merged
into a WAV file. The WAV file is then mixed with an existing animation generated from the same AAM run. The process takes place in
near-real time, based on a location that a user selects from a site map. The presentation includes demonstrations of the results for a simple level flyover and for the departure of a high performance jet aircraft from an airbase.
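The hop-based synthesis step described in 4pNS3 can be sketched as follows: at each hop a random-phase spectrum is shaped so that assumed band levels are met, inverse-transformed, and overlap-added. The band edges and levels below are illustrative, not AAM output, and the windowing details are assumptions.

```python
import numpy as np

# Sketch: shape random-phase spectra to prescribed band levels at a
# 44.1 kHz sample rate with ~0.0116 s hops, then overlap-add the hops.
# Band edges, levels, and the Hann window choice are assumptions.

FS = 44_100
HOP = int(round(FS * 0.0116))   # ~512 samples per hop

def shaped_hop(band_edges_hz, band_levels_db, n=2 * HOP, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    freqs = np.fft.rfftfreq(n, 1.0 / FS)
    amp = np.zeros_like(freqs)
    for (lo, hi), level in zip(band_edges_hz, band_levels_db):
        sel = (freqs >= lo) & (freqs < hi)
        if sel.any():
            amp[sel] = 10.0 ** (level / 20.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    return np.fft.irfft(amp * np.exp(1j * phase), n)

def overlap_add(hops, hop_len=HOP):
    out = np.zeros(hop_len * (len(hops) + 1))
    win = np.hanning(2 * hop_len)
    for i, h in enumerate(hops):
        out[i * hop_len : i * hop_len + 2 * hop_len] += win * h
    return out

rng = np.random.default_rng(1)
bands = [(355.0, 447.0), (447.0, 562.0)]   # two approximate 1/3-octave bands
hops = [shaped_hop(bands, [0.0, -6.0], rng=rng) for _ in range(10)]
signal = overlap_add(hops)
```

In the full pipeline the band levels would change hop to hop, following the one-third octave band time history, before ground reflection and low-frequency oscillation are applied.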
2:20
4pNS4. Combining local source propagation modeling results with a global acoustic ray tracer. Michael Williams, Darrel Younk,
and Steve Mattson (Great Lakes Sound and Vib., 47140 N. Main St., Houghton, MI 49931, mikew@glsv.com)
A common method of sound auralization in large virtual environments is acoustic ray tracing. The purpose of an acoustic ray tracer is to supply accurate source-to-listener impulse response functions for a virtual scene. Currently, each source is modeled in the ray tracer as an omnidirectional point source. This limits the fidelity of the results and is not accurate for complicated noise sources with multiple audible parts. The proposed method is to simulate local source propagation out to a sphere using various energy modeling techniques. These results may be used to increase the fidelity of a ray trace by giving the source directionality and by allowing the source audio to be mixed from recordings of the source's components, which is especially relevant when the full source has not yet been constructed. This makes the method applicable in engineering, architecture, and other fields that need high-fidelity auralization of future products.
2:40
4pNS5. Modelling sound propagation in the presence of atmospheric turbulence for the auralization of aircraft noise. Frederik Rietdijk, Kurt Heutschi (Acoust. / Noise Control, Empa, Überlandstrasse 129, Dübendorf, Zurich 8600, Switzerland, frederik.rietdijk@empa.ch), and Jens Forssén (Appl. Acoust., Chalmers Univ. of Technol., Gothenburg, Sweden)
A new tool for the auralization of aircraft noise in an urban environment is in development. When listening to aircraft noise, sound level fluctuations caused by atmospheric turbulence are clearly audible; to create a realistic auralization of aircraft noise, atmospheric turbulence therefore needs to be included. Due to spatial inhomogeneities of the wind velocity and temperature in the atmosphere, acoustic scattering occurs, affecting the transfer function between source and receiver. Both these inhomogeneities and the aircraft position are time-dependent, and therefore the transfer function varies with time, resulting in the audible fluctuations. Even assuming a stationary (frozen) atmosphere, the movement of the aircraft alone gives rise to fluctuations. A simplified model describing the influence of turbulence on a moving elevated source is developed, which can then be used to simulate the influence of atmospheric turbulence in the auralization of aircraft noise.
3:00–3:20 Break
2286
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2286
3:20
4pNS6. Simulation of excess ground attenuation for aircraft flyover noise synthesis. Brian C. Tuttle (Analytical Mech. Assoc., Inc.,
1318 Wyndham Dr., Hampton, VA 23666, btuttle1@gmail.com) and Stephen A. Rizzi (AeroAcoust. Branch, NASA Langley Res. Ctr.,
Hampton, VA)
Subjective evaluations of noise from proposed aircraft and flight operations can be performed using simulated flyover noise. Such
simulations typically involve three components: generation of source noise, propagation of that noise to a receiver on or near the ground,
and reproduction of that sound in a subjective test environment. Previous work by the authors focused mainly on development of high-fidelity source noise synthesis techniques and sound reproduction methods while assuming a straight-line propagation path with standard
atmospheric absorption and simple (plane-wave) ground reflection models. For aircraft community noise applications, this is usually sufficient because the aircraft are nearly overhead. However, when simulating noise sources at low elevation angles, the plane-wave
assumption is no longer valid and must be replaced by a model that takes into account the reflection of spherical waves from a ground
surface of finite impedance. Recent additions to the NASA Community Noise Test Environment (CNoTE) software suite have improved
real-time simulation capabilities of ground-plane reflections for low incidence angles. The models are presented along with the resulting
frequency response of the filters representing excess ground attenuation. Discussion includes an assessment of the performance and limitations of the filters in a real-time simulation.
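The plane-wave ground reflection model that the abstract says breaks down at low elevation angles can be illustrated with the standard locally reacting ground formula; this is a generic textbook sketch, not the CNoTE implementation, and the default air impedance value is an assumption:

```python
import numpy as np

def plane_wave_refl(z_ground, grazing_angle_rad, z_air=413.0):
    """Plane-wave reflection coefficient of a locally reacting ground,
    R_p = (Z sin(psi) - rho0*c0) / (Z sin(psi) + rho0*c0),
    with psi the grazing angle and rho0*c0 the characteristic impedance
    of air (~413 rayl at 20 C). At low grazing angles this must be
    replaced by a spherical-wave correction, as the abstract notes."""
    s = np.sin(grazing_angle_rad)
    return (z_ground * s - z_air) / (z_ground * s + z_air)
```

For a rigid ground (very large impedance) the coefficient approaches +1; for finite complex impedance its magnitude stays at or below unity.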
3:40
4pNS7. Evaluation of the perceptual fidelity of a novel rotorcraft noise synthesis technique. Jonathan R. Hardwick (Dept. of Mech.
Eng., Virginia Polytechnic Inst. and State Univ., Blacksburg, VA), Andrew Christian (National Inst. of Aerosp., 100 Exploration Way,
Hampton, VA 23666, andrew.christian@nasa.gov), and Stephen A. Rizzi (AeroAcoust. Branch, NASA Langley Res. Ctr., Hampton,
VA)
A human subject experiment was recently conducted at the NASA Langley Research Center to evaluate the perceptual fidelity of
synthesized rotorcraft source noise. The synthesis method combines the time record of a single blade passage (i.e., of a main or tail rotor)
with amplitude and frequency modulations observed in recorded rotorcraft noise. Here, the single blade passage record can be determined from a time-averaged recording or from a modern aeroacoustic analysis. Since there is no predictive model available, the amplitude and frequency modulations were derived empirically from measured flyover noise. Thus, one research question was directed at
determining the fidelity of four synthesis implementations (unmodulated and modulated main rotor only, and unmodulated and modulated main and tail rotor) under thickness and loading noise dominated conditions, using modulation data specific to those conditions. A
second research question was aimed at understanding the sensitivity of fidelity to the choice of modulation method. In particular, can
generic modulation data be used in lieu of data specific to the condition of interest, and how do modifications of generic and specific
modulation data affect fidelity? The latter is of importance for applying the source noise synthesis to the simulation of complete flyover
events.
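The core of the synthesis described above (looping a single blade-passage record at the blade-passage frequency and imposing measured modulations) can be sketched as follows. This is an illustrative reconstruction, not the authors' method: only amplitude modulation is shown, the frequency-modulation step is omitted, and the resampling scheme is an assumption:

```python
import numpy as np

FS = 44100  # assumed audio sample rate

def synth_rotor(blade_passage, bpf_hz, duration_s, am_env=None):
    """Tile one blade-passage pressure record at the blade-passage
    frequency (BPF) and apply a slowly varying amplitude-modulation
    envelope, e.g., one derived empirically from flyover recordings."""
    period = int(round(FS / bpf_hz))                    # samples per passage
    # Resample the single-passage record to exactly one BPF period.
    one = np.interp(np.linspace(0.0, 1.0, period, endpoint=False),
                    np.linspace(0.0, 1.0, len(blade_passage)), blade_passage)
    n = int(duration_s * FS)
    x = np.tile(one, int(np.ceil(n / period)))[:n]      # periodic rotor signal
    if am_env is not None:                              # optional AM envelope
        x = x * np.interp(np.arange(n) / n,
                          np.linspace(0.0, 1.0, len(am_env)), am_env)
    return x
```

Main- and tail-rotor signals built this way would then be summed to form the combined source noise.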
4:00
4pNS8. A comparison of subjects’ annoyance ratings of rotorcraft noise in two different testing environments. Andrew McMullen
and Patricia Davies (Purdue Univ., 177 S Russel Dr, West Lafayette, IN 47906, almvz5@mail.missouri.edu)
Two subjective tests were conducted to investigate people’s responses to rotorcraft noise. In one test subjects heard the sounds in a
room designed to simulate aircraft flyovers. The frequency range of the Exterior Effects Room (EER) at NASA Langley is 17 Hz to
18,750 Hz. In the other test, subjects heard the sounds over earphones and the frequency range of the playback was 25 Hz–16 kHz.
Some of the sounds in this earphone test, high-pass filtered at 25 Hz, were also played in the EER. Forty subjects participated in each of
the tests. Subjects’ annoyance responses in each test were highly correlated with EPNL, ASEL, and Loudness exceeded 20% of the time
(correlation coefficient close to 0.9). However, at some metric values there was a large variation in response levels, which could be
linked to characteristics of harmonic families present in the sound. While the results for both tests are similar, subjects in the EER generally found the sounds less annoying than the subjects who heard the sounds over earphones. Certain groups of signals were rated similarly in one test environment, but differently in the other. This may be due to playback method, subject population, or other factors.
THURSDAY AFTERNOON, 30 OCTOBER 2014
INDIANA C/D, 1:30 P.M. TO 4:45 P.M.
Session 4pPA
Physical Acoustics: Topics in Physical Acoustics II
Josh R. Gladden, Cochair
Physics & NCPA, University of Mississippi, 108 Lewis Hall, University, MS 38677
William Slaton, Cochair
Physics & Astronomy, The University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72034
Contributed Papers
1:30
4pPA1. Aeroacoustic response of coaxial Helmholtz resonators in a low-speed wind tunnel. William Slaton (Phys. & Astronomy, The Univ. of Central Arkansas, 201 Donaghey Ave., Conway, AR 72034, wvslaton@uca.edu)
The aeroacoustic response of coaxial Helmholtz resonators with different neck geometries in a low-speed wind tunnel has been investigated. Experimental test results of this system reveal strong aeroacoustic response
over a Strouhal number range of 0.25–0.1 for both increasing and decreasing
the flow rate in the wind tunnel. Ninety-degree bends in the resonator necks do not significantly change the aeroacoustic response of the system. Aeroacoustic response in the low-amplitude range has been successfully modeled
by describing-function analysis. This analysis, coupled with a turbulent flow
velocity distribution model, gives reasonable values for the location in the
flow of the undulating stream velocity that drives vortex shedding at the resonator mouth. Having an estimate for the stream velocity that drives the
flow-excited resonance is crucial when employing the describing-function
analysis to predict aeroacoustic response of resonators.
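The Strouhal number range quoted above directly bounds the flow speeds over which flow-excited resonance is expected, since St = f d / U. A minimal sketch (the 0.25–0.1 range is from the abstract; using the neck dimension as the length scale, and the example numbers, are assumptions):

```python
def lockin_velocity_range(f_res_hz, neck_dim_m, st_high=0.25, st_low=0.10):
    """Flow velocities bounding strong aeroacoustic response, from
    St = f * d / U rearranged to U = f * d / St. Onset occurs at the
    higher Strouhal number (lower velocity), offset at the lower one."""
    u_onset = f_res_hz * neck_dim_m / st_high
    u_offset = f_res_hz * neck_dim_m / st_low
    return u_onset, u_offset
```

For a hypothetical 200-Hz resonance and a 2-cm neck, this predicts response between 16 and 40 m/s.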
1:45
4pPA2. Separation of acoustic waves in isentropic flow perturbations.
Christian Henke (ATLAS ELEKTRONIK, Sebaldsbruecker Heerstrasse
235, Bremen 28309, Germany, christian.henke@atlas-elektronik.com)
The present contribution investigates the mechanisms of sound generation and propagation in the case of highly-unsteady flows. It is based on the
linearisation of the isentropic Navier-Stokes equation around a new pathline-averaged base flow. As a consequence of this unsteady and non-radiating base flow, the perturbation equations satisfy a conservation law. It is
demonstrated that these flow perturbations can be split into acoustic and vorticity modes, with the acoustic modes being independent of the vorticity
modes. Moreover, we conclude that the present acoustic perturbation is
propagated by the convective wave equation and fulfills Lighthill’s acoustic
analogy. Therefore, we can define the deviations from the convective wave
equation as the “true” sound sources. In contrast to other authors, no
assumptions on a slowly varying or irrotational flow are necessary.
2:00
4pPA3. The sliding mode controller on Rijke-type combustion systems with mean temperature gradients. Dan Zhao and Xinyan Li (Mech. and Aerosp. Eng., Nanyang Technolog. Univ., 50 Nanyang Ave., Singapore 639798, Singapore, xli037@e.ntu.edu.sg)
Thermoacoustic instabilities are typically generated due to the dynamic
coupling between unsteady heat release and acoustic pressure waves. To
eliminate thermoacoustic instability, the coupling must be somehow interrupted. In this work, we designed and implemented a sliding mode controller to mitigate self-sustained thermoacoustic oscillations in a Rijke-type
combustion system. An acoustically-compact heat source is confined and
modeled by using a modified King’s Law. The mean temperature gradient is
considered by expanding the acoustic waves via Galerkin series. Coupling
the unsteady heat release with the acoustic model enables the flow disturbances to be calculated, thus providing a platform on which to evaluate the
performance of the controller. As the controller is actuated, the limit cycle
oscillations are quickly dampened and the thermoacoustic system with multiple eigenmodes is stabilized. The successful demonstration indicates that
the sliding mode controller can be applied to stabilize unstable thermoacoustic systems.
2:15
4pPA4. Feedback control of thermoacoustic oscillation transient growth of a premixed laminar flame. Dan Zhao and Xinyan Li (Aerosp. Eng. Div., Nanyang Technolog. Univ., 50 Nanyang Ave., Singapore, Singapore, XLI037@e.ntu.edu.sg)
Transient growth of combustion-excited oscillations could trigger thermoacoustic instability in a combustion system with nonorthogonal eigenmodes. In this work, feedback control of the transient growth of combustion-excited oscillations in a simplified thermoacoustic system with Dirichlet boundary conditions is considered. For this, a thermoacoustic model of a premixed laminar flame with an actuator is developed. It is formulated in state-space form by expanding the acoustic disturbances via Galerkin series, linearizing the flame model, and recasting it into the classical time-lag form for controller implementation. When a linear-quadratic-regulator (LQR) controller is implemented, the system becomes asymptotically stable; however, it is still associated with transient growth of thermoacoustic oscillations, which may potentially trigger combustion instability. To eliminate this transient growth, a strictly dissipative controller is then implemented, and the performances of the two controllers are compared. It is found that
the strictly dissipative controller achieves both exponential decay of the oscillations and unity maximum transient growth.
2:30
4pPA5. Nonlinear self-sustained thermoacoustic instability in a combustor with three bifurcating branches. Dan Zhao and Shihuai Li (Aerosp.
Eng. Div., Nanyang Technolog. Univ., 50 Nanyang Ave., Singapore, Singapore, LISH0025@e.ntu.edu.sg)
In this work, experimental investigations of a bifurcating thermoacoustic system are conducted first. The system has a mother tube splitting into three bifurcating branches. It is surprisingly found that the flow oscillations in the bifurcating branches, resulting from unsteady combustion in the bottom stem, are at different temperatures. Flow visualization reveals that one branch is associated with “cold” pulsating flow, while the other two branches are “hot.” Such unique flow characteristics cannot be predicted by simply assuming that the bifurcating combustor consists of three curved Rijke tubes. 3D numerical investigations are then conducted. Three parameters are identified and studied one by one: (1) the heat source location, (2) the heat flux, and (3) the flow direction in the bifurcating branches. As each of the parameters is
varied, the heat-driven acoustic signature is found to change. The main nonlinearity is identified in the heat fluxes. Comparing the numerical and experimental results reveals good agreement in terms of mode frequencies, mode shapes, sound pressure level, and supercritical Hopf bifurcation behavior.
2:45
4pPA6. Application of Mach-Zehnder interferometer to measure irregular reflection of a spherically divergent N-wave from a plane surface in air. Maria M. Karzova (LMFA UMR CNRS 5509, Ecole Centrale de Lyon, Université Lyon I; Leninskie Gory 1/2, Phys. Faculty, Dept. of Acoust., Moscow 119991, Russian Federation, masha@acs366.phys.msu.ru), Petr V. Yuldashev (Phys. Faculty, M.V. Lomonosov Moscow State Univ., Moscow, Russian Federation), Sébastien Ollivier (LMFA UMR CNRS 5509, Ecole Centrale de Lyon, Université Lyon I, Lyon, France), Vera A. Khokhlova (Phys. Faculty, M.V. Lomonosov Moscow State Univ., Moscow, Russian Federation), and Philippe Blanc-Benon (LMFA UMR CNRS 5509, Ecole Centrale de Lyon, Université Lyon I, Lyon, France)
The Mach stem is a well-known structure typically observed when strong (acoustic Mach numbers greater than 0.4) step-shock waves reflect from a rigid boundary. However, this phenomenon has been much less studied for weak shocks in nonlinear acoustic fields, where Mach numbers are in the range 0.001–0.01 and pressure waveforms have a more complicated temporal structure than step shocks. In this work, results are reported for Mach stem formation observed in experiments on the reflection of N-waves from a plane surface. Spherically divergent N-waves were generated by a spark source in air and were measured using a Mach-Zehnder interferometer. Pressure waveforms were reconstructed by applying the inverse Abel transform to the phase of the interferometer measurement signal; a temporal resolution of 0.4 µs was achieved. Both regular and irregular types of reflection were observed. It was shown that the length of the Mach stem increased linearly as the N-wave propagated along the surface. In addition, preliminary results on the influence of surface roughness on Mach stem formation will be presented. [Work supported by the President of Russia grant MK-5895.2013.2, a student stipend from the French Government, and LabEx CeLyA ANR-10-LABX-60/ANR-11-IDEX-0007.]
3:00–3:15 Break
3:15
4pPA7. Statistical inversion approach to estimating water content in an aquifer from seismic data. Timo Lähivaara (Appl. Phys., Univ. of Eastern Finland, P.O. Box 1627, Kuopio 70211, Finland, timo.lahivaara@uef.fi), Nicholas F. Dudley Ward (Otago Computational Modelling Group Ltd., Dunedin, New Zealand), Tomi Huttunen (Kuava Ltd., Kuopio, Finland), and Jari P. Kaipio (Mathematics, Univ. of Auckland, Auckland, New Zealand)
This study focuses on developing computational tools to estimate water content in an aquifer from seismic measurements. The poroelastic signature of an aquifer is simulated, and methods that use this signature to estimate the water table level and aquifer thickness are investigated. In this work, the spectral-element method is used to solve the forward model that characterizes the propagation of seismic waves. The inverse problem is formulated in the Bayesian framework, so that all uncertainties are explicitly modelled as probability distributions and the solution is given as summary statistics over the posterior distribution of the parameters given the data. For the inverse problem, we use the Bayesian approximation error method, which reduces the overall computational demand. In this study, results for the two-dimensional case with simulated data are presented.
3:30
4pPA8. Surfactant-free emulsification in microfluidics using strongly oscillating bubbles. Siew-Wan Ohl, Tandiono Tandiono, Evert Klaseboer (Inst. of High Performance Computing, 1 Fusionopolis Way, #16-16 Connexis North, Singapore 138632, Singapore, ohlsw@ihpc.a-star.edu.sg), Dave Ow, Andre Choo (Bioprocessing Technol. Inst., Singapore, Singapore), Fenfang Li, and Claus-Dieter Ohl (Div. of Phys. and Appl. Phys., School of Physical and Mathematical Sci., Nanyang Technol. Univ., Singapore, Singapore)
In this study, two immiscible liquids in a microfluidic channel have been successfully emulsified by acoustic cavitation bubbles. The bubbles are generated by attached piezo transducers driven at the resonant frequency of the system (about 100 kHz) [1,2]. The bubbles oscillate and induce strong mixing in the microchamber. They induce rupture of the thin liquid layer along the bubble surface through high shear stress and fast liquid jetting at the interface, and they cause large droplets to fragment into small droplets. Both water-in-oil and oil-in-water emulsions with viscosity ratios up to 1000 have been produced by this method without the application of surfactant. The system is highly efficient, as submicron monodisperse emulsions (especially water-in-oil emulsions) can be created within milliseconds. It is found that with longer ultrasound exposure the size of the droplets in the emulsions decreases and the uniformity of the emulsion increases. References: [1] Tandiono, S.-W. Ohl et al., “Creation of cavitation activity in a microfluidics device through acoustically driven capillary waves,” Lab Chip 10, 1848–1855 (2010). [2] Tandiono, S.-W. Ohl et al., “Sonochemistry and sonoluminescence in microfluidics,” Proc. Natl. Acad. Sci. U.S.A. 108(15), 5996–5998 (2011).
3:45
4pPA9. Ultrasonic scattering from poroelastic materials using a mixed displacement-pressure formulation. Max Denis (Mayo Clinic, 200 First St. SW, Rochester, MN 55905, denis.max@mayo.edu), Chrisna Nguon, Kavitha Chandra, and Charles Thompson (Univ. of Massachusetts Lowell, Lowell, MA)
In this work, a numerical technique suitable for evaluating the ultrasonic scattering from a three-dimensional poroelastic material is presented. Following Biot’s derivation of the macroscopic governing equations for a fluid-saturated poroelastic material, the two predicted propagating wave equations are formulated in terms of displacement and pressure. Assuming that porosity variations on a microscopic scale have a cumulative effect in generating a scattered field, the scattering attenuation coefficient of a Biot medium can be determined. The scattered fields of the wave equations are numerically evaluated as Neumann series solutions of the Kirchhoff-Helmholtz integral equation. A Padé approximant technique is employed to extrapolate beyond the Neumann series’ radius of convergence (the weak-scattering regime). In the case of bovine trabecular bone, the relationship between the scattering attenuation coefficient and the structural and mechanical properties of the trabecular bone is of particular interest. The results demonstrate the validity of the assumed linear frequency dependence of the attenuation coefficient in the low-frequency range. Further comparisons between measured observations and the numerical results will be discussed.
4:00
4pPA10. High temperature resonant ultrasound spectroscopy study on Lead Magnesium Niobate–Lead Titanate (PMN-PT) relaxor ferroelectric material. Sumudu P. Tennakoon and Joseph R. Gladden (Phys. and Astronomy, Univ. of MS, 1 Coliseum Dr., Phys. & NCPA, Univ. of MS, University, MS 38677, sptennak@go.olemiss.edu)
Lead magnesium niobate-lead titanate [(1-x)Pb(Mg1/3Nb2/3)O3-xPbTiO3] is a perovskite relaxor ferroelectric material exhibiting superior electromechanical coupling compared to conventional piezoelectric materials. In this work, non-poled single-crystal PMN-PT with a composition near the morphotropic phase boundary (MPB) was investigated in the temperature range 400 K–800 K, where the material is reported to be in the cubic phase. The high temperature resonant ultrasound spectroscopy (HT-RUS) technique was used to probe the temperature dependence of the elastic constants derived from the measured resonant modes. Non-monotonic resonant frequency trends in this temperature regime indicate stiffening of the material, followed by the gradual softening typically observed in heated materials. The elastic constants confirmed this stiffening in the range 400 K–673 K, where the stiffness constants C11 and C44 increased by approximately 40% and 33%, respectively. Acoustic attenuation, derived from the quality factor (Q), exhibits a minimum around the temperature where the stiffness is maximum, and significantly higher attenuation is observed at temperatures below 400 K. The range 395 K–405 K was identified as a transition temperature range, in which the resonant spectrum changes abruptly and the material emerges from the MPB state characterized by this very high acoustic attenuation. This transition temperature compares favorably with dielectric constant measurements reported in the literature.
4:15
4pPA11. Structure of cavitation zones in a heavy magma under explosive character of its decompression. Valeriy Kedrinskiy (Physical HydroDynam., Lavrentyev Inst. of HydroDynam., Russian Acad. of Sci., Lavrentyev prospect 15, Novosibirsk 630090, Russian Federation, kedr@hydro.nsc.ru)
This paper investigates the dynamics of the state and structure of a compressed magma flow saturated with gas and microcrystallites, a flow characterized by phase transitions, diffusive processes, an increase of the magma viscosity by orders of magnitude, and the development of bubbly cavitation behind the decompression wave front formed as a result of volcanic channel depressurization. A multi-phase mathematical model is considered, which includes the well-known conservation laws for mean pressure, mass velocity, and density, as well as a system of kinetic equations describing the physical processes that occur in a compressed magma during its explosive decompression. The results of numerical analysis show that, as the density of cavitation nuclei increases by a few orders of magnitude, the saturation of the magma with nuclei leads to the formation of a separate zone with anomalously high values of the flow parameters. This anomalous zone turns out to be located in the vicinity of the free surface of the cavitating magma column. The mechanism of its formation is determined by the redistribution of diffusion flows as the nuclei density increases, as well as by a change in the distribution of the main flow parameters within the zone from a gradual to an abrupt increase of their values at the lower zone boundary. Since the mass velocity jumps by an order of magnitude relative to the main flow, it can be concluded that disintegration of the flow at the lower boundary of the zone is quite probable. [Work supported by the RAS Presidium Program, Project 2.6.]
4:30
4pPA12. Cavity collapse in a bubbly liquid. Ekaterina S. Bolshakova (Phys., Novosibirsk State Univ., Novosibirsk, Russian Federation) and Valeriy Kedrinskiy (Physical HydroDynam., Lavrentyev Inst. of HydroDynam., Russian Acad. of Sci., Lavrentyev prospect 15, Novosibirsk 630090, Russian Federation, kedr@hydro.nsc.ru)
The effect of the ambient liquid state on the dynamics of a spherical cavity under atmospheric hydrostatic pressure p and an extremely low initial internal gas pressure p(0) was investigated. An equilibrium bubbly (cavitating) medium, whose sound velocity C is a function of the gas-phase concentration k, was taken as the model of the ambient liquid. The cavity dynamics are analyzed within the framework of the Herring equation for the following ranges of the main parameters: k = 0–5% and h(0) = 0.02–10^(-6) atm. Numerical analysis has shown that reducing C by two orders of magnitude influences neither the asymptotic value of the collapsed cavity radius nor the acoustic losses during its collapse; that is, the integral acoustic losses remain invariable overall. However, the collapse dynamics and the radiation structure change essentially: from numerous pulsations with decreasing amplitudes to a single collapse, and from a wave packet to a single wave, correspondingly. It turns out that the acoustic corrections in the Herring equation have practically no influence on the cavity dynamics if the term of the equation with dH/dt is absent. Naturally, reducing C exerts an essential influence on the dynamics of an empty cavity. The curves of dR/(C dt) as a function of R/Ro for different values of C lie above the data of the classical models of Herring, Gilmore, and Hunter; thus the value M = 1 is reached at R/Ro = 0.023 for k = 0 and at R/Ro = 0.23 for k = 5%. [Work supported by RFBR grant 12-01-00314.]
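The kind of cavity-collapse integration discussed above can be illustrated with a much simpler stand-in model: the incompressible Rayleigh-Plesset equation for a gas-filled cavity at 0.02 atm internal pressure. This is NOT the compressible Herring equation the authors use (no sound-speed or dH/dt corrections), and the liquid properties, adiabatic exponent, and initial radius are assumptions:

```python
import numpy as np

RHO = 1000.0            # liquid density, kg/m^3 (assumption: water-like)
P_INF = 101325.0        # hydrostatic pressure p, Pa (1 atm)
P0 = 0.02 * 101325.0    # initial internal gas pressure p(0), Pa (0.02 atm)
GAMMA = 1.4             # adiabatic exponent (assumption)
R0 = 1e-3               # initial cavity radius, m (assumption)

def rhs(y):
    """Rayleigh-Plesset: R*R'' + 1.5*R'^2 = (p_gas - p_inf) / rho."""
    R, Rdot = y
    p_gas = P0 * (R0 / R) ** (3 * GAMMA)   # adiabatic gas compression
    return np.array([Rdot, ((p_gas - P_INF) / RHO - 1.5 * Rdot ** 2) / R])

def collapse(dt=1e-8, t_end=5e-5):
    """Integrate the early (smooth) collapse phase with classical RK4;
    t_end is kept below the Rayleigh collapse time (~9e-5 s here)."""
    y = np.array([R0, 0.0])
    radii = [R0]
    for _ in range(int(t_end / dt)):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        radii.append(y[0])
    return np.array(radii)
```

A compressible model such as Herring's would add terms in R'/C to this right-hand side, which is precisely what the abstract varies through the gas-phase concentration k.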
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 1/2, 1:30 P.M. TO 5:00 P.M.
Session 4pPP
Psychological and Physiological Acoustics: Physiological and Psychological Aspects of Central Auditory
Processing Dysfunction II
Frederick J. Gallun, Cochair
National Center for Rehabilitative Auditory Research, Portland VA Medical Center, 3710 SW US Veterans Hospital Rd.,
Portland, OR 97239
Adrian KC Lee, Cochair
University of Washington, Box 357988, University of Washington, Seattle, WA 98195
Invited Papers
1:30
4pPP1. Aging as a window into central auditory dysfunction: Combining behavioral and electrophysiological approaches. David
A. Eddins, Erol J. Ozmeral, and Ann C. Eddins (Commun. Sci. & Disord., Univ. of South Florida, 4202 E. Fowler Ave., PCD 1017,
Tampa, FL 33620, deddins@usf.edu)
Central auditory processing involves a complex set of feed-forward and feedback processes governed by a cascade of dynamic
neuro-chemical mechanisms. Central auditory dysfunction can arise from disruption of one or more of these processes. With hearing
loss, dysfunction may begin with reduced and/or altered input to the central auditory system followed by peripherally induced central
plasticity. Similar central changes may occur with advancing age and neurological disorders even in the absence of hearing loss. Understanding the behavioral and physiological consequences of this plasticity on the processing of basic acoustic features is critical for effective clinical management. Major central auditory processing deficits include reduced temporal processing, impaired binaural hearing,
and altered coding of spectro-temporal features. These basic deficits are thought to be primary contributing factors to the common complaint of difficulty understanding speech in noisy environments in persons with hearing loss, brain injury, and advanced age. The results
of investigations of temporal, spectral, and spectro-temporal processing, binaural hearing, and loudness perception will be presented
with a focus on central auditory deficits that occur with advancing age and hearing loss. Such deficits can be tied to reduced peripheral
input, altered central coding, and complex changes in cortical representations.
2:00
4pPP2. Age-related declines in hemispheric asymmetry as revealed in the binaural interaction component. Ann C. Eddins, Erol J.
Ozmeral, and David A. Eddins (Commun. Sci. & Disord., Univ. of South Florida, 4202 E. Fowler Ave., PCD 1017, Tampa, FL 33620,
aeddins@usf.edu)
The binaural interaction component (BIC) is a physiological index of binaural processing. The BIC is defined as the brain activity
resulting from binaural (diotic or dichotic) stimulus presentation minus the brain activity summed across successive monaural stimulus
presentations. Smaller binaural-induced activity relative to summed monaural activity is thought to reflect neural inhibition in the central
auditory pathway. Since aging is commonly associated with reduced inhibitory processes, we evaluate the hypothesis that the BIC is
reduced with increasing age. Furthermore, older listeners typically have reduced hemispheric asymmetry relative to younger listeners,
interpreted in terms of compensation or recruitment of neural resources and considered an indication of age-related neural plasticity.
Binaural stimuli designed to elicit a lateralized percept generate maximum neural activity in the hemisphere opposite the lateralized
position. In this investigation, we evaluated the hypothesis that the BIC resulting from stimuli lateralized to one side (due to interaural
time differences) results in less hemispheric asymmetry in older than younger listeners with normal hearing. Behavioral data were
obtained to assess the acuity of binaural processing. Data support the interpretation that aging is marked by reduced central auditory inhibition, reduced temporal processing, and broader distribution of activity across hemispheres compared to young adults.
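The BIC definition given above is a simple waveform arithmetic, which can be sketched directly; the evoked-response arrays, sampling rate, and latency window in this example are hypothetical, not from the study:

```python
import numpy as np

def binaural_interaction_component(binaural, left, right):
    """BIC(t) = response to binaural stimulation minus the sum of the two
    successive monaural responses, per the definition in the abstract.
    Inputs are time-locked evoked-response waveforms."""
    return np.asarray(binaural, float) - (np.asarray(left, float) +
                                          np.asarray(right, float))

def bic_amplitude(bic, fs_hz, t0_s, t1_s):
    """Most negative BIC value within a latency window: binaural activity
    smaller than the summed monaural activity gives a negative deflection,
    read as central inhibition. Window bounds are an assumption."""
    i0, i1 = int(t0_s * fs_hz), int(t1_s * fs_hz)
    return float(np.min(bic[i0:i1]))
```

Computing this separately over left- and right-hemisphere electrode groups would then quantify the hemispheric asymmetry the abstract examines.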
2:30
4pPP3. Effects of blast exposure on central auditory processing. Melissa Papesh, Frederick Gallun, Robert Folmer, Michele Hutter,
M. Samantha Lewis, Heather Belding (National Ctr. for Rehabilitative Auditory Res., Portland VA Medical Ctr., 6303 SW 60th Ave.,
Portland, OR 97221, Melissa.Papesh@va.gov), and Marjorie Leek (Res., Loma Linda VA Medical Ctr., Loma Linda, CA)
Exposure to high-intensity blasts is the most common cause of injury in recent U.S. military conflicts. Prior work indicates that
blast-exposed Veterans report significantly more hearing handicap than non-blast-exposed Veterans, often in spite of clinically normal
hearing thresholds. Our research explores the auditory effects of blast exposure using a combination of self-report, behavioral, and electrophysiological measures of auditory processing. Results of these studies clearly indicate that blast-exposed individuals are significantly
more likely to perform poorly on tests requiring the use of binaural information and tests of pitch sequencing and temporal acuity
compared to non-blast-exposed control subjects. Behavioral measures are corroborated by numerous objective electrophysiological
measures, and are not necessarily attributable to peripheral hearing loss or impaired cognition. Thus, evidence indicates that blast exposure can lead to acquired deficits in central auditory processing (CAP) which may persist for at least 10 years following blast exposure.
Future studies of these deficits in this and other adult populations are needed to address important issues such as individual susceptibility,
anatomical, and physiological changes in auditory pathways which contribute to symptoms of these types of deficits, and development
of effective evidence-based methods of rehabilitation in adult patients. [Work supported by the VA Rehabilitation Research & Development Service and the VA Office of Academic Affiliations.]
3:00–3:30 Break
3:30
4pPP4. Auditory processing demands and working memory span. Margaret K. Pichora-Fuller (Dept. of Psych., Univ. of Toronto,
3359 Mississauga Rd., Mississauga, ON L5L 1C6, Canada, k.pichora.fuller@utoronto.ca) and Sherri L. Smith (Audiologic Rehabilitation Lab., Veterans Affairs, Mountain Home, TN)
The (in)dependence of auditory and cognitive processing abilities is a controversial topic for hearing researchers and clinicians.
Some advocate for the need to isolate auditory and cognitive factors. In contrast, we argue for the need to understand how they interact.
Working memory span (WMS) is a cognitive measure that has been related to language comprehension in general and also to speech
understanding in noise. In healthy adults with normal hearing, there is typically a strong correlation between reading and listening measures of WMS. Some investigators have opted to use visually presented stimuli when testing people who do not have normal hearing in
order to avoid the influence of modality-specific auditory processing deficits on WMS. However, tests conducted using auditory stimuli
are necessary to evaluate how cognitive processing is affected by the auditory processing demands experienced by different individuals
over a range of conditions in which the tasks to be performed, the availability of supportive context, and the acoustical and linguistic
characteristics of targets and maskers are varied. Attempts to measure auditory processing independent of cognitive processing will fall
short in assessing listening function in realistic conditions.
4:00
4pPP5. Auditory perceptual learning as a gateway to rehabilitation. Beverly A. Wright (Commun. Sci. and Disord., Northwestern
Univ., 2240 Campus Dr., Evanston, IL 60202, b-wright@northwestern.edu)
A crucial aspect of the central nervous system is that it can be modified through experience. Such changes are thought to occur in
two learning phases: acquisition—the actual period of training—and consolidation—a post-training period during which the acquired information is transferred to long-term memory. My coworkers and I have been addressing these principles in auditory perceptual learning
by characterizing the factors that induce and those that prevent learning during the acquisition and consolidation phases. We also have
been examining how these factors change during development and aging and are affected by hearing loss and other conditions that alter
auditory perception. Application of these principles could improve clinical training strategies. Further, though learning is the foundation
for behavioral rehabilitation, the capacity to learn can itself be impaired. Therefore, an individual’s response to perceptual training could
be used as an objective, clinical measure to guide diagnosis and treatment of a cognitive disorder. [Work supported by NIH.]
4:30–5:00 Panel Discussion
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 5, 1:00 P.M. TO 4:00 P.M.
Session 4pSC
Speech Communication: Voice (Poster Session)
Richard J. Morris, Chair
Communication Science and Disorders, Florida State University, 201 West Bloxham Road, 612 Warren Building,
Tallahassee, FL 32306-1200
All posters will be on display from 1:00 p.m. to 4:00 p.m. To allow contributors an opportunity to see other posters, the contributors of
odd-numbered papers will be at their posters from 1:00 p.m. to 2:30 p.m. and contributors of even-numbered papers will be at their
posters from 2:30 p.m. to 4:00 p.m.
Contributed Papers
4pSC1. Acoustical bases for the perception of simulated laryngeal vocal tremor. Rosemary A. Lester, Brad H. Story, and Andrew J. Lotto (Speech, Lang., and Hearing Sci., Univ. of Arizona, P.O. Box 210071, Tucson, AZ 85721, rosemary.lester@gmail.com)
Vocal tremor involves atypical modulation of the fundamental frequency
(F0) and intensity of the voice. Previous research on vocal tremor has focused
on measuring the modulation rate and extent of the F0 and intensity without
characterizing other modulations present in the acoustic signal (i.e., modulation of the harmonics). Characteristics of the voice source and vocal tract filter
are known to affect the amplitude of the harmonics and could potentially be
manipulated to reduce the perception of vocal tremor. The purpose of this
study was to determine the adjustments that could be made to the voice source
or vocal tract filter to alter the acoustic output and reduce the perception of
modulation. This research was carried out using a computational model of
speech production that allows for precise control and modulation of the glottal
and vocal tract configurations. Results revealed that listeners perceived a
higher magnitude of voice modulation when simulated samples had a higher
mean F0, greater degree of vocal fold adduction, and vocal tract shape for /i/
vs. /A/. Based on regression analyses, listeners’ judgments were predicted by
modulation information present in both low and high frequency bands. [Work
supported by NIH F31-DC012697.]
4pSC2. Perception of breathiness in pediatric speakers. Lisa M. Kopf,
Rahul Shrivastav (Communicative Sci. and Disord., Michigan State Univ.,
Rm. 109, Oyer Speech and Hearing Bldg., 1026 Red Cedar Rd., East Lansing,
MI 48824, kopflisa@msu.edu), David A. Eddins (Commun. Sci. and Disord.,
Univ. of South Florida, Tampa, FL), and Mark D. Skowronski (Communicative Sci. and Disord., Michigan State Univ., East Lansing, MI)
Extensive research has been done to determine acoustic metrics for voice
quality. However, few studies have focused on voice quality in the pediatric
population. Instead, metrics evaluated on adults have been applied directly to children’s voices. Some variables that differ between adult and pediatric voices, such as pitch, have been shown to be critical in the perception of breathiness. Furthermore, it is not known whether adults perceive voice quality similarly for pediatric and adult speakers. In this experiment, 10 listeners judged
breathiness for 28 stimuli using a single-variable matching task. The stimuli
were modeled after four pediatric speakers and synthesized using a Klatt-synthesizer to have a wide range of aspiration noise and open quotient. Both of
these variables have been shown to influence the perception of breathiness.
The resulting data were compared to those previously obtained for adult speakers using the same matching task. Comparison of adult and pediatric voices
will help identify differences in the perception of breathiness for these groups
of speakers and to develop more accurate metrics for voice quality in children. [Research supported by NIH (R01 DC009029).]
4pSC3. Combining differentiated electroglottograph and differentiated
audio signals to reliably measure vocal fold closed quotient. Richard J.
Morris (Commun. Sci. and Disord., Florida State Univ., 201 West Bloxham
Rd., 612 Warren Bldg., Tallahassee, FL 32306-1200, richard.morris@cci.
fsu.edu), Shonda Bernadin (Elec. and Comput. Eng., Florida A & M Univ.,
Tallahassee, FL), David Okerlund (College of Music, Florida State Univ.,
Tallahassee, FL), and Lindsay B. Wright (Commun. Sci. and Disord., Florida State Univ., Tallahassee, FL)
Over the past few decades researchers have explored the use of the electroglottograph (EGG) as a non-invasive method for representing vocal fold contact during vowel production and for measuring the closed quotient (CQ) and open quotient (OQ) of the glottal cycle. The first derivative of the EGG signal (dEGG) can be used to indicate the moments of glottal closing and opening (Childers & Krishnamurthy, 1985). However, there can be double positive peaks in the dEGG as well as a variety of negative peak patterns (Herbst et al., 2010), and these variations will alter any measurements made from the signal. Recently, combining the dEGG with the differentiated audio signal (dAudio) was reported as a means for more reliable measurement of the CQ from the EGG signal together with a time-synchronized audio signal. The purpose of this study is to demonstrate the reliability of the dEGG and dAudio for determining CQ across a variety of vocal conditions. Recordings from a group of 15 trained female singers, each singing an octave that included her primo passaggio, provided the data. Preliminary results indicate high reliability of the CQ
measurements in both the chest and middle registers of all of the singers.
4pSC4. A reduced-order three-dimensional continuum model of voice
production. Zhaoyan Zhang (UCLA School of Medicine, 1000 Veteran
Ave., 31-24 Rehab Ctr., Los Angeles, CA 90095, zyzhang@ucla.edu)
Although vocal fold vibration largely occurs in the transverse plane, control of voice is mainly achieved by adjusting vocal fold stiffness along the anterior–posterior direction through muscle activation. Thus, models of voice
control need to be at least three-dimensional on the structural side. Modeling
the detailed three-dimensional interaction between vocal fold dynamics, glottal aerodynamics, and the sub- and supra-glottal acoustics is computationally
expensive, which prevents parametric studies of voice production using three-dimensional models. In this study, a Galerkin-based reduced-order three-dimensional continuum model of phonation was presented. Preliminary
results showed that this model was able to qualitatively reproduce previous
experimental observations. This model is computationally efficient and thus
ideal for parametric studies in phonation research as well as practical applications such as speech synthesis. [Work supported by NIH.]
4pSC5. The influence of attentional focus on voice control. Eesha A. Zaher
and Charles R. Larson (Commun. Sci. and Disord., Northwestern Univ., 2240
Campus Dr., Evanston, IL 60208, EeshaZaheer2014@u.northwestern.edu)
The present study tested the role of attentional focus in the control of voice fundamental frequency (F0). Subjects vocalized an “ah” sound while hearing the auditory feedback of their voice randomly shifted upwards or downwards
in pitch. In the “UP” condition, subjects vocalized, listened for and pressed
a button for each upward pitch shift stimulus. In the “DOWN” condition,
subjects listened for and pressed a button for each downward shift. In the
CONTROL condition, subjects vocalized without paying attention to the
stimulus direction or pressing a button. Data were analyzed by averaging
voice F0 contours across several trials for each pitch shift stimulus in all
conditions. Response magnitudes were larger for the CONTROL than for
the UP or DOWN conditions. Responses for the UP and DOWN conditions
did not differ. Results suggest that when subjects focus their attention to
identify specific stimuli and produce a non-vocal motor response conditional
upon the identification, the neural mechanisms involved in voice control are
reduced, possibly because of a reduction in the error signal resulting from
the comparison of the efference copy of voice output with auditory feedback. Thus, focusing attention away from vocal control reduces neural
resources involved in control of voice F0.
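The contour-averaging step described above can be sketched as follows; the trial values, sampling grid, and response-magnitude measure here are hypothetical illustrations, not the study's data or analysis code.

```python
# Sketch of averaging voice F0 contours across trials time-locked to the
# pitch-shift stimulus (hypothetical values; not the study's data).

def average_f0_contours(trials):
    """Average F0 sample-by-sample across trials of equal length."""
    n = len(trials)
    length = min(len(t) for t in trials)
    return [sum(t[i] for t in trials) / n for i in range(length)]

# Three hypothetical post-stimulus F0 contours (Hz) at equal time steps.
trials = [
    [200.0, 201.0, 203.0, 202.0],
    [198.0, 200.0, 201.0, 200.0],
    [202.0, 202.0, 205.0, 204.0],
]
mean_contour = average_f0_contours(trials)
# One common response-magnitude measure: peak deviation from the pre-shift baseline.
baseline = mean_contour[0]
magnitude = max(abs(f - baseline) for f in mean_contour)
```

Averaging across trials suppresses trial-to-trial F0 jitter so that the stimulus-locked compensatory response stands out before its magnitude is measured.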
4pSC6. Attention-related modulation of involuntary audio-vocal
response to pitch feedback errors. Hanjun Liu, Huijing Hu, and Ying Liu
(Rehabilitation Medicine, The First Affiliated Hospital of Sun Yat-sen
Univ., 58 Zhongshan 2nd Rd., Guangzhou, Guangdong 510080, China,
lhanjun@mail.sysu.edu.cn)
It has been demonstrated that unexpected alterations in auditory feedback elicit fast compensatory adjustments in vocal production. Although these adjustments are generally thought to be involuntary, whether they can be influenced by cognitive functions such as attention remains unknown. The
present event-related potential (ERP) study investigated whether neurobehavioral processing of auditory-vocal integration can be affected by attention.
While sustaining a vowel phonation and hearing pitch-shifted feedback, participants were required to either ignore the auditory feedback perturbation,
or attend to it with two levels of attention load. The results revealed
enhancement of P2 response to the attended auditory perturbation with the
low load level as compared to the unattended auditory perturbation. Moreover, increased auditory attention load led to a significant decrease of P2
response. By contrast, there was no attention-related change of vocal
response. These findings provide the first neurophysiological evidence that
involuntary auditory-vocal integration can be modulated as a function of auditory attention. Furthermore, it is suggested that auditory attention load can
result in a decrease of the cortical processing of auditory-vocal integration
in pitch regulation.
4pSC7. A study on the effect of intraglottal vortical structures on vocal
fold vibration. Mehrdad H Farahani and Zhaoyan Zhang (Head and Neck
Surgery, UCLA, 31-24 Rehab Ctr., UCLA School of Medicine, 1000 Veteran Ave., Los Angeles, CA 90095, mh.farahani@gmail.com)
Recent investigations have suggested the possible formation of vortical structures in the intraglottal region during the closing phase of the phonation cycle. Vortical regions in the flow field are locations of negative pressure, and it has been hypothesized that this negative pressure might facilitate glottal closure and thus affect the vibration pattern and voice production at high subglottal pressures. However, it is unclear whether the vortex-induced negative pressure is large enough, compared with vocal fold inertia
and elastic recoil, to have a noticeable effect on glottal closure. In addition,
the intraglottal vortical structures generally exist only for a small fraction of
the closing phase when the glottis becomes divergent enough to induce flow
separation. In the current work, oscillation of the vocal folds and the flow
field are modeled using a non-linear finite element solver and a reduced
order flow solver, respectively. The effect of vortical structures is modeled
as a sinusoidal negative pressure wave applied to vocal fold surface between
the flow separation point and the superior edge of the vocal folds. The
effects of this vortex-induced negative pressure are quantified at different
conditions of vocal fold stiffness and subglottal pressures. [Work supported
by NIH.]
4pSC8. Effects of thyroarytenoid muscle activation on phonation in an
in vivo canine larynx model. Georg Luegmair, Dinesh Chhetri, and
Zhaoyan Zhang (Dept. of Head and Neck Surgery, Univ. of California Los
Angeles, 1000 Veteran Ave., Rehab 31-24, Los Angeles, CA 90095, gluegmair@ucla.edu)
Previous studies have shown that the thyroarytenoid (TA) muscle plays
an important role in the control of vocal fold adduction and stiffness. The
effects of TA stimulation on vocal fold vibration, however, are still unclear.
In this study, the effects of TA muscle activation on phonation were investigated in an in vivo canine larynx model. Laryngeal muscle activation was
achieved through parametric stimulation of the thyroarytenoid, the lateral
cricoarytenoid (LCA), and the cricothyroid (CT) muscles. For each stimulation level, the subglottal pressure was gradually increased to produce phonation. The subglottal pressure, the volume flow, and the outside acoustic
pressure were measured together with high-speed recording of vocal fold
vibration from a superior view. The results show that, without TA activation, phonation was limited to conditions of medium to high levels of LCA
and CT activations. TA activation allowed phonation to occur at a much
lower activation level of the LCA and CT muscles. Compared to conditions
of no TA activation, TA activation led to a decreased open quotient. Increasing TA activation also allowed phonation to occur over a much larger range of subglottal pressures while still maintaining a certain degree of glottal closure during vibration. [Work supported by NIH.]
4pSC9. Voice accumulation and voice disorders in primary school
teachers. Pasquale Bottalico (Dept. of Communicative Sci. and Disord.,
Michigan State Univ., 1026 Red Cedar Rd., East Lansing, MI 48824, pasqualebottalico@yahoo.it), Lorenzo Pavese, Arianna Astolfi (Dipartimento
di Energia, Politecnico di Torino, Torino, Italy), and Eric J. Hunter (Dept.
of Communicative Sci. and Disord., Michigan State Univ., East Lansing,
MI)
Statistics on professional voice users with vocal health issues demonstrate the significance of the problem. However, such disorders are not currently recognized as an occupational disease in Italy. Conducting studies
examining the vocal health of occupational voice users is an important step
in identifying this as an important public health issue. The current study was
conducted in six primary schools in Italy with 25 teachers, one of the most
affected occupational categories. A clinical examination was conducted
(consisting of hearing and voice screening, a VHI, etc.). On this basis, teachers were divided into three groups: healthy subjects, subjects with logopaedic
disorders, and subjects with objectively measured pathological symptoms.
The distributions of voicing and silence periods for the teachers at work
were collected using the Ambulatory Phonation Monitor (APM3200), a device for long-term monitoring of vocal parameters. The APM senses the
vocal fold vibrations at the base of the neck by means of a small accelerometer. Correlations were calculated between the voice accumulation slope
(obtained by multiplying the number of occurrences for each period by the
corresponding duration) and the clinical status of the teachers. The differences in voice accumulation distributions among the three groups were
analyzed.
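One plausible reading of the accumulation measure described above (the number of occurrences of each voicing-period duration multiplied by that duration) can be sketched as follows; the period durations are invented for illustration and are not APM3200 data.

```python
# Sketch of a voice accumulation distribution: accumulated phonation time per
# voicing-period duration = (number of occurrences) x (duration).
# Hypothetical durations; not monitored teacher data.
from collections import Counter

def voice_accumulation(durations):
    """Map each distinct voicing-period duration (s) to its accumulated time (s)."""
    return {d: n * d for d, n in Counter(durations).items()}

periods = [0.5, 0.5, 0.5, 1.0, 1.0, 2.0]  # voicing-period durations in seconds
acc = voice_accumulation(periods)          # accumulated time per duration
total_phonation = sum(acc.values())        # total voicing time across all periods
```

Fitting a line to accumulated time as a function of period duration would then yield a slope of the kind the study correlates with clinical status.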
4pSC10. Room acoustics and vocal comfort in untrained vocalists. Eric
J. Hunter, Pasquale Bottalico, Simone Graetzer, and Russell Banks (Dept. of
Communicative Sci. and Disord., Michigan State Univ., 1026 Red Cedar
Rd., East Lansing, MI 48824, ejhunter@msu.edu)
Talkers’ speech accommodation strategies in noise have long been studied. Vocal effort and comfort within noisy situations have also been
studied. In this study, untrained vocalists were exposed to a range of room
acoustic conditions. In each environment, the subject performed a vocal
task, with a goal of being “heard” by a listener 5 m away. After each task,
the subject completed a series of questions addressing vocal effort and comfort. Additionally, using a head and torso simulator (HATS), the environment was assessed using a sine sweep presented at the HATS mouth and
recorded at the ears. It was found that vocal clarity (C50) and the initial reflection were related to vocal comfort. The results are relevant not only to room
design but also to understanding talkers’ acuity to acoustic conditions and
their adjustments to them.
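For reference, the clarity index C50 mentioned above is conventionally the early-to-late energy ratio of the impulse response, C50 = 10·log10(energy in the first 50 ms / energy after 50 ms) (cf. ISO 3382). A minimal sketch on synthetic decaying impulse responses (illustrative only, not the measured HATS data):

```python
import math

def c50(ir, fs):
    """Clarity index C50 (dB): early (first 50 ms) vs. late energy of an
    impulse response ir sampled at fs Hz."""
    k = int(0.050 * fs)  # sample index corresponding to 50 ms
    early = sum(x * x for x in ir[:k])
    late = sum(x * x for x in ir[k:])
    return 10.0 * math.log10(early / late)

fs = 8000
# Synthetic exponential decays: a "dry" room dies out quickly, a "live" one lingers.
dry = [math.exp(-60.0 * n / fs) for n in range(fs)]
live = [math.exp(-10.0 * n / fs) for n in range(fs)]
assert c50(dry, fs) > c50(live, fs)  # faster decay -> higher clarity
```

In practice the impulse response would come from the sine-sweep measurement described above rather than a synthetic decay.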
4pSC11. Flow vibrato in singers. Srihimaja Nandamudi and Ronald C. Scherer (Commun. Sci. and Disord., Bowling Green State Univ., 200 Health and Human Services Bldg., Bowling Green, OH 43403, nandas@bgsu.edu)
Fundamental frequency (F0) vibrato is commonly known; less familiar is flow vibrato, the mean-flow variation that accompanies frequency vibrato. Two classically trained singers, a soprano and a tenor, each with over 20 years of professional experience, recorded /pa:pa:pa:/ sequences on three pitches (C4, A4,
and G5 for the soprano, D3, D4, and G4 for the tenor) and three loudness
levels (p, mf, and f) at each pitch. Each vowel had 3–6 frequency vibrato
cycles. For both singers, flow vibrato (obtained using the Glottal Enterprises
aerodynamic system) was present, and the lowest pitch had the most variability; otherwise, flow vibrato was fairly sinusoidal in shape. For the soprano, flow vibrato cycle extents were: 21–88 cc/s, lowest pitch; 60–147 cc/
s, middle pitch; 115–214 cc/s, highest pitch, across loudness levels. For the
soprano, the phase difference for the flow was 120–180 degrees ahead of the
F0 vibrato. For the tenor, the flow vibrato cycle extents were: 32–85 cc/s,
lowest pitch; 98–113 cc/s, middle pitch; 76–240 cc/s, highest pitch, across
loudness levels. Flow vibrato for the tenor led the F0 vibrato typically by
40–120 degrees. For both subjects, some flow vibrato cycles had double
peaks. Flow vibrato needs further study to determine its origin, shapes, and
magnitudes.
4pSC12. Impact of vocal tract resonance on the perception of voice
quality changes caused by vocal fold stiffness. Rosario Signorello,
Zhaoyan Zhang, Bruce Gerratt, and Jody Kreiman (Head and Neck Surgery,
Univ. of California Los Angeles David Geffen School of Medicine, 31-24
Rehab Ctr., UCLA School of Medicine, 1000 Veteran Ave., Los Angeles,
CA 90095, rsignorello@ucla.edu)
Experiments using animal and human larynx models are often conducted
without a vocal tract. While it is reasonable to assume the absence of a vocal
tract has only small effects on vocal fold vibration, it is unclear how sound
production and its perception will be affected. In this study, the validity of
using data obtained in the absence of a vocal tract for voice perception studies was investigated. Using a two-layer self-oscillating physical model, three
series of voice stimuli were created: one produced with conditions of left-right symmetric vocal fold stiffness, and two with left-right asymmetries in
vocal fold body stiffness. Each series included a set of stimuli created with a
physical vocal tract, and a second set created without a physical vocal tract.
Stimuli were re-synthesized to equalize the mean F0 for each series and normalized for amplitude. Listeners were asked to evaluate the three series in a
sort-and-rate task. Multidimensional scaling analysis will be applied to
examine the perceptual interaction between the voice source and the vocal
tract resonances. [Work supported by NIH.]
4pSC13. Perceptual differences among models of the voice source: Further evidence. Marc Garellek (Linguist, UCSD, La Jolla, CA), Gang Chen
(Elec. Eng., UCLA, Los Angeles, CA), Bruce R. Gerratt (Head and Neck
Surgery, UCLA, 31-24 Rehab Ctr., 1000 Veteran Ave., Los Angeles, CA
90403), Abeer Alwan (Elec. Eng., UCLA, Los Angeles, CA), and Jody
Kreiman (Head and Neck Surgery, UCLA, Los Angeles, CA, jkreiman@
ucla.edu)
Models of the voice source differ in how they fit natural voices, but it is
still unclear which differences in fit are perceptually salient. This study
describes ongoing analyses of differences in the fit of six voice source models to 40 natural voices, and how these differences relate to perceptual similarities among stimuli. Listeners completed a visual sort-and-rate task to
compare versions of each voice created with the different source models,
and the results were analyzed using multi-dimensional scaling (MDS). Perceptual spaces were interpreted in terms of variations in model fit in both
the time and spectral domains. The discussion will focus on the perceptual
importance of matches to both time-domain and spectral features of the
voice. [Work supported by NIH/NIDCD grant DC01797 and NSF grant IIS-1018863.]
4pSC14. The biological function of fundamental frequency in leaders’
charismatic voices. Rosario Signorello (Head and Neck Surgery, Univ. of
California Los Angeles David Geffen School of Medicine, 31-24 Rehab
Ctr., UCLA School of Medicine, 1000 Veteran Ave., Los Angeles, CA
90095, rsignorello@ucla.edu)
Charismatic leaders use the voice according to two functions: a primary biological function and a secondary language- and culture-based function (Signorello, 2014). In order to study the primary function in more depth, we
conducted acoustic and perceptual studies on the use of F0 by French, Italian and Brazilian charismatic political leaders. Results show that leaders
manipulate F0 in significantly different manners relative to: (1) the context
of communication (persuasive goal, the place where communication occurs
and the type of audience) in order to be recognized as the leader of the
group; and (2) the passage of time (from the beginning to the end of the
speech) in order to create a climax with the audience. Results of a perceptual
test show that the leader’s use of low F0 voice results in the perception of
the leader as a dominant or threatening leader and the use of higher F0 conveys sincere, calm, and reassuring leadership. These results show cross-language and cross-cultural similarities in leaders’ vocal behavior and
listeners’ perception, and robustly demonstrate the two different functions
of leaders’ voices.
4pSC15. Voice quality variation and gender. Kara Becker, Sameer ud
Dowla Khan, and Lal Zimman (Linguist, Reed College, Reed College, 3203
SE Woodstock Boulevard, Portland, OR 97202, kbecker@reed.edu)
Recent work on American English has established that speakers increasingly use creaky phonation to convey pragmatic information, with young
urban women assumed to be the most active users of this phonetic feature.
However, no large-scale acoustic or articulatory study has established the
actual range and diversity of voice quality variation along gender identities,
encompassing different sexual orientations, regional backgrounds, and socioeconomic statuses. The current study does exactly that, through four methods: (1) subjects identifying with a range of gender and other demographic
identities were audio recorded while reading wordlists as well as a scripted
narrative assuming characters’ voices designed to elicit variation in vowel
quality. Simultaneously, (2) electroglottographic readings were taken and
analyzed to determine the glottal characteristics of this voice quality variation. (3) Subjects were then asked to rate recordings of other people’s voices
to identify the personal characteristics associated with the acoustic reflexes
of phonation; in the final task, (4) subjects were explicitly asked about their
language ideologies as they relate to gender. Thus, the current study
explores the relation between gender identity and phonetic features, measured acoustically, articulatorily, and perceptually. This work is currently underway, and preliminary results are being compiled.
4pSC16. Towards standard scales for dysphonic voice quality: Magnitude estimation of reference stimuli. David A. Eddins (Commun. Sci. &
Disord., Univ. of South Florida, 4202 E. Fowler Ave., PCD 1017, Tampa,
FL 33620, deddins@usf.edu) and Rahul Shrivastav (Communicative Sci. &
Disord., Michigan State Univ., East Lansing, MI)
This work represents a critical step in developing standard measurement
scales for the dysphonic voice qualities of breathiness and roughness. Methods such as Likert ratings, visual analog scales, and magnitude estimation result in arbitrary units, limiting their clinical usefulness. A single-variable matching task can quantify voice quality in terms of physical units but is too time-consuming for clinical use. None of these methods yields information that has a direct or intuitive relationship with the underlying percept. A proven approach for the perception of loudness is the sone scale, which ties physical units to perceptual estimates of loudness magnitude. As a
ties physical units to the perceptual estimates of loudness magnitude. As a
first step in developing such a scale for breathiness and roughness, here we
establish the relationship between the change in perceived VQ magnitude
and the change in physical units along the continuum of each VQ dimension. A group of 25 listeners engaged in a magnitude estimation task to
determine perceived magnitude associated with the comparison stimuli used
in our single-variable matching tasks. This relationship is analogous to mapping intensity in dB to perceived loudness level in phons and is a critical step in
developing a Sone-like scale for breathiness and roughness.
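For context, the sone scale referenced above follows Stevens' convention that 40 phons corresponds to 1 sone and each 10-phon increase doubles perceived loudness, i.e., S = 2^((L − 40)/10) for loudness level L in phons. A quick sketch of this standard mapping (not code from the study):

```python
def phons_to_sones(phons):
    """Stevens' sone scale: 40 phons = 1 sone; every +10 phons doubles loudness.
    (A reasonable approximation above roughly 40 phons.)"""
    return 2.0 ** ((phons - 40.0) / 10.0)

assert phons_to_sones(40) == 1.0  # reference: 1 sone
assert phons_to_sones(50) == 2.0  # perceived as twice as loud
assert phons_to_sones(70) == 8.0
```

A Sone-like scale for breathiness or roughness would analogously anchor perceived magnitude to a physical reference continuum.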
4pSC17. Divergent or convergent glottal angles: Which give greater
flow? Ronald Scherer (Commun. Sci. and Disord., Bowling Green State
Univ., 200 Health Ctr., Bowling Green, OH 43403, ronalds@bgsu.edu)
During phonation, the glottis alternates between convergent and divergent angles. For the same angle value, diameter, and transglottal pressure, which angle, divergent or convergent, results in greater flow? The symmetric glottal angles of the physical static model M5 were used. Characteristics (life-size) of the model were: axial glottal length 0.30 cm; angles of 5, 10, 20, and 40 degrees; diameters of 0.005, 0.01, 0.02, 0.04, 0.08, 0.16, and 0.32 cm; and transglottal pressures from 1 to 25 cm H2O, resulting in flows from 2.7 to 1536 cc/s and Reynolds numbers from 29.4 to 13,058. Results: (1) For
diameters of 0.04, 0.08 and 0.16 cm, the divergent angle always gave more
flow than the convergent angle (about 5–25%); (2) for the smallest (0.005
cm) and largest diameter (0.32 cm), the divergent angles always gave less
flow (10–30%); (3) for diameters of 0.01 and 0.02 cm, flow was greater for
divergent 5 and 10 degrees, and less for divergent 20 and 40 degrees. These
results suggest that the divergent glottal angle will increase the glottal flow
for midrange glottal diameters (skewing the glottal flow further “to the
right”?), and create less flow at very small diameters (increasing energy in
the higher harmonics?).
4pSC18. Methodological issues when estimating subglottal pressure from oral pressure. Brittany Frazer (Commun. Sci. and Disord., Bowling Green State Univ., 200 Marie Pl., Perrysburg, OH 43551, bfrazer@bgsu.edu) and Ronald C. Scherer (Commun. Sci. and Disord., Bowling Green State Univ., Bowling Green, OH)
A noninvasive method to estimate subglottal pressure for vowel productions is to smoothly utter a CVCV string such as /p:i:p:i:p:i:…/ using a short
tube in the mouth with the tube attached to a pressure transducer. The pressure during the lip occlusion estimates the subglottal pressure during the adjacent vowel. What should the oral pressure look like for it to provide
accurate estimates? The study compared results using various conditions
against a standard condition that required participants to produce /p:i:p:i:../
syllables smoothly and evenly at approximately 1.5 syllables per second.
The non-standard tasks were: performing the task without training, increasing syllable rate, using a voiced /b/ instead of a voiceless /p/ initial syllable,
adding a lip or velar leak, or using a two syllable production (“peeper”)
instead of a single syllable production. Lip leak, velar leak, and lack of time
to equilibrate air pressure throughout the airway caused estimates of subglottal pressure to be inaccurate. Accuracy was better when estimates of
subglottal pressure were obtained using the voiced initial consonant and the
two-syllable word. Training improved the consistency of the oral pressure
profiles and thus the confidence in estimating the subglottal pressure. Oral
pressures with flat plateaus appear most accurate.
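The plateau criterion can be illustrated with a toy oral-pressure trace: during each /p/ occlusion the oral pressure should rise to a flat plateau approximating the subglottal pressure of the adjacent vowels. A hedged sketch (synthetic trace and threshold chosen for illustration, not the study's data):

```python
def plateau_pressures(trace, threshold):
    """Mean pressure of each contiguous run of samples at or above threshold,
    each run taken as one /p/ occlusion plateau."""
    plateaus, run = [], []
    for p in trace:
        if p >= threshold:
            run.append(p)
        elif run:
            plateaus.append(sum(run) / len(run))
            run = []
    if run:
        plateaus.append(sum(run) / len(run))
    return plateaus

# Toy oral-pressure trace (cm H2O): vowel portions near 0, /p/ occlusions near 8.
trace = [0.2, 0.1, 7.9, 8.0, 8.1, 0.3, 0.2, 8.0, 8.1, 7.9, 0.1]
estimates = plateau_pressures(trace, threshold=4.0)  # one value per occlusion
```

A tilted or peaked occlusion (from a lip or velar leak, or insufficient equilibration time) would show up here as a plateau mean that misrepresents the true subglottal pressure, which is why flat plateaus are preferred.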
THURSDAY AFTERNOON, 30 OCTOBER 2014
INDIANA F, 1:00 P.M. TO 4:45 P.M.
Session 4pUW
Underwater Acoustics: Shallow Water Reverberation III
Kevin L. Williams, Chair
Applied Physics Lab., University of Washington, 1013 NE 40th St., Seattle, WA 98105
Contributed Paper
1:00
4pUW1. Seafloor sound-speed profile and interface dip angle measurement by the image source method: Application to real data. Samuel Pinson (Laboratório de Vibrações e Acústica, Universidade Federal de Santa Catarina, LVA Dept de Engenharia Mecânica, UFSC, Bairro Trindade, Florianópolis, SC 88040-900, Brazil, samuelpinson@yahoo.fr) and Charles W. Holland (Penn State Univ., State College, PA)
The image source method characterizes the sediment sound-speed profile from seafloor reflection data at low computational cost compared with inversion techniques. Recently, the method has been extended to treat non-parallel sediment layering. The method is applied to data from an autonomous underwater vehicle (AUV) towing a source (1600–3500 Hz) and a horizontal array of hydrophones. AUV reflection measurements were acquired every 3 m along 10 criss-cross lines over a 1 km² area with evidently dipping layers. Mapping the along-track sound-speed profiles in geographical coordinates results in a pseudo-3D (N×2D) sediment structure characterization of the area down to several tens of meters in the sub-bottom. The sound-speed profile agreement at crossing points is quite good.
Invited Papers
1:15
4pUW2. Requirements, technology, and science drivers of applied reverberation modeling. Anthony I. Eller and Kevin D. Heaney
(OASIS, Inc., 11006 Clara Barton Dr., Fairfax Station, VA 22039, ellera@oasislex.com)
The historical development of reverberation modeling is a story driven by both supporting and sometimes conflicting features of
application requirements, measurement and computing capability, and scientific understanding. This paper presents an overview of how
underwater reverberation modeling technology has responded to application needs and how this process has helped the community to
identify and resolve related science issues. Emphasis is on the areas of System Design and Acquisition Support, Deployment and Operational Support, and Training Support. Gaps in our scientific knowledge are identified and recent advances are described that help push
forward our collective understanding of how to predict and mitigate reverberation.
2296
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2296
1:35
4pUW3. Reverberation models as an aid to interpret data and extract environmental information. Dale D. Ellis (Phys., Mount
Allison Univ., 18 Hugh Allen Dr., Dartmouth, NS B2W 2K8, Canada, daledellis@gmail.com) and John R. Preston (Appl. Res. Lab.,
The Penn State Univ., State College, PA)
Reverberation measurements obtained with towed arrays are a valuable tool to extract information about the ocean environment.
Preston pioneered the use of polar plots to display reverberation and superimpose the beam time series on bathymetry maps. As part of
Rapid Environmental Assessment (REA) exercises Ellis and Preston [J. Marine Syst. 78, S359–S371, S372–S381] have used directional
reverberation measurements to detect uncharted bottom features, and to extract environmental information using model-data comparisons. One enthusiast declared “This is like doing 100 simultaneous transmission loss runs and having the results available immediately.”
Though that was clearly an exaggeration and the results are not precise, the approach provides valuable information to direct more accurate and detailed surveys. The early work used range-independent (flat bottom) models for the model-data comparisons, while current
work includes a range-dependent model based on adiabatic normal modes. A model has been developed which calculates reverberation
from range-dependent bottom bathymetry, echoes from targets and discrete clutter objects, then outputs beam time series directly comparable with measured ones. Recent work has identified interesting effects in sea bottom sand dunes in the TREX experiments. This paper will provide an overview of the earlier work, and examples from the recent TREX experiment.
1:55
4pUW4. Reverberation data/model comparisons using transport theory. Eric I. Thorsos, Jie Yang, Frank S. Henyey, and W. T.
Elam (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, eit@apl.washington.edu)
Transport theory has been developed for modeling shallow water propagation and reverberation at mid frequencies (1–10 kHz)
where forward scattering from a rough sea surface is taken into account in a computationally efficient manner. The method is based on a
decomposition of the field in terms of unperturbed modes, and forward scattering at the sea surface leads to mode coupling that is treated
with perturbation theory. Reverberation measurements made during TREX13 combined with extensive environmental measurements
provide an important test of transport theory predictions. Modeling indicates that the measured reverberation was dominated by bottom
reverberation, and the reverberation level in the 2–4 kHz band was observed to decrease as the sea surface conditions increased from a
low sea state to a higher sea state. This suggests that surface forward scattering was responsible for the change in reverberation level.
Results of data/model comparisons examining this effect will be shown. [Work supported by ONR Ocean Acoustics.]
Contributed Papers
2:15
4pUW5. Physics of backscattering in terms of mode coupling applied to
measured seabed roughness spectra in shallow water. David P. Knobles
(ARL, UT at Austin, PO BOX 8029, Austin, TX 78713-8029, dpknobles@
yahoo.com)
Energy conserving coupled integral equations for the forward and backward propagating modal components have been previously developed [J.
Acoust. Soc. Am. 130, 2673–2680 (2011)]. A rough seabed surface leads to
a backscattered field and modifies the interference structure of the forward
propagating field. Perturbation theory applied to the basic coupled integral
equations allows for physical insight into the correlation of the roughness
spectrum to the forward and backward modal intensities and cross-mode coherence. This study applies the N×2D integral equation coupled-mode
approach to 3-D roughness measurements and examines the physics of the
coupling of the forward and backward field components and computes the
modal intensities as a function of azimuth. The roughness measurements
were made in about 20 m of water off Panama City, Florida. [Work supported by ONR Code 322 OA.]
2:30
4pUW6. Energy conservation via coupled modes in waveguides with an
impedance boundary condition. Steven A. Stotts and Robert A. Koch (Environ. Sci. Lab., Appl. Res. Labs/The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78758, stotts@arlut.utexas.edu)
A statement of energy conservation for a coupled mode formulation
with real mode functions and eigenvalues has been demonstrated to be consistent with the statement of conservation derivable from the Helmholtz
equation. The restriction to real mode functions and eigenvalues precludes
coupled mode descriptions with waveguide absorption or untrapped modes.
The demonstration, along with the derivation of the coupled mode range
equation, relies on orthonormality in terms of a product of two modal depth
functions integrated to infinite depth. This paper shows that energy conservation and the derivation of the coupled mode range equation can be
extended to complex mode functions and eigenvalues, and that energy is
conserved for ocean waveguides with a penetrable bottom boundary at a finite depth beneath any range dependence. For this, the penetrable bottom
boundary is specified by an impedance condition for the mode functions.
The new derivations rely on completeness and a modified orthonormality
statement. Mode coupling is driven solely by waveguide range dependence.
Thus, the form of the range equation and the values of the coupling coefficients are unaffected by a finite depth waveguide. Applications of energy
conservation to examine the accuracy of a numerical coupled mode calculation are presented.
2:45
4pUW7. Effect of channel impulse response on matched filter performance in the 2013 Target and Reverberation Experiment. Mathieu E.
Colin (Acoust. and Sonar, TNO, Postbus 96864, Den Haag 2509 JG, Netherlands, mathieu.colin@tno.nl), Michael A. Ainslie (Acoust. and Sonar, TNO,
The Hague, Netherlands), Peter H. Dahl, David R. Dall’Osto (Appl. Phys.
Lab., Univ. of Washington, Seattle, WA), Sean Pecknold (Underwater Surveillance and Communications, Defence Res. and Development Canada,
Dartmouth, NS, Canada), and Robbert van Vossen (Acoust. and Sonar,
TNO, The Hague, Netherlands)
Active sonar performance is determined by the characteristics of the target, the sonar system and the effect of the environment on the received
waveform. The two main influences of the environment are propagation
effects and the contamination of the target echo with a background. The ambient noise and reverberation are mitigated by means of signal processing,
mostly through beamforming and matched-filtering. The improvement can
be quantified by the signal to noise ratios before and after processing. Propagation effects can have a large influence on the gains obtained by the processing. To study the effect of the channel on the matched filter performance,
broadband channel impulse responses were modeled and compared to measurements acquired during the Office of Naval Research-funded 2013 Target
and Reverberation Experiment (TREX). In shallow water, a large time
spread is often observed, reducing the effectiveness of the matched filter.
TREX data show, however, a limited time spread. Model predictions indicate that this could be caused by a rough sea surface, which, while increasing propagation loss, at the same time increases the matched-filter gain.
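The link between channel time spread and matched-filter gain can be illustrated with a toy calculation. The LFM pulse and the two channel impulse responses below are invented for the example; this is not the TREX processing chain:

```python
import numpy as np

def mf_peak_over_energy(h, s):
    """Peak of the matched-filter output (filter matched to the emitted
    pulse s) for the noise-free received signal r = s * h, normalized by
    the received energy. Time spread in h smears energy across delays
    and lowers this ratio."""
    r = np.convolve(s, h)
    mf = np.correlate(r, s, mode="full")
    return np.abs(mf).max() / np.sum(r * r)

fs = 20_000                                   # Hz
t = np.arange(0.0, 0.05, 1.0 / fs)            # 50 ms pulse
s = np.sin(2 * np.pi * (1000.0 * t + 10_000.0 * t**2))  # 1-2 kHz LFM

h_compact = np.zeros(200)
h_compact[0] = 1.0                            # single arrival

rng = np.random.default_rng(0)
h_spread = np.zeros(2000)                     # 100 ms of multipath
taps = rng.integers(0, 2000, 30)
h_spread[taps] += rng.standard_normal(30) * np.exp(-taps / 800.0)

g_compact = mf_peak_over_energy(h_compact, s)  # 1 by construction
g_spread = mf_peak_over_energy(h_spread, s)    # lower: energy is spread
```

When the multipath arrivals are separated by more than the pulse-compression length (about 1/bandwidth), the matched-filter peaks no longer add coherently, which is why the spread channel scores lower.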
3:00–3:15 Break
3:15
4pUW8. Using physical oceanography to improve transmission loss calculations in undersampled environments. Cristina Tollefsen and Sean Pecknold (Defence Res. and Development Canada, P. O. Box 1012, Dartmouth, NS B2Y 3Z7, Canada, cristina.tollefsen@gmail.com)
The vertical sound speed profile (SSP) is a critical input to any acoustic propagation model. However, even when measured SSPs are available they are frequently noisy “snapshots” of the SSP at a single moment in time and space and do not fully capture changes such as solar heating and wind-driven mixing that can significantly affect shallow water propagation on time scales of less than a day. Furthermore, SSPs measured in the field may not extend to the ocean bottom and are often based on measured profiles of temperature with an implicit assumption of constant salinity. In April–May 2013, the Target and Reverberation Experiment (TREX) was conducted in the Northeastern Gulf of Mexico near Panama City, Florida, a region strongly affected by local wind forcing, freshwater inputs, and the presence of a warm-core Gulf of Mexico Loop Current eddy (“Eddy Kraken”) offshore of the experimental site. “Synthetic” SSPs were constructed for the trial area by combining knowledge of the physical oceanography and water masses in the area with the measured SSPs that were available. Transmission loss was modelled using both synthetic and measured SSPs and the results will be compared with measured transmission loss.
3:30
4pUW9. Analytic formulation for broadband rough surface and volumetric scattering including matched-filter range resolution. Wei Huang, Delin Wang, and Purnima Ratilal (Elec. and Comput. Eng., Northeastern Univ., 006 Hayden Hall, 370 Huntington Ave., Boston, MA 02115, huang.wei1@husky.neu.edu)
An analytic formulation is derived for the broadband scattered field from a randomly rough surface based on Green’s theorem employing perturbation theory. The matched filter is applied to resolve the scattered field within the range-resolution footprint of a broadband imaging system. Statistical moments of the scattered field are then expressed in terms of the second-moment characterization of the scattering surface. The broadband diffuse reverberation depends on the rough surface spectrum evaluated over a range of wavenumbers, centered at the Bragg wavenumber corresponding to the center frequency of the broadband pulse and extending to wavenumbers proportional to the signal bandwidth. A corresponding analytic broadband volume scattering model is derived from the Rayleigh–Born approximation to Green’s theorem.
3:45
4pUW10. Objective identification of the dominant seabed scattering mechanism. Gavin Steininger (SEOS, U Vic, 201 1026 Johnson St., Victoria, BC V7V 3N7, Canada, gavin.amw.steininger@gmail.com), Charles W. Holland (SEOS, U Vic, State College, Pennsylvania), Stan E. Dosso, and Jan Dettmer (SEOS, U Vic, Victoria, BC, Canada)
This paper develops and applies a quantitative inversion procedure for scattering-strength data to determine the dominant scattering mechanism (surface and/or volume scattering) and to estimate the relevant scattering parameters and their uncertainties. The classification system is based on transdimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data from the Malta Plateau where volume scattering is determined as the dominant scattering mechanism. This conclusion and the scattering/geoacoustic parameters estimated in the inversion are consistent with properties from previous inversions and/or with core measurements from the site. In particular, the scattering parameters are converted from the continuous scattering models used in the inversion to the equivalent discrete scattering parameters, which are found to be consistent with properties of the cores. [Work supported by ONR.]
4:00
4pUW11. Laboratory measurements of backscattering strengths from two types of artificially roughened sandy bottoms. Su-Uk Son (Dept. of Marine Sci. and Convergent Technol., Hanyang Univ., 55 Hanyangdaehak-ro, Sangnok-gu, Ansan, Gyeonggi-do 426-791, South Korea, suuk2@hanyang.ac.kr), Sungho Cho (Maritime Security Res. Ctr., Korea Inst. of Ocean Sci. & Technol., Ansan, Gyeonggi-do, South Korea), and Jee Woong Choi (Dept. of Marine Sci. and Convergent Technol., Hanyang Univ., Ansan, Gyeonggi-do, South Korea)
In the case of a sandy bottom, backscattering from the interface roughness strongly dominates that from the volume inhomogeneities, and the power spectrum of the interface roughness thus becomes the most important factor controlling the scattering mechanism. Backscattering strength measurements with a 50-kHz signal were made for two types of roughness (smooth and rough interfaces) which were artificially formed on a 0.5-m thick sandy bottom in a 5-m deep water tank. The roughness profiles were estimated by arrival-time analysis of 5-MHz backscattered signals emitted by a transducer moving parallel to the interface at a speed of 1 cm/s, and were then Fourier transformed to yield power spectra. In this talk, the measurements of backscattering strength as a function of grazing angle in the range of 35° to 90° are presented. Finally, the effect of the different roughness types on the scattering strength will be discussed in comparison with predictions obtained from a theoretical scattering model including the perturbation and Kirchhoff approximations. [This research was supported by the Agency for Defense Development, Korea.]
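The profile-to-spectrum step described above (measure a height profile, then Fourier transform it to a power spectrum) can be sketched as follows. The Welch-style segment averaging and the synthetic corrugation are illustrative choices, not the authors' processing:

```python
import numpy as np

def roughness_spectrum(h, dx, nseg=1024):
    """Roughness power spectrum of a 1-D height profile h sampled every
    dx meters: windowed periodograms averaged over half-overlapping
    segments (Welch-style)."""
    w = np.hanning(nseg)
    step = nseg // 2
    segs = [h[i:i + nseg] for i in range(0, len(h) - nseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s * w))**2 for s in segs], axis=0)
    psd *= dx / np.sum(w**2)                 # PSD normalization
    k = np.fft.rfftfreq(nseg, dx)            # spatial frequency, cycles/m
    return k, psd

# Synthetic profile: 1 mm sampling, a single 50 cycles/m corrugation
dx = 0.001
x = np.arange(8192) * dx
h = 0.002 * np.sin(2 * np.pi * 50.0 * x)     # 2 mm amplitude ripple
k, psd = roughness_spectrum(h, dx)
k_peak = k[np.argmax(psd)]                   # near 50 cycles/m
```

Averaging over segments trades spectral resolution for variance reduction, which matters when the estimated spectrum feeds a perturbation or Kirchhoff scattering prediction.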
4:15
4pUW12. Multistatic performance prediction for Doppler-sensitive
waveforms in a shallow-water environment. Cristina Tollefsen (Defence
Res. and Development Canada, P. O. Box 1012, Dartmouth, NS B2Y 3Z7,
Canada, cristina.tollefsen@gmail.com)
Navies worldwide are now operationally capable of exploiting multistatic sonar technology. One of the purported advantages of multistatics
when detecting directional targets should be the increased probability of
receiving a strong reflection at one of the multistatic receivers. However, it
is not yet clear (or intuitive) how best to deploy multistatic-capable assets to
achieve particular mission objectives. The Performance Assessment for Tactical Systems (PATS) software was recently developed by Maritime Way
Scientific under contract to Defence Research and Development Canada as
a research tool to assist in exploring different approaches to multistatic performance modelling. Beginning with a user-defined environment and sensor
layout, PATS uses transmission loss and reverberation model results to calculate signal excess at each grid point in the model domain. Monte Carlo
simulations using many realizations of target tracks allow for the calculation
of the cumulative probability of detection as a means to assess performance.
Results will be presented comparing the shallow-water performance of
monostatic and multistatic sensors using frequency-modulated and Dopplersensitive waveforms as well as omnidirectional and directional targets in a
variety of realistic military scenarios.
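The Monte Carlo scheme described (signal-excess grid, many target-track realizations, cumulative probability of detection) can be sketched in outline. This is not the PATS software: the signal-excess field, per-ping detection curve, and track model below are all invented for illustration.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(2)

# Hypothetical signal-excess (SE) field on a 100 x 100 grid of 1 km cells,
# peaking at a receiver in the center (illustrative, not a sonar model)
gx, gy = np.meshgrid(np.arange(100.0), np.arange(100.0))
se = 20.0 - 15.0 * np.log10(np.hypot(gx - 50.0, gy - 50.0) + 1.0)

def p_detect(se_db, sigma=6.0):
    """Per-ping detection probability: smooth curve through 50% at SE = 0 dB."""
    return 0.5 * (1.0 + erf(se_db / (sigma * np.sqrt(2.0))))

def cumulative_pd(n_tracks=1000, n_pings=20, step=3.0):
    """Monte Carlo over random straight target tracks: the fraction of
    tracks detected on at least one ping approximates the cumulative
    probability of detection for this sensor layout."""
    hits = 0
    for _ in range(n_tracks):
        x0, y0 = rng.uniform(0.0, 99.0, 2)
        heading = rng.uniform(0.0, 2.0 * np.pi)
        for p in range(n_pings):
            xi = int(np.clip(x0 + step * p * np.cos(heading), 0, 99))
            yi = int(np.clip(y0 + step * p * np.sin(heading), 0, 99))
            if rng.random() < p_detect(se[yi, xi]):
                hits += 1
                break
    return hits / n_tracks

cpd = cumulative_pd()
```

A multistatic layout would be represented by taking, at each ping, the best signal excess over all source-receiver pairs before applying the detection curve.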
4:30
4pUW13. Twinkling exponents for backscattering by spheres in the vicinity of Airy caustics associated with reflections by corrugated surfaces.
Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ.,
Pullman, WA 99164-2814, marston@wsu.edu)
High frequency sound reflected by corrugated surfaces produces caustic
networks relevant to sea surface reflection [Williams et al., J. Acoust. Soc.
Am. 96, 1687–1702 (1994)]. When a sphere is positioned sufficiently far
from the reflecting surface, it may be close to an Airy caustic which causes
a significant increase in the backscattering [Dzikowicz and Marston, J.
Acoust. Soc. Am. 116, 2751–2758 (2004)] for signals that bounce only once
off of the focusing surface. For simplicity, here, it is assumed that those signals may be distinguished from the earlier direct echo from the sphere and
the later (and sometimes stronger) doubly focused echo from the sphere
[Dzikowicz and Marston, J. Acoust. Soc. Am. 118, 2811–2819 (2005)]. In
1977, M. V. Berry noticed that the third and higher intensity moments of wavefields containing caustics can increase in proportion to k^ν, where k is the wavenumber and ν is a “twinkling exponent” determined by the dependencies of the intensity and focal volume on k. Assuming that the sphere is impenetrable and sufficiently large that its direct scattering depends only weakly on k, for the single-bounce backscattering by a sphere considered here (the easiest situation for applying Berry’s analysis) the predicted exponent for the third moment is ν = 1/3.
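Berry's scaling is usually written as follows; this general-moment form is the standard statement and is added here for clarity (the abstract itself quotes only the third-moment value):

```latex
% n-th intensity moment of a caustic-bearing wavefield at large wavenumber k
\[
  \langle I^{\,n} \rangle \;\propto\; k^{\nu_n}, \qquad k \to \infty,
\]
% with \nu_n a "twinkling exponent" set by how the peak intensity and the
% focal volume of the caustic scale with k. For the single-bounce Airy
% caustic considered in this abstract, the predicted third-moment exponent is
\[
  \nu_3 = \tfrac{1}{3}.
\]
```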
THURSDAY EVENING, 30 OCTOBER 2014
7:30 P.M. TO 9:00 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Tuesday, Wednesday, and Thursday
evenings. On Tuesday the meetings will begin at 8:00 p.m., except for Engineering Acoustics which will hold its meeting starting at
4:30 p.m. On Wednesday evening, the Technical Committee on Biomedical Acoustics will meet starting at 7:30 p.m. On Thursday evening, the meetings will begin at 7:30 p.m.
These are working, collegial meetings. Much of the work of the Society is accomplished by actions that originate and are taken in
these meetings including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially
invited to attend these meetings and to participate actively in the discussion.
Committees meeting on Thursday are as follows:
Animal Bioacoustics                              Lincoln
Musical Acoustics                                Santa Fe
Noise                                            Marriott 3/4
Psychological and Physiological Acoustics        Marriott 1/2
Signal Processing in Acoustics                   Indiana G
Underwater Acoustics                             Indiana F
FRIDAY MORNING, 31 OCTOBER 2014
INDIANA A/B, 8:00 A.M. TO 12:30 P.M.
Session 5aBA
Biomedical Acoustics: Cavitation Control and Detection Techniques
Kevin J. Haworth, Cochair
Univ. of Cincinnati, 231 Albert Sabin Way, CVC3940, Cincinnati, OH 45209
Oliver D. Kripfgans, Cochair
Dept. of Radiology, Univ. of Michigan, Ann Arbor, MI 48109-5667
Invited Papers
8:00
5aBA1. Detection and control of cavitation during blood–brain barrier opening: Applications and clinical considerations.
Meaghan A. O’Reilly, Ryan M. Jones, Alison Burgess, Cassandra Tyson, and Kullervo Hynynen (Physical Sci., Sunnybrook Res. Inst.,
2075 Bayview Ave., Rm. C713, Toronto, ON M4N3M5, Canada, moreilly@sri.utoronto.ca)
Microbubble-mediated opening of the blood–brain barrier (BBB) using ultrasound is a targeted technique that provides a transient
time window during which circulating therapeutics that are normally restricted to the vasculature can pass into the brain. This effect has
been associated with increases in cavitation activity of the circulating microbubbles, and our group has previously described a method to
actively control treatments in pre-clinical rodent models based on acoustic emissions recorded by a single transducer. Recently, we have
developed a clinical-scale receiver array capable of detecting bubble activity through ex vivo human skullcaps starting at pressure levels
below the threshold for BBB opening. The use of this array to spatially map cavitation activity in the brain during ultrasound therapy
will be discussed, including considerations for compensating for the distorting effects of the skull bone. Additionally, results from preclinical investigations examining safety and therapeutic potential will be presented, and receiver design considerations for both pre-clinical and clinical scale systems will be discussed.
8:20
5aBA2. Passive acoustic mapping of stable and inertial cavitation during ultrasound therapy. Christian Coviello (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Oxford, United Kingdom), James Choi (Dept. of BioEng., Imperial College, London,
United Kingdom), Jamie Collin, Robert Carlisle (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Oxford, United Kingdom), Miklós Gyöngy (Faculty of Information Technol. and Bionics, Pázmány Péter Catholic Univ., Budapest, Hungary), and Constantin
C. Coussios (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Inst. of Biomedical Eng., Old Rd. Campus Res. Bldg.,
Oxford, Oxfordshire OX3 7DQ, United Kingdom, constantin.coussios@eng.ox.ac.uk)
Accurate spatio-temporal characterization, quantification, and control of the type and extent of cavitation activity is crucial for a
wide range of therapeutic ultrasound applications, ranging from ablation to sonothrombolysis, opening of the blood-brain barrier and
drug delivery for cancer. Passive Acoustic Mapping (PAM) is a technique that utilizes arrays of acoustic detectors, typically coaxially
aligned or coincident with the therapeutic elements, to receive acoustic emissions outside the main frequency band of the therapy pulse.
The signals received by each detector are then filtered in the frequency domain into harmonics and ultra/subharmonics of the fundamental therapeutic frequency and other broadband components, and subsequently beamformed using a multi-correlation algorithm, which
uses measures of similarity between the signals rather than time-of-flight information in order to map sources of non-linear emissions in
real time. 2D and 3D cavitation maps obtained using time exposure acoustics beamforming will be presented, and juxtaposed to the
greater spatial resolution but increased computational complexity afforded by more advanced algorithms such as the Robust Capon
Beamformer (RCB). The spatial correlation between cavitation maps produced using PAM and the associated therapeutic effect will
also be discussed in the context of cavitation-enhanced ablation and drug delivery.
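The time-exposure-acoustics idea (delay each channel to a candidate pixel, sum, and integrate the energy) can be sketched as below. Everything here is synthetic and simplified (ideal delay-only propagation, an invented 32-element linear array and source position); the multi-correlation beamformer and the Robust Capon Beamformer from the abstract are not implemented.

```python
import numpy as np

c, fs = 1500.0, 10e6                        # sound speed (m/s), sample rate
nel = 32
elem_x = (np.arange(nel) - nel / 2) * 3e-4  # linear array along x (m)

# Synthetic broadband emission from a cavitation source at x = 0, z = 20 mm
t = np.arange(2048) / fs
burst = np.sin(2 * np.pi * 2e6 * t) * np.exp(-((t - 20e-6) / 3e-6) ** 2)
src_x, src_z = 0.0, 0.02

rf = np.zeros((nel, t.size))
for i, ex in enumerate(elem_x):
    d = np.hypot(src_x - ex, src_z)
    rf[i] = np.roll(burst, int(round(d / c * fs)))  # ideal delay-only channel

def tea_pixel(px, pz):
    """Time-exposure-acoustics pixel value: energy of the delay-and-sum
    of all channels back-propagated to the point (px, pz)."""
    out = np.zeros(t.size)
    for i, ex in enumerate(elem_x):
        d = np.hypot(px - ex, pz)
        out += np.roll(rf[i], -int(round(d / c * fs)))
    return np.sum(out ** 2)

xs = np.arange(-0.005, 0.0051, 0.0005)      # lateral scan at the source depth
profile = [tea_pixel(px, src_z) for px in xs]
x_peak = xs[int(np.argmax(profile))]        # near the true source at x = 0
```

Because only delay alignment (not time-of-flight triggering) is used, the map localizes sources from continuously received emissions, which is the property that makes the method passive.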
8:40
5aBA3. Image-guided sonothrombolysis in a stroke model with a cavitation delivery and monitoring system. Francois Vignon,
William T. Shi (Ultrasound Imaging and Therapy, Philips Res. North America, 345 Scarborough Rd., Briarcliff Manor, NY 10510,
francois.vignon@philips.com), Jeffry Powers (Philips Ultrasound, Bothell, WA), Feng Xie, Juefei Wu, Shunji Gao, John Lof, and
Thomas R. Porter (Cardiology, Univ. of NE Medical Ctr., Omaha, NE)
Microbubbles (MB) and ultrasound (US) can dissolve intra-arterial thrombi. In order to reproducibly deliver the correct cavitation
dose and ensure treatment efficacy and safety, we designed a therapeutic US mode with cavitation monitoring. Therapy delivery and
recording of the MB signal are achieved with a sector imaging probe. Monitoring is achieved by spectrally analyzing the MB signal:
ultraharmonics are a marker of stable cavitation (SC) and broadband noise characterizes inertial cavitation (IC). We used the system in a
pig model. Thrombotic occlusions were created by injecting 4-hour old clots bilaterally into the internal carotids. Forty pigs were
randomized to one of three arms: 2.4 MI, 5 µs pulses with MBs; 1.7 MI, 20 µs pulses with MBs; or 2.4 MI, 5 µs pulses without MBs. Angiographic
recanalization rates were compared. Cavitation as a function of MI was estimated in vivo. Dominant SC started at an applied MI of 0.6
(0.3 MI in situ after derating by skull attenuation). Dominant IC was estimated to start at an applied MI of 0.9 (0.6 in situ). Thus, all therapy settings were in the IC regime. The 2.4 MI + MB setting was the most effective (100% recanalization) vs 38% for the 1.7 MI + MB and 50% for 2.4 MI without MBs (both p < 0.05 compared to 2.4 MI + MB). No signs of hemorrhage were found in any animal. In conclusion, higher IC levels are most effective for thrombus dissolution. Spectral analysis techniques can be used to plan and monitor the therapy.
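The monitoring rule described (ultraharmonics mark stable cavitation, broadband noise marks inertial cavitation) can be sketched as a spectral computation. The band widths, sample rates, and synthetic signals below are invented for illustration and are not the authors' implementation.

```python
import numpy as np

def cavitation_markers(x, fs, f0):
    """Spectral markers used to monitor cavitation: the ultraharmonic
    level (near 1.5 f0 and 2.5 f0, a stable-cavitation marker) and the
    broadband floor between (half-)harmonics (an inertial-cavitation
    marker)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)

    def band_peak(f, bw):
        sel = (freqs > f - bw) & (freqs < f + bw)
        return spec[sel].max()

    ultra = max(band_peak(1.5 * f0, 0.02 * f0), band_peak(2.5 * f0, 0.02 * f0))
    off = (freqs > 0.3 * f0) & (freqs < 4.0 * f0)
    for m in np.arange(0.5, 4.01, 0.5):      # mask every multiple of f0/2
        off &= np.abs(freqs - m * f0) > 0.05 * f0
    broadband = np.median(spec[off])
    return ultra, broadband

fs, f0 = 50e6, 2e6
t = np.arange(4096) / fs
# "stable cavitation": fundamental plus an ultraharmonic component
x_sc = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 1.5 * f0 * t)
# "inertial cavitation": fundamental plus broadband noise
x_ic = np.sin(2 * np.pi * f0 * t) + 0.3 * np.random.default_rng(3).standard_normal(t.size)

u_sc, b_sc = cavitation_markers(x_sc, fs, f0)
u_ic, b_ic = cavitation_markers(x_ic, fs, f0)
```

Tracking the two markers against the applied MI is what yields threshold estimates like the 0.6 (stable) and 0.9 (inertial) values reported in the abstract.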
9:00
5aBA4. Timing of high intensity pulses for myocardial cavitation-enabled therapy. Douglas Miller, Chunyan Dou (Radiology,
Univ. of Michigan, 3240A Medical Sci. I, 1301 Catherine St., Ann Arbor, MI 48109-5667, douglm@umich.edu), Gabe E. Owens
(Pediatrics, Univ. of Michigan, Ann Arbor, MI), and Oliver Kripfgans (Radiology, Univ. of Michigan, Ann Arbor, MI)
Ultrasound pulses intermittently triggered from an ECG signal can interact with circulating microbubbles to produce myocardial cavitation microlesions, which may enable tissue-reduction therapy. The timing of therapy pulses relative to the ECG was investigated to
identify the optimal trigger point with regard to physiological response and microlesion production. Rats were anesthetized, prepared for
ultrasound, placed in a heated water bath, and treated with 1.5 MHz focused ultrasound pulses aimed by 8 MHz imaging. Initially, rats
were treated for 1 min with triggering at each of six different points in the ECG while monitoring blood pressure. Premature complexes,
a useful indicator of efficacy, were seen in the ECG, except during early systole. Premature complexes corresponded with blood pressure
pulses for triggering during diastole, but not during systole. Next, triggering at three of the time points, end diastole, end systole, or mid-diastole, was tested for the impact on microlesion creation. Microlesions stained by Evans blue dye were scored in frozen sections. There
was no statistically significant variation in cardiomyocyte injury. The end of systole was identified as an optimal trigger time point which
yielded ECG complexes and substantial cardiomyocyte injury, but minimal cardiac functional disruption.
9:20
5aBA5. Cavitation threshold determination —Can we do it? Gail ter Haar, John Civale, Ian Rivens, and Marcia Costa (Phys., Inst. of
Cancer Res., Phys. Dept., Royal Marsden Hospital, Sutton, Surrey SM2 5PT, United Kingdom, gail.terhaar@icr.ac.uk)
As clinical applications, which harness acoustic cavitation, become more commonplace, it becomes more and more important to be
able to determine the threshold pressures at which it is likely to occur. In our studies, we have used a suite of different detection techniques in an effort to determine these thresholds. These include passive cavitation detection, transducer impedance monitoring, and visual
appearance. Different methods of acoustic signal processing have been compared. The resultant cavitation thresholds will be discussed.
9:40
5aBA6. Monitoring boiling histotripsy with bubble-based ultrasound techniques. Vera Khokhlova (Ctr. for Industrial and Medical
Ultrasound, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, va.khokhlova@gmail.com), Michael Canney
(INSERM U556, Lyon, France), Julianna Simon, Tatiana Khokhlova, Joo-Ha Hwang, Adam Maxwell, Michael Bailey, Oleg Sapozhnikov,
Wayne Kreider, and Lawrence Crum (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
Cavitation phenomena have always been considered a predominant mechanism of concern in mechanical tissue damage induced
by therapeutic ultrasound. Corresponding methods have been developed to monitor cavitation. Recently, a new high intensity focused
ultrasound technology, called boiling histotripsy (BH), was introduced, in which the major physical phenomenon that initiates mechanical tissue damage is vapor bubble growth associated with rapid tissue heating to boiling temperatures. Caused by nonlinear propagation
effects and the development of high-amplitude shocks, this tissue heating is localized in space and can lead to boiling within milliseconds. Once a boiling bubble is created, interaction of shock waves with the cavity results in tissue disintegration. While the incident
shocks can lead to cavitation phenomena and accompanying broadband emissions, the presence of a millimeter-sized vapor cavity in tissue produces strong echogenicity in ultrasound (US) imaging that can be exploited with B-mode diagnostic ultrasound. Various other
methods of imaging boiling histotripsy, including passive cavitation detection (PCD), Doppler or nonlinear pulse-inversion techniques,
and high speed photography in transparent gel phantoms are also overviewed. The role of shock amplitude as a metric for mechanical
tissue damage is discussed. [Work supported by NIH EB007643, T32DK007779, and NSBRI through NASA NCC 9-58.]
10:00
5aBA7. Control of cavitation through coalescence of cavitation nuclei. Timothy L. Hall, Alex Duryea, and Hedieh Tamaddoni
(Univ. of Michigan, 2200 Bonisteel Blvd., Ann Arbor, MI 48109, hallt@umich.edu)
Therapeutic ultrasound in the form of SWL, HIFU, or histotripsy frequently generates cavitation nuclei (bubbles 1–10 µm in radius),
which can persist up to about 1 s before dissolving. These nuclei can attenuate and reflect propagation of acoustic fields reducing SWL
efficiency, enhancing HIFU heating, or shifting the location of a histotripsy focal zone making procedures less predictable. Depending
on their location, nuclei can also directly cause tissue damage when a high amplitude sound field causes them to undergo inertial cavitation. These undesirable effects can be reduced by using a low amplitude sound field (MI <1) to stimulate coalescence of nuclei through
primary and secondary Bjerknes forces. We will show nuclei coalescence significantly reduces sound field attenuation, improves SWL
breakup of model kidney stones, and reduces collateral damage in soft tissues. We also show techniques for designing the non-focal
acoustic fields for efficient coalescence with 3D printed acoustic lenses. Timothy Hall has a consulting arrangement with Histosonics,
Inc., which has licensed intellectual property related to this abstract.
10:20–10:30 Break
2301
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2301
Contributed Papers
10:30
5aBA8. Ultraharmonic intravascular ultrasound imaging with commercial 40 MHz catheter: A feasibility study. Himanshu Shekhar, Ivy Awuor,
Steven Huntzicker, and Marvin M. Doyley (Univ. of Rochester, 345 Hopeman Bldg., University of Rochester River Campus, Rochester, NY 14627,
himanshuwaits@gmail.com)
The abnormal growth of the vasa vasorum is characteristic of life-threatening atherosclerotic plaques. Intravascular ultraharmonic imaging is an
emerging technique that could visualize the vasa vasorum and help clinicians identify life-threatening plaques. Implementing this technique on commercial intravascular ultrasound (IVUS) systems could accelerate its
clinical translation. Our previous work has demonstrated ultraharmonic
IVUS imaging with a modified clinical system that was equipped with a
commercial 15 MHz peripheral imaging catheter. In the present study, we
investigated the feasibility of ultraharmonic imaging with a commercially
available 40 MHz coronary imaging catheter. We imaged a flow phantom
that had contrast agent microbubbles (Targestar-P-HF, Targeson Inc., CA)
perfused in side channels parallel to its main lumen. The transducer was
excited at 30 MHz using 10% bandwidth chirp-coded pulses. The ultraharmonic response at 45 MHz was isolated and preferentially visualized using
pulse inversion and digital filtering. Side channels with 900 µm and 500 µm
diameter were detected with contrast-to-tissue ratios approaching 10 dB for
clinically relevant microbubble concentrations. The results of this study
indicate that ultraharmonic imaging is feasible with commercially available
coronary IVUS catheters, which may facilitate its widespread application in
preclinical research and clinical imaging.
10:45
5aBA9. A method to calibrate the absolute receive sensitivity of spherically focused, single-element transducers. Kyle T. Rich and T. Douglas
Mast (Biomedical Eng., Univ. of Cincinnati, 3938 Cardiovascular Res. Ctr.,
231 Albert Sabin Way, Cincinnati, OH 45267-0586, doug.mast@uc.edu)
Quantitative acoustic measurements of microbubble behavior, including
scattering and emissions from cavitation, would be facilitated by improved
calibration of transducers making absolute pressure measurements. In particular, appropriate methods are needed for wideband calibration of focused
passive cavitation detectors. Here, a substitution method was developed to
characterize the absolute receive sensitivity of two spherically focused, single-element transducers (center transmit frequencies 4 and 10 MHz).
Receive calibrations were obtained by transmitting and receiving a broadband pulse between the two focused transducers in a pitch-catch, confocally
aligned configuration, separated by a distance equal to the sum of the two
focal lengths. A calibrated hydrophone was substituted to measure the pressure field in the plane of each receiver’s surface. The frequency dependent
receive sensitivity at the focus was then calculated for each transducer as
the ratio of the receiver-measured voltage and the average hydrophone-measured pressure amplitude across the receiver surface. Calibrations were
validated by generating an approximately spherically spreading, broadband
pressure wave at the focus of each transducer using a 2-mm diameter transducer and comparing the absolute acoustic pressure measured by each
focused transducer to that measured by a calibrated hydrophone.
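As an illustration of the final calibration step described above, the receive sensitivity reduces to a spectral ratio of the receiver-measured voltage to the surface-averaged hydrophone pressure. The following sketch is not the authors' code; the function name and signature are hypothetical:

```python
import numpy as np

def receive_sensitivity(v_rx, p_hydro, fs):
    """Frequency-dependent receive sensitivity (V/Pa) by substitution.

    v_rx    : received voltage waveform from the focused transducer (V)
    p_hydro : hydrophone-measured pressure averaged over the receiver
              surface, time-aligned with v_rx (Pa)
    fs      : sampling rate (Hz)
    """
    n = len(v_rx)
    V = np.fft.rfft(v_rx)
    P = np.fft.rfft(p_hydro)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Sensitivity is the spectral ratio of voltage to pressure amplitude.
    M = np.abs(V) / np.abs(P)
    return freqs, M
```

Multiplying a received spectrum by 1/M then converts recorded voltages back to absolute pressure at the focus.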
11:00
5aBA10. Instigation and monitoring of inertial cavitation from nanoscale particles using a diagnostic imaging platform and passive acoustic
mapping. Christian Coviello, James Kwan, Susan Graham, Rachel Myers,
Apurva Shah, Penny Probert Smith, Robert Carlisle, and Constantin Coussios (Inst. of Biomedical Eng., Univ. of Oxford, ORCRB, Oxford OX3
7DQ, United Kingdom, christian.coviello@eng.ox.ac.uk)
Inertial cavitation nucleated by microbubble contrast agents has been
recently shown to enhance extravasation and improve the distribution of
anti-cancer agents during ultrasound (US)-enhanced delivery. However,
microbubbles require frequent replenishment due to their rapid clearance
and destruction upon US exposure and are unable to extravasate into tumor
tissue due to their large size. A new generation of gas-stabilizing polymeric
cup-shaped nanoparticles, or “nanocups” (NCs), has been formulated to a
size that enables exploitation of the enhanced permeability and retention
effect for intratumoral accumulation. NCs provide sustained inertial cavitation, as characterized by broadband emissions, at peak rarefactional pressures readily achievable by diagnostic ultrasound systems. This enables the
use of a single low-cost system for B-mode treatment guidance, instigation
of cavitation, and real-time passive acoustic mapping (PAM) of the location
and extent of cavitation activity during therapy. The significant lowering of
the inertial cavitation threshold in the presence of NCs as characterized by
PAM is first quantified in-vitro. In-vivo and ex-vivo results in xenograft-implanted tumor-bearing mice further evidence the strong presence of inertial cavitation detectable in the tumor at diagnostic levels of US intensity, as
confirmed by PAM images overlaid on B-mode in real-time.
11:15
5aBA11. Passive cavitation imaging with nucleic acid-loaded microbubbles in mouse tumors. Man M. Nguyen, Jonathan A. Kopechek, Bima
Hasjim, Flordeliza S. Villanueva, and Kang Kim (Dept. of Medicine, Univ.
of Pittsburgh, 3550 Terrace St., 562 Scaife Hall, Pittsburgh, PA 15261,
manmnguyen@gmail.com)
Ultrasound-targeted microbubble (MB) destruction has been used to
deliver nucleic acids to cancer cells for therapeutic effect. Identifying both
the location and cavitation activities of the MBs is needed for efficient and
effective treatment. In this study, we implemented passive cavitation imaging into a commercially available ultrasound open platform (Verasonics) for
a 128-element linear array transducer, centered at 5 MHz, and applied it to
an in-vivo mouse tumor model. Cationic lipid MBs were loaded with a transcription factor decoy that suppresses STAT3 signaling and inhibits tumor
growth in murine squamous cell carcinomas. During systemic MB infusion,
ultrasound pulses (4 or 20 cycles) were delivered with a 1-MHz single-element transducer (0.4–1.4 MPa peak pressures). Channel data were beamformed offline, band-pass filtered, subtracted from reference images acquired
without MBs, and co-registered with B-mode images. During MB infusion,
harmonics and broadband emissions were detected in the tumor with both
frequency spectra and cavitation images. For 4-cycle 0.4 MPa pulses, harmonic signals at 5 MHz and broadband signals at 3–7 MHz were 23 dB and at
least 5 dB greater with MBs than without MBs, respectively. These preliminary results demonstrate the feasibility of in-vivo passive cavitation imaging
and could lead to further studies for optimizing US/MB-mediated delivery
of nucleic acids to tumors.
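Passive cavitation imaging of the kind implemented here is often realized as a delay-and-sum beamformer applied to passively received channel data. The following is a minimal sketch of one common formulation, not the authors' Verasonics implementation; all names and the geometry are illustrative:

```python
import numpy as np

def passive_cavitation_map(channels, elem_pos, pixels, c, fs):
    """Delay-and-sum passive map: back-propagate received channel data to
    each candidate pixel and integrate the energy of the coherent sum.

    channels : (n_elem, n_samp) passively received RF data
    elem_pos : (n_elem, 2) element coordinates in meters
    pixels   : (n_pix, 2) image-point coordinates in meters
    c, fs    : sound speed (m/s) and sampling rate (Hz)
    """
    n_elem, n_samp = channels.shape
    t = np.arange(n_samp) / fs
    energy = np.zeros(len(pixels))
    for i, p in enumerate(pixels):
        # One-way propagation delay from this pixel to every element
        tau = np.linalg.norm(elem_pos - p, axis=1) / c
        summed = np.zeros(n_samp)
        for k in range(n_elem):
            # Advance channel k by tau[k] so emissions from p align in time
            summed += np.interp(t + tau[k], t, channels[k], left=0.0, right=0.0)
        energy[i] = np.sum(summed ** 2)
    return energy
```

Pixels containing a cavitation source sum coherently and show high integrated energy; the resulting map can then be co-registered with B-mode frames, as in the abstract.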
11:30
5aBA12. Non-focal acoustic lens designs for cavitation bubble consolidation. Hedieh A. Tamaddoni, Alexander Duryea, and Timothy L. Hall
(Univ. of Michigan, 2740 Barclay Way, Ann Arbor, MI 48105, alavi@
umich.edu)
During shockwave lithotripsy, cavitation bubbles form on the surface of
urinary stones aiding in the fragmentation process. However, shockwaves
can also produce pre-focal bubbles, which may shield or block subsequent
shockwaves and potentially induce collateral tissue damage. We have previously shown in-vitro that low amplitude acoustic waves can be applied to
actively stimulate bubble coalescence and help alleviate this effect. A traditional elliptical transducer lens design produces the maximum focal gain
possible for a given aperture. From experiments and simulation, we have
found that this design is not optimal for bubble consolidation as the primary
and secondary Bjerknes forces may act against each other and the effective
field volume is too small. For this work, we designed and constructed nonfocal transducer lenses with complex surface geometries using rapid prototyping stereolithography to produce more effective acoustic fields for bubble
consolidation during lithotripsy or ultrasound therapy. We demonstrate a
design methodology using an inverse problem technique to map the desired
acoustic field back to the surface of the transducer lens to determine the correct phase shift at every point on the lens surface. This method could be
applied to other acoustics problems where non-focused acoustic fields are
desired.
11:45
5aBA13. Scavenging dissolved oxygen via acoustic droplet vaporization. Kirthi Radhakrishnan, Christy K. Holland, and Kevin J. Haworth (Internal Medicine, Univ. of Cincinnati, Cardiovascular Ctr. 3972, 231 Albert Sabin Way, Cincinnati, OH 45267, radhakki@ucmail.uc.edu)
Acoustic droplet vaporization (ADV) has been investigated for capillary hemostasis, thermal ablation, and ultrasound imaging. The maximum diameter of a microbubble produced by ADV depends on the gas saturation of the surrounding fluid. This dependence is due to diffusion of dissolved gases from the fluid into the perfluoropentane (PFP) microbubble. This study investigated the change in oxygen concentration in the surrounding fluid after ADV. Albumin-shelled PFP droplets in air-saturated saline (1:30, v/v) were continuously pumped through a flow system and insonified by a focused 2-MHz single-element transducer to induce ADV. B-mode image echogenicity was used to determine the ADV threshold pressure amplitude. The dissolved oxygen concentration in the fluid upstream and downstream of the insonation region was measured using inline sensors. Droplet size distributions were measured before and after ultrasound exposure to determine the ADV transition efficiency. The ADV pressure threshold at 2 MHz was 1.7 MPa (peak negative). Exposure of PFP droplets to ultrasound at 5 MPa peak negative pressure caused the dissolved oxygen content in the surrounding fluid to decrease from 88 ± 3% to 20 ± 4%. The implications of oxygen scavenging during ADV will be discussed.
12:00
5aBA14. Effects of rose bengal on cavitation cloud behavior in optically transparent gel phantom investigated by high-speed observation. Jun Yasuda, Takuya Miyashita (Dept. of Commun. Eng., Tohoku Univ., 6-6-065 Aramakiazaaoba, Aoba, Sendai, Miyagiken 980-0801, Japan, j_yasuda@ecei.tohoku.ac.jp), Kei Taguchi (Dept. of Biomedical Eng., Tohoku Univ., Sendai, Japan), Shin Yoshizawa (Dept. of Commun. Eng., Tohoku Univ., Sendai, Japan), and Shin-ichiro Umemura (Dept. of Biomedical Eng., Tohoku Univ., Sendai, Japan)
Sonodynamic treatment is a non-thermal ultrasonic method using the sonochemical effect of cavitation bubbles. Rose bengal (RB) is sonochemically active and reduces the cavitation threshold, and therefore has potential as an agent for sonodynamic treatment. For the effectiveness and safety of the treatment, controlling cavitation is crucial. In our previous study, we suggested high-intensity focused ultrasound (HIFU) employing second-harmonic superimposition, which can control cavitation cloud generation by superimposing the second harmonic onto the fundamental. In this study, to investigate the effects of RB on cavitation behavior, a polyacrylamide gel phantom containing RB was exposed to second-harmonic superimposed ultrasound and the generated cavitation bubbles were observed by a high-speed camera. The gel contained three different concentrations of RB: 0, 1, and 10 mg/L. The ultrasonic intensity and exposure duration were 40 kW/cm2 and 100 µs, respectively. The fundamental frequency was 0.8 MHz. In the results, the number of incepted cavitation clouds increased and the lifetime of bubbles became longer as the RB concentration increased, with high reproducibility. The observed RB concentration dependence suggests that the amount of cavitation bubbles can be controlled using second-harmonic superimposition. The observed lifetime extension of bubbles can not only promote the sonochemical effect but also enhance the thermal bioeffect.
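The second-harmonic superimposition technique referenced above amounts to adding a phase-controlled second harmonic to the fundamental drive; shifting the relative phase emphasizes either the peak-negative or peak-positive half-cycles of the waveform, which biases cavitation cloud inception. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

def second_harmonic_superimposed(f0, amp_ratio, phase, fs, n_cycles):
    """Sketch of a second-harmonic-superimposed drive waveform
    (illustrative parameters, not the authors' actual excitation).

    f0        : fundamental frequency in Hz (0.8 MHz in the abstract)
    amp_ratio : second-harmonic amplitude relative to the fundamental
    phase     : relative phase of the second harmonic in radians; varying
                it trades peak-negative against peak-positive emphasis
    fs        : sampling rate (Hz); n_cycles : number of fundamental cycles
    """
    t = np.arange(int(n_cycles * fs / f0)) / fs
    return np.sin(2 * np.pi * f0 * t) + amp_ratio * np.sin(4 * np.pi * f0 * t + phase)
```

With `phase = pi/2` and `amp_ratio = 0.5`, for example, the rarefactional peaks reach 1.5 times the fundamental amplitude while the compressional peaks are suppressed.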
12:15–12:30 Panel Discussion
FRIDAY MORNING, 31 OCTOBER 2014
INDIANA E, 10:00 A.M. TO 1:00 P.M.
Session 5aED
Education in Acoustics: Hands-On Acoustics Demonstrations for Indianapolis Area Students
Uwe J. Hansen, Cochair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
Andrew C. H. Morrison, Cochair
Natural Science Department, Joliet Junior College, 1215 Houbolt Rd., Joliet, IL 60431
Acoustics has a long and rich history of physical demonstrations of fundamental (and not so fundamental) acoustics principles and
phenomena. In this session, “Hands-On” demonstrations will be set up for a group of middle school students from the Indianapolis
area. The goal is to foster curiosity and excitement in science and acoustics at this critical stage in the students’ educational development;
the session is part of the larger “Listen Up” education outreach effort by the ASA.
Each station will be manned by an experienced acoustician who will help the students understand the principle being illustrated in
each demo. Any acousticians wanting to participate in this fun event should email Uwe Hansen (uhansen@indstate.edu) or Andrew
C. H. Morrison (amorriso@jjc.edu).
FRIDAY MORNING, 31 OCTOBER 2014
MARRIOTT 7/8, 9:45 A.M. TO 12:05 P.M.
Session 5aNS
Noise: Transportation Noise, Soundscapes, and Related Topics
Alan T. Wall, Chair
Battlespace Acoustics Branch, Air Force Research Laboratory, Bldg. 441, Wright-Patterson AFB, OH 45433
Chair’s Introduction—9:45
Contributed Papers
9:50
5aNS1. Traffic monitoring with noise: Investigations on an urban seismic network. Nima Riahi and Peter Gerstoft (Marine Physical Lab., Scripps
Inst. of Oceanogr., 9500 Gilman Dr., MC 0238, La Jolla, CA 92093-0238,
nriahi@ucsd.edu)
Traffic in urban areas generates not only acoustic noise but also much
seismic noise. The latter is typically not perceptible by humans but could, in
fact, offer an interesting data source for traffic information systems. To
explore the potential for this, we study a 5300-geophone network, which
covered an area of over 70 km2 in Long Beach, CA, and was deployed as
part of a hydrocarbon industry survey. The sensors have a typical spacing of
about 100 m, which presents a two-sided processing challenge here: signals
beyond a few receiver spacings from the sources are often strongly attenuated and scattered whereas nearby receiver signals may contain complicated
near-field effects. We illustrate how we address this issue and give three
simple applications: counting cars on a highway section, classifying different types of vehicles passing along a road, and measuring time and take-off
velocity of aircraft at Long Beach airport. We discuss future work toward
traffic monitoring and also possible connections with acoustical problems.
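As a toy illustration of the event-detection idea behind counting cars on a single geophone trace (this is not the authors' method; the window length and threshold factor here are invented):

```python
import numpy as np

def count_events(trace, fs, win_s=0.5, k=5.0):
    """Toy vehicle-event counter for one geophone trace: compute short-term
    RMS in consecutive windows and count contiguous runs that exceed k
    times the median RMS (a crude noise-floor estimate). Illustrative only.

    trace : 1-D seismic trace; fs : sampling rate (Hz)
    win_s : analysis window length (s); k : threshold factor
    """
    win = int(win_s * fs)
    n_win = len(trace) // win
    rms = np.array([np.sqrt(np.mean(trace[i * win:(i + 1) * win] ** 2))
                    for i in range(n_win)])
    above = rms > k * np.median(rms)
    # Count rising edges: windows where 'above' switches False -> True
    return int(np.sum(above[1:] & ~above[:-1]) + int(above[0]))
```

A real system would of course need per-lane localization and vehicle classification, but thresholded short-term energy is the usual starting point.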
10:05
5aNS2. Impact of AMX-A1 military aircraft operations on the acoustical environment close to a Brazilian airbase. Olmiro C. de Souza and
Stephan Paul (UFSM, Acampamento, 569, Santa Maria, Santa Maria
97050003, Brazil, olmirocz.eac@gmail.com)
Military aircraft operating on airbases usually have a considerable
impact on the neighborhood. While the impact of civil aircraft operations
can be modeled by commercially available software, the same is hardly possible for military aircraft as no EPNL data are available for such aircraft and
flight path are not restricted to those used by civilian operations. Therefore,
in this work, the noise impact of AMX-A1 aircraft operating at a Brazilian
airbase was evaluated from measurements originally intended for calibration
of the noise map of a university in the vicinity. From the data, it was possible to obtain LAeq,10min with and without jet noise, to quantify how much AMX operations influence the total measurement. Sound exposure levels (SEL) were also calculated. It was found that, depending on the AMX procedure (approach, departure, touch-and-go, etc.), jet noise increases the LAeq,10min by up to 10 dB, and SEL values reach 96.6 dBA in sensitive areas. It will be discussed whether the A-weighted sound power level can be estimated from the data, considering the aircraft as a point source in free field.
10:20
5aNS3. The effect of long-range propagation on contra-rotating open rotor en-route noise levels. Upol Islam (Inst. of Sound and Vib. Res. (ISVR), Univ. of Southampton, Highfield Campus, Bldg. 13, Rm. 2009, Southampton, Hampshire SO17 1BJ, United Kingdom, ui1d11@soton.ac.uk)
The purpose was to calculate the en-route noise level produced by an advanced contra-rotating open rotor (CROR) powered aircraft. The en-route noise is defined as the noise produced by an aircraft in high-altitude operation (>3000 m), measured by a microphone 1.2 m above ground level. Calculations were performed for three different aircraft operating conditions: cruise, climb, and descent. For each calculation, the aircraft noise source was modeled as an isolated CROR engine. This noise model was determined from experimental measurements made in a transonic wind tunnel using a 1/6th-scale open rotor rig. En-route noise levels were calculated using the whole-aircraft noise prediction code SOPRANO. The CROR noise model was input into SOPRANO, and long-distance propagation was calculated using the ray-tracing code APHRODITE, which is implemented within SOPRANO. This ray-tracing code requires atmospheric wind speed, wind direction, temperature, and humidity profiles, which were collected from historical data around Europe. The ray-tracing method divides the atmosphere into a number of layers. Meteorological parameters were assumed to vary linearly between the values specified at the boundaries of each layer. Numerous simulations were conducted using different atmospheres in order to assess the impact of atmospheric conditions on the en-route noise levels.
10:35
5aNS4. Gaps in the literature on the effects of aircraft noise on children’s cognitive performance. Matthew Kamrath and Michelle C. Vigeant (Graduate Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, kamrath64@gmail.com)
In the past two decades, several major studies have indicated that chronic aircraft noise exposure negatively impacts children’s cognitive performance. For example, the longitudinal Munich airport study (Hygge, Am. Psychol. Soc., 2002) demonstrated that noise adversely affects reading ability, memory, attention, and speech perception. Moreover, the cross-sectional RANCH study (Stansfeld, Lancet, 2005) found a linear correlation between extended noise exposure and reduced reading comprehension and recognition memory. This presentation summarizes these and other recent studies and discusses four key areas in need of further research (ENNAH Final Report, Project No. 226442, 2013). First, future studies should account for all of the following confounding factors: socioeconomic variables; daytime and nighttime aircraft, road, and train noise; and air pollution. Second, multiple noise metrics should be evaluated to determine if the character of the noise alters the relationship between noise and cognition. Third, future research should explore the mitigating effects of improved classroom acoustics and exterior sound insulation. Finally, additional longitudinal studies are necessary: (1) to establish a causal relationship between aircraft noise and cognition; and (2) to understand how changes in the duration of the exposure and in the age of the students influence the relationship. [Work supported by FAA PARTNER Center of Excellence.]
10:50
5aNS5. Acoustic absorption of green roof samples commercially available in southern Brazil. Ricardo Brum, Stephan Paul (Centro de Tecnologia, Universidade Federal de Santa Maria, Rua Erly de Almeida Lima, 650,
Santa Maria, RS 97105-120, Brazil, ricardozbrum@yahoo.com.br), Andrey
R. da Silva (Centro de Engenharias da Mobilidade, Universidade Federal de
Santa Catarina, Joinville, Brazil), and Tenile Piovesan (Centro de Tecnologia, Universidade Federal de Santa Maria, Santa Maria, RS, Brazil)
Previous investigations have shown that green roofs provide many environmental benefits, such as thermal conditioning, air cleaning, and rain
water absorption. Nevertheless, information regarding acoustic properties,
such as sound absorption and transmission loss is still sparse. This work
presents measurements of the sound absorption coefficient of two types of
green roofs commercially available in Brazil: the alveolar and the hexa systems. Measurements were made in a reverberant chamber according to ISO 354 for different variations of both systems: the alveolar system with 2.5 cm of substrate with and without grass and 4 cm of substrate only. The hexa system was measured with layers of 4 and 6 cm of substrate without vegetation and 6 cm of substrate with a layer of vegetation of the sedum type. For all systems, high absorption coefficients were found in the medium- and high-frequency range (α > 0.7) and low absorption at low frequencies (α < 0.2).
This was expected due to the highly porous structure of the substrate. The
results suggest that the types of green roofs evaluated in this work could be
a good approach to noise control in urban areas.
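For context, reverberation-chamber measurements in the spirit of ISO 354 derive the random-incidence absorption coefficient from reverberation times measured with and without the specimen, via Sabine's relation. A simplified sketch (the standard itself adds air-absorption and climatic corrections that are omitted here):

```python
def sabine_absorption(V, S, T_empty, T_sample, c=343.0):
    """Random-incidence absorption coefficient from reverberation times,
    following the simplified Sabine form of the ISO 354 evaluation.

    V        : chamber volume (m^3)
    S        : area of the test specimen (m^2)
    T_empty  : reverberation time without the specimen (s)
    T_sample : reverberation time with the specimen (s)
    c        : speed of sound (m/s)
    """
    # Equivalent absorption area added by the specimen (Sabine's relation)
    A = 55.3 * V / c * (1.0 / T_sample - 1.0 / T_empty)
    return A / S
```

The calculation is repeated per third-octave band, which is how the frequency-dependent coefficients quoted in the abstract are obtained.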
11:05
5aNS6. The perceived annoyance of urban soundscapes. Adam Craig,
Don Knox, and David Moore (School of Eng. and Built Environment, Glasgow Caledonian Univ., 70 Cowcaddens Rd., Glasgow G4 0BA, United
Kingdom, Adam.Craig@gcu.ac.uk)
Annoyance is one of the main factors that contribute to a negative view
of environmental noise, and can lead to stress-related health conditions.
Subjective perception of environmental sounds is dependent upon a variety
of factors related to the sound, the geographical location, and the listener.
Noise maps used to communicate information to the public about environmental noise in a given geographic location are based on simple noise level
measurements and do not include any information regarding how perceptually annoying or otherwise the noise might be. This study involved subjective assessment by a large panel of listeners (N = 200) of a corpus of 60
pre-recorded urban soundscapes collected from a variety of locations around
Glasgow City Centre. Binaural recordings were taken at three points during
each 24 hour period in order to capture urban noise during day, evening, and
night. Perceived annoyance was measured using Likert and numerical scales
and each soundscape measured in terms of arousal and positive/negative valence. The results shed light on the subjective annoyance of environmental
sound in a range of urban locations around Glasgow, and form the basis for
development of environmental noise maps, which more fully communicate
the effects of environmental noise to the public.
11:20
5aNS7. What comprises a healthy soundscape for the captive Southern
White Rhinoceros (Ceratotherium simum simum)? Suzi Wiseman (Environ. Geography, Texas State Univ.-San Marcos, 3901 North 30th St., Waco,
TX 76708, sw1210txstate@gmail.com), Preston S. Wilson (Mech. Eng.,
Univ. Texas at Austin, Austin, TX), and Frank Sepulveda (Geophys., Baylor
Univ., Killeen, TX)
Many creatures, including the myopic rhinoceros, depend upon hearing
and smell to determine their environment. Nature is dominated by meaningful biophonic and geophonic information quickly absorbed by soil and vegetation, while anthrophonic urban soundscapes exhibit vastly different
physical and semantic characteristics, sound repeatedly reflecting off hard
geometric surfaces, distorting and reverberating, and becoming noise. Noise
damages humans physiologically, including reproductively, and likely damages other mammals. Rhinos vocalize sonically and infrasonically, but
audiograms are unavailable. They generally breed poorly in urban zoos,
where infrasonic noise can be chronic. Biological and social factors are
studied, but little attention if any is paid to soundscape. We present a methodology to analyze the soundscapes of captive animals according to their
hearing range. Sound metrics determined from recordings at various institutions can then be compared and correlations with the health and wellbeing
of their animals can be sought. To develop this methodology we studied the
sonic, infrasonic, and seismic soundscape experienced by the white rhinos
at Fossil Rim Wildlife Center, one of the few U.S. facilities to successfully
breed this species in recent years. Future analysis can seek particular parameters known to be injurious to human mammals, plus parameters known to
invoke response in animals.
11:35
5aNS8. Shape optimization of acoustic horns using few design variables.
Nilson Barbieri (Mech. Eng., PUCPR, Rua Imaculada Conceição, 1155,
Curitiba, Parana 80215-901, Brazil, nilson.barbieri@pucpr.br), Renato
Barbieri (Mech. Eng., UDESC, Joinville, Santa Catarina, Brazil), Clebe T.
Vitorino, and Key F. Lima (Mech. Eng., PUCPR, Curitiba, Brazil)
The main steps for design of the optimal geometry of acoustic horns
employing numerical methods are: the definition of the domain and the
restrictions and control of the boundary, the definition of the objective function and the frequency range of interest, the evaluation of the objective function value, and the selection of a robust optimization technique to calculate
the optimal value. During the optimization process, the profile is changing
continuously until obtaining the optimal horn profile. The main focus of this
work was to obtain optimal geometries with the use of few design variables.
Two different methods to control the horn profile during the optimization
process are used: approximation of the contour of the horn with Hermite
polynomials and sinusoidal functions. The numerical results show the efficiency of these methods, and it was also found (at least from an engineering point of view) that the optimal horn geometry is not unique for a single target frequency. Results of the optimization for more than one frequency are also shown.
11:50
5aNS9. Parametric study of a PULSCO vent silencer. Usama Tohid
(Eng., PULSCO, 17945 Sky Park Circle, Ste. G, Irvine, CA 92614, u.
tohid@pulsco.com)
We have conducted a parametric study via numerical simulations of a
PULSCO vent silencer. The overall objective is to demonstrate the existence
of an optimum system performance for a given set of operating conditions
by modifying the corresponding geometry of the device. The vent silencer
under consideration consists of a perforated diffuser, the silencer body, and
a tube module. The tube module consists of a set of tubes through which the
working fluid passes. The flow tubes are perforated and surrounded with
acoustic packing that is responsible for the attenuation. The mathematical
model of the vent silencer is built upon Helmholtz equation for the plane
wave solution, and the Delany-Bazley model for the acoustic packing. The
geometrical parameters chosen for the parametric study include: the porosity
of the diffuser and the flow tubes, the type of packing material used for the
tube module, bulk density for the acoustic packing, and the hole diameter of
the perforated diffuser and flow tubes. The equations of the mathematical
model are discretized over the computational domain and solved with a finite element method. Numerical results in terms of transmission loss, for the
system, indicate that diffuser hole size of 1/4” with porosity of 0.1, flow
tube hole size of 1/8” with porosity of 0.23, packing density of 16 kg/m3 for
TRS-10 and 100 kg/m3 for Advantex provided the optimum results for the
chosen set of conditions. The numerical results were found to be in agreement with experimental data.
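The Delany–Bazley model cited above is an empirical power-law fit that yields the characteristic impedance and complex wavenumber of a fibrous packing from its flow resistivity alone. A sketch using the standard published coefficients (the fit is usually quoted as valid for roughly 0.01 < X < 1):

```python
import numpy as np

def delany_bazley(f, sigma, rho0=1.21, c0=343.0):
    """Characteristic impedance Zc (Pa*s/m) and complex wavenumber kc (1/m)
    of a fibrous absorber with flow resistivity sigma (Pa*s/m^2), after
    Delany & Bazley's empirical power laws (e^{+jwt} sign convention).
    """
    X = rho0 * f / sigma  # dimensionless frequency parameter
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    kc = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    return Zc, kc
```

In a finite element model such as the one described, Zc and kc replace the free-air impedance and wavenumber inside the packed regions of the silencer.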
FRIDAY MORNING, 31 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 10:00 A.M.
Session 5aPPa
Psychological and Physiological Acoustics: Psychological and Physiological Acoustics Potpourri
(Poster Session)
Noah H. Silbert, Chair
Communication Sciences & Disorders, University of Cincinnati, 3202 Eden Avenue, 344 French East Building,
Cincinnati, OH 45267
All posters will be on display from 8:00 a.m. to 10:00 a.m. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 8:00 a.m. to 9:00 a.m. and contributors of even-numbered papers will be at their posters from 9:00 a.m. to 10:00 a.m.
Contributed Papers
5aPPa1. Is age-related hearing loss predominantly metabolic? Robert H.
Withnell (Speech and Hearing Sci., Indiana Univ., Bloomington, IN) and
Margarete A. Ueberfuhr (Systemic NeuroSci., Ludwig-Maximilians Univ.,
Großhaderner Str. 2, D-82152 Planegg-Martinsried, Munich, Germany, margarete.ueberfuhr@gmx.de)
Studies in animals have shown that age-related hearing loss is predominantly metabolic in origin. In humans, direct access to the cochlea is not
usually possible and so non-invasive methods of assessing cochlear mechanical function are required. This study used a non-invasive assay of cochlear
mechanical function, otoacoustic emissions, to examine a metabolic versus
hair-cell-loss origin for age-related hearing loss. Three subject groups were
examined: adult females with clinically normal hearing, adult females with
age-related hearing loss, and adult males with noise-induced hearing loss.
Contrasting otoacoustic emission input-output functions were obtained for
the three groups, suggesting a causal relationship between age-related hearing loss and strial dysfunction.
5aPPa2. Further modeling of temporal effects in two-tone suppression.
Erica L. Hegland and Elizabeth A. Strickland (Speech, Lang., and Hearing
Sci., Purdue Univ., Heavilon Hall, 500 Oval Dr., West Lafayette, IN 47907,
ehegland@purdue.edu)
Two-tone suppression, a nearly instantaneous reduction in cochlear gain
and a by-product of the active process, has been extensively studied both
physiologically and psychoacoustically. Some physiological data suggest
that the medial olivocochlear reflex (MOCR), which reduces the gain of the
active process in the cochlea, may also reduce suppression. The interaction
of these two gain reduction mechanisms is complex and has not been widely
studied or understood. Therefore, a model of the auditory periphery that
includes the MOCR time course was used to systematically investigate this
interaction of gain reduction mechanisms. This model was used to closely
examine two-tone suppression at the level of the basilar membrane using
suppressors lower in frequency than the probe tone. Results were compared
both with and without elicitation of the MOCR. Preliminary results indicate
that elicitation of the MOCR reduces two-tone suppression when measured
as the total basilar membrane response at the characteristic frequency (CF)
of the probe. The purpose of this study was to investigate further by separating the frequency components of the basilar membrane response at CF to
determine the excitation produced by the probe and by the suppressor with
and without MOCR elicitation. [Research supported by NIH (NIDCD) R01
DC008327 and T32 DC00030.]
5aPPa3. Characterization of cochlear implant-related artifacts during
sound-field recording of the auditory steady state response using an amplitude modulated stimulus: A comparison among normal hearing
adults, cochlear implant recipients, and implant-in-a-box. Shruti B.
Deshpande (Commun. Sci. & Disord., Univ. of Cincinnati, 3202 Eden Ave.,
Cincinnati, OH 45267-0379, balvalsn@mail.uc.edu), Michael P. Scott (Div.
of Audiol., Cincinnati Children’s Hospital Medical Ctr., Cincinnati, OH),
Fawen Zhang, Robert W. Keith (Commun. Sci. & Disord., Univ. of Cincinnati, Cincinnati, OH), and Andrew Dimitrijevic (Commun. Sci. Res. Ctr.,
Cincinnati Children’s Hospital, Dept. of Otolaryngol., Univ. of Cincinnati,
Cincinnati, OH)
Recent work has investigated the use of electric stimuli to evoke the auditory steady-state response (ASSR) in cochlear implant (CI) users. While
more control can be exerted using electric stimuli, acoustic stimuli present a
natural listening environment for CI users. However, ASSR using acoustic
stimuli in the presence of a CI could lead to artifacts. Five experiments
investigated the presence and characteristics of CI-artifacts during sound-field ASSR using an amplitude-modulated (AM) stimulus (carrier frequency: 2
kHz; modulation frequency: 82.031 Hz). Experiment 1 investigated differences between 10 normal hearing (NH) and 10 CI participants in terms of
ASSR amplitude versus intensity and onset phase versus intensity. Experiment 2 explored similar relationships for an implant-in-a-box. Experiment 3
investigated correlations between electrophysiological ASSR thresholds
(ASSRe) and behavioral thresholds to the AM stimulus (BTAM) for the NH
and CI groups. Mean threshold differences (ASSRe-BTAM) were computed
for each group and group differences were studied. Experiment 4 investigated the presence of transducer-related artifacts using masking. Experiment
5 investigated the effect of manipulation of intensity and external components of the CI on the ASSR. Overall, results of this study provide the first
comprehensive description of the characteristics of CI-artifacts during
sound-field ASSR. Implications for future research to further characterize
CI-artifacts, thereby leading to strategies to minimize them, are discussed.
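The AM stimulus described above (2-kHz carrier, 82.031-Hz modulation frequency) can be sketched in a few lines. The sample rate, duration, and 100% modulation depth below are assumptions for illustration, not the authors' test parameters.

```python
import math

def am_stimulus(fc=2000.0, fm=82.031, dur=0.5, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated tone, scaled so the peak stays at 1:
    (1 + depth*sin(2*pi*fm*t)) / (1 + depth) * sin(2*pi*fc*t)."""
    out = []
    for i in range(int(dur * fs)):
        t = i / fs
        env = (1.0 + depth * math.sin(2.0 * math.pi * fm * t)) / (1.0 + depth)
        out.append(env * math.sin(2.0 * math.pi * fc * t))
    return out

x = am_stimulus()  # 0.5 s of the 2-kHz carrier modulated at 82.031 Hz
```

The odd-looking 82.031-Hz modulation rate is typical of ASSR work: it makes the modulator fall exactly on an analysis-bin frequency of the EEG epoch.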
5aPPa4. Baseline neurophysiological noise levels in children with auditory processing disorder. Kyoko Nagao (Biomedical Res., Nemours/Alfred
I. duPont Hospital for Children, 1701 Rockland Rd., CPASS, Wilmington,
DE 19803, knagao@nemours.org), L. Ashleigh Greenwood (Audiol. Services, Pediatrix, Falls Church, VA), Raj C. Sheth, Rebecca G. Gaffney (Biology, Univ. of Delaware, Newark, DE), Matthew R. Cardinale (College of
Osteopathic Medicine, New York Inst. of Technol., New York, NY), and
Thierry Morlet (Biomedical Res., Nemours/Alfred I. duPont Hospital for
Children, Wilmington, DE)
The current study examined the baseline neurophysiological responses
between children with auditory processing disorder (APD) and the control
group. Auditory event related potentials were recorded in 23 children with
APD (ages 7–12 years, mean age = 8.9 years) and 25 age-matched control
2306
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2306
5aPPa5. Speech spectral intensity discrimination at frequencies above 6
kHz. Brian B. Monson (Dept. of Pediatric Newborn Medicine, Brigham and
Women’s Hospital, Harvard Med. School, 75 Francis St., Boston, MA
02115, bmonson@research.bwh.harvard.edu), Andrew J. Lotto, and Brad H.
Story (Speech, Lang., and Hearing Sci., Univ. of Arizona, Tucson, AZ)
Hearing aids and other communication devices (e.g., mobile phones)
have made some recent efforts to extend their bandwidths to represent
higher frequencies. The impact of this expansion on speech perception is
not well characterized. To assess human sensitivity to speech high-frequency energy (HFE, defined here as energy in the 8- and 16-kHz octave
bands), difference limens for HFE level changes in male and female speech
and singing were obtained. Listeners showed significantly greater ability to
detect level changes in singing vs. speech, but not in female vs. male
speech. Mean difference limen scores for speech and singing were about 5
dB in the 8-kHz octave (5.6–11.3 kHz) but 8–10 dB in the 16-kHz octave
(11.3–22 kHz). These scores are lower (better) than scores previously
reported for isolated vowels and some musical instruments, and similar to
scores previously reported for white noise.
5aPPa6. Duration perception of time-varying sounds: The role of the
amplitude decay and rise-time modulator. Lorraine Chuen (Psych., Neurosci. & Behaviour, McMaster Univ., Psych. Bldg. (PC), Rm. 102, 1280
Main St. West, Hamilton, ON L8S 4K1, Canada, chuenll@mcmaster.ca)
and Michael Schutz (School of the Arts, McMaster Univ., Hamilton, ON,
Canada)
It is well known that ramped (rising energy) sounds are perceived as longer in duration than damped (falling energy) sounds that are time-reversed,
but otherwise identical versions of one another (Schlauch, Ries & DiGiovanni, 2001; Grassi & Darwin, 2006). This asymmetry has generally been
attributed to the under-estimation of damped sound duration, rather than the
over-estimation of ramped sound duration. As previous literature most commonly employs exponential amplitude modulators, in the present experiment, we investigate whether altering the nature of this amplitude decay- or
rise-time modulator (linear or exponential) would influence this typically
observed perceptual asymmetry. Participants performed an adaptive, 2AFC
task that assessed the point of subjective equality (PSE) between a standard
tone with a constant ramped/damped envelope, and a comparator tone with
a “flat,” steady-state envelope whose duration varied according to a 1-up, 1-down rule. Preliminary results replicated previous findings that ramped
sounds are perceived as longer than their time-reversed, damped counterparts. However, for sounds with a linear amplitude modulator, this perceptual asymmetry is partially accounted for by ramped tone over-estimation,
in contrast with previous findings in the literature obtained with exponential
amplitude modulators.
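The adaptive 1-up, 1-down rule used here converges on the 50% point of the psychometric function, i.e., the point of subjective equality. A minimal sketch, assuming a hypothetical respond() callback and arbitrary starting value and step size (durations in ms):

```python
def one_up_one_down(respond, start, step, n_reversals=8):
    """1-up, 1-down adaptive track converging on the 50% point (the PSE):
    lengthen the comparator after a "comparator shorter" response, shorten it
    after a "comparator longer" response; average the last reversals."""
    dur, direction, reversals = start, 0, []
    while len(reversals) < n_reversals:
        longer = respond(dur)          # True: comparator judged longer
        new_dir = -1 if longer else 1
        if direction and new_dir != direction:
            reversals.append(dur)      # track changed direction: a reversal
        direction = new_dir
        dur = max(step, dur + new_dir * step)
    return sum(reversals[-6:]) / 6

# deterministic simulated listener whose true PSE is 500 ms (for illustration)
pse = one_up_one_down(lambda d: d > 500.0, start=400.0, step=10.0)
```

With this noiseless listener the track oscillates around 500 ms and the reversal average lands close to the true PSE.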
al., 1998; Parbery-Clark, 2009). Among the auditory skills in musicians that
have been studied are gap detection measures of temporal acuity (Mishra &
Panda, 2014; Payne, 2012). These studies typically have compared the gap
detection thresholds of musicians and non-musicians. The present work
relates gap detection performance to musical aptitude rather than to reported
musical training history. In addition, in the present study, gap detection was
measured under two different stimulus conditions: the within-channel (WC)
condition (in which the sound that precedes the gap is spectrally identical to
the sound following the gap) and the across-channel (AC) condition (in
which the pre- and post-gap sounds are spectrally different). Results indicate
a significant correlation between across-channel gap detection thresholds
and musical aptitude and no correlation between within-channel performance and musical aptitude. These results have important implications for
temporal acuity as it relates to musical aptitude.
5aPPa8. Modeling response times to analyze perceptual interactions in
complex non-speech perception. Noah H. Silbert (Commun. Sci. & Disord., Univ. of Cincinnati, 3202 Eden Ave., 344 French East Bldg., Cincinnati, OH 45267, noah.silbert@uc.edu) and Joseph W. Houpt (Psych.,
Wright State Univ., Dayton, OH)
General recognition theory (GRT) provides a powerful framework for
modeling interactions between perceptual dimensions in identification-confusion data. The linear ballistic accumulator (LBA) model provides powerful methods for analyzing multi-choice (2+) response time (RT) data as a
function of evidence accumulation and response thresholds. We extend
(static) GRT to the domain of RTs by fitting LBA models to RTs collected
in two auditory GRT experiments. Although the mapping between the constructs of GRT (e.g., perceptual separability, perceptual independence) and
the components of the LBA (e.g., drift rates, response thresholds) is complex, the dimensional interactions defined in GRT can be indirectly
addressed in the LBA framework by testing for invariance of LBA parameters across appropriate subsets of the data. The present work focuses on correspondences between (invariance of) parameters in LBA and perceptual
separability and independence in GRT.
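The LBA itself is simple to simulate: each response accumulator starts at a random point, races linearly toward a threshold at a randomly drawn rate, and the first arrival determines both choice and RT. The parameter values below are arbitrary illustrations, not fits to the authors' data.

```python
import random

def lba_trial(drifts, b=1.0, A=0.5, s=0.25, t0=0.2, rng=random):
    """One linear-ballistic-accumulator trial: each accumulator starts at
    Uniform(0, A) and races linearly toward threshold b at a rate drawn from
    Normal(v, s); the first to arrive gives the choice and the response time."""
    best = None
    for choice, v in enumerate(drifts):
        d = rng.gauss(v, s)
        while d <= 0:                # resample non-positive rates (a common simplification)
            d = rng.gauss(v, s)
        start = rng.uniform(0.0, A)
        rt = t0 + (b - start) / d    # non-decision time plus time to threshold
        if best is None or rt < best[0]:
            best = (rt, choice)
    return best

random.seed(1)
trials = [lba_trial([1.5, 0.8]) for _ in range(2000)]  # choice 0 has the higher drift
p0 = sum(1 for _, c in trials if c == 0) / len(trials)
```

Invariance tests of the kind the abstract describes amount to asking whether parameters such as the drift rates can be held fixed across stimulus subsets without hurting fit.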
5aPPa9. The effect of experience on environmental sound identification.
Rachel E. Bash, Brandon J. Cash, and Jeremy Loebach (Psych., St. Olaf
College, 1520 St. Olaf Ave., Northfield, MN 55057, bash@stolaf.edu)
The perception of environmental stimuli was compared across normal
hearing (NH) listeners exposed to an eight-channel sinewave vocoder and
experienced bilateral, unilateral, and bimodal cochlear implant (CI) users.
Three groups of NH listeners underwent no training (control), one day of
training with environmental stimuli (exposure), or four days of training with
a variety of speech and environmental stimuli (experimental). A significant
effect of training was observed. The experimental group performed significantly better than exposure or control groups, equal to bilateral CI users, but
worse than bimodal users. Participants were divided into low, medium and
high-performing groups using a two-step cluster algorithm. High-performing members were only observed for the CI and experimental conditions,
and significantly more low-performing members were observed for exposure and control conditions, demonstrating the effectiveness of training. A
detailed item-analysis revealed that the most accurately identified sounds
were often temporal in nature or contained iconic repeating patterns (e.g., a
horse galloping). Easily identified stimuli were common across all groups,
with experimental subjects identifying more short or spectrally driven stimuli, and CI users identifying more animal vocalizations. These data demonstrate that explicit training in identifying environmental stimuli improves
sound perception, and could be beneficial for new CI users.
children in response to a /da/ presented to each ear separately (right and left
ear conditions). A no-sound condition was recorded as well. Baseline neurophysiological activity was measured as the root mean square amplitude of
the 100 ms pre-stimulus period. Preliminary analysis of data from 19 children with APD and 13 controls indicated that the APD group showed significantly greater pre-stimulus amplitude than the control group in the left ear
condition, F(1, 30) = 4.415, p = 0.044, but we did not find significant group
differences in the no-sound and right ear conditions, F(1, 30) = 2.237, p =
0.15 and F(1, 30) = 0.088, p = 0.77, respectively. The results suggest that
children with APD may need a longer time period to return to a resting state
than control children when the left ear is stimulated. Hence, these results
may indicate asymmetrical neural activities of the auditory pathways in
APD.
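The baseline measure described, root-mean-square amplitude over the 100-ms pre-stimulus window, amounts to the following; the sampling rate and toy epoch are invented for illustration.

```python
import math

def prestim_rms(epoch, fs, prestim_ms=100.0):
    """RMS amplitude of the pre-stimulus baseline, assuming the epoch
    starts prestim_ms before stimulus onset."""
    n = int(round(prestim_ms / 1000.0 * fs))
    window = epoch[:n]
    return math.sqrt(sum(v * v for v in window) / len(window))

# toy epoch sampled at 1 kHz: 100 ms of constant 2-uV baseline, then a response
fs = 1000
epoch = [2.0] * 100 + [5.0] * 400
baseline = prestim_rms(epoch, fs)  # equals 2.0 for this constant baseline
```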
5aPPa7. Relationship between gap detection thresholds and performance on the Advanced Measures of Music Audiation Test. Matthew
Hoch (Music, Auburn Univ., Auburn, AL), Judith Blumsack, and Lindsey
Soles (CMDS, Auburn Univ., 1199 Haley Ctr., Auburn, AL 36849-5232,
blumsjt@auburn.edu)
Considerable neurophysiological, neural imaging, and behavioral
research indicates that auditory processing in musicians differs from that of
non-musicians (e.g., Musacchia et al., 2007; Ohnishi et al., 2001; Pantev et
5aPPa10. The U.S. National Hearing Test, a 2013–2014 progress report.
Charles S. Watson (Res., Commun. Disord. Technol., Inc., CDT, Inc., 3100
John Hinkle Pl, Bloomington, IN 47408, watson@indiana.edu), Gary R.
Kidd (Speech and Hearing, Indiana Univ., Bloomington, IN), James D.
Miller (Res., Commun. Disord. Technol., Inc., Bloomington, IN), Jill E. Preminger (Surgery, Univ. of Louisville, Louisville, KY), Alex Crowley, and
Daniel P. Maki (Res., Commun. Disord. Technol., Inc., Bloomington,
IN)
A telephone-administered screening test for sensorineural hearing loss
was made publicly available in the United States in September 2013. This
test is similar to the digits-in-noise test developed by Smits and colleagues
in the Netherlands, versions of which are now in use in most European
countries and in Australia. The test was initially offered in the United States
for a small fee ($8, then $4) but after a year of promotion it became clear
that either the fee or the complexity of paying it was inhibiting participation. During the
first month in which the test was subsequently offered free of charge,
31,806 calls were made to the test line, of which 26,507 were completed
tests. Analyses of test performance suggest that about 81% of the test takers
had at least a mild hearing loss, and 40% had a substantial loss (estimated to
be in excess of 45 dB PTA). Follow-up studies are being conducted to determine whether those who failed the test sought a full-scale hearing assessment, and whether those advised to obtain hearing aids did so. [Work
funded by Grant No. 5R44DC009719 from the National Institute on Deafness and Other Communication Disorders.]
FRIDAY MORNING, 31 OCTOBER 2014
MARRIOTT 1/2, 10:15 A.M. TO 12:15 P.M.
Session 5aPPb
Psychological and Physiological Acoustics: Perceptual and Physiological Mechanisms, Modeling, and
Assessment
Anna C. Diedesch, Chair
Hearing & Speech Sciences, Vanderbilt University, Nashville, TN 37209
Contributed Papers
10:15
5aPPb1. Modest, reliable spectral peaks in preceding sounds influence
vowel perception. Christian Stilp and Paul Anderson (Dept. of Psychol. and
Brain Sci., Univ. of Louisville, 308 Life Sci. Bldg., Louisville, KY 40292,
christian.stilp@louisville.edu)
Sensory systems excel at extracting predictable signal properties in order
to be optimally sensitive to unpredictable, more informative properties.
Studies of auditory perceptual calibration (Kiefte & Kluender, 2008 JASA;
Alexander & Kluender, 2010 JASA) showed that when precursor sounds
were filtered to emphasize frequencies matching the second formant (F2) of
the subsequent target vowel, vowel perception decreased its reliance on F2
(predictable cue) and increased reliance on spectral tilt (unpredictable cue).
Perceptual calibration occurred when reliable spectral peaks were 20 dB or
larger, but findings in profile analysis and spectral contrast detection predict
sensitivity to more modest spectral peaks. The present experiments tested
identification of vowels varying in F2 (1000–2200 Hz) and spectral tilt (−12 to 0 dB/octave), perceptually varying from /u/ to /i/. Listeners first identified
vowels in isolation, then following a sentence filtered to add a reliable +2 to
+15 dB spectral peak centered at F2 of the target vowel. Changes in perceptual weights (standardized logistic regression coefficients) across sessions
were indices of perceptual calibration. Vowel identification weighted F2 significantly less when reliable peaks were at least +5 dB, but increases in
spectral tilt weights were very modest. Results demonstrate high sensitivity
to predictable acoustic properties in the sensory environment.
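Perceptual weights of the kind used here (standardized logistic regression coefficients relating each cue to the /i/-response probability) can be estimated as follows. The synthetic listener, cue ranges, and fitting settings are assumptions for illustration only.

```python
import math, random

def fit_logistic(X, y, lr=0.5, iters=1500):
    """Plain batch-gradient-descent logistic regression; returns [bias, w1, w2]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += p - yi
            for j, xj in enumerate(xi):
                grad[j + 1] += (p - yi) * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def zscore(col):
    """Standardize so coefficients are comparable across cues."""
    m = sum(col) / len(col)
    s = math.sqrt(sum((v - m) ** 2 for v in col) / len(col))
    return [(v - m) / s for v in col]

random.seed(3)
f2 = [random.uniform(1000, 2200) for _ in range(200)]   # Hz
tilt = [random.uniform(-12, 0) for _ in range(200)]     # dB/octave
# hypothetical listener: /i/ responses driven mainly by F2, weakly by tilt
y = [1 if (f - 1600) / 600 + 0.3 * (t + 6) / 6 + random.gauss(0, 0.3) > 0 else 0
     for f, t in zip(f2, tilt)]
X = list(zip(zscore(f2), zscore(tilt)))
w = fit_logistic(X, y)  # w[1] = F2 weight, w[2] = tilt weight
```

Because the simulated listener relies mostly on F2, the standardized F2 coefficient comes out larger than the tilt coefficient; a calibration effect would appear as a drop in w[1] between sessions.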
10:30
5aPPb2. Testing the contribution of spectral cues on pitch strength
judgments in normal-hearing listeners. William Shofner (Speech and
Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN 47405,
wshofner@indiana.edu) and Marisa Marsteller (Speech, Lang. and Hearing
Sci., Univ. of Arizona, Tucson, AZ)
When a wideband harmonic tone complex (wHTC) is passed through a
noise vocoder, the resulting sounds can have harmonic structures with large
peak-to-valley ratios in the spectra, but little or no periodicity strength in the
autocorrelation functions. Noise-vocoded wHTCs evoke simultaneous noise
percepts and pitch percepts similar to those evoked by iterated rippled
noises. We have previously shown that spectral cues do not appear to control behavioral responses of chinchillas to noise-vocoded wHTCs in a stimulus generalization task, but do appear to contribute to pitch strength
judgments in normal-hearing listeners for noise-vocoded wHTCs relative to
non-vocoded wHTCs. To further test the role of spectral cues, normal-hearing listeners judged the pitch strengths of noise-vocoded wHTCs relative to
infinitely-iterated rippled noise (IIRN). Stimuli had harmonic structures
with a fixed fundamental frequency of 500 Hz and were presented monaurally at 50 dB SL. Listeners’ judgments of pitch strength evoked by vocoded
wHTCs were generally consistent with peak-to-valley ratios of the stimuli.
In order to reduce spectral cues and resolvability, stimuli were high-pass
filtered. Pitch strength judgments of vocoded wHTCs were reduced following high-pass filtering. These findings suggest that spectral cues do contribute to pitch perception in human listeners.
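The contrast the abstract draws, strong harmonic structure in the spectrum but weak periodicity in the autocorrelation, rests on reading periodicity strength off the autocorrelation function. A crude index is the normalized autocorrelation at the lag of one period; the sample rate, durations, and five-harmonic complex below are assumptions.

```python
import math, random

def periodicity_strength(x, fs, f0):
    """Normalized autocorrelation at the lag of one period (fs/f0 samples),
    a rough index of periodicity (pitch) strength."""
    lag = int(round(fs / f0))
    n = len(x) - lag
    num = sum(x[i] * x[i + lag] for i in range(n))
    den = math.sqrt(sum(x[i] ** 2 for i in range(n)) *
                    sum(x[i + lag] ** 2 for i in range(n)))
    return num / den

fs = 16000
random.seed(0)
# five-harmonic complex at the study's 500-Hz fundamental vs. uniform noise
tone = [sum(math.sin(2 * math.pi * 500 * h * i / fs) for h in range(1, 6))
        for i in range(3200)]
noise = [random.uniform(-1, 1) for _ in range(3200)]
```

The harmonic complex scores near 1 at the 2-ms lag while the noise scores near 0, which is the dissociation noise-vocoding exploits.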
10:45
5aPPb3. The role of onsets and envelope fluctuations in binaural cue
use. G. Christopher Stecker and Anna C. Diedesch (Hearing and Speech
Sci., Vanderbilt Univ. Medical Ctr., 1215 21st Ave. South, Rm. 8310, Nashville, TN 37232-8242, g.christopher.stecker@vanderbilt.edu)
Effective localization of real sound sources requires neural mechanisms
to accurately extract and represent binaural cues, including interaural time
and level differences (ITD and ILD) in the sound arriving at the ears. Many
studies have explored the relative effectiveness of these cues, and how that
effectiveness varies with the acoustical features of a sound such as spectral
frequency and modulation characteristics. In particular, several classic and
recent studies have demonstrated relatively greater sensitivity to ITD and
ILD present at sound onsets and other positive-going fluctuations of the
sound envelope. The results of those studies have clear implications for how
spatial cues are extracted from naturally fluctuating sounds such as human
speech, and how that process is altered by echoes, reverberation, and competing sources in real auditory scenes. Here, we review the results of several
recent studies to summarize and critique the evidence for envelope-triggered
11:00
5aPPb4. Loudness of a multi-tonal sound field, consisting of either one
two-component complex sound source or two simultaneous spatially distributed sound sources. Michaël Vannier and Étienne Parizet (Génie
Mécanique Conception, INSA-Lyon, Laboratoire Vibrations Acoustique,
13, Pl. Jean Macé, Lyon 69007, France, michael.vannier@insa-lyon.fr)
The aim of the present study is to provide new elements about the perceived loudness of stationary complex sound fields and test the validity of
current models under such conditions. The first part consisted in testing the
hypothesis according to which the directional loudness of a multi-component
sound source could be fully explained by the directional loudness of each of
its single components. In this way, the directional loudness sensitivities of a
two-component complex sound source (third-octave noise bands centered at
1 kHz and 5 kHz) have been measured in the horizontal plane. Despite a
previous equalization in loudness of each component to a frontal reference,
a small effect of the azimuth angle on loudness still remained, partly disproving the assumption. In a second part, the influence of the spatial distribution of two sound sources on the global loudness was investigated (with
the same two narrow-band noises). No effect has been found by Song
(2007) for small incidence angles (10° and 30°). The present experiment
extends this result for wide incidence angles and so, under highly dichotic
listening situations. Finally, all the subjective data have been compared with
the predictions from different models of loudness, and the results will be
discussed.
11:15
5aPPb5. Computing interaural differences using idealized head models.
Tingli Cai, Brad Rakerd, and William Hartmann (Phys. Astronomy, Michigan State Univ., 567 Wilson Rd., East Lansing, MI 48824, hartman2@msu.
edu)
The spherical model of the human head, attributable to Lord Rayleigh,
accounts for important features of observed interaural time differences
(ITD) and interaural level differences (ILD), but it also fails to capture
many details. To gain an intuitive understanding of the failures, we computed ITDs and ILDs for a succession of idealized shapes approximating the
human head: sphere, ellipsoid, ellipsoid plus cylindrical neck, ellipsoid plus
cylindrical neck plus disk torso. Calculations were done as a function of frequency (100–2500 Hz) and for source azimuths from 10 to 90 degrees using
finite-element models. The computations were compared to free-field measurements on a KEMAR manikin. The spherical head model approximated
many measured interaural features, but the frequency dependence tended to
be too flat in both ITD and ILD. The ellipsoidal head produced greater variation with frequency and therefore agreed better with the measurements,
reducing the RMS discrepancies in both ITD and ILD by 35%. Adding a
neck further increased the frequency variation. Adding the disk torso further
improved the agreement, especially below 1000 Hz, decreasing the ITD discrepancy by another 21%. The evolution of models enabled us to associate
details of interaural differences with overall anatomical features. [Work supported by the AFOSR grant 11NL002.]
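The spherical-head baseline attributed to Rayleigh has a classical closed form for ITD: Woodworth's frequency-independent approximation, ITD = (a/c)(sin θ + θ). A sketch with an assumed 8.75-cm head radius (the finite-element and KEMAR comparisons in the abstract are of course beyond a few lines):

```python
import math

def itd_sphere(azimuth_deg, radius=0.0875, c=343.0):
    """Woodworth's frequency-independent ITD approximation for a rigid
    spherical head: ITD = (a/c) * (sin(theta) + theta), theta in radians."""
    th = math.radians(azimuth_deg)
    return radius / c * (math.sin(th) + th)

itd_90 = itd_sphere(90)  # roughly 0.66 ms for an 8.75-cm head
```

The flat frequency dependence the authors criticize is explicit here: this expression has no frequency term at all, whereas measured ITDs are larger at low frequencies.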
11:30
5aPPb6. Acoustic reflex attenuation in phon loudness measurements. Julius L. Goldstein (Hearing Emulations LLC, Ariel Premium, 8825 Page Ave., Saint Louis, MO 63114-6105, goldstein.jl@
sbcglobal.net)
listeners as equally loud as a 1 kHz tone at L dB SPL. Loudness is defined
relatively as L phons (Fletcher & Munson, 1933). ELC measurements by
Lydolf and Møller (1997), included in the current ISO standard (Suzuki &
Takeshima, JASA vol. 116, 2004), show systematic increases in ELC
growth rate with loudness above 60 phons and below 1 kHz, which suggests
middle-ear attenuation by the acoustic reflex (AR). A steady-state ELC
model was assembled including known mechanisms: (1) middle ear transmission modified by a head-related-transfer-function, (2) compressive cochlear amplification (CA) for signaling loudness, (3) a negative feedback
model for AR attenuation by CA inputs exceeding AR threshold, and (4)
attenuation of pressure-field stimuli by trans-eardrum static pressure. Model
parameters were calculated from ELC data using minimum-square-error
estimation. AR attenuation below 1 kHz depends on AR attenuation at the 1
kHz loudness reference frequency, but predicted ELCs are relatively insensitive to it. An earlier psychophysical study of AR attenuation, including 1
kHz, is consistent with subject-dependent model predictions (Rabinowitz &
Goldstein, JASA vol. 54, 1973; Rabinowitz, 1977). [NIH-Funded.]
11:45
5aPPb7. Effects of tinnitus and hearing loss on functional brain networks involved in auditory and visual short-term memory. Fatima T.
Husain, Kwaku Akrofi (Speech and Hearing Sci., Univ. of Illinois at
Urbana-Champaign, 901 S. Sixth St., Champaign, IL 61820, husainf@illinois.edu), and Jake Carpenter-Thompson (Neurosci., Univ. of Illinois at
Urbana-Champaign, Champaign, IL)
Brain imaging data were acquired from three subject groups—persons
with hearing loss and tinnitus (TIN), individuals with similar hearing loss
without tinnitus (HL) and those with normal hearing without tinnitus
(NH)—to test the hypothesis that TIN and control subjects use different
functional brain networks for short-term memory. Previous studies have
provided evidence of a link between hearing disorders such as tinnitus and
the reorganization of auditory and extra-auditory functional networks.
Greater knowledge of this reorganization could lead to the development of
more effective therapies. Data analysis was conducted on fMRI data
obtained while subjects performed short-term memory tasks with low or
high attentional loads, using both auditory and visual stimuli in separate
scanning sessions. Auditory stimuli were pure tones with frequencies
between 500 and 1000 Hz. Visual stimuli were Korean fonts, unfamiliar to
the subjects. We found similar behavioral response across the three groups
for both modalities and tasks. However, the groups differed in their brain
response, with these differences being more marked for the auditory tasks
than for the tasks involving visual stimuli.
12:00
5aPPb8. Preliminary results of a two-interval forced-choice method for
assessing infant hearing sensitivity. Lynne Werner (Speech & Hearing
Sci., Univ. of Washington, 1417 North East 42nd St., Seattle, WA 98105-6246, lawerner@u.washington.edu)
Current methods for assessing infants’ hearing are yes-no, single interval
procedures. Although bias-free statistics can be used to describe the results
of such procedures, with the limited number of trials typically available
from an individual infant, use of these statistics can be problematic. A two-interval forced-choice method based on infants’ anticipatory eye movements
toward an interesting visual event is currently under development. Preliminary results indicate that a high proportion of both 3- and 7-month-old
infants achieve over 80% correct in the detection of a 70 dB SPL 1000 Hz
tone presented through an insert earphone. Infants continue to perform better than expected by chance at levels as low as 25 dB SPL. Thus, a test
method based on infant eye movements holds potential as an efficient behavioral method for assessing infant hearing.
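Whether performance is "better than expected by chance" on a limited number of two-interval trials can be checked with an exact binomial tail; the trial counts below are hypothetical, not the study's.

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Exact binomial tail: chance probability of k or more correct out of n
    when each trial is a guess with success probability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

p = p_at_least(16, 20)  # e.g., 16 of 20 correct in a 2IFC task (chance = 50%)
```

This exact test is one reason a forced-choice procedure is attractive with few trials: chance is a fixed 50%, so no bias correction of the yes-no kind is needed.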
Equal Loudness-level Contours, ELC(f, L), represent the sound pressure
level in dB SPL of tones at frequency f that are perceived by normal-hearing
extraction of ITD and ILD across a wide range of spectral frequencies. A
number of competing models for cue extraction in fluctuating envelopes are
also considered in light of this evidence. [Work supported by NIH R01DC011548.]
FRIDAY MORNING, 31 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 12:00 NOON
Session 5aSC
Speech Communication: Speech Perception and Production in Challenging Conditions (Poster Session)
Alexander L. Francis, Chair
Purdue University, SLHS, Heavilon Hall, 500 Oval Dr., West Lafayette, IN 47907
All posters will be on display from 8:00 a.m. to 12:00 noon. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 8:00 a.m. to 10:00 a.m. and authors of even-numbered papers will be at their posters
from 10:00 a.m. to 12:00 noon.
Contributed Papers
5aSC1. A new dual-task paradigm to assess cognitive resources utilized
during speech recognition. Andrea R. Plotkowski and Joshua M.
Alexander (Speech, Lang., and Hearing Sci., Purdue Univ., 500 Oval Dr.,
West Lafayette, IN 47906, mitche99@purdue.edu)
Listening to ongoing conversations in challenging situations requires
explicit use of cognitive resources to decode and process spoken messages.
Traditional speech recognition tests are insensitive measures of this cognitive effort, which may differ greatly between individuals or listening conditions. Furthermore, most dual-task paradigms that have been devised for
this purpose generally rely on secondary tasks like reaction time and recall
that do not reflect real-world listening demands. A new task was designed to
capture changes in both speech recognition and verbal processing across different conditions. Listeners heard two sequential sentences spoken by opposite gender talkers in speech-shaped noise. The primary task was a
traditional speech recognition test, in which listeners immediately repeated
aloud the second sentence in the pair. The secondary task was designed to
engage explicit cognitive processes by requiring listeners to write down the
first sentence after holding it in memory while listening to and repeating
back the second sentence. Test sentences consisted of lists from the
PRESTO test (Gilbert et al. 2013, J. Am. Acad. Audiol. vol. 24, pp. 26–36)
that were carefully modified to help ensure list-equivalency. Psychometric
results from the revised PRESTO sentence lists and from the new dual-sentence task will be reported.
5aSC2. Vocal effort, coordination, and balance. Robert A. Fuhrman (Linguist, U Br. Columbia, 2613 West Mall, Vancouver, BC, Canada, robert.a.
fuhrman@gmail.com), Adriano Barbosa (Electron. Eng., Federal Univ. of
Minas Gerais, Belo Horizonte, Brazil), and Eric Vatikiotis-Bateson (Linguist, U Br. Columbia, Vancouver, BC, Canada)
Manipulating speaking and discourse requirements allows us to assess
the time-varying correspondences between various subsystems within a
talker at different levels of vocal effort. These subsystems include fundamental frequency (F0) and acoustic amplitude, rigid body (6D) motion of
the head, motion (2D) of the body, and postural forces and torques measured
at the feet. Analysis of six speakers has confirmed our hypothesis that as
vocal effort increases coordination among sub-systems simplifies, as shown
by greater correspondence (e.g., the instantaneous correlation) between the
various time-series measures. However, at the two highest levels of vocal
effort, elicited by having talkers shout to and yell at someone located appropriately far away, elements of the postural force, notably one or more torque
components, often show a reduction in correspondence with the other measures. We interpret this result as evidence that talkers become more rigidly
coordinated at the highest levels of vocal effort, which can interfere with
their balance. Furthermore, the discourse type—shouting at someone to
carry on a conversation vs. yelling at someone not expected to reply—can
be associated with differing amounts of imbalance.
5aSC3. The gradient effect of transitional magnitude: A source of the
vowel context effect. Sang-Im Lee-Kim (Linguist, New York Univ.,
45-35 44th St. 1i, Sunnyside, NY 11104, sangim119@gmail.com)
Previous studies have shown that vocalic transitions play an important
role in the identification of the consonantal places (e.g., Whalen 1981/1991,
Nowak 2006, Babel & McGuire 2013). While it has been intermittently
reported that the contribution of transitions may depend on vowel contexts,
the common methodology, i.e., C-V cross-splicing, is too coarse to precisely
identify the nature of this effect. In the present study, vocalic transitions are
systematically manipulated and used as a gradient variable by incrementally
removing the transitional period of the three vowels /u a e/ following the alveolopalatal sibilant /ɕ/ in Polish. In an identification task, native Polish speakers were given a choice between /ɕ/ and /ʂ/ for stimuli with varying levels
of palatal transitions. The results showed that participants’ perception is gradient: greater transitions overall elicit more palatal responses in all vowel
contexts. More importantly, it has been shown that the apparent vowel effect
can be largely reduced to the relative magnitude of transitions that are specific to each vowel. The low and back vowels elicit greater palatal transitions providing more robust transitional cues in perception, while the high
and front vowels elicit smaller or nearly zero palatal transitions providing
less robust cues to the sibilants’ place.
5aSC4. Adaptive compensation for reliable spectral characteristics of a
listening context in vowel perception. Paul Anderson and Christian Stilp
(Psychol. and Brain Sci., Univ. of Louisville, 2301 S 3rd St., Louisville,
KY, paul.anderson@louisville.edu)
When precursor sounds are filtered to emphasize frequencies matching
F2 of a subsequent target vowel, vowel perception decreases reliance on F2
(predictable cue) and increases reliance on spectral tilt (unpredictable cue)
and vice versa. Previously, initial cue weights and weight changes (i.e., perceptual calibration to reliable signal properties) were larger for F2 than tilt,
obscuring whether the magnitude of calibration reflects cue predictability or
F2’s status as a primary cue to vowel identity. Here, vowels varied from /u/
to /i/ in tilt (−12 to 0 dB/octave) and the full range of F2 values (1000–2200
Hz) or a reduced range (1300–1900 Hz) designed to decrease F2 cue
weights, making tilt the primary cue for vowel identification. Vowels were
presented in isolation, then following sentences filtered to match the target
vowel’s F2 or tilt. In isolation, cue weights for F2 were higher when identifying full-F2-range vowels and higher for tilt when identifying reduced-F2-range vowels. Weight changes (calibration) were comparable when the primary cue was predictable; this was also true for predictable secondary cues
(tilt for full-F2-range vowels, F2 for reduced-F2-range vowels). Perceptual
calibration to reliable signal properties is an adaptive process reflecting cue
predictability, not solely a priori cue use (e.g., F2 over tilt).
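Spectral tilt, one of the two cues traded off above, is conventionally expressed in dB/octave; a minimal sketch of estimating it as the slope of spectrum level regressed on log2 frequency follows. This is a generic illustration, not the authors' measurement procedure; the sampling rate and the synthesized −6 dB/octave check signal are assumptions.

```python
import numpy as np

def spectral_tilt_db_per_octave(signal, fs):
    """Estimate spectral tilt as the slope of a linear fit of
    spectrum level (dB) against log2 frequency (octaves)."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    keep = freqs > 0                          # drop DC before taking logs
    level_db = 20 * np.log10(spec[keep] + 1e-12)
    octaves = np.log2(freqs[keep])
    slope, _ = np.polyfit(octaves, level_db, 1)
    return slope

# sanity check: synthesize a spectrum with a known -6 dB/octave tilt
fs, n = 16000, 4096
freqs = np.fft.rfftfreq(n, 1 / fs)
mags = np.zeros_like(freqs)
mags[1:] = 10 ** (-6 * np.log2(freqs[1:]) / 20)
x = np.fft.irfft(mags, n=n)
tilt = spectral_tilt_db_per_octave(x, fs)     # recovers ≈ -6 dB/octave
```

Because the fit is on log2 frequency, the slope reads directly in dB per octave, matching the −12 to 0 dB/octave continuum used in the study.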
168th Meeting: Acoustical Society of America
2310
5aSC5. An approach to the analysis of relations between syllable and
sentence perception in quiet and noise in the Speech Perception Assessment and Training System: Preliminary results for ten hearing-aid
users. James D. Miller (Res., Commun. Disord. Technol., Inc., 3100 John
Hinkle Pl, Ste 107, Bloomington, IN 47408, jamdmill@indiana.edu)
Logistic functions relating abilities to identify syllable onsets, nuclei, and codas in quiet and noise as a function of SNR are measured. Syllable perception is the product of these individual abilities. It is found that syllable perception in noise is highly correlated with syllable perception in quiet. The relation of sentence perception in the SPATS sentence task to SPATS syllable constituent perception is examined. As shown years ago at Bell Labs, only modest levels of syllable identification are needed to support nearly perfect levels of sentence perception. Here, it is found that sentence perception in quiet and noise is correlated with syllable perception in quiet, with the use of inherent context provided by syllable perception (Boothroyd and Nittrouer, 1988), and with the use of situational context, independent of syllable perception. Finally, the effects of speech perception training on these relations are examined for each of the ten hearing-aid users studied. [Work supported by NIH/NIDCD Grant R21/R33DC011174, “Multi-site Study of the Efficacy of Speech Perception Training for Hearing-Aid Users,” C. S. Watson, PI. Data supplied by cooperating sites: Medical University of South Carolina, J. Dubno, Site PI; University of Memphis, D. Wark, Site PI; and University of Maryland, S. Gordon-Salant, Site PI.]

5aSC8. Perceptual versus cognitive speed in a time-compressed speech task. Michelle R. Molis, Frederick J. Gallun, and Nirmal Srinivasan (National Ctr. for Rehabilitative Auditory Res., Portland VA Medical Ctr., 3710 SW US Veterans Hospital Rd., Portland, OR 97239, michelle.molis@va.gov)

Time-compression retains the information-bearing spectral change present in uncompressed speech, although at a rate that may outstrip cognitive processing speed. To compare the relative importance of perceptual and cognitive processing speed, we compared the understanding of (1) time-compressed stimuli expanded in time via gaps with (2) uncompressed stimuli from which spectral change information was removed. We hypothesized that, despite the initial compression, the compressed and expanded stimuli would be more intelligible, as they would retain relatively more information-bearing spectral change. Participants were somewhat older listeners (aged mid-50s to mid-60s) with normal hearing or mild hearing loss. Stimuli were spoken seven-digit strings time-compressed via pitch-synchronous overlap and add (PSOLA) at three uniform compression ratios (2:1, 3:1, and 5:1). In gap insertion conditions, the total duration of the compressed stimuli was restored via introduction of periodic gaps. This produced signal-to-gap ratios of 1:1, 1:2, and 1:4. For comparison, segments of unaccelerated strings, equal to the duration of the inserted gaps, were zeroed out, resulting in the same signal-to-gap ratios. Listeners identified the final four digits of the strings presented in quiet and in a steady-state, speech-shaped background noise (SNR = +5 dB). Our hypothesis was supported for the fastest compression rates. [Work supported by VA RR&D.]
5aSC6. Neural-scaled entropy predicts the effects of nonlinear frequency compression on speech perception. Varsha Hariram and Joshua Alexander (Speech Lang. and Hearing Sci., Purdue Univ., 500 Oval Dr., West Lafayette, IN 47907, vhariram@purdue.edu)

Signal processing schemes used in hearing aids, such as nonlinear frequency compression (NFC), recode speech information by moving high-frequency information to lower frequency regions. Perceptual studies have shown that, depending on the dominant speech sound, whether compression occurs and the amount of compression can have a significant effect on perception. Very little is understood about how frequency-lowered information is encoded by the auditory periphery. We have developed a measure that is sensitive to information in the altered speech signal in an attempt to predict optimal hearing aid settings for individual hearing losses. The Neural-Scaled Entropy (NSE) model examines the effects of frequency-lowered speech at the level of the inner hair cell synapse of an auditory nerve model [Zilany et al. 2013, Assoc. Res. Otolaryngol.]. NSE quantifies the information available in speech by the degree to which the pattern of neural firing across frequency changes relative to its past history (entropy). Nonsense syllables with different NFC parameters were processed in noise. Results are compared with perceptual data across the NFC parameters as well as across different vowel-defining parameters, consonant features, and talker gender. NSE successfully captured the overall effects of varying NFC parameters across the different sound classes.

5aSC7. Tempo-based segregation of spoken sentences. Gary R. Kidd and Larry E. Humes (Speech and Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN 47405, kidd@indiana.edu)

The ability to make use of differences in speech rhythms to selectively attend to a single spoken message in a multi-talker background was examined in a series of studies. Sentences from the coordinate response measure corpus provided a set of stimuli with a common rhythmic framework spoken by several talkers at similar speaking rates. Subjects were asked to identify two key words spoken in a “target” sentence identified by a word (call sign) near the beginning of the sentence. The target talker was always the same male voice, and either two or six background talkers were presented in different voices (half male and half female). The rate of the background talkers was manipulated to create natural-sounding speech that preserved the original pitch and speech rhythms at faster and slower speaking rates. Unaltered target sentences were presented in the presence of faster, unaltered, or slower competing sentences. Performance was poorest with matching target and background tempos, with substantial increases in performance as the tempo differences increased. Modification of the target-sentence rate confirmed that the effect is due to the relative timing of target and background speech, rather than the properties of rate-modified background speech. [Work supported by NIH-NIA.]

5aSC9. Information-bearing acoustic changes are important for understanding vocoded speech in a simulation of cochlear implant processing strategies. Christian Stilp (Dept. of Psychol. and Brain Sci., Univ. of Louisville, 308 Life Sci. Bldg., Louisville, KY 40292, christian.stilp@louisville.edu)

Information-bearing acoustic changes (IBACs) in the speech signal are important for understanding speech. This was demonstrated with cochlea-scaled entropy for cochlear implants (CSECI), which measures perceptually significant intervals of noise-vocoded speech (Stilp et al., 2013 JASA; Stilp, 2014 JASA; Stilp & Goupell, 2014 ASA). However, vocoding does not necessarily mimic CI processing. Some CI processing strategies present acoustic information in all channels at all times (e.g., CIS), while others present only the n highest-amplitude channels out of m at any time (e.g., ACE). Here, IBACs were explored in a simulation of ACE processing. Sentences were divided into 22 channels spanning 188–7938 Hz and noise-vocoded. In each 1-ms interval (simulating a 1000 pulses/second stimulation rate), only the eight highest-amplitude channels were retained. CSECI was calculated between 1-ms or 16-ms sentence segments, then summed into 80-ms intervals. High-CSECI or low-CSECI intervals were replaced by speech-shaped noise. Consistent with previous studies, replacing high-CSECI intervals impaired sentence intelligibility more than replacing an equal number of low-CSECI intervals. Importantly, performance was comparable when 1- or 16-ms IBACs were replaced by noise. Results reveal the perceptual importance of IBACs on rapid timescales after simulated ACE processing, indicating this information is likely available to CI users for understanding speech.

5aSC10. Talker intelligibility across clear and sinewave vocoded speech. Jeremy Loebach, Gina Scharenbroch, and Katelyn Berg (Psych., St. Olaf College, 1520 St. Olaf Ave., Northfield, MN 55057, loebach@stolaf.edu)

Talker intelligibility was compared across clear and sinewave vocoded speech. Ten talkers (5 female) from the Midwest and Western dialect regions recorded samples of 210 meaningful IEEE sentences, 206 semantically anomalous sentences, and 300 MRT words. Ninety-three normal-hearing participants provided open-set transcriptions of the materials presented in the clear over headphones. Forty-one different normal-hearing participants provided open-set transcriptions of the materials processed with an eight-channel sinewave vocoder. Transcription accuracy was highest for clear speech compared to vocoded speech, and for meaningful sentences, followed by anomalous sentences and words, in both conditions. Weak talker effects were observed for the meaningful sentences in the clear (ranging from 97.7% to 98.2%), but were more pronounced for vocoded versions
(68.5%–85.5%). Weak talker effects were observed for semantically anomalous sentences in the clear (89.4%–93.3%), but more variability was
observed across talkers in the vocoded condition (54.4%–73.7%). Finally,
stronger talker effects were observed for clear and vocoded MRT words
(83.8%–95.6% and 46.3%–59.0%, respectively). Talker rankings differed across stimulus conditions as well as across processing conditions; significant positive correlations between conditions were observed for meaningful and anomalous sentences, but not for MRT words. Acoustic and dialect
influences on intelligibility will be discussed.
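An eight-channel sinewave vocoder of the kind used in this study can be sketched as a filter bank whose per-band amplitude envelopes modulate sinusoids at the band centers. The sketch below is a generic FFT-based illustration under assumed band edges (200–7000 Hz, log-spaced), not the authors' exact processing chain.

```python
import numpy as np

def sinewave_vocode(x, fs, n_channels=8, lo=200.0, hi=7000.0):
    """FFT-based sinewave vocoder sketch: isolate each log-spaced band,
    take its Hilbert envelope (via the analytic signal), and modulate a
    sinusoid at the band's geometric center frequency."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spectrum = np.fft.rfft(x)
    edges = np.geomspace(lo, hi, n_channels + 1)
    t = np.arange(n) / fs
    out = np.zeros(n)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band = np.fft.irfft(np.where((freqs >= f1) & (freqs < f2),
                                     spectrum, 0), n=n)
        # Hilbert envelope via the analytic signal
        spec = np.fft.fft(band)
        h = np.zeros(n)
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            h[n // 2] = 1.0
        env = np.abs(np.fft.ifft(spec * h))
        out += env * np.sin(2 * np.pi * np.sqrt(f1 * f2) * t)
    return out

fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs // 2)          # 0.5 s of noise as a stand-in input
y = sinewave_vocode(x, fs)
```

Real vocoders typically use causal bandpass filters and lowpass-smoothed envelopes; the FFT masking here keeps the sketch dependency-free.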
5aSC11. Vowels of four-year-old children with cerebral palsy in
Mandarin-learning environment. Li-mei Chen (Foreign Lang. and Lit.,
National Cheng Kung Univ., Tainan, Taiwan), Yu Ching Lin (Physical
Medicine and Rehabilitation, National Cheng Kung Univ., Tainan, Taiwan),
Wei Chen Hsu, and Meng-Hsin Yeh (Foreign Lang. and Lit., National
Cheng Kung Univ., 1 University Rd, Tainan 701, Taiwan, myonaa@gmail.
com)
Characteristics of vowel productions of children with cerebral palsy
(CP) were investigated with data from two 4-year-old children with CP and
two typically developing (TD) children in a Mandarin-learning environment.
Clear vowel productions from picture naming and natural conversation in
three 50-minute audio recordings of each child were transcribed and analyzed. Seven parameters were examined: vowel duration of /a/, F2 slope in
transition of CV sequence, cumulative change of F2 for vowel /a/, degree of
nasalization in oral vowel (A1-P1), percent of jitter, percent of shimmer,
and the signal-to-noise ratio (SNR). Major findings are: (1) the CP group showed shorter vowel duration of /a/; (2) the TD group had a larger F2 slope in the CV transition; and (3) no obvious differences were found between the TD and CP groups in cumulative change of F2 for vowel /a/, degree of nasalization (A1-P1), and voice perturbation (percent of jitter, percent of shimmer, and SNR). Further study with more participants and careful data selection can verify the findings of this study in the search for valid parameters to characterize vowel production of children with CP.
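Two of the voice-perturbation parameters above have simple cycle-to-cycle definitions; the sketch below assumes the common "local" variants (mean absolute difference between consecutive cycles, relative to the mean) and is not the specific analysis software the authors used.

```python
import numpy as np

def percent_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, relative to the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def percent_shimmer(amplitudes):
    """Local shimmer (%): the same measure applied to cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# perfectly periodic cycles have zero jitter; alternating 4/6 ms periods
# around a 5-ms mean give 40% local jitter
j = percent_jitter([4.0, 6.0, 4.0, 6.0])   # 40.0
```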
5aSC12. Effects of depression on speech. Saurabh Sahu and Carol Espy-Wilson (Elec. and Comput. Eng., Univ. of Maryland College Park, 8125 48 Ave., Apt 101, College Park, MD 20740, ssahu89@umd.edu)
In this paper, we are investigating the effects of depression on speech.
The motivation comes from the fact that neuro-physiological changes associated with depression affect motor coordination and can disrupt the articulatory precision in speech. We use the database collected by Mundt et al. (J.
Neurolinguist. vol. 20, no. 1, pp. 50–64, Jan. 2007) in which 35 subjects
were treated over a 6 week period and study how the changes in mental state
are manifest in certain acoustic properties that correlate with the Hamilton
Depression Rating Scale (HAM-D), which is a clinical assessment score.
We look at features such as the modulation frequencies, aperiodic energy
during voiced speech, vocal fold jitter and shimmer, and other cues that are
related to articulatory precision. These measures will be discussed in detail.
5aSC13. Pitch production of a Mandarin-learning infant with cerebral
palsy. Meng-Hsin Yeh, Li-mei Chen (Foreign Lang. and Lit., National
Cheng Kung Univ., 1 University Rd., Tainan 701, Taiwan, myonaa@gmail.
com), Chyi-Her Lin, Yuh-Jyh Lin, and Yung-Chieh Lin (Pediatrics,
National Cheng Kung Univ., Tainan, Taiwan)
In this study, pitch production was investigated in two Mandarin-learning infants at 6 months of age, an infant with cerebral palsy (CP) and a typically developing (TD) infant. Words with distinct tones in Mandarin differ in meaning. In order to produce a correct tone, good control of the respiratory and laryngeal mechanisms is necessary. Thus, producing a
correct tone and reaching intelligibility for children with CP is considered to
be relatively difficult. In previous studies, Kent and Murray (1982) pointed
out that falling contours predominated in infant vocalizations at 3, 6, and 9
months. A study by Chen et al. (2013) with 4-year-old children indicated that the mean pitch duration of CP children is 1.3–1.8 times longer than that of TD counterparts. In adults, Jeng, Weismer, and Kent (2006) found that the pitch
slopes of Mandarin in CP adults are smaller than in healthy adults. Three
measures were employed in the current study, and the major findings are:
(1) Both TD and CP infants produced more falling than rising pitch; (2) The
mean duration of pitch in CP is 2.3 times longer than that of TD; (3) The
pitch slope in CP is smaller than that of TD.
5aSC14. Linear and non-linear acoustic voice analysis of Persian speaking Parkinson’s disease patients. Fatemeh Majdinasab (Speech Therapy,
Tehran Univ. of Medical Sci., Tehran, Iran), Maryam Mollashahi, Mansour
Vali (Medical Eng., K.N. Toosi Univ. of Technol., Tehran, Iran), and Hedieh
Hashemi (Dept. of Commun. Sci. & Disord., Univ. of Cincinnati, Cincinnati, OH, hashemihedieh@yahoo.com)
Purpose: Many studies have analyzed acoustic voice characteristics
(AVC) of Parkinson’s disease patients (PDP) by linear or non-linear methods. The aim of this study is to compare the linear and non-linear
approaches in acoustic voice analysis of Persian speaking PDPs. Method:
This cross-sectional, non-experimental study was done on 27 PDPs (15 males, 12 females) and 21 healthy age- and sex-matched subjects (11 males, 10 females). Patients were chosen from attendees of a movement disorders clinic using convenience sampling. All patients were evaluated in the “on” medication period. AVC consisted of average fundamental frequency (f0), standard deviation of f0, mean percentages of jitter and shimmer, and HNR in prolongation of all Persian vowels /a, e, i, o, u/. PRAAT 5.1.17 software (as a linear tool) and MATLAB (as a non-linear method) were used to evaluate AVC. Result: There was no significant difference between PDPs and normal subjects except for jitter of /æ/ (0.041) and /e/ (0.021). According to the non-linear characteristic of the wavelet entropy coefficient, with the coif1 mother wavelet (in MATLAB), all AVC of patients were differentiated from normal. Conclusion: It seems that non-linear analysis is a more detailed method for discriminating dysarthric voice from normal voice. Keywords: Acoustic voice analysis,
Parkinson’s disease, linear, nonlinear.
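Wavelet entropy, the non-linear measure used above, is commonly defined as the Shannon entropy of the relative energy per decomposition level. The sketch below illustrates that definition with a plain Haar transform standing in for the coif1 mother wavelet the study used in MATLAB; both the wavelet and the decomposition depth are illustrative assumptions.

```python
import numpy as np

def haar_wavelet_entropy(x, level=4):
    """Shannon entropy (nats) of the relative energy in each level of a
    Haar wavelet decomposition. A steady signal concentrates energy in
    one level (low entropy); an irregular one spreads it (high entropy)."""
    x = np.asarray(x, dtype=float)
    energies = []
    approx = x
    for _ in range(level):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail
        energies.append(np.sum(d ** 2))
        approx = a
    energies.append(np.sum(approx ** 2))
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# a constant "voice" has all energy in one level; noise spreads energy
steady = haar_wavelet_entropy(np.ones(256))
noisy = haar_wavelet_entropy(np.random.default_rng(1).standard_normal(256))
```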
5aSC15. Vowel development in children with Down and Williams syndromes. Ewa Jacewicz, Robert A. Fox (Dept. of Speech and Hearing Sci.,
The Ohio State Univ., 1070 Carmack Rd., 110 Pressey Hall, Columbus, OH
43210, jacewicz.1@osu.edu), Vesna Stojanovik, and Jane Setter (Dept. of
Clinical Lang. Sci., Univ. of Reading, Reading, United Kingdom)
Down syndrome (DS) and Williams syndrome (WS) are genetic disorders resulting from different types of genetic errors. While both disorders
lead to phonological and speech motor deficits, relatively little is known
about vowel production in DS and WS. Recent work suggests that impaired
vowel articulation in DS likely contributes to the poor intelligibility of DS
speech. Developmental delays in temporal vowel structure and pitch control
have been found in children with WS when compared to their chronological
matches. Here, we analyze spontaneous speech samples produced by British
children with DS and WS and compare them with typically developing children from the same geographic area in Southern England. We focus on the
acquisition of fine-grained phonetic details, asking if children with DS and
WS are able to synchronize the phonetic and indexical domains while coping with articulatory challenges related to their respective syndromes. Phonetic details pertaining to the spectral (vowel-inherent spectral change) and
indexical (regional dialect) vowel features are examined and vowel spaces
are derived from formant values sampled at multiple temporal locations.
Variations in density patterns across the vowel space are also considered to
define the nature of the acoustic overlap in vowels related to each
syndrome.
5aSC16. Prosodic characteristics in young children with autism spectrum disorder. Laura Dilley, Sara Cook, Ida Stockman, and Brooke Ingersoll (Michigan State Univ., Dept. of Communicative Sci., East Lansing, MI
48824, ldilley@msu.edu)
The prosody of high-functioning adults and adolescents with autism
spectrum disorder (ASD) has been reported to differ from that of typically
developing individuals. The present study investigated whether young children under eight years old with ASD differ in prosodic characteristics compared with neurotypical children matched on expressive language ability.
Seven children with ASD (38–93 months) and seven neurotypical children
(20–30 months) were recorded during naturalistic interactions with a parent.
Naïve listeners (n = 18) were recruited to rate utterances for: (i) age, (ii) percentage of intelligible words, (iii) pitch, (iv) speech rate, (v) degree of animation, and (vi) certainty of diagnosis. An acoustic analysis of speech rate and fundamental frequency (F0) was also conducted. Results of the rating task showed no statistically significant difference on any measure except estimated age. However, children in the ASD group had a significantly lower mean, maximum, and minimum F0 than children in the control group; there was no significant difference between groups for speech rate. These findings may indicate that speech characteristics alone are unlikely to be a sufficient early sign of an ASD diagnosis.

5aSC17. Speech production changes and intelligibility with a real-time cochlear implant simulator. Lily Talesnick (Neurosci., Trinity College, 300 Summit St., Hartford, CT 06106, lily.talesnick@trincoll.edu) and Elizabeth D. Casserly (Psych., Trinity College, Hartford, CT)

Subjects hearing their speech through a real-time cochlear implant (CI) simulator alter their production in multiple ways, e.g., reducing speaking rate and constricting the F1/F2 vowel space. The motivations behind these alterations, however, are currently unknown. Two possibilities are that the changes in speech are due to the influence of a direct feedback loop, in which the subject adjusts speech production to minimize acoustic “error,” or that the changes reflect the indirect influence of a high cognitive load (stemming from the challenge of hearing through the real-time CI simulator). We explored these two possibilities by conducting a playback experiment in which 35 naïve listeners assessed the intelligibility of speech produced under conditions of normal versus vocoded feedback. Intelligibility of vocoded isolated word stimuli in each condition was tested in both a two-alternative forced choice task (“Which recording is easier to understand?”) and an open-set word recognition task. Listeners found normal-feedback speech significantly more intelligible in both tasks (p’s < 0.0125), suggesting that speakers were not adjusting for direct error correction, but rather responding to an intervening factor, e.g., high cognitive load. Confusion matrix analyses further illuminate the perceptual consequences of the effects of CI-simulated speech feedback.

5aSC18. Hearing and hearing-impaired children’s acoustic–phonetic adaptations to an interlocutor with a hearing impairment. Sonia Granlund, Valerie Hazan (Speech, Hearing & Phonetic Sci., Univ. College London (UCL), Rm. 326, Chandler House, 2 Wakefield St., London WC1N 1PF, United Kingdom, s.granlund@ucl.ac.uk), and Merle Mahon (Developmental Sci., Univ. College London (UCL), London, United Kingdom)

In England, the majority of children with a hearing impairment attend mainstream schools. However, little is known about the communication strategies used by children when interacting with a peer with hearing loss. This study examined how children with normal hearing (NH) and those with a hearing impairment (HI) adapt to the needs of an HI interlocutor, focusing on the acoustic–phonetic properties of their speech. Eighteen NH and 18 HI children between the ages of 9 and 15 years performed two problem-solving communicative tasks in pairs: one session was completed with a friend with normal hearing (NH-directed speech) and one session was done with a friend with a hearing impairment (HI-directed speech). As expected, task difficulty increased in interactions involving an HI interlocutor. HI speakers had a slower speech rate, higher speech intensity, and greater F0 range than NH speakers. However, both HI and NH participants decreased their speech rate and increased their F0 range, mean F0, and speech intensity in HI-directed speech compared to NH-directed speech. This suggests that both NH and HI children are able to adapt to the needs of their interlocutor, even though speech production is more effortful for HI children than for their NH peers.

5aSC19. Objective speech intelligibility prediction in sensorineural hearing loss using acoustic simulations and perceptual speech quality measures. Emma Chiaramello, Stefano Moriconi, and Gabriella Tognola (Inst. of Electronics, Computers and TeleCommun. Eng., CNR Italian National Res. Council, Piazza Leonardo Da Vinci 32, Milan 20133, Italy, gabriella.tognola@ieiit.cnr.it)

A novel approach to objectively predict speech intelligibility in sensorineural hearing loss using acoustic simulations of impaired perception and objective measures of perceptual speech quality (PESQ) is proposed and validated. Acoustic simulations of impaired perception with different types and degrees of hearing loss were obtained by degrading the original speech waveforms with spectral smearing, expansive nonlinearity, and level scaling. The CUNY NST syllables were used as test material. PESQ was used to measure the perceptual quality of the acoustic simulations thus obtained. Finally, PESQ scores were transformed into predicted intelligibility scores using a logistic function. Validation of the proposed objective method was performed by comparing predicted intelligibility scores with subjective measures of intelligibility of the degraded speech in a group of ten subjects. Predicted intelligibility scores showed good correlation (R² = 0.7) with subjective intelligibility scores and a low prediction error (RMSE = 0.14). The proposed approach could be a valuable aid in real clinical applications where speech intelligibility must be measured, and it might help avoid time-consuming experimental measurements. In particular, this method might be valuable in characterizing the sensitivity of new speech tests for screening and diagnosing hearing loss, or in assessing the performance of novel speech enhancement algorithms for a target hearing impairment.

5aSC20. Identification of dialect cues by dyslexic and non-dyslexic listeners. Robert A. Fox (Speech and Hearing Sci., The Ohio State Univ., 110 Pressey Hall, 1070 Carmack Rd., Columbus, OH 43210-1002, fox.2@osu.edu), Gayle Long, and Ewa Jacewicz (Speech and Hearing Sci., The Ohio State Univ., Columbus, OH)

Spoken language encodes two different forms of information: linguistic (related to the message) and indexical (e.g., speaker’s age, gender, and regional dialect). However, some speech-language impairments (such as dyslexia) can reduce a listener’s ability to process both linguistic and indexical speech cues. For example, Perrachione et al. (Science, 333, 2011) demonstrated that individuals with dyslexia were less able to identify new voices than were control listeners. This study examines the ability of listeners with and without dyslexia to identify speaker dialect. Eighty listeners, 40 adults and 40 children (20 in each group were dyslexic, 20 were not; 40 were males and 40 were females), listened to a set of 80 sentences produced by English speakers from Western North Carolina or central Ohio and were asked to identify which region the speaker came from. Results demonstrated that adult listeners were significantly better at dialect identification and that listeners with dyslexia were significantly poorer at dialect identification. More notably, there was a significant age by listener group interaction: the improvement in dialect identification from childhood to adulthood was significantly smaller in listeners with dyslexia. This indicates that an initial limitation in language learning can inhibit long-term development of speaker-specific phonetic representations.

5aSC21. Individual differences in the lexical processing of phonetically reduced speech. Rory Turnbull (Linguist, The Ohio State Univ., 222 Oxley Hall, 1712 Neil Ave., Columbus, OH 43210, turnbull@ling.osu.edu)

There is widespread evidence that phonetically reduced speech is processed more slowly and more effortfully than unreduced speech. However, individual differences in the degree and strategies of reduction, and their effects on lexical access, are largely unexplored. This study explored the role of autistic traits in the production and perception of reduced pronunciation variants. Stimuli were recordings of words produced in either high reduction (HR) or low reduction (LR) contexts, extracted from sentences produced by talkers ranging in autism-spectrum quotient (AQ) scores. The reductions in these stimuli were generally small temporal differences, rather than segmental-level alterations such as /t/-flapping. Listeners completed a lexical decision task with these stimuli and the AQ questionnaire. Confirming previous research, the results demonstrate that response times (RTs) to reduced words were slower than to unreduced words. No other effects on RT were observed. In terms of response accuracy, LR words were responded to more accurately than HR words, but this pattern was only observed for temporally reduced words. This LR word accuracy benefit was larger for listeners with more autistic personality traits. These results suggest that individuals differ in the extent to which unreduced speech provides a perceptual benefit.
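The PESQ-to-intelligibility mapping described in 5aSC19 amounts to passing PESQ scores through a fitted logistic function. A minimal sketch follows; the slope and offset values are placeholders, since the study fit its own parameters to the subjective scores.

```python
import numpy as np

def pesq_to_intelligibility(pesq, a=2.0, b=-5.0):
    """Map PESQ scores to predicted intelligibility proportions with a
    logistic function. The parameters (a, b) here are illustrative;
    in practice they are fit to subjective intelligibility data."""
    return 1.0 / (1.0 + np.exp(-(a * np.asarray(pesq, dtype=float) + b)))

# predictions are monotone in PESQ and bounded in (0, 1)
scores = pesq_to_intelligibility([1.0, 2.5, 4.5])
```

With these placeholder parameters the midpoint (50% predicted intelligibility) falls at a PESQ score of 2.5; fitting shifts that midpoint and slope to match the listener data.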
5aSC22. Change of static characteristics of Japanese word utterances
with aging. Mitsunori Mizumachi and Kazuto Ogata (Dept. of Elec. Eng.
and Electronics, Kyushu Inst. of Technol., 1-1 Sensui-cho, Tobata-ku,
Kitakyushu, 805-8440, Japan, mizumach@ecs.kyutech.ac.jp)
Acoustical characteristics of elderly speech have been investigated from various viewpoints. Elderly speech can be subjectively characterized by roughness, breathiness, asthenia, and hoarseness. Those characteristics have been individually explained in both medical science and speech science. In particular, hoarseness, which is caused by a physiological problem with the aged vocal folds, is the most well-known static property of elderly speech. Here, the change in hoarseness with aging is quantitatively investigated. Utterances of 543 phonetically balanced Japanese words were collected with the cooperation of 153 speakers whose ages ranged from 20 to 89 years. Acoustical characteristics of the word utterances were examined from the viewpoints of age and auditory impression. In a static acoustical analysis of the Japanese vowels /a/, /e/, /i/, /o/, and /u/, it is confirmed that energy in the high-frequency region rises with aging. There is a remarkable energy lift above 4 kHz, and the amount of the energy lift is proportional to the degree of subjective hoarseness.
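The >4 kHz energy lift described above can be tracked with a simple band-energy ratio; the measure and the sampling rate below are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np

def high_band_energy_ratio(x, fs, cutoff=4000.0):
    """Fraction of spectral energy at or above `cutoff` Hz, a simple
    proxy for the >4 kHz energy lift associated with hoarse voices."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return float(power[freqs >= cutoff].sum() / power.sum())

fs = 16000
t = np.arange(fs) / fs
low_tone = np.sin(2 * np.pi * 500 * t)     # energy well below 4 kHz
high_tone = np.sin(2 * np.pi * 6000 * t)   # energy above 4 kHz
```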
5aSC23. Effect of formant characteristics on older listeners’ dynamic
pitch perception. Jing Shen (Commun. Sci. and Disord., Northwestern
Univ., 2240 Campus Dr., Evanston, IL 60208, jing.shen@northwestern.
edu), Richard Wright (Linguist, Univ. of Washington, Seattle, Washington),
and Pamela Souza (Commun. Sci. and Disord., Northwestern Univ., Evanston, IL)
Previous research suggested large inter-subject variability in dynamic pitch perception among older individuals (Souza et al., 2011). While data from younger listeners with normal hearing indicate that temporal and spectral variations in complex formant characteristics may influence dynamic pitch perception (Green et al., 2002), the present study examines this interaction in an aging population. The stimulus set includes two monophthongs
that have static formant patterns and two diphthongs that have dynamic
formant patterns. The fundamental frequency at the midpoint in time of
each vowel is kept consistent, while the ratio of start-to-end frequency
varies in equal logarithmic steps. Older adults with near-normal hearing are
tested using an identification task, in which they are required to identify the
pitch glide as either “rise” or “fall.” An experimental task of AX discrimination is also included to verify the identification data. Results to date show
inter-subject variability in dynamic pitch perception among listeners with
good static pitch perception. Better pitch glide perception with monophthongs than with diphthongs is observed in those individuals who perform poorly
in general. The findings suggest a connection between individual abilities to
perceive dynamic pitch and to extract the cues from fundamental and formant frequencies. [Work supported by NIH.]
5aSC24. Sentence recognition in older adults. Kathleen F. Faulkner
(Dept. of Psychol. and Brain Sci., Indiana Univ., 1101 E 10th St., Bloomington, IN 47401, katieff@indiana.edu), Gary R. Kidd, Larry E. Humes
(Speech and Hearing Sci., Indiana Univ., Bloomington, IN), and David B.
Pisoni (Dept. of Psychol. and Brain Sci., Indiana Univ., Bloomington,
IN)
Many older adults report difficulty when listening to speech in background noise. These difficulties may arise from some combination of factors, including age-related hearing loss, auditory sensory processing
difficulties, and/or general cognitive decline. To perform well in everyday
noisy environments, listeners must quickly adapt, switch attention, and
adjust to multiple sources of variability in both the signal and listening environments. Sentence recognition tests in noise have been useful for assessing
speech understanding abilities because they require a combination of basic
sensory/perceptual abilities as well as cognitive resources and processing
operations. This study was designed to explore several factors underlying
2314
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
individual differences in aided speech understanding in older adults. We
examined the relations between measures of speech perception, cognition,
and self-reported listening difficulties in a group of aging adults (N = 40, age
range 60–86) and a group of young normal hearing listeners (N = 28, age
range 18–30). All participants completed a comprehensive battery of tests,
including cognitive, psychophysical, and speech understanding measures, as well as the SSQ self-report scale. After controlling for audibility, speech understanding declined with age and was strongly correlated with psychophysical measures, cognition, and self-reported speech understanding difficulties.
[Work supported by NIH: NIDCD grant T32-DC00012 and NIA grant R01AG008293 to Indiana University.]
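Correlating measures "while controlling for audibility" is commonly done with a partial correlation: regress the covariate out of both variables and correlate the residuals. This is a generic sketch on synthetic data; the variable names and effect sizes are assumptions, not the study's data.

```python
import numpy as np

def partial_corr(x, y, covariate):
    """Correlation of x and y after regressing a covariate out of both."""
    z = np.column_stack([np.ones_like(covariate), covariate])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 68
audibility = rng.normal(size=n)
# Synthetic scores sharing variance with audibility and with each other.
speech = 0.8 * audibility + rng.normal(size=n)
cognition = 0.5 * speech + 0.3 * audibility + rng.normal(size=n)
print(f"raw r = {np.corrcoef(speech, cognition)[0, 1]:.3f}")
print(f"partial r (audibility removed) = {partial_corr(speech, cognition, audibility):.3f}")
```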
5aSC25. Individual differences in speech perception in noise: A neurocognitive genetic study. Zilong Xie (Dept. of Commun. Sci. & Disord.,
The Univ. of Texas at Austin, 2504A Whitis Ave. (A1100), Austin, TX
78712, xzilong@gmail.com), W. Todd Maddox (Dept. of Psych., The Univ.
of Texas at Austin, Austin, TX), Valerie S. Knopik (Div. of Behavioral
Genetics, Rhode Island Hospital, Brown Univ. Med. School, Providence,
RI), John E. McGeary (Providence Veterans Affairs Medical Ctr., Providence, RI), and Bharath Chandrasekaran (Dept. of Commun. Sci. & Disord.,
The Univ. of Texas at Austin, Austin, TX)
Previous work has demonstrated that individual listeners vary substantially in their ability to recognize speech in noisy environments. However,
little is known about the underlying sources of individual differences in
speech perception in noise. Noise varies in the levels of energetic masking
(EM) and informational masking (IM) imposed on target speech. Relative to
EM, release from IM places greater demand on selective attention. A polymorphism in exon III of the DRD4 gene has been shown to influence selective attention. Here we investigated whether this polymorphism contributes
to individual variation in speech recognition ability. We assessed sentence
recognition performance across a range of maskers (1-, 2-, and 8-talker babble, and speech-spectrum noise) among 104 young, normal-hearing adults.
We also measured their working memory capacity with the Operation Span
Task, which relies on selective attention to update and maintain items in
memory while performing a secondary task. Results showed that the long
variant of the DRD4 gene was significantly associated with better recognition
performance in 1-talker babble conditions only, and that this relation was
mediated by enhanced working memory capacity. These findings suggest
that the DRD4 polymorphism can explain some of the individual differences
in speech recognition ability, but is specific to IM conditions.
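The mediation claim (genotype acting on recognition through working memory) can be illustrated with the classic regression decomposition, in which the total effect c splits exactly into a direct effect c' plus an indirect effect a*b. This is a sketch on synthetic data; the variable coding and effect sizes are assumptions, not the study's estimates.

```python
import numpy as np

def ols(y, *predictors):
    """OLS coefficients (intercept first) of y on the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)
n = 104
genotype = rng.integers(0, 2, size=n).astype(float)  # 1 = long variant (assumed coding)
wm = 0.6 * genotype + rng.normal(size=n)             # mediator: working memory span
recognition = 0.7 * wm + rng.normal(size=n)          # outcome driven only via wm

c = ols(recognition, genotype)[1]                # total effect of genotype
a = ols(wm, genotype)[1]                         # genotype -> mediator path
b, c_prime = ols(recognition, wm, genotype)[1:]  # mediator and direct paths
print(f"total c = {c:.3f}, indirect a*b = {a * b:.3f}, direct c' = {c_prime:.3f}")
# In OLS the decomposition is exact: c = c' + a*b. Full mediation appears as
# c' near zero, with the indirect path a*b carrying the effect.
```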
5aSC26. Potential sports concussion identification using acoustic-phonetic analysis of vowel productions. Terry G. Horner (Indiana Univ. Methodist Sports Medicine, 201 Pennsylvania Parkway, Ste. 100, Indianapolis,
IN 46280, tghorner@hughes.net) and Michael A. Stokes (Waveform Commun., Indianapolis, IN)
Concussions impair cognitive function and muscle motor control; however, little is known about how this impairment affects speech production.
In the present study, concussed athletes' speech is recorded at the initial
office visit and subsequent visits. The last recording, when the brain is determined to have recovered using present criteria, becomes the baseline. The
vocabulary consists of seven h-vowel-d (hVd) words (who’d, heed, hood,
hid, had, hud, and heard) produced three times each for a total of 21 productions. The study focuses on vowel characteristics, and the limited coarticulatory effects of the hVd vocabulary make it well suited to this purpose. Duration
measurements are made by experimenter analysis and the formant measurements are made using the automatic speech recognition engine ELBOW.
The preliminary comparisons from the subjects completing the protocol
show formant drift for three or more of the seven vowels, and duration is
affected for each talker and each vowel. These results were anticipated since
the impairment would affect articulatory movement and timing. The results, along with a discussion of the development of an automated real-time concussion identification application, will be presented.
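The study's formant measurements come from the ELBOW recognition engine; as a generic illustration of how formants can be estimated, the sketch below uses autocorrelation-method LPC and reads formant frequencies from the pole angles. The synthetic vowel, LPC order, and bandwidth threshold are all assumptions for illustration, not the study's procedure.

```python
import numpy as np

def lpc(signal, order):
    """Autocorrelation-method LPC: solve the Yule-Walker normal equations."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))  # prediction-error polynomial A(z)

def formants(signal, fs, order):
    """Formant estimates (Hz) from LPC pole angles, keeping only sharp poles."""
    poles = np.roots(lpc(signal, order))
    poles = poles[np.imag(poles) > 0]            # one of each conjugate pair
    freqs = np.angle(poles) * fs / (2 * np.pi)
    bandwidths = -fs / np.pi * np.log(np.abs(poles))
    return np.sort(freqs[(freqs > 90.0) & (bandwidths < 400.0)])

# Synthetic vowel-like signal: white noise through two resonators (700, 1200 Hz).
rng = np.random.default_rng(0)
fs = 8000
signal = rng.normal(size=int(0.3 * fs))
for fc in (700.0, 1200.0):
    r_pole = np.exp(-np.pi * 80.0 / fs)          # ~80-Hz bandwidth resonance
    a1, a2 = 2 * r_pole * np.cos(2 * np.pi * fc / fs), -r_pole ** 2
    y = np.zeros_like(signal)
    for n in range(len(signal)):
        y[n] = signal[n] + a1 * (y[n - 1] if n >= 1 else 0.0) + a2 * (y[n - 2] if n >= 2 else 0.0)
    signal = y
print("estimated formants (Hz):", np.round(formants(signal, fs, order=4), 1))
```

Tracking such estimates across visits is one way the "formant drift" mentioned above could be quantified.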
168th Meeting: Acoustical Society of America
2314
5aSC27. Thai phonetically balanced word recognition test: Reliability
evaluations and bias and error analysis. Adirek Munthuli (Elec. and Comput. Eng., Thammasat Univ., Khlong Luang, Pathumthani, Thailand), Chutamanee Onsuwan (Linguist, Thammasat Univ., Dept. of Linguist, Faculty
of Liberal Arts, Thammasat University, Khlong Luang, Pathumthani 12120,
Thailand, consuwan@hotmail.com), Charturong Tantibundhit (Elec. and
Comput. Eng., Thammasat Univ., Khlong Luang, Pathumthani, Thailand),
and Krit Kosawat (Thailand National Electronics and Comput. Technol.
Ctr., Khlong Luang, Pathumthani, Thailand)
Word recognition score (WRS) is one of the measuring techniques used in speech audiometry, a part of a routine audiological examination. The test's accuracy is crucial and largely depends on the test materials. With emphasis on phonetic balance, test-retest reliability, inter-list equivalency, and symmetrical phoneme occurrence, the Thammasat University Phonetically Balanced Word Lists 2014 (TU PB'14) were created with five different lists, each with 25 Thai monosyllabic words. TU PB'14 reflects the Thai phoneme distribution based on a large-scale written Thai corpus, InterBEST [1]. To evaluate validity and test-retest reliability, the lists were given at five intensity levels (15–55 dB HL) in test and retest sessions to 30 normal-hearing subjects. The differences in performance between the two sessions are not significant, and the correlation coefficients in the linear regions are all positive. An analysis of listeners' errors, including sequence recurrences, was carried out. Errors occurred predominantly for initials, followed by finals and lexical tones. Confusion patterns of initials, finals, and tones are in line with those found for Thai speech sounds in noise conditions. Interestingly, vowels are found to be the most resistant to confusion. Finally, the possible effect of lexical frequency is examined and discussed.

5aSC28. Talker variability in spoken word recognition: Evidence from repetition priming. Yu Zhang and Chao-Yang Lee (Ohio Univ., W239 Grover Ctr., Ohio University, Athens, OH 45701, yz137808@ohio.edu)

The effect of talker variability on the processing of spoken words is investigated using short-term repetition priming experiments. Prime-target pairs, either repeated (e.g., queen-queen) or unrelated (e.g., bell-queen), were produced by the same or different male speakers. Two interstimulus intervals (ISI, 50 and 250 ms) were used to explore the time course of repetition priming and voice specificity effects. The auditory stimuli were presented to 40 listeners, who completed a lexical decision task followed by a talker voice discrimination task. Results from the lexical decision task showed that the magnitude of priming was attenuated in the different-talker condition, indicating a talker variability effect on spoken word recognition. In contrast, the talker variability effect on priming did not differ between the two ISIs. Talker voice discrimination was faster and more accurate for nonword targets, but not for word targets, indicating a lexical status effect on voice discrimination. Taken together, these results suggest that talker variability affects recognition of spoken words, and that the effect cannot be simply attributed to non-lexical voice discrimination.
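The priming magnitude analyzed in 5aSC28 is, at its core, a difference of mean reaction times between unrelated and repeated conditions. A minimal sketch with synthetic data; the effect sizes and trial counts are illustrative assumptions, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rts(n, base=700.0, priming=0.0):
    """Synthetic lexical-decision RTs (ms) with a given priming benefit."""
    return base - priming + rng.normal(scale=60.0, size=n)

# Assumed effect sizes: repetition speeds responses, and the benefit is
# smaller when the prime and target talkers differ.
n_trials = 1000  # pooled trials per condition (illustrative)
conditions = {
    ("repeated", "same talker"): simulate_rts(n_trials, priming=50.0),
    ("repeated", "different talker"): simulate_rts(n_trials, priming=25.0),
    ("unrelated", "same talker"): simulate_rts(n_trials),
    ("unrelated", "different talker"): simulate_rts(n_trials),
}
for talker in ("same talker", "different talker"):
    effect = (conditions[("unrelated", talker)].mean()
              - conditions[("repeated", talker)].mean())
    print(f"priming magnitude, {talker}: {effect:5.1f} ms")
```

An attenuated effect in the different-talker rows corresponds to the talker variability effect reported in the abstract.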
FRIDAY MORNING, 31 OCTOBER 2014
INDIANA F, 8:00 A.M. TO 12:30 P.M.
Session 5aUW
Underwater Acoustics: Acoustics, Ocean Dynamics, and Geology of Canyons
John A. Colosi, Cochair
Department of Oceanography, Naval Postgraduate School, 833 Dyer Road, Monterey, CA 93943
James Lynch, Cochair
Woods Hole Oceanographic, MS # 11, Bigelow 203, Woods Hole Oceanographic, Woods Hole, MA 02543
Chair’s Introduction—8:00
Invited Papers
8:05
5aUW1. What do we know and what do we need to know about submarine canyons for acoustics? James Lynch, Ying-Tsong Lin,
Timothy Duda, Arthur Newhall (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., MS # 11, Bigelow 203, Woods Hole
Oceanographic, Woods Hole, MA 02543, jlynch@whoi.edu), and Glen Gawarkiewicz (Physical Oceanogr., Woods Hole Oceanographic
Inst., Woods Hole, MA)
Acoustic propagation and scattering in marine canyons is an inherently 3-D problem, both for the environmental inputs (bottom topography and geology, biology, and physical oceanography) and for the acoustic field. In this talk, we broadly examine the state of our knowledge of these environmental fields and what their salient effects on acoustics should be. Examples from recent experiments off the
United States and Taiwan will be presented, along with other historical data. Three dimensional acoustic modeling results will also be
presented. Directions for future research will be discussed.
8:25
5aUW2. Ocean dynamics and numerical modeling of canyons and shelfbreaks. Pierre F. Lermusiaux (MechE, MIT, 77 Mass Ave.,
Cambridge, MA 02139, pierrel@mit.edu, Patrick Haley, Chris Mirabito (MIT, Cambridge, MA 02139), Timothy Duda, and Glen
Gawarkiewicz (WHOI, Woods Hole, MA)
Multiscale ocean dynamics and multi-resolution numerical modeling of canyons and shelfbreaks are outlined. The dynamics focus is
on fronts, currents, tides, and internal tides/waves that occur in these regions. Due to the topographic gradients and strong internal field
gradients, nonlinear terms and non-hydrostatic dynamics can be significant. Computationally, a challenge is to achieve accurate simulations that resolve strong gradients over dynamically significant space and time scales. To do so, one component is high-order schemes that are more accurate, at the same cost, than lower-order schemes. A second is multi-resolution grids that allow optimized refinements, such as reducing errors near steep topography. A third is methods that solve for multiple dynamics, e.g., hydrostatic
and non-hydrostatic, seamlessly. To address these components, new hybridizable discontinuous Galerkin (HDG) finite-element schemes
for (non)-hydrostatic physics including a nonlinear free-surface are introduced. The results of data-assimilative multi-resolution simulations are then discussed, using the primitive-equation MSEAS system and telescoping implicitly two-way nested domains. They correspond to collaborative experiments: (i) Shallow Water 06 (SW06) and the Integrated Ocean Dynamics and Acoustics (IODA) research
in the Middle Atlantic Bight region; (ii) Quantifying, Predicting and Exploiting Uncertainty (QPE) in the Taiwan-Kuroshio region; and
(iii) Philippines Straits Dynamics Experiment (PhilEx).
8:45
5aUW3. Internal tides in canyons and their effect on acoustics. Timothy F. Duda, Weifeng G. Zhang, Ying-Tsong Lin (Appl. Ocean
Phys. and Eng. Dept., Woods Hole Oceanographic Inst., WHOI AOPE Dept. MS 11, Woods Hole, MA 02543, tduda@whoi.edu), and
Aurelien Ponte (Laboratoire de Physique des Oceans, IFREMER-CNRS-IRD-UBO, Plouzane, France)
Internal gravity waves of tidal frequency are generated as the ocean tides push water upward onto the continental shelf. Such waves
also arrive at the continental slope from deep water and are heavily modified by the change in water depth. The wave generation and
wave shoaling effects have an additional level of complexity where a canyon is sliced into the continental slope. Recently, steps have
been taken to simulate internal tides in canyons, to understand the physical processes of internal tides in canyons, and also to compute
the ramifications on sound propagation in and near the canyons. Internal tides generated in canyons can exhibit directionality, with the
directionality being consistent with an interesting multiple-scattering effect. The directionality imparts a pattern to the sound-speed
anomaly field affecting propagation. The directionality also means that short nonlinear internal waves, which have specific strong effects
on sound, can have interesting patterns near the canyons. In addition to the directionality of internal tides radiated from canyons, the internal tide energy within the canyons can be patchy and may unevenly affect sound.
9:05
5aUW4. An overview of internal wave observations and theory associated with canyons and slopes. John A. Colosi (Dept. of Oceanogr., Naval Postgrad. School, 833 Dyer Rd., Monterey, CA 93943, jacolosi@nps.edu)
Topographic environments such as canyons and slopes are known to be regions of complex internal-wave behavior associated with
wave generation, propagation, and dissipation. Much of this anomalous behavior stems from the kinematic constraint that internal waves
must maintain their angle of propagation with respect to the horizontal even after interaction with a sloping boundary. In canyons or on
slopes, waves propagating in from deep water or generated locally (mostly by tidal flows) either reflect back out to sea or intensify in
energy density as they propagate up slope. In particular, wave intensification can lead to nonlinear phenomena including steepening,
breaking, and dissipation. This talk will provide an overview of internal wave observations, modeling, and theory in canyons and on
slopes with a particular emphasis on acoustically relevant aspects of the wave field.
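The kinematic constraint above is usually quantified through the internal-wave characteristic (ray) slope, s = sqrt((omega^2 - f^2)/(N^2 - omega^2)): bottom slopes steeper than s are supercritical (reflective), gentler ones subcritical (transmissive). This standard relation is not spelled out in the abstract, and the latitude and stratification values below are illustrative assumptions.

```python
import numpy as np

OMEGA_EARTH = 7.2921e-5              # Earth's rotation rate (rad/s)
M2 = 2 * np.pi / (12.4206 * 3600.0)  # semidiurnal M2 tidal frequency (rad/s)

def characteristic_slope(omega, latitude_deg, n_buoyancy):
    """Internal-wave ray slope sqrt((w^2 - f^2) / (N^2 - w^2))."""
    f = 2 * OMEGA_EARTH * np.sin(np.radians(latitude_deg))
    return np.sqrt((omega ** 2 - f ** 2) / (n_buoyancy ** 2 - omega ** 2))

lat = 36.0  # latitude (deg), illustrative
N = 5e-3    # buoyancy frequency (rad/s), roughly 3 cycles/hour
s = characteristic_slope(M2, lat, N)
for bottom_slope in (0.005, 0.02, 0.1):
    regime = "supercritical (reflects)" if bottom_slope > s else "subcritical (transmits)"
    print(f"bottom slope {bottom_slope:5.3f} vs ray slope {s:.3f}: {regime}")
```

Near-critical slopes, where the bottom slope matches s, are where the intensification and breaking described in the talk are strongest.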
9:25
5aUW5. Fiery ice from the sea: Marine gas hydrates. Ross Chapman (Earth and Ocean Sci., Univ. of Victoria, 3800 Finnerty Rd.,
Victoria, BC V8P5C2, Canada, chapman@uvic.ca)
Marine gas hydrates are cage-like structures of water containing methane or some higher hydrocarbons that are stable under conditions of high pressures and low temperatures. The hydrate structures are formed in sediments of continental margins and are found
worldwide. The stability zone extends to about 200 m beneath the sea floor, and hydrates exist in several different forms within the
zone, from massive ice-like features at cold seeps on the sea floor to finely distributed deposits in sediment pores over extensive areas.
The base of the stability zone is characterized by a strong acoustic impedance change from high velocity hydrated sediments above to
low velocity gas below. This acoustic feature generates a strong signal in seismic surveys called the Bottom Simulating Reflector, and it
is widely used as an indicator of the presence of hydrates. This paper reviews the current knowledge of hydrate systems from research
carried out on the Cascadia Margin off the west coast of Vancouver Island, and in the Gulf of Mexico. The hydrate distributions are different in each of these areas, leading to different effects in acoustic reflectivity.
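The impedance change at the base of the stability zone can be illustrated with the normal-incidence reflection coefficient R = (Z2 - Z1)/(Z2 + Z1). The sediment densities and velocities below are illustrative assumptions, not values from the talk.

```python
def reflection_coefficient(rho1, c1, rho2, c2):
    """Normal-incidence pressure reflection coefficient between two media."""
    z1, z2 = rho1 * c1, rho2 * c2
    return (z2 - z1) / (z2 + z1)

# Illustrative values (kg/m^3, m/s): hydrate-cemented sediment over gassy sediment.
hydrate_layer = (1900.0, 1900.0)
gas_layer = (1800.0, 1300.0)
r_bsr = reflection_coefficient(*hydrate_layer, *gas_layer)
print(f"BSR reflection coefficient: {r_bsr:.3f}")  # prints about -0.213
# The negative sign reflects the velocity drop across the interface, giving
# the polarity reversal that makes the Bottom Simulating Reflector distinctive.
```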
9:45
5aUW6. South China Sea upper-slope sand dunes acoustics experiment. Ching-Sang Chiu, Ben Reeder (Dept. of Oceanogr., Naval
Postgrad. School, 833 Dyer Rd., Rm. 328, Monterey, CA 93943-5193, chiu@nps.edu), Linus Chiu (National Sun Yat-sen Univ., Kaohsiung, Taiwan), Yiing Jang Yang, and Chifang Chen (National Taiwan Univ., Taipei, Taiwan)
Very large subaqueous sand dunes were discovered on the upper continental slope of the Northeastern South China Sea. The spatial
distribution and scales of these large sand dunes were mapped by two multibeam echo sounding (MBES) surveys, one in 2012 and the
other in 2013. These two surveys represented two pilot cruises as part of a multiyear, US-Taiwan collaborative field study designed to
characterize these sand dunes, the associated physical processes and the associated acoustic scattering physics. The main experiment
will be carried out in 2014. The combination of MBES, coring, and acoustic transmission data obtained from the two pilot cruises has
provided vital initial knowledge of (1) the spatial and temporal scales of the sand dunes from objective analysis, (2) the geoacoustic properties of the dunes based on forward modeling to match the measured levels, and (3) the anisotropy and translational variability
of the transmission loss based on a signal energy analysis of the repeated 1–2 kHz and 4–6 kHz FM signals transmitted by a calibrated
sound source towed along two circular tracks, each surrounding a receiver. The results from the pilot cruises are presented and discussed.
[The research is sponsored by the US ONR and the Taiwan NSC.]
10:05–10:20 Break
10:20
5aUW7. Three dimensional underwater acoustic modeling on continental slopes and submarine canyons. Ying-Tsong Lin, David
Barclay, Timothy F. Duda, and Weifeng Gordon Zhang (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Bigelow 213,
MS#11, WHOI, Woods Hole, MA 02543, ytlin@whoi.edu)
Underwater sound propagation on slopes and canyons is influenced jointly and strongly by the complexity of topographic variability
and ocean dynamics. Some integrated ocean and acoustic models have been developed and implemented to investigate such joint acoustic effects. In this talk, an integrated numerical model employing a time-stepping three-dimensional (3D) parabolic-equation (PE) acoustic modeling method and the Regional Ocean Modeling System (ROMS) is presented. Numerical examples of sound propagation and
ambient noise in the Mid-Atlantic Bight area under realistic environmental conditions are demonstrated. The sound propagation model reveals the focusing of sound by the concave canyon seafloor and the different levels of temporal variability of focused and unfocused sound. The ambient noise model is constructed for surface wind-generated noise and shows the azimuthal dependency of the
noise field and its spatial coherence structure. Lastly, a simple sonar performance prediction is made to investigate the variability of the
probability of detection in these complex underwater environments. [Work supported by the ONR.]
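A simple sonar performance prediction of the kind mentioned can be built from the passive sonar equation, treating the signal excess as a Gaussian random variable. All levels and the fluctuation model below are illustrative assumptions, not the study's configuration.

```python
import math

def probability_of_detection(sl, tl, nl, di, dt, sigma=6.0):
    """Pd from the signal excess SE = SL - TL - NL + DI - DT (dB), assuming
    the excess fluctuates as a Gaussian with standard deviation sigma (dB)."""
    se = sl - tl - nl + di - dt
    return 0.5 * (1.0 + math.erf(se / (sigma * math.sqrt(2.0))))

# Illustrative levels (dB): fixed source/noise terms, varying transmission loss.
for tl in (60.0, 70.0, 80.0):
    pd = probability_of_detection(sl=180.0, tl=tl, nl=90.0, di=10.0, dt=15.0)
    print(f"TL = {tl:5.1f} dB -> Pd = {pd:.2f}")
```

In the modeled canyon environments, the spatial variability of TL translates directly into the Pd variability the talk describes.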
10:40
5aUW8. Three-dimensional effects in sound propagation in the area of a coastal slope. Boris Katsnelson (Marine GeoSci., Univ. of Haifa, 1 Universitetskaya sq, Voronezh 394006, Russian Federation, katz@phys.vsu.ru) and Andrey Malykhin (Phys., Voronezh Univ., Voronezh, Russian Federation)

The coastal slope (wedge) in the ocean is a well-known "canonical" problem for analyzing manifestations of horizontal refraction (3-D effects) in a shelf zone. In this paper, the following effects are reviewed: (1) spatial variability of the sound field in the area, including zones of single-path and multipath propagation, shadow zones, and caustics in the horizontal plane, their dependence on frequency, and the influence of bottom parameters; (2) the interference structure of the sound field in the horizontal plane as a function of mode number and frequency; (3) the distribution of the sound field in the vicinity of a curvilinear coastline (e.g., a gulf, bay, or peninsula), including shadow zones, multipath areas, and whispering-gallery waveguides in the horizontal plane; (4) temporal variability of signals due to the frequency dependence of horizontal refraction, and in turn pulse compression/decompression and time reversal in multipath areas; and (5) time-frequency diagrams. These and other effects can change the properties of bottom and surface reverberation, scattering, the noise field distribution, and attenuation in the area of a coastal wedge. The corresponding estimates are presented. [Work was supported by BSF Grant 2010471 and RFBR-NSFC Grant 14-05-91180.]
Contributed Papers

11:00

5aUW9. Analytic prediction of acoustic coherence time scales in continental-shelf environments with random internal waves. Zheng Gong, Tianrun Chen (Mech. Eng., Massachusetts Inst. of Technol., 5-435, 77 Massachusetts Ave., Cambridge, MA 02139, zgong@mit.edu), Purnima Ratilal (Elec. and Comput. Eng., Northeastern Univ., Boston, MA), and Nicholas C. Makris (Mech. Eng., Massachusetts Inst. of Technol., Cambridge, MA)

An analytical model derived from normal mode theory for the accumulated effects of range-dependent multiple forward scattering is applied to estimate the temporal coherence of the acoustic field forward propagated through a continental-shelf waveguide containing random three-dimensional internal waves. The modeled coherence time scale of narrowband low-frequency acoustic field fluctuations is shown to decay with a power law of range to the 1/2 beyond roughly 1 km and to decrease with increasing internal wave energy, consistent with measured acoustic coherence time scales. The model should provide a useful prediction of the acoustic coherence time scale as a function of internal wave energy in continental-shelf environments. The acoustic coherence time scale is an important parameter in remote sensing applications because it determines (i) the time window within which standard coherent processing such as matched filtering may be conducted, and (ii) the number of statistically independent fluctuations in a given measurement period, which determines the variance reduction possible by stationary averaging.

11:15

5aUW10. Modeling the three-dimensional environment and broadband acoustic propagation in an Arctic shelf-basin region. Mohsen Badiey, Andreas Muenchow, Lin Wan (College of Earth, Ocean, and Environment, Univ. of Delaware, 261 S. College Ave., Robinson Hall, Newark, DE 19716, badiey@udel.edu), Megan S. Ballard (Appl. Res. Labs., Univ. of Texas, Austin, TX), David P. Knobles, and Jason D. Sagers (Appl. Res. Labs., Univ. of Texas, Austin, TX)

Rapid climate change over the last decade has created a renewed interest in the nature of underwater sound propagation in the Arctic Ocean. Changes in the oceanography and surface boundary conditions are expected to cause measurable changes in the propagation and scattering of low frequency sound. Recent measurements of high-resolution three-dimensional (3-D) sound speed structure in a 50 x 50 km region in an open-water shelf-basin region of the Beaufort Sea offer a unique and rare opportunity to study the effects of a complex oceanography on the acoustic field as it propagates from the deep basin onto the continental shelf. The raw oceanography data were analyzed and processed to create a 3-D sound speed field for the water column in the basin-slope-shelf area. Recent advances in both 2-D and 3-D acoustic modeling capability allow one to study the effects of the range- and azimuth-dependent water column layers on the frequency-dependent acoustic modal structure. Of particular interest is the nature of the 3-D and mode-coupling effects on the frequency response induced by the oceanography. The results will likely be useful in designing acoustic experiments with serious logistical constraints in the rapidly changing Arctic Ocean.
11:30

5aUW11. Underwater jet noise simulation based on a Large Eddy Simulation/Lighthill hybrid method. GuoQing Liu (School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan 430074, China, liugq_2010@163.com), Tao Zhang, YongOu Zhang (School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, Hubei Province, China), Huajiang Ouyang (School of Eng., Univ. of Liverpool, Liverpool, United Kingdom), and Xu Li (School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, China)

In recent years, extensive research on numerical methods for aeroacoustic noise simulation has been carried out, but research on hydrodynamic noise has developed slowly. In this paper, a hybrid method combining Large Eddy Simulation (LES) and Lighthill's acoustic analogy is established to compute hydrodynamic noise, building on a preliminary study of aerodynamic noise prediction at low Mach number. First, the model of a three-dimensional underwater jet is determined from an experimental model, and the CFD mesh and the acoustic mesh are both prepared. Then, the flow field of the underwater jet is simulated with LES, and the characteristics of the turbulent flow are analyzed through the pressure difference and the uniformity coefficient of the velocity. After that, the noise of the underwater jet is simulated using Lighthill's acoustic analogy. Finally, the solutions obtained by the hybrid method are compared with experimental data available in the open literature; the sound pressure level at the observation point agrees well with the experimental data. The LES/Lighthill hybrid method is thus able to compute underwater jet noise and hydrodynamic noise.

11:45

5aUW12. Formation of sparse aperture antenna arrays based on Costas sequences. Igor I. Anikin (Concern CSRI Elektropribor, JSC, 30, Malaya Posadskaya Ul., St. Petersburg 197045, Russian Federation, anikin1952@bk.ru)

To obtain high spatial resolution, sonar, ultrasonic imaging, radar, seismic, and radio astronomy systems use active antenna arrays that contain a large number of elements. To reduce the cost of such arrays, sparse apertures are used. In this approach, the antenna array is partitioned into several subarrays. Each subarray has the geometric size of an equidistant placement of Nc x Nc elements but is filled with only Nc elements, arranged according to a Costas sequence of order Nc; each filled subarray has its own Costas sequence. As a result, the number of elements in the array is reduced by a factor of Nc. The beam pattern in the principal sections is close to that of the plane equidistant antenna array, and the directivity factor is almost independent of the frequency band. At the upper frequency, the directivity factor is reduced by (πNc)/2 times compared with the plane equidistant antenna array. Using a decaying amplitude distribution, the side lobe level in the principal planes can be reduced. Thus, by setting the order of the Costas sequence, one can in each case optimize the degree of reduction of the number of elements in the antenna array for a predetermined directivity factor.

12:00

5aUW13. Ultra low frequency electromagnetic underwater sound source. Wei Lu and Yu Lan (College of Underwater Acoust. Eng., Harbin Eng. Univ., 145 Nantong St., Nangang District, Harbin 150001, China, luwei@hrbeu.edu.cn)

A detailed analysis is presented of an ultra low frequency sound source that is smaller and lighter than conventional piezoelectric ultra low frequency sources. The source is a single vibrating piston driven on the electromagnetic principle. The radiation characteristics of the source are studied with a single-piston radiation model at low frequency. Dynamic characteristics such as the resonant frequency and the vibration displacement of the source are studied by an analytic method and by the finite element method. In the analytic method, the electricity-magnetism, magnetism-force, and force-vibration conversion models of the source are established as differential equations in the different coupled physical fields, and the dynamic characteristics based on these conversion models are simulated by combining and solving the differential equations in MATLAB/SIMULINK. In the finite element method, the dynamic characteristics of the source are solved using the transient solver of the electromagnetic finite element software Ansoft. By optimizing the dynamic characteristics through adjustment of the magnetic circuit, drive coil, and elastic component parameters, the resonant frequency and radiated sound power of the source are determined. A prototype sound source, with a calibrated source level of 184 dB at 73 Hz, was fabricated as a proof of concept.
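The Costas sequences used for the subarray layouts in 5aUW12 are permutations whose difference triangle contains no repeats, which is what keeps the sparse array's ambiguity pattern flat. The sketch below generates one by the Welch construction (a standard method; the prime and primitive root are illustrative choices) and verifies the Costas property.

```python
def welch_costas(p, g):
    """Welch construction: a Costas permutation of {1..p-1} for a prime p
    and a primitive root g modulo p (p and g are illustrative choices)."""
    return [pow(g, i, p) for i in range(1, p)]

def is_costas(seq):
    """Check that for each row separation d, the column differences
    seq[i+d] - seq[i] are all distinct (the Costas property)."""
    n = len(seq)
    for d in range(1, n):
        diffs = [seq[i + d] - seq[i] for i in range(n - d)]
        if len(diffs) != len(set(diffs)):
            return False
    return True

seq = welch_costas(11, 2)  # order-10 Costas sequence; 2 is a primitive root of 11
print("Costas sequence:", seq)
print("Costas property holds:", is_costas(seq))
```

In the array application, element i of a subarray would be placed in row i, column seq[i] of the Nc x Nc grid.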
12:15
5aUW14. Simplex underwater acoustic communications using passive
time reversal. Lin Sun, Haisen Li, Bo Zou, and Ruo Li (College of Underwater Acoust. Eng., Harbin Eng. Univ., Harbin Eng. University, No.145
Nantong St., Nangang District, Harbin City, Heilongjiang Province., Harbin
150001, China, sunlinhrb@sina.com)
The spatial-temporal compression achieved through a simple time-reversal (TR) process can reduce inter-symbol interference and increase signal strength. Active TR requires two-way propagation, so it cannot be used in simplex underwater acoustic communications. Based on the one-way propagation property of passive TR, a simplex underwater acoustic communication method using passive TR is proposed. The method is considered in two scenarios: uplink transmission from a single send-only element to an array, and downlink transmission from an array to a single receive-only element. The principle of the proposed method is analyzed theoretically, and its performance is verified experimentally. Results demonstrate that the passive TR process can improve the output signal-to-noise ratio and decrease the bit error rate, so the performance of the proposed method is superior to that of a simplex acoustic communication method without passive TR.
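The pulse compression that passive TR provides can be demonstrated in one dimension: correlating received data with the received probe signal is equivalent to filtering with the time-reversed channel estimate, so the effective response becomes the channel autocorrelation, peaked at zero lag. The channel taps and probe below are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multipath channel: three delayed, scaled arrivals.
channel = np.zeros(60)
channel[[0, 18, 41]] = [1.0, 0.6, -0.4]

probe = rng.normal(size=256)                  # known probe waveform
received_probe = np.convolve(probe, channel)  # probe after multipath spreading

# Passive TR: correlating the received data with the received probe acts as
# a filter matched to the channel; applied to the probe reception itself, the
# effective response is its autocorrelation, compressed toward zero lag.
effective = np.correlate(received_probe, received_probe, mode="full")
effective /= np.abs(effective).max()

peak = int(np.argmax(np.abs(effective)))
sidelobe = float(np.sort(np.abs(effective))[-2])
print(f"peak at lag {peak - len(effective) // 2}, largest sidelobe {sidelobe:.2f}")
```

The dominant zero-lag peak, with multipath energy pushed into smaller sidelobes, is the compression that reduces inter-symbol interference.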
This document is frequently updated; the current version can be found online at the Internet site:
<http://scitation.aip.org/content/asa/journal/jasa/info/authors>.
Information for contributors to the
Journal of the Acoustical Society of America (JASA)
Editorial Staff a)
Journal of the Acoustical Society of America, Acoustical Society of America, 1305 Walt Whitman Road,
Suite 300, Melville, NY 11747-4300
The procedures for submitting manuscripts to the Journal of the Acoustical Society of America are
described. The text manuscript, the individual figures, and an optional cover letter are each uploaded
as separate files to the Journal’s Manuscript Submission and Peer Review System. The required
format for the text manuscript is intended to ensure that it can be easily interpreted and copy-edited during
the production editing process. Various detailed policies and rules that will produce the desired
format are described, and a general guide to the preferred style for the writing of papers for the
Journal is given. Criteria used by the editors in deciding whether or not a given paper should be
published are summarized.
PACS numbers: 43.05.Gv
TABLE OF CONTENTS
I. INTRODUCTION
II. ONLINE HANDLING OF MANUSCRIPTS
A. Registration
B. Overview of the editorial process
C. Preparation for online submission
D. Steps in online submission
E. Quality check by editorial office
III. PUBLICATION CHARGES
A. Mandatory charges
B. Optional charges
C. Payment of page charges—Rightslink
IV. FORMAT REQUIREMENTS FOR MANUSCRIPTS
A. Overview
B. Keyboarding instructions
C. Order of pages
D. Title page of manuscript
E. Abstract page
F. Section headings
V. STYLE REQUIREMENTS
A. Citations and footnotes
B. General requirements for references
C. Examples of reference formats
1. Textual footnote style
2. Alphabetical bibliographic list style
D. Figure captions
E. Acknowledgments
F. Mathematical equations
G. Phonetic symbols
H. Figures
I. Tables
VI. THE COVER LETTER
VII. EXPLANATIONS AND CATEGORIES
A. Subject classification, ASA-PACS
B. Suggestions for Associate Editors
C. Types of manuscripts
1. Regular research articles
2. Education in acoustics articles
3. Letters to the editor
4. Errata
5. Comments on published papers
6. Replies to comments
7. Forum letters
8. Tutorial and review papers
9. Book reviews
VIII. FACTORS RELEVANT TO PUBLICATION DECISIONS
A. Peer review system
B. Selection criteria
C. Scope of the Journal
IX. POLICIES REGARDING PRIOR PUBLICATION
A. Speculative papers
B. Multiple submissions
X. SUGGESTIONS REGARDING CONTENT
A. Introductory section
B. Main body of text
C. Concluding section
D. Appendixes
E. Selection of references
XI. SUGGESTIONS REGARDING STYLE
A. Quality of writing and word usage
B. Grammatical pitfalls
C. Active voice and personal pronouns
D. Acronyms
E. Computer programs
F. Code words
REFERENCES
I. INTRODUCTION
The present document is intended to serve jointly as (i) a
set of directions that authors should follow when submitting
articles to the Journal of the Acoustical Society of America
and as (ii) a style manual that describes those stylistic
a) E-mail: jasa@aip.org
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
features that are desired for the submitted manuscript. This
document extracts many of the style suggestions found in the
AIP Style Manual,1 which is available online at the internet
site <http://www.aip.org/pubservs/style/4thed/toc.html>. The
AIP Style Manual, although now somewhat dated and not
specifically directed toward publication in the Journal of the
Acoustical Society of America (JASA), is a substantially more
comprehensive document, and authors must make use of it
also when preparing manuscripts. If conflicting instructions
are found in the two documents, those given here take precedence. (Authors should also look at recent issues of the
Journal for examples of how specific style issues are handled.)
Conscientious consideration of the instructions and advice
given in the two documents should considerably increase
the likelihood that a submitted manuscript will be rapidly
processed and accepted for publication.
II. ONLINE HANDLING OF MANUSCRIPTS
All new manuscripts intended for possible publication
in the Journal of the Acoustical Society of America should
be submitted by an online procedure. The steps involved
in the processing of manuscripts that lead from the initial
submission through the peer review process to the transmittal
of an accepted manuscript to the production editing office
are handled by a computerized system referred to here as
the Peer X-Press (PXP) system. The Acoustical Society
of America contracts with AIP Publishing LLC for the use
of this system. There is one implementation that is used
for most of the material that is submitted to the Journal of
the Acoustical Society of America (JASA) and a separate
implementation for the special section JASA Express Letters
(JASA-EL) of the Journal.
A. Registration

Everyone involved in the handling of manuscripts in the Journal's editorial process must first register with the Journal's implementation of the PXP system, and the undertaking of separate actions, such as the submission of a manuscript, requires that one first log in to the system at http://jasa.peerx-press.org/cgi-bin/main.plex.

If you have never logged into the system, you will need to get a user name and password. Many ASA members are already in the database, so if you are a member you may in principle already have a user name and password, but you will have to find out what they are. On the login page, click on the item "Unknown/Forgotten Password." On the new page that comes up, give your first name and last name. After you have filled in this information, click on "mailit." You will then get an e-mail message with the subject line "FORGOTTEN PASSWORD." The system will give you a new password even if you have used the system before. After you get this new password, you can change it to something easy to remember after you log in.

Once you have your user name and password, go to the log-in page again and give this information when you log in. You will first be asked to change your password. After you do this, a "task page" will appear. At the bottom of the page there will be an item Modify Profile/Password. Click on this. Then a page will appear with the heading Will you please take a minute to update the profile?

If you are satisfied with your profile and password, go to the top of the Task page and click on the item Submit Manuscript that appears under Author Tasks. Then you will see a page titled Manuscript Submission Instructions. Read what is there and then click continue at the bottom of the page.

B. Overview of the editorial process

(1) An author denoted as the corresponding author submits a manuscript for publication in the Journal.
(2) One of the Journal's Associate Editors is recruited to handle the peer-review process for the manuscript.
(3) The Associate Editor recruits reviewers for the manuscript via the online system.
(4) The reviewers critique the manuscript and submit their comments online via the Peer X-Press system.
(5) The Associate Editor makes a decision regarding the manuscript and then composes online an appropriate decision letter, which may include segments of the reviews and may include attachments.
(6) The Journal's staff transmits the letter composed by the Associate Editor to the corresponding author. This letter describes the decision and further actions that can be taken.

If revisions to the manuscript are invited, the author may resubmit a revised manuscript, and the process cycle is repeated. To submit a revision, authors should use the link provided in the decision message.
C. Preparation for online submission
Before one begins the process of submitting a manuscript
online, one should first read the document Ethical Principles
of the Acoustical Society of America for Research Involving
Human and Non-Human Animals in Research and Publishing
and Presentations which is reached from the site <http://
scitation.aip.org/content/asa/journal/jasa/info/authors>. During
the submission, you will be asked if your research conformed
to the stated ethical principles and if your submission of
the manuscript is in accord with the ethical principles that
the Acoustical Society has set for its journals. If you cannot
confirm that your manuscript and the research reported are in
accord with these principles, then you should not submit your
manuscript.
Another document that you should read first is the Transfer of Copyright Agreement, which is downloadable from the same site. When you submit your manuscript online, you will be asked to certify that you and your coauthors agree to the terms set forth in that document. Its wording has been arrived at with extensive legal advice and after extensive discussion within the various relevant administrative committees of the Acoustical Society of America. It is regarded
as a very liberal document in terms of the rights that are
allowed to the authors. One should also note the clause: The
author(s) agree that, insofar as they are permitted to transfer
copyright, all copies of the article or abstract shall include a
copyright notice in the ASA’s name. (The word “permitted”
means permitted by law at the time of the submission.) The
terms of the copyright agreement are non-negotiable. The
Acoustical Society does not have the resources or legal assistance to negotiate for exceptions for individual papers, so
please do not ask for such special considerations. Please read
the document carefully and decide whether you can provide
an electronic signature (clicking on an appropriate check box)
to this agreement. If you do not believe that you can in good
conscience give such an electronic signature, then you should
not submit your manuscript.
Given that one has met the ethical criteria and agreed to
the terms of the copyright transfer agreement, and that one
has decided to submit a manuscript, one should first gather
together the various items of information that will be requested during the process, and also gather together various
files that one will have to upload.
Information that will be entered into the PeerX-Press
submission form and files to be uploaded include:
(1) Data for each of the authors:
    (a) First name, middle initial, and last name
    (b) E-mail address
    (c) Work telephone number
    (d) Work fax number
    (e) Postal address (required for the corresponding author, otherwise optional)
(2) Title and running title of the paper. The running title is used as the footline on each page of the article. (The title is limited to 17 words, and the running title is limited to six words and up to 50 characters and spaces; neither may include any acronyms or any words explicitly touting novelty.)
(3) Abstract of the paper. (This must be in the form of a single paragraph and is limited to 200 words for regular articles and to 100 words for letters to the editor. Authors would ordinarily do an electronic pasting from a text file of their manuscript.)
(4) Principal ASA-PACS number that characterizes the subject matter of the paper and that will be used to determine the section of the Journal in which the published paper will be placed. Note that if the PACS number you list first is too generic, e.g., 43.20, that may result in a delay in processing your paper.
(5) A short prioritized list of Associate Editors suggested for the handling of the manuscript.
(6) Contact information (name, e-mail address, and institution) of suggested reviewers (if any), and/or names of reviewers to exclude and reasons why.
(7) Cover letter file (optional, with some exceptions). Material that would ordinarily have been in the cover letter is now supplied by answering online questions and by filling out the online form. However, if an author needs to supply additional information that should be brought to the attention of the editor(s) and/or reviewer(s), a cover letter should be written and put into the form of an electronic file.
(8) Properly prepared manuscript/article file in LaTeX, Word, or WordPerfect format. (The requirements for a properly prepared manuscript are given further below.) It is also possible to submit your file in PDF, but this is not desirable since the entire manuscript must then be retyped. It must be a single stand-alone file. If the author wishes to submit a LaTeX file, the references should be included in the file, not in a separate BibTeX file. Authors should take care to ensure that the submitted manuscript/article file is of reasonable length, no more than 2 MB.
(9) Properly prepared figure files in TIFF, PS, JPEG, or EPS format (see also Section V. H); one file for each cited figure number. The uploading of figures in PDF format is not allowed. (Captions should be omitted from the figure files; they appear instead as a list in the manuscript itself.) The figures should not have the figure numbers included on the figures in the files as such, and it is the responsibility of the corresponding author to see that the files are uploaded in proper order. Authors may upload figures in a zip file (figure files must be numbered in order using 1, 2, etc.; if figures have parts, they must be numbered 1a, 1b, 1c, etc.). [In order to maintain online color as a free service to authors, the Journal cannot accept multiple versions of the same file. Authors may not submit two versions of the same illustration (e.g., one for color and one for black & white). When preparing illustrations that will appear in color in the online Journal and in black & white in the printed Journal, authors must ensure that (i) the colors chosen will reproduce well when printed in black & white and (ii) the descriptions of figures in text and captions will be sufficiently clear for both print and online versions. For example, captions should contain the statement "(Color online)." If one desires color in both versions, these considerations are irrelevant, although the authors must guarantee that the mandatory additional publication charges will be paid.]
(10) Supplemental files (if any) that might help the reviewers in making their reviews. If the reading of the paper requires prior reading of another paper that has been accepted for publication but has not yet appeared in print, then a PDF file of that manuscript should be included as a supplementary file. Also, if the work draws heavily on previously published material which, while available to the general public, would be time-consuming or possibly expensive for the reviewers to obtain, then PDF files of such relevant material should be included.
(11) Archival supplemental materials to be published with the
manuscript in AIP Publishing’s Supplemental Materials
electronic depository.
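The figure-file naming rule in item (9) can be checked mechanically before uploading. The following sketch is an illustrative helper, not a Journal-supplied tool; the set of accepted file extensions is an assumption based on the formats named above:

```python
import re

# Assumed acceptable extensions, per the formats named in item (9).
ALLOWED_EXTENSIONS = {".tif", ".tiff", ".ps", ".eps", ".jpg", ".jpeg"}

def check_figure_files(filenames):
    """Flag figure file names that do not follow the Journal's numbering
    convention: plain numbers (1, 2, ...) or numbered parts (1a, 1b, 2a, ...),
    in an allowed format (PDF figures are not accepted)."""
    problems = []
    pattern = re.compile(r"^(\d+)([a-z]?)\.(\w+)$")
    for name in filenames:
        m = pattern.match(name.lower())
        if not m:
            problems.append(f"{name}: does not match '1.tif' or '1a.tif' style")
            continue
        ext = "." + m.group(3)
        if ext == ".pdf":
            problems.append(f"{name}: PDF figures are not allowed")
        elif ext not in ALLOWED_EXTENSIONS:
            problems.append(f"{name}: unexpected format {ext}")
    return problems
```

An empty return value means the file names conform; otherwise each entry describes one violation.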
In regard to the decision as to what formats one should use for the manuscript and the figures, a principal consideration is that the published manuscript is considerably more likely to be to one's satisfaction if AIP Publishing, during the production process, can make full or partial use of the files you submit. There are conversion programs, for example, that will convert LaTeX and MS Word files to the typesetting system that AIP Publishing uses. If your manuscript is not in either of these formats, then it will be completely retyped. If the figures are submitted in EPS, PS, JPEG, or TIFF format, then they will probably be used directly, at least in part. The uploading of figures in PDF format is not allowed.
D. Steps in online submission
After logging in, one is brought to the Peer X-Press Task
Page and can select the option of submitting a new manuscript. The resulting process leads the corresponding author
through a sequence of screens.
The first screen will display a series of tabs including:
Files, Manuscript Information, Confirm Manuscript, and
Submit. Clicking on these tabs displays the tasks that must be
completed for each step in the submission. Red arrows denote
steps that have not been completed. Green arrows are displayed
for each tab where the step has been successfully completed.
After submission, all of the individual files, text and
tables, plus figures, that make up the full paper will be
merged into a single PDF file. One reason for having such
a file is that it will generally require less computer memory
space. Another is that files in this format are easily read with
any computer system. However, upon acceptance for publication, it is the originally submitted set of files that will be sent to the Production Editing office for final processing.
E. Quality check by editorial office

Upon receiving system notification of a submission, staff members in the Editorial Office check that the overall submission is complete and that the files are properly prepared and suitable for making them available to the Associate Editors and the reviewers. They also check on the estimated length of the manuscript in the event that the author indicates that page charges will not be paid. If all is in order, the Manuscript Coordinator initiates the process, using the ASA-PACS numbers and suggested Associate Editor list supplied by the author, to recruit an Associate Editor who is willing to handle the manuscript. At this time the author also receives a "confirmation of receipt" e-mail message. If the staff members deem that there are submission defects that should be addressed, then the author receives a "quality check" e-mail message. If there are only a small number of defects, the e-mail message may give an explicit description of what is needed. In some cases, when the defects are very numerous and it is apparent that the author(s) are not aware that the Journal has a set of format requirements, the e-mail message may simply ask the authors to read the instructions (i.e., the present document) and to make a reasonable attempt to follow them.

III. PUBLICATION CHARGES

A. Mandatory charges

Papers of longer length, or with color figures desired for the print version of the Journal, will not be published unless it is first agreed that certain charges will be paid. If it is evident that there is a strong chance that a paper's published length will exceed 12 pages, the paper will not be processed unless the authors guarantee that the charges will be paid. If the paper's published length exceeds 12 pages, there is a mandatory charge of $80 per page for the entire article. (The mandatory charge for a 13-page article, for example, would be $1,040, although there would be no mandatory charge if the length were 12 pages.)

To estimate the extent of the page charges, count 3 manuscript pages (double-spaced lines, with wide margins) as equivalent to one printed page, and count 4 figures or tables as equivalent to one printed page. If this number exceeds 12 and your institution and/or sponsor will not pay the page charges, please shorten your paper before submitting it.

Color figures can be included in the online version of the Journal at no extra charge, provided that they appear suitably as black and white figures in the print version. The charge for inclusion of color figures in the print version of the Journal is $325 per figure file. If figures that contain parts are submitted in separate files for each part, the $325 charge applies to each file.

If an author's institution or research sponsor is unwilling to pay such charges, the author should make sure that all of the figures in the paper are suitable for black and white printing and that the estimated length is manifestly such that it will not lead to a printed paper that exceeds 12 pages.

B. Optional charges

To encourage a large circulation of the Journal and to allow the inclusion of a large number of selected research articles within its volumes, the Journal seeks partial subsidization from the authors and their institutions. Ordinarily, it is the institutions and/or the sponsors of the research that undertake the subsidization. Individual authors must ask their institutions or whatever agencies sponsor their research to pay a page charge of $80 per printed page to help defray the publication costs of the Journal. (This is roughly 1/3 of the actual cost per page for the publication of the Journal.) The institutions and the sponsoring agencies have the option of declining, although a large fraction of those asked do pay them. The review and selection of manuscripts for publication proceeds without any consideration on the part of the Associate Editors as to whether such page charges will be honored. The publication decision results from consideration of the factors associated with peer review; the acceptance of the page charges is irrelevant.
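The length and charge arithmetic described in this section can be summarized in a small estimator. This is an illustrative sketch based on the rates quoted above ($80 per printed page when the article exceeds 12 pages, $325 per color figure file in print), not an official Journal calculator:

```python
import math

PAGE_CHARGE = 80          # dollars per printed page (mandatory if > 12 pages)
COLOR_PRINT_CHARGE = 325  # dollars per color figure file in the print version
FREE_PAGE_LIMIT = 12      # no mandatory page charge at or below this length

def estimate_printed_pages(manuscript_pages, figures_and_tables):
    """Estimate printed length: 3 double-spaced manuscript pages count as one
    printed page, and 4 figures or tables count as one printed page."""
    return math.ceil(manuscript_pages / 3 + figures_and_tables / 4)

def mandatory_charge(printed_pages, color_figure_files=0):
    """Mandatory charges: $80 per page for the *entire* article once the
    length exceeds 12 printed pages, plus $325 per color figure file that is
    to appear in color in the print version."""
    page_fee = PAGE_CHARGE * printed_pages if printed_pages > FREE_PAGE_LIMIT else 0
    return page_fee + COLOR_PRINT_CHARGE * color_figure_files
```

For example, a 30-page double-spaced manuscript with 8 figures estimates to 12 printed pages and therefore incurs no mandatory page charge.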
C. Payment of publication charges—Rightslink
When your page proofs are ready for your review, you
will receive an e-mail from AIP Publishing Production
Services. It will include a link to an online Rightslink site
where you can pay your voluntary or mandatory page charges,
color figure charges, or order reprints of your article. If you
are unable to remit payment online, you will find instructions
for requesting a printed invoice so that you may pay by check
or wire transfer.
IV. FORMAT REQUIREMENTS FOR MANUSCRIPTS
A. Overview
For a manuscript submitted by the online procedure to
pass the initial quality control, it is essential that it adhere
to a general set of formatting requirements. Such requirements vary from
journal to journal, so one should not assume that a manuscript appropriate for another journal’s requirements would
be satisfactory for the Journal of the Acoustical Society of
America. The reasons for the Journal’s requirements are
partly to ensure a uniform style for publications in the Journal and partly to ensure that the copy-editing process will
be maximally effective in producing a quality publication.
For the latter reason, adequate white space throughout the
manuscript is desired to allow room for editorial corrections,
which will generally be hand-written on a printed hard-copy.
While some submitted papers will need very few or no corrections, there is a sufficiently large number of accepted papers of high technical merit that need such editing to make
it desirable that all submissions are in a format that amply
allows for this.
The following is a list of some of the more important
requirements. (More detailed requirements are given in the
sections that follow.)
(1) The manuscript must be paginated, starting with the first page.
(2) The entire manuscript must be double-spaced. This includes the author addresses, the abstract, the references, and the list of figure captions. It should contain no highlighting.
(3) The title and author list are on the first page. The abstract is ordinarily on a separate page (the second page) unless there is sufficient room on the title page for it, within the constraints of ample margins, 12 pt type, double spacing, and ample white space. The introduction begins on a separate page following the page that contains the abstract.
(4) The title must be in lower case, with the only capitalized
words being the first word and proper nouns.
(5) No acronyms should be in the title or the running title
unless they are so common that they can be found in
standard dictionaries or unless they are defined in the
title.
(6) No unsupported claims for novelty or significance
should appear in the title or abstract, such as the use
of the words new, original, novel, important, and
significant.
(7) The abstract should be one paragraph and should be
limited to 200 words (100 words for Letters to the
Editor).
(8) Major section headings should be numbered by capital
roman numerals, starting with the introduction. Text of
such headings should be in capital letters.
(9) Reference citations should include the full titles and
page ranges of all cited papers.
(10) There should be no personal pronouns in the abstract.
(11) No more than one-half of the references should be to the
authors themselves.
(12) The total number of figures should not ordinarily be
more than 20 (See section V. H).
(13) Line numbers to assist reviewers in commenting
on the manuscript may be included but they are not
mandatory.
B. Keyboarding instructions
Each submitted paper, even though submitted online,
should correspond to a hard copy manuscript. The electronic
version has to be prepared so that whatever is printed out will
correspond to the following specifications:
(1) The print-out must be single sided.
(2) The print-out must be configured for standard US letter paper (8.5" by 11").
(3) The text on any given page should be confined to an area not to exceed 6.5" by 9". (One inch equals 2.54 cm.) All of the margins when printed on standard US letter paper should be at least 1".
(4) The type font must be 12 pt, and the line spacing must correspond to double spacing (approximately 1/3" or 0.85 cm per line of print). The fonts used for the text must be of a commonly used, easily readable variety such as Times, Helvetica, New York, Courier, Palatino, and Computer Modern.
(5) The authors are requested to use computers with adequate word-processing software in preparing their manuscripts. Ideally, the software should be sufficiently complete that all special symbols used in the manuscript are printed. (The list of symbols available to AIP Publishing for the publication of manuscripts includes virtually all symbols that one can find in modern scientific literature. Authors should refrain from inventing their own symbols.) It is preferred that vectors be designated by boldface symbols within a published paper rather than by arrows over the symbols.
(6) Manuscript pages must be numbered consecutively, with
the title page being page 1.
C. Order of pages
The manuscript pages must appear in the following order:
(1) Title page. (This includes the title, the list of authors,
their affiliations, with one complete affiliation for each
author appearing immediately after the author’s name,
an abbreviated title for use as a running title in the published version, and any appropriate footlines to title or
authors.)
(2) Abstract page, which may possibly be merged with the
title page if there is sufficient room. (This includes the
abstract with a separate line giving a prioritized listing
of the ASA-PACS numbers that apply to the manuscript.
The selected PACS numbers should be taken only from
the appendix concerned with acoustics of the overall
PACS listing.) Please note that the Journal requires the
abstract to be typed double spaced, just as for all of the
remainder of the manuscript.
(3) Text of the article. This must start on a new page.
(4) Acknowledgments.
(5) Appendixes (if any).
(6) Textual footnotes. (Allowed only if the paper cites references by author name and year of publication.)
(7) References. (If the paper cites references by labeling them
with numbers according to the order in which they appear,
this section will also include textual footnotes.)
(8) Tables, each on a separate page and each with a caption
that is placed above the table.
(9) Collected figure captions.
Figures should ordinarily not be included in the “Article” file. Authors do, however, have the option of including
figures embedded in the text, providing there is no ambiguity
in distinguishing figure captions from the manuscript text
proper. This is understood to be done only for the convenience of the reviewers. Such embedded figures will be ignored in the production editing process. The figures that will
be used are those that were uploaded, one by one as separate
files, during the online submission process.
D. Title page of manuscript
The title page should include on separate lines, with appropriate intervening spacing: The article title, the name(s)
of author(s), one complete affiliation for each author, and the
date on which the manuscript is uploaded to the JASA manuscript submission system.
With a distinctive space intervening, the authors must
give, on a separate line, a suggested running title of six
words or less that contains a maximum of 50 characters. The
running title will be printed at the bottom of each printed
page, other than the first, when the paper appears in the Journal. Because the printing of running titles follows an abbreviated identification of the authors, the maximum permissible
length depends critically on the number of the authors and the
lengths of their names. The running title also appears on the
front cover of the Journal as part of an abbreviated table of
contents, and it is important that it give a nontrivial indication
of the article’s content, although some vagueness is to be
expected.
Titles should briefly convey the general subject matter of
the paper and should not serve as abstracts. The upper limit
is set at 17 words. They must be written using only words
and terminology that can be found in standard unabridged
US English dictionaries or in standard scientific/technical
dictionaries, and they must contain no acronyms other than
those that can be found in such dictionaries. (If authors
believe that the inclusion of a less common acronym in the
title will help in information retrieval and/or will help some
readers to better understand the subject matter of the
paper, then that acronym should be explicitly defined in the
title.) Ideally, titles should be such that one can easily identify the principal ASA-PACS numbers for the paper, and
consequently they should contain appropriate key words.
This will enable a reader doing a computer-assisted search to
determine whether the paper has any relevance to a given
research topic. Begin the first word of the title with a capital
letter; thereafter capitalize only proper nouns. The Journal
does not allow the use of subjective words such as “original,”
“new,” “novel,” “important,” and “significant” in the title. In
general, words whose sole purpose is to tout the importance
of a work are regarded as unnecessary; words that clarify the
nature of the accomplishment are preferred.
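The stated limits (17 words for the title; six words and 50 characters, counting spaces, for the running title; no subjective words such as "new" or "novel") lend themselves to a quick mechanical check. The sketch below is illustrative only, not a Journal-supplied tool; the banned-word list is the one quoted above:

```python
# Subjective words the Journal does not allow in titles, as listed above.
BANNED_WORDS = {"new", "novel", "original", "important", "significant"}

def check_title(title, running_title):
    """Check a title and running title against the Journal's stated limits:
    title at most 17 words with no novelty-touting words; running title at
    most six words and at most 50 characters (spaces included)."""
    problems = []
    if len(title.split()) > 17:
        problems.append("title exceeds 17 words")
    if BANNED_WORDS & {w.strip(".,;:").lower() for w in title.split()}:
        problems.append("title contains a banned novelty-touting word")
    if len(running_title.split()) > 6:
        problems.append("running title exceeds six words")
    if len(running_title) > 50:
        problems.append("running title exceeds 50 characters")
    return problems
```

An empty return value means both titles conform to the stated limits.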
In the list of authors, to simplify later indexing, adopt one
form of each name to use on the title pages of all submissions
to the Journal. It is preferred that the first name be spelled
out, especially if the last name is a commonly encountered
last name. If an author normally uses the middle name instead
of the first name, then an appropriate construction would be
one such as J. John Doe. Names must be written with last
name (family name) given last. Omit titles such as Professor,
Doctor, Colonel, Ph.D., and so on.
Each author may include only one current affiliation in
the manuscript. Put the author’s name above the institutional
affiliation. When there is more than one author with the
same institutional affiliation, put all such names above the
stating of that affiliation. (See recent issues of the Journal for
examples.)
In the stating of affiliations, give sufficient (but as briefly
as possible) information so that each author may be contacted
by mail by interested readers; e-mail addresses are optional.
Do not give websites, telephone numbers, or FAX numbers.
Names of states and countries should be written out in full.
If a post office box should be indicated, append this to the
zip code (as in 02537-0339). Use no abbreviations other than
D.C. (for District of Columbia). If the address is in the United
States, omit the country name.
The preferred order of listing of authors is in accord
with the extent of their contributions to the research and
to the actual preparation of the manuscript. (Thus, the last
listed author is presumed to be the person who has done the
least.)
The stated affiliation of any given author should be that
of the institution that employed the author at the time the
work was done. In the event an author was employed simultaneously by several institutions, the stated affiliation should
be that through which the financial support for the research
was channeled. If the current (at the time of publication)
affiliation is different, then that should be stated in a footline.
If an author is deceased then that should be stated in a footline. (Footlines are discussed further below.)
There is no upper limit to the number of authors of any
given paper. If the number becomes so large that the appearance of the paper when in print could look excessively awkward, the authors will be given the option of not explicitly
printing the author affiliations in the heading of the paper.
Instead, these can be handled by use of footlines as described
below. The Journal does not want organizations or institutions to be listed as authors. If there are a very large number
of authors, those who made lesser contributions can be designated by a group name, such a name ending with the word
“group.” A listing of the members of the group possibly including their addresses should be given in a footline.
Footlines to the title and to the authors' names are consecutively ordered and flagged by lower case alphabetical letters, as in Fletcher a), Hunt b), and Lindsay c). If there is any
history of the work’s being presented or published in part
earlier, then a footline flag should appear at the end of the
Information for Contributors
2324
title, and the first footline should be of the form exemplified
below:2
a)
Portions of this work were presented in “A modal distribution study of
violin vibrato,” Proceedings of International Computer Music Conference,
Thessaloniki, Greece, September 1997, and “Modal distribution analysis of
vibrato in musical signals,” Proceedings of SPIE International Symposium
on Optical Science and Technology, San Diego, CA, July 1998.
Authors have the option of giving a footline stating the
e-mail address of one author only (usually the corresponding
author), with an appropriate footline flag after that name and
with each footline having the form:
b)
Author to whom correspondence should be addressed. Electronic mail:
name@servername.com
E. Abstract page
Abstracts are often published separately from actual articles, and thus are more accessible than the articles themselves to many readers. Authors consequently must write abstracts so that readers without immediate access to the entire
article can decide whether the article is worth obtaining. The
abstract is customarily written last; the choice of what should
be said depends critically on what is said in the body of the
paper itself.
The abstract should not be a summary of the paper. Instead, it should give an accurate statement of the subject of
the paper, and it should be written so that it is intelligible
to a broad category of readers. Explicit results need not be
stated, but the nature of the results obtained should be stated.
Bear in mind that the abstract of a journal article, unlike the
abstract of a talk for a meeting, is backed-up by a written
article that is readily (if not immediately) accessible to the
reader.
Limit abstracts to 200 words (100 words for Letters to the
Editor). Displayed equations that are set apart from the text
count as 40 words. Do not use footnotes. If the authors decide
that it is imperative to cite a prior publication in the abstract,
then the reference should be embedded within the text and
enclosed within square brackets. These should be in one of
the two standard JASA formats discussed further below, but
titles of articles need not be given. The abstract should contain
no acknowledgments. In some circumstances, abstracts of
longer than 200 words will be allowed. If an author believes
that a longer abstract is essential for the paper, they should
send an e-mail message to jasa@aip.org with the subject line
“Longer abstract requested.” The text of the desired abstract
should be included in the memo, along with a statement of
why the author believes the longer abstract is essential. The
abstract will be reviewed by the editors, and possibly a revised
wording may be suggested.
Personal pronouns and explicit claims as to novelty
should be assiduously avoided. Do not repeat the title in
the abstract, and write the abstract with the recognition that
the reader has already read the title. Avoid use of acronyms
and unfamiliar abbreviations. If the initial writing leads to
the multiple use of a single lengthy phrase, avoid using an
author-created acronym to achieve a reduction in length of
the abstract. Instead, use impersonal pronouns such as it and
these and shorter terms to allude to that phrase. The shortness of the abstract reduces the possibility that the reader will
misinterpret the allusion.
On the same page as the abstract, but separated from
the abstract by several blank lines, the authors must give the
principal ASA-PACS number for the paper, followed by up to
three other ASA-PACS numbers that apply. This should be in
the format exemplified below:
PACS numbers: 43.30.Pc, 43.30.Sf
The principal ASA-PACS number must be the first in this
list. All of the selected PACS numbers must begin with the
number 43, this corresponding to the appendix of the overall
PACS listing that is concerned with acoustics. Authors are
requested not to adopt a principal PACS number in the category of General Linear Acoustics (one beginning with 43.20) unless there is no specific area of acoustics with which the subject matter can be associated. The more specific the principal PACS number, the greater the likelihood that an appropriate match will be made with an Associate Editor and that appropriate reviewers will be recruited. When the paper is printed, the list of ASA-PACS numbers will be immediately followed on the same line by the initials, enclosed in brackets, of the Associate Editor who handled the manuscript.
F. Section headings
The text of a manuscript, except for very short Letters to
the Editor, is customarily broken up into sections. Four types
of section headings are available: principal heading, first subheading, second subheading, and third subheading. The principal headings are typed boldface in all capital letters and
appear on separate lines from the text. These are numbered
by uppercase roman numerals (I, II, III, IV, etc.), with the
introductory section being principal section I. First subheadings are also typed on separate lines; these are numbered by
capital letters: A, B, C, etc. The typing of first subheadings
is bold-face, with only the first word and proper nouns being
capitalized. Second subheadings are ordered by numbers (1,
2, 3, etc.) and are also typed on separate lines. The typing of
second subheadings is italic bold-face, also with only the first
word and proper nouns capitalized. Third subheadings appear
in the text at the beginning of paragraphs. These are numbered
by lower case letters (a, b, c, etc.) and these are typed in italics
(not bold-faced). Examples of these types of headings can be
found in recent issues of the Journal. (In earlier issues, the
introduction section was not numbered; it is now required to
be numbered as the first principal section.)
Headings to appendixes have the same form as principal
headings, but are numbered by upper-case letters, with an
optional brief title following the identification of the section
as an appendix, as exemplified below:
APPENDIX C: CALCULATION OF IMPEDANCES
If there is only one appendix, the letter designation can
be omitted.
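For authors preparing their manuscripts in LaTeX, the four heading levels described above can be approximated with ordinary sectioning commands. The sketch below is illustrative only; it uses the standard article-class commands, not an official JASA class file.

```latex
% Illustrative mapping of the four JASA heading levels onto standard LaTeX
% sectioning commands (not an official JASA template).
\section{INTRODUCTION}                  % principal heading: I. INTRODUCTION
\subsection{Experimental apparatus}     % first subheading: A. Experimental apparatus
\subsubsection{Microphone calibration}  % second subheading: 1. Microphone calibration
\paragraph{Free-field method.}          % third subheading: run-in at paragraph start
```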
V. STYLE REQUIREMENTS
A. Citations and footnotes
Regarding the format of citations made within the text,
authors have two options: (1) textual footnote style and
(2) alphabetical bibliographic list style.
In the textual footnote style, references and footnotes are
cited in the text by superscripted numerals, as in “the basic
equation was first derived by Rayleigh44 and was subsequently modified by Plesset45.” References and footnotes to
text material are intercalated and numbered consecutively in
order of first appearance. If a given reference must be cited at
different places in the text, and the citation is identical in all
details, then one must use the original number in the second
citation.
In the alphabetical bibliographic list style, footnotes as
such are handled as described above and are intended only
to explain or amplify remarks made in the text. Citations to
specific papers are flagged by parentheses that enclose either
the year of publication or the author’s name followed by the
year of publication, as in the phrases “some good theories
exist (Rayleigh, 1904)” and “a theory was advanced by
Rayleigh (1904).” In most of the papers where this style is
elected there are no footnotes, and only a bibliographic list
ordered alphabetically by the last name of the first author
appears at the end of the paper. In a few cases,3 there is a
list of footnotes followed by an alphabetized reference list.
Within a footnote, one has the option of referring to any
given reference in the same manner as is done in the text
proper.
Both styles are in common use in other journals, although the Journal of the Acoustical Society of America is
one of the few that allows authors a choice. Typically, the
textual footnote style is preferred for articles with a smaller
number of references, while the alphabetical bibliographic
list style is preferred for articles with a large number of references. The diversity of the articles published in the Journal
makes it infeasible to require just one style unilaterally.
B. General requirements for references
Regardless of what reference style the manuscript uses,
the format of the references must include the titles of articles.
For articles written in a language other than English, and for
which the Latin alphabet is used, give the actual title first in
the form in which it appeared in the original reference, followed by the English translation enclosed within parentheses.
For titles in other languages, give only the English translation,
followed by a statement enclosed in parentheses identifying
the language of publication. Do not give Latin-alphabet
transliterations of the original title. For titles in English and for
English translations of titles, use the same format as specified
above for the typing of the title on the title page. Begin the first
word of the title with a capital letter; thereafter capitalize only
those words that are specified by standard dictionaries to be
capitalized in ordinary prose.
One must include only references that can be obtained
by the reader. In particular, do not include references that
merely state: “personal communication.” (Possibly, one can
give something analogous to this in a textual footnote, but
only as a means of crediting an idea or pinpointing a source.
In such a case an explanatory sentence or sentence fragment
is preferred to the vague term of “personal communication.”)
One should also not cite any paper that has only been submitted to a journal; if it has been accepted, then the citation
should include an estimated publication date. If one cites a
reference, then the listing must contain enough information
that the reader can obtain the paper. If theses, reports, or
proceedings are cited, then the listing must contain specific
addresses to which one can write to buy or borrow the reference. In general, write the paper in such a manner that its
understanding does not depend on the reader having access to
references that are not easily obtained.
Authors should avoid giving references to material that
is posted on the internet, unless the material is truly archival,
as is the case for most online journals. If referring to nonarchival material posted on the internet is necessary to give
proper credit for priority, the authors should give the date at
which they last viewed the material online. If authors have
supplementary material that would be of interest to the readers of the article, then a proper posting of this in an archival
form is to make use of the AIP Publishing’s supplemental
material electronic depository. Instructions for how one posts
material can be found at the site <http://scitation.aip.org/
content/asa/journal/jasa/info/authors>. Appropriate items
for deposit include multimedia (e.g., movie files, audio files,
animated .gifs, 3D rendering files), color figures, data tables,
and text (e.g., appendices) that are too lengthy or of too
limited interest for inclusion in the printed journal. If authors
desire to make reference to materials posted by persons other
than by the authors, and if the posting is transitory, the authors should first seek to find alternate references of a more
archival form that they might cite instead. In all cases, the
reading of any material posted at a transitory site must not
be a prerequisite to the understanding of the material in the
paper itself, and when such material is cited, the authors
must take care to point out that the material will not necessarily be obtainable by future readers.
In the event that a reference may be found in several
places, as in the print version and the online version of a
journal, refer first to the version that is most apt to be archived.
In citing articles, give both the first and last pages of the article. Including the last page gives the reader some indication of the magnitude of the article. The copying in toto of a lengthy article, for example, may be too costly for
the reader’s current purposes, especially if the chief objective
is merely to obtain a better indication of the actual subject
matter of the paper than is provided by the title.
The use of the expression “et al.” in listing authors’
names is encouraged in the body of the paper, but must not
be used in the actual listing of references, as reference lists in papers are the primary sources of large databases that persons use, among other purposes, to search by author. This
rule applies regardless of the number of authors of the cited
paper.
References to unpublished material in the standard format of other references must be avoided. Instead, append a
graceful footnote or embed within the text a statement that
you are making use of some material that you have acquired
from another person—whatever material you actually use
of this nature must be peripheral to the development of the
principal train of thought of the paper. A critical reader will
not accept its validity without at least seeing something in
print. If the material is, for example, an unpublished derivation, and if the derivation is important to the substance of the
present paper, then repeat the derivation in the manuscript
with the original author’s permission, possibly including that
person as a coauthor.
Journal titles must ordinarily be abbreviated, and each
abbreviation must be in a “standard” form. The AIP Style
Manual1 gives a lengthy list of standard abbreviations that
are used for journals that report physics research, but the
interdisciplinary nature of acoustics is such that the list omits
many journals that are routinely cited in the Journal of the
Acoustical Society of America. For determination of what
abbreviations to use for journals not on the list, one can skim
the reference lists that appear at the ends of recent articles in
the Journal. The general style for making such abbreviations
(e.g., Journal is always abbreviated by “J.,” Applied is always
abbreviated by “Appl.,” International is always abbreviated by
“Int.,” etc.) must in any event emerge from a study of such
lists, so the authors should be able to make a good guess as to
the standard form. Should the guess be in error, this will often
be corrected in the copy-editing process. Egregious errors
are often made when the author lifts a citation from another
source without actually looking up the original source. An
author might be tempted, for example, to abbreviate a journal
title as “Pogg. Ann.,” taking this from some citation in a
19th century work. The journal cited is Annalen der Physik,
sometimes published with the title Annalen der Physik und
Chemie, with the standard abbreviation being “Ann. Phys.
(Leipzig).” The fact that J. C. Poggendorff was at one time the
editor of this journal gives very little help in the present era in
distinguishing it among the astronomical number of journals
that have been published. For Poggendorff’s contemporaries,
however, “Pogg. Ann.” had a distinct meaning.
Include in references the names of publishers of books and standards and their locations. References to books and
proceedings must include chapter numbers and/or page
ranges.
C. Examples of reference formats
The number of possible nuances in the references that
one may desire to cite is very large, and the present document
cannot address all of them; a study of the reference lists at
the ends of articles in recent issues in the Journal will resolve
most questions. The following two lists, one for each of the
styles mentioned above, give some representative examples
for the more commonly encountered types of references. If
the authors do not find a definitive applicable format in the
examples below or in those they see in scanning past issues,
then it is suggested that they make their best effort to create
an applicable format that is consistent with the examples
that they have seen, following the general principles that the
information must be sufficiently complete that: (1) any present
or future reader can decide whether the work is worth looking
at in more detail; (2) such a reader, without great effort, can
look at, borrow, photocopy, or buy a copy of the material;
and (3) a citation search, based on the title, an author name,
a journal name, or a publication category, will result in the
present paper being matched with the cited reference.
1. Textual footnote style
1. Y. Kawai, "Prediction of noise propagation from a depressed road by using boundary integral equations" (in Japanese), J. Acoust. Soc. Jpn. 56, 143–147 (2000).
2. L. S. Eisenberg, R. V. Shannon, A. S. Martinez, J. Wygonski, and A. Boothroyd, "Speech recognition with reduced spectral cues as a function of age," J. Acoust. Soc. Am. 107, 2704–2710 (2000).
3. J. B. Pierrehumbert, The Phonology and Phonetics of English Intonation (Ph.D. dissertation, Mass. Inst. Tech., Cambridge, MA, 1980); as cited by D. R. Ladd, I. Mennen, and A. Schepman, J. Acoust. Soc. Am. 107, 2685–2696 (2000).
4. F. A. McKiel, Jr., "Method and apparatus for sibilant classification in a speech recognition system," U.S. Patent No. 5,897,614 (27 April 1999). A brief review by D. L. Rice appears in: J. Acoust. Soc. Am. 107, p. 2323 (2000).
5. A. N. Norris, "Finite-amplitude waves in solids," in Nonlinear Acoustics, edited by M. F. Hamilton and D. T. Blackstock (Academic Press, San Diego, 1998), Chap. 9, pp. 263–277.
6. V. V. Muzychenko and S. A. Rybak, "Amplitude of resonance sound scattering by a finite cylindrical shell in a fluid" (in Russian), Akust. Zh. 32, 129–131 (1986); English transl.: Sov. Phys. Acoust. 32, 79–80 (1986).
7. M. Stremel and T. Carolus, "Experimental determination of the fluctuating pressure on a rotating fan blade," on the CD-ROM: Berlin, March 14–19, Collected Papers, 137th Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association (ISBN 3-9804458-5-1, available from Deutsche Gesellschaft fuer Akustik, Fachbereich Physik, Universitaet Oldenburg, 26111 Oldenburg, Germany), paper 1PNSB_7.
8. ANSI S12.60-2002 (R2009), American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools (American National Standards Institute, New York, 2002).
2. Alphabetical bibliographic list style
American National Standards Inst. (2002). ANSI S12.60 (R2009) American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools (American National Standards Inst., New York).
Ando, Y. (1982). "Calculation of subjective preference in concert halls," J. Acoust. Soc. Am. Suppl. 1 71, S4–S5.
Bacon, S. P. (2000). "Hot topics in psychological and physiological acoustics: Compression," J. Acoust. Soc. Am. 107, 2864(A).
Bergeijk, W. A. van, Pierce, J. R., and David, E. E., Jr. (1960). Waves and the Ear (Doubleday, Garden City, NY), Chap. 5, pp. 104–143.
Flatté, S. M., Dashen, R., Munk, W. H., Watson, K. M., and Zachariasen, F. (1979). Sound Transmission through a Fluctuating Ocean (Cambridge University Press, London), pp. 31–47.
Hamilton, W. R. (1837). "Third supplement to an essay on the theory of systems of waves," Trans. Roy. Irish Soc. 17 (part 1), 1–144; reprinted in: The Mathematical Papers of Sir William Rowan Hamilton, Vol. II: Dynamics, edited by A. W. Conway and A. J. McConnell (Cambridge University Press, London), pp. 162–211.
Helmholtz, H. (1859). "Theorie der Luftschwingungen in Röhren mit offenen Enden" ("Theory of air oscillations in tubes with open ends"), J. reine ang. Math. 57, 1–72.
Kim, H.-S., Hong, J.-S., Sohn, D.-G., and Oh, J.-E. (1999). "Development of an active muffler system for reducing exhaust noise and flow restriction in a heavy vehicle," Noise Control Eng. J. 47, 57–63.
Simpson, H. J., and Houston, B. H. (2000). "Synthetic array measurements for waves propagating into a water-saturated sandy bottom ...," J. Acoust. Soc. Am. 107, 2329–2337.
Other examples may be found in the reference lists of
papers recently published in the Journal.
D. Figure captions
The illustrations in the Journal have figure captions
rather than figure titles. Clarity, rather than brevity, is desired, so captions can extend over several lines. Ideally, a
caption must be worded so that a casual reader, on skimming
an article, can obtain some indication as to what an illustration is depicting, without actually reading the text of the
article. If an illustration is taken from another source, then
the caption must acknowledge and cite that source. Various
examples of captions can be found in the articles that appear
in recent issues of the Journal.
If the figure will appear in black and white in the printed
edition and in color online, the statement “(Color online)”
should be added to the figure caption. For color figures that will appear in black and white in the printed edition of the Journal, references to colors in the figure (e.g., red circles, blue lines) must not be included in the caption.
E. Acknowledgments
The section giving acknowledgments must not be numbered and must appear following the concluding section. It
is preferred that acknowledgments be limited to those who
helped with the research and with its formulation and to
agencies and institutions that provided financial support. Administrators, administrative assistants, associate editors,
and persons who assisted in the nontechnical aspects of the
manuscript preparation must not be acknowledged. In many
cases, sponsoring agencies require that articles give an acknowledgment and specify the format in which the acknowledgment must be stated—doing so is fully acceptable. Generally, the Journal expects that the page charges will be
honored for any paper that carries an acknowledgment to a
sponsoring organization.
F. Mathematical equations
Authors are expected to use computers with appropriate
software to typeset mathematical equations.
Authors are also urged to take the nature of the actual
layout of the journal pages into account when writing mathematical equations. A line in a column of text is typically 60
characters, but mathematical equations are often longer. To
ensure that their papers look attractive when printed, authors must seek to write sequences of equations, each of which fits into a single column, some of which define symbols appearing in another equation, even if doing so results in a greater number
of equations. If an equation whose length will exceed that
of a single column is unavoidable, then the authors must
write the equation so that it is neatly breakable into distinct
segments, each of which fits into a single column. The casting
of equations in a manner that requires the typesetting to revert
to a single column per page (rather than two columns per
page) format must be assiduously avoided. To make sure that this does not occur, authors familiar with desk-top publishing software and techniques may find it convenient to recast manuscripts temporarily into a form whose column width corresponds to 60 text characters, so as to verify that none of the line breaks within equations is awkward.
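For example, with the amsmath package a long equation can be broken into column-width segments; the equation below is purely illustrative and does not come from any particular paper.

```latex
% Illustrative only: breaking a long equation into column-width lines
% using the amsmath split environment.
\begin{equation}
\begin{split}
p(r,t) = {}& \frac{A}{r}\,\exp[i(kr - \omega t)] \\
           & + \frac{B}{r}\,\exp[-i(kr + \omega t)]
\end{split}
\end{equation}
```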
Equations are numbered consecutively in the text in the order in which they appear; the number designation is in parentheses at the right side of the column. The numbering
of the equations is independent of the section in which they
appear for the main body of the text. However, for each
appendix, a fresh numbering begins, so that the equations in
Appendix B are labeled (B1), (B2), etc. If there is only one
appendix, it is treated as if it were Appendix A in the numbering of equations.
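In a LaTeX manuscript, one way to obtain the appendix numbering described above, with a fresh sequence (A1), (A2), ... in each appendix, is to reset the equation counter and redefine its printed form at the start of each appendix; this is a generic LaTeX sketch, not a JASA-supplied macro.

```latex
% Generic sketch: restart equation numbering in each appendix as (A1), (A2), ...
\appendix                            % subsequent \section commands are lettered A, B, ...
\section{CALCULATION OF IMPEDANCES}
\setcounter{equation}{0}             % restart the equation counter for this appendix
\renewcommand{\theequation}{\thesection\arabic{equation}}  % prints as (A1), (A2), ...
```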
G. Phonetic symbols
The phonetic symbols included in a JASA manuscript
should be taken from the International Phonetic Alphabet
(IPA), which is maintained by the International Phonetic
Association, whose home page is http://www.langsci.ucl.ac.uk/ipa/. The display of the most recent version of the alphabet can be found at http://www.langsci.ucl.ac.uk/ipa/ipachart.html.
The total set of phonetic symbols that can be used by
AIP Publishing during the typesetting process is the set
included among the Unicode characters. This includes most
of the symbols and diacritics of the IPA chart, plus a few
compiled combinations, additional tonal representations, and
separated diacritics. A list of all such symbols is given in
the file phonsymbol.pdf which can be downloaded by going
to the JASA website http://scitation.aip.org/content/asa/
journal/jasa/info/authors and then clicking on the item List of
Phonetic Symbols. This file gives, for each symbol (displayed
in 3 different Unicode fonts, DoulosSIL, GentiumPlus, and
CharisSILCompact): its Unicode hex ID number, the Unicode
character set it is part of, its Unicode character name, and
its IPA definition (taken from the IPA chart). Most of these
symbols and their Unicode numbers are also available from
Professor John Wells of University College London at http://
www.phon.ucl.ac.uk/home/wells/ipa-unicode.htm#alfa,
without the Unicode character names and character set names.
The method of including such symbols in a manuscript is to use, in conjunction with a word processor, a Unicode-compliant font that includes all symbols required. Fonts that
are not Unicode-compliant should not be used. Most computers
come with Unicode fonts that give partial coverage of the IPA.
Some sources where one can obtain Unicode fonts for Windows, MacOS, and Linux with full IPA coverage are http://www.phon.ucl.ac.uk/home/wells/ipa-unicode.htm and http://scripts.sil.org/cms/scripts/page.php?item_id=SILFontList. Further information about which fonts contain a desired symbol set can be found at http://www.alanwood.net/unicode/fontsbyrange.html#u0250 and adjacent pages at that site. While authors
may use any Unicode-compliant font in their manuscript, AIP
Publishing reserves the right to replace the author’s font with
a Unicode font of its choice (currently one of the SIL fonts
Doulos, Gentium, or Charis, but subject to change in the future).
For LaTeX manuscripts, PXP's LaTeX-processing environment (MiKTeX) supports the use of TIPA fonts. TIPA fonts are available through the Comprehensive TeX Archive Network at http://www.ctan.org/ (download from http://www.ctan.org/pkg/tipa).
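As a minimal, self-contained illustration of TIPA usage (assuming the tipa package has been installed from CTAN), phonetic symbols can be entered as follows; the example words are arbitrary.

```latex
\documentclass{article}
\usepackage{tipa}  % TIPA phonetic fonts, available from CTAN
\begin{document}
% \textipa maps ASCII input to IPA symbols: 2 -> turned v, @ -> schwa
The vowel in ``cut'' is [\textipa{2}]; ``about'' begins with schwa [\textipa{@}].
\end{document}
```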
H. Figures
Each figure should be manifestly legible when reduced
to one column of the printed journal page. Figures requiring
the full width of a journal page are discouraged, but exceptions can be made if the reasons for such are sufficiently
evident. The inclusion of figures in the manuscript should be
such that the manuscript, when published, should ordinarily
have no more than 30% of the space devoted to figures, and
such that the total number of figures should ordinarily not be
more than 20. In terms of the restriction of the total space for
figures, each figure part will be considered as occupying a
quarter page. Because of the advances in technology and the
increasingly wider use of computers in desk-top publishing,
it is strongly preferred that authors use computers exclusively in the preparation of illustrations. If any figures are
initially in the form of hard copy, they should be scanned
with a high quality scanner and converted to electronic form.
Each figure that is to be included in the paper should be cast
into one of several acceptable formats (TIFF, EPS, JPEG, or
PS) and put into a separate file.
The figures are numbered in the order in which they are
first referred to in the text. There must be one such referral
for every figure in the text. Each figure must have a caption,
and the captions are gathered together into a single list that
appears at the end of the manuscript. The numbering of the
figures, insofar as the online submission process is concerned, is achieved by uploading the individual figure files in
the appropriate sequence. The author should take care to
make sure that the sequence is correct, but the author will
also have the opportunity to view the merged manuscript and
to check on this sequencing.
For the most part, figures must be designed so that they
will fit within one column (3-3/8 in.) of the page, and yet be
intelligible to the reader. In rare instances, figures requiring
full page width are allowed, but the choice for using such a
figure must not be capricious.
A chief criticism of many contemporary papers is that
they contain far too many computer-generated graphical illustrations that present numerical results. An author develops
a certain general computational method (realized by software) and then uses it to exhaustively discuss a large number
of special cases. This practice must be avoided. Unless there
is an overwhelmingly important single point that the sequence of figures demonstrates as a whole, an applicable rule
of thumb is that the maximum number of figures of a given
type must be four.
The clarity of most papers is greatly improved if the
authors include one or more explanatory sketches. If, for
example, the mathematical development presumes a certain
geometrical arrangement, then a sketch of this arrangement
must be included in the manuscript. If the experiment is carried out with a certain setup of instrumentation and apparatuses, then a sketch is also appropriate. Various clichés,
such as Alice’s—“and what is the use of a book without
pictures?”—are strongly applicable to journal articles in
acoustics. The absence of any such figures in a manuscript,
even though they might have improved the clarity of the
paper, is often construed as an indication of a callous lack of
sympathy for the reader’s potential difficulties when attempting to understand a paper.
Color figures can be included in the online version of the
Journal with no extra charge provided that these appear suitably as black and white figures in the print edition.
I. Tables
Tables are numbered by capital roman numerals
(TABLE III, TABLE IV, etc.) and are collected at the end of
the manuscript, following the references and preceding the
figure captions, one table per page. There should be a descriptive caption (not a title) above each table in the manuscript.
Footnotes to individual items in a table are designated by raised lower case letters (0.123a, Martinb, etc.). The footnotes as such are given below the table and should be as brief as practicable. If the footnotes are to references already cited in the text, then they should have forms such as "aReference 10" or "bFirestone (1935)," depending on
the citation style adopted in the text. If the reference is not
cited in the text, then the footnote has the same form as
a textual footnote when the alphabetical bibliographic list
style is used. One would cast the footnote as in the second
example above and then include a reference to a 1935 work
by Firestone in the paper’s overall bibliographic list. If,
however, the textual footnote style is used and the reference
is not given in the text itself, an explicit reference listing
must be given in the table footnote itself. This should contain
the bare minimum of information necessary for a reader to
retrieve the reference. In general, it is recommended that
no footnote refer to references that are not already cited in
the text.
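In a LaTeX manuscript, a table of the general shape described above might be sketched as follows. This is a generic illustration, not the Journal's production markup, and the material names, values, and footnote are invented.

```latex
% Generic sketch: descriptive caption above the table, lettered footnote below.
\begin{table}
\caption{Measured damping ratios for three test plates (values are invented).}
\begin{tabular}{lc}
\hline
Plate material & Damping ratio \\
\hline
Aluminum & 0.123$^{\mathrm{a}}$ \\
Steel    & 0.045 \\
\hline
\end{tabular}\\
$^{\mathrm{a}}$Reference 10.
\end{table}
```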
VI. THE COVER LETTER
The submission of an electronic file containing a cover
letter is now optional. Most of the Journal’s requirements
previously met by the submission of a signed cover letter are
now met during the detailed process of online submission.
The fact that the manuscript was transmitted by the corresponding author who was duly logged onto the system is
taken as prima facie proof that the de facto transmittal letter
has been signed by the corresponding author.
There are, however, some circumstances where a cover
letter file might be advisable or needed:
(1) If persons who would ordinarily have been included
as authors have given permission or requested that their names
not be included, then that must be so stated. (This requirement
is imposed because some awkward situations have arisen in the
past in which persons have complained that
colleagues or former colleagues have deliberately omitted their
names as authors from papers to which they have contributed.
The Journal also has the policy that a paper may still be
published, even if one of the persons who has contributed to the
work refuses to allow his or her name to be included among the
list of authors, providing there is no question of plagiarism.)
Unless a cover letter listing such exceptions is submitted, the
submittal process implies that the corresponding author is
attesting that the author list is complete.
(2) If there has been any prior presentation or any overlap
in concept with any other manuscripts that have been either
published or submitted for publication, this must be stated in
a cover letter. If the manuscript has been previously submitted
elsewhere for publication, and subsequently withdrawn, this
must also be disclosed. If none of these apply for the submitted
manuscript, then the submission process is construed to imply
that the corresponding author is attesting to such a fact.
(3) (Optional.) Reasons why the authors have elected to
submit their paper to JASA rather than some other journal.
These would ordinarily be supplied if the authors are concerned that there may be some question as to the paper's
meeting the "truly acoustics" criterion or its being within
the scope of the Journal. If none of the references cited in
the submitted paper are to articles previously published in
the Journal, it is highly advisable that some strong reasons
be given for why the authors believe the paper falls within
the scope of the Journal.
(4) If the online submission includes the listing of one or
more persons who the authors prefer not be used as reviewers,
an explanation in a cover letter would be desirable.
(5) If the authors wish to make statements that they
feel are appropriate to be read by the editors but inappropriate to be included in the actual manuscript, then these
should be included in a cover letter.
Cover letters are treated by the Peer X-Press system as
being distinct from rebuttal letters.
Rebuttal letters should be submitted with revised manuscripts; in them the authors give, when appropriate, rebuttals
to the suggestions and criticisms of the reviewers and discuss in detail how
and why the revised manuscript differs from what was originally submitted.
VII. EXPLANATIONS AND CATEGORIES
A. Subject classification, ASA-PACS
Authors are asked in their online submittal and in their
manuscript to identify the subject classification of their paper
using the ASA-PACS system. The subject index of the Journal presently follows a specialized extension of the Physics
and Astronomy Classification Scheme4 (PACS) maintained
by AIP Publishing. Numbers in this scheme pertaining to
Acoustics have the general form 43.nn.Aa, where n denotes
a digit, A denotes a capital letter, and a denotes
a lower case letter. An amplified version of the Section 43
listing appears as an appendix to AIP Publishing's document,
and this is here referred to as the ASA-PACS system. The
ASA-PACS listing for acoustics appears at the end of
each volume of the Journal preceding the index (June and
December issues). It can also be found by first going to the
Journal's site <http://scitation.aip.org/content/asa/journal/
jasa/info/authors> and then clicking the item: Physics and
Astronomy Classification Scheme (PACS), Section 43, Acoustics. (On the CD distribution of the Journal, the appropriate
file for the index of each volume is jasin.pdf. The listing of
the ASA-PACS numbers is at the beginning of this file.) It is
the authors' responsibility to identify a principal ASA-PACS
number corresponding to the subject matter of the manuscript
and also to identify all other ASA-PACS numbers (up to a
total of four) that apply.
B. Suggestions for Associate Editors
In suggesting an Associate Editor to handle a specific
manuscript, authors should consult a document titled "Associate Editors identified with PACS classification items," obtainable at the JASA web site <http://
scitation.aip.org/content/asa/journal/jasa/info/about>. There
the Associate Editors are identified by their initials, and
the relation of the initials to the names is easily discerned
from the listing of Associate Editors on the back cover of
each issue, on the title page of each volume, and at the
online site <http://scitation.aip.org/content/asa/journal/jasa/
info/about>. (On the CD distribution of the Journal, the
appropriate file is jasae.pdf.)
Authors are not constrained to select Associate Editors
specifically identified with their choice of principal ASA-PACS number and should note that the Journal has special
Associate Editors for Mathematical Acoustics, Computational
Acoustics, and Education in Acoustics. Review and tutorial
articles are ordinarily invited; submission of unsolicited review
or tutorial articles (other than those which can be
construed as papers on education in acoustics) without prior
discussion with the Editor-in-Chief is discouraged. Authors
should suggest the Associate Editor for Education in Acoustics
for tutorial papers that contain material which might be used
in standard courses on acoustics or material that supplements
standard textbooks.
C. Types of manuscripts
Categories of papers that are published in the Journal
include the following:
1. Regular research articles
These are papers which report original research. There is
neither a lower limit nor an upper limit on their length, although authors must pay page charges if the length results in
more than 12 printed pages. The prime requirement is that
such papers must contain a complete account of the reported
research.
2. Education in acoustics articles
Such papers should be of potential interest to acoustics
educators. Examples include descriptions of laboratory experiments and of classroom demonstrations. Papers that describe computer simulations of basic acoustical phenomena
also fall within this category. Tutorial discussions on how to
present acoustical concepts, including mathematical derivations that might give students additional insight, are possible
contributions.
3. Letters to the editor
These are shorter research contributions that can be
any of the following: (i) an announcement of a research
result, preliminary to a full report of the research; (ii) a scientific
or technical discussion of a topic that is timely; (iii) brief
alternate derivations or alternate experimental evidence concerning acoustical phenomena; (iv) provocative articles that
may stimulate further research. Brevity is an essential feature
of a letter, and the Journal suggests 3 printed journal pages
as an upper limit, although it will allow up to 4 printed pages
in exceptional cases.
The Journal’s current format has been chosen so as to
give letters greater prominence. Their brevity in conjunction
with the possible timeliness of their contents gives impetus to a
quicker processing and to a shorter time lag between submission
and appearance in printed form in the Journal. (The quickest
route to publication that the Acoustical Society currently offers
is submission to the special section JASA Express Letters
(JASA-EL) of the Journal. For information regarding JASA-EL, visit the site <http://scitation.aip.org/content/asa/journal/
jasael/info/authors>.)
Because the desire for brevity is regarded as important,
the author is not compelled to make a detailed attempt to
place the work within the context of current research; the
citations are relatively few and the review of related research
is limited. The author should have some reason for desiring
a more rapid publication than for a normal article, and the
editors and the reviewers should concur with this. The work
should have a modicum of completeness, to the extent that
the letter “tells a story” that is at least plausible to the reader,
and it should have some nontrivial support for what is being
related. Not all the loose ends need be tied together. Often
there is an implicit promise that the publication of the letter
will be followed up by a regular research article that fills in
the gaps and that does all the things that a regular research
article should do.
4. Errata
These must be corrections to what actually was printed.
Authors must explicitly identify the passages or equations
in the paper and then state what should replace them. Long
essays on why a mistake was made are not desired. A typical
line in an errata article would be of the form: Equation (23)
on page 6341 is incorrect. The correct version is ... . For
detailed examples, the authors should look at previously published errata articles in the Journal.
5. Comments on published papers
Occasionally, one or more readers, after reading a published paper, will decide to submit a paper giving comments
about that paper. The Journal welcomes submissions of
this type, although they are reviewed to make sure that the
comments are reasonable and that they are free of personal
slurs. The format of the title of a comments paper is rigidly
prescribed, and examples can be found in previous issues of
the Journal. The authors of the papers under criticism are
frequently consulted as reviewers, but their unsubstantiated
opinion as to whether the letter is publishable is usually not
given much weight.
6. Replies to comments
Authors whose previously published paper has stimulated the submission of a comments paper that has
subsequently been accepted have the opportunity to reply to
the comments. They are usually (but not invariably) notified
of the acceptance of the comments paper, and the Journal
prefers that the comments and the reply be published in successive pages of the same issue, although this is not always
practicable. Replies are also reviewed using criteria similar
to those of comments papers. As in the case of comments
papers, the format of the title of a reply paper is rigidly
prescribed, and examples can be found in the previous issues
of the Journal.
7. Forum letters
Forum letters are analogous to the “letters to the editor”
that one finds in the editorial section of major newspapers.
They may express opinions or advocate actions. They may
also relate anecdotes or historical facts that may be of general
interest to the readers of the Journal. They need not have a title
and should not have an abstract; they also should be brief,
and they should not be of a highly technical nature. These
are also submitted using the Peer X-Press system, but are not
handled as research articles. The applicable Associate Editor
is presently the Editor-in-Chief. For examples of acceptable
letters and the format that is desired, prospective authors of
such letters should consult examples that have appeared in
recent issues of the Journal.
8. Tutorial and review papers
Review and tutorial papers are occasionally accepted for
publication, but are difficult to handle within the peer-review
process. All are handled directly by the Editor-in-Chief, but
usually with extensive discussion with the relevant Associate
Editors. Usually such are invited, based on recommendations
from the Associate Editors and the Technical Committees
of the Society, and the tentative acceptance is based on a
submitted outline and on the editors’ acquaintance with the
prospective author’s past work. The format of such papers
is similar to that of regular research articles, although
longer articles should include a table of contents following
the abstract. Submission is handled by the
online system, but the cover letter should discuss the history
of prior discussions with the editors. Because of the large
expenditure of time required to write an authoritative review
article, authors are advised not to begin writing until they
have some assurance that there is a good likelihood of the
submission eventually being accepted.
9. Book reviews
All book reviews must be first invited by the Associate
Editor responsible for book reviews. The format for such
reviews is prescribed by the Associate Editor, and the PXP
submittal process is used primarily to facilitate the incorporation of the reviews into the Journal.
VIII. FACTORS RELEVANT TO PUBLICATION
DECISIONS
A. Peer review system
The Journal uses a peer review system in the determination of which submitted manuscripts should be published.
The Associate Editors make the actual decisions; each editor
has specialized understanding and prior distinguished accomplishments in the subfield of acoustics that encompasses
the contributed manuscript. They seek advice from reviewers
who are knowledgeable in the general subject of the paper,
and the reviewers give opinions on various aspects of the
work; primary questions are whether the work is original and
whether it is correct. The Associate Editor and the reviewers
who examine the manuscript are the authors’ peers: persons
with comparable standing in the same research field as the
authors themselves. (Individuals interested in reviewing for
JASA or for JASA-EL can convey that interest via an e-mail
message to the Editor-in-Chief at <jasa@aip.org>.)
B. Selection criteria
Many submitted manuscripts are not selected for publication. Selection is based on the following factors: adherence
to the stylistic requirements of the Journal, clarity and eloquence of exposition, originality of the contribution, demonstrated understanding of previously published literature pertaining to the subject matter, appropriate discussion of the
relationships of the reported research to other current research or applications, appropriateness of the subject matter
to the Journal, correctness of the content of the article,
completeness of the reporting of results, the reproducibility
of the results, and the significance of the contribution. The
Journal reserves the right to refuse publication of any
submitted article without giving extensively documented
reasons, although the editors usually give suggestions that
can help the authors in the writing and submission of future
papers. The Associate Editor also has the option, but not
an obligation, of giving authors an opportunity to submit a
revised manuscript addressing specific criticisms raised in
the peer review process. The selection process occasionally
results in mistakes, but the time limitations of the editors
and the reviewers preclude extraordinary steps being taken to
ensure that no mistakes are ever made. If an author feels that
the decision may have been affected by an a priori adverse
bias (such as a conflict of interest on the part of one of the
reviewers), the system allows authors to express the reasons
in writing and ask for an appeal review.
C. Scope of the Journal
Before one decides to submit a paper to the Journal of the
Acoustical Society, it is prudent to give some thought as to
whether the paper falls within the scope of the Journal. While
this can in principle be construed very broadly, it is often the
case that another journal would be a more appropriate choice.
As a practical matter, the Journal would find it difficult to
give an adequate peer review to a submitted manuscript that
does not fall within the broader areas of expertise of any of
its Associate Editors. In the Journal’s peer-review process,
extensive efforts are made to match a submitted manuscript
with an Associate Editor knowledgeable in the field, and
the Editors have the option of declining to take on the task.
It is a tacit understanding that no Associate Editor should
accept a paper unless he or she understands the gist of the
paper and is able to make a knowledgeable assessment of the
relevance of the advice of the selected reviewers. If no one
wishes to handle a manuscript, the matter is referred to the
Editor-in-Chief and a possible resulting decision is that the
manuscript is outside the de facto scope of the Journal. When
such happens, it is often the case that the article either cites
no previously published papers in the Journal or else cites no
recent papers in any of the other journals that are commonly
associated with acoustics. Given that the Journal has been
in existence for over 80 years and has published on the order
of 35,000 papers on a wide variety of acoustical topics over
its lifetime, the absence of any references to previously
published papers in the Journal raises a flag signaling the
possibility that the paper lies outside the de facto scope of
the Journal.
Authors concerned that their work may be construed by
the Editors as not being within the scope of the Journal can
strengthen their case by citing other papers published in the
Journal that address related topics.
The Journal ordinarily selects for publication only articles that have a clear identification with acoustics. It would,
for example, not ordinarily publish articles that report results
and techniques that are not specifically applicable to acoustics, even though they could be of interest to some persons
whose work is concerned with acoustics. An editorial5 published in the October 1999 issue gives examples that are not
clearly identifiable with acoustics.
IX. POLICIES REGARDING PRIOR PUBLICATION
The Journal adheres assiduously to all applicable copyright laws, and authors must not submit articles whose publication will result in a violation of such laws. Furthermore,
the Journal follows the tradition of providing an orderly
archive of scientific research in which authors take care that
results and ideas are fully attributed to their originators. Conscious plagiarism is a serious breach of ethics, if not illegal.
(Submission of an article that is plagiarized, in part or in full,
may have serious repercussions on the future careers of the
authors.) Occasionally, authors rediscover older results and
submit papers reporting these results as though they were
new. The desire to safeguard the Journal from publishing
any such paper requires that submitted articles have a sufficient discussion of prior related literature to demonstrate the
authors’ familiarity with the literature and to establish the
credibility of the assertion that the authors have carried out a
thorough literature search.
In many cases, the authors themselves may have either
previously circulated, published, or presented work that has
substantial similarities with what is contained within the
contributed manuscript. In general, JASA will not publish
work that has been previously published. (An exception
is when the previous publication is a letter to the editor,
and when pertinent details were omitted because of the
brief nature of the earlier reporting.) Presentations at
conferences are not construed as prior publication; neither
is the circulation of preprints or the posting of preprints
on any web site, providing the site does not have the
semblance of an archival online journal. Publication as such
implies that the work is currently, and for the indefinite
future, available, either for purchase or on loan, to a broad
segment of the research community. Often the Journal will
consider publishing manuscripts with tangible similarities
to other work previously published by the authors—
providing the following conditions are met: (1) the titles
are different; (2) the submitted manuscript contains no
extensive passages of text or figures that are the same as
in the previous publication; (3) the present manuscript is
a substantial update of the previous publication; (4) the
previous publication has substantially less availability than
would a publication in JASA; (5) the current manuscript
gives ample referencing to the prior publication and
explains how the current manuscript differs from the prior
publication. Decisions regarding such cases are made by the
Associate Editors, often in consultation with the Editor-in-Chief. (Inquiries prior to submission as to whether a given
manuscript with some prior history of publication may be
regarded as suitable for JASA should be addressed to the
Editor-in-Chief at <jasa@aip.org>.)
The Journal will not consider any manuscript for publication that is presently under consideration by another journal or which is substantially similar to another one under
consideration. If it should learn that such is the case, the
paper will be rejected and the editors of the other journal will
be notified.
Authors of an article previously published as a letter
to the editor, either as a regular letter or as a letter in
the JASA-EL (JASA Express Letters) section of the
Journal, where the original account was either abbreviated
or preliminary, are encouraged to submit a more
comprehensive and updated account of their research to
the Journal.
A. Speculative papers
In some cases, a paper may be largely speculative; a new
theory may be offered for an as yet imperfectly understood
phenomenon, without complete confirmation by experiment.
Although such papers may be controversial, they often become the most important papers in the long-term development of a scientific field. They also play an important role
in the stimulation of good research. Such papers are intrinsically publishable in JASA, although explicit guidelines for
their selection are difficult to formulate. Of major importance
are (i) that the logical development be as complete as practicable, (ii) that the principal ideas be plausible and consistent with what is currently known, (iii) that there be no known
counter-examples, and (iv) that the authors give some hints
as to how the ideas might be checked by future experiments
or numerical computations. In addition, the authors should
cite whatever prior literature exists that might indicate that
others have made similar speculations.
B. Multiple submissions
The current online submittal process requires that each
paper be submitted independently. Each received manuscript
will be separately reviewed and judged regarding its merits
for publication independently of the others. There is no formal mechanism for an author to request that two submissions, closely spaced in their times of submission, be regarded as a single submission.
In particular, the submission of two manuscripts, one
labeled “Part I” and the other labeled “Part II” is not allowed.
Submission of a single manuscript with the label “Part I” is
also not allowed. An author may submit a separate manuscript labeled “Part II,” if the text identifies which previously
accepted paper is to be regarded as “Part I.” Doing so may be
a convenient method for alerting potential readers to the fact
that the paper is a sequel to a previous paper by the author.
The author should not submit a paper so labeled, however,
unless the paper to be designated as “Part I” has already been
accepted, either for JASA or another journal.
The Associate Editors are instructed not to process any
manuscript that cannot be read without the help of as yet
unpublished papers that are still under review. Consequently,
authors are requested to hold back the submission of “sequels” to previously submitted papers until the disposition of
those papers is determined. Alternately, authors should write
the “sequels” so that the reading and comprehension of those
manuscripts does not require prior reading of, and access to, papers whose publication is still uncertain.
X. SUGGESTIONS REGARDING CONTENT
A. Introductory section
Every paper begins with introductory paragraphs. Except for short Letters to the Editor, these paragraphs appear
within a separate principal section, usually with the heading
“Introduction.”
Although some discussion of the background of the
work may be advisable, a statement of the precise subject
of the work must appear within the first two paragraphs. The
reader need not fully understand the subject the first time it is
stated; subsequent sentences and paragraphs should clarify
the statement and should supply further necessary background. The extent of the clarification must be such that a
nonspecialist will be able to obtain a reasonable idea of what
the paper is about. The introduction should also explain to
the nonspecialist just how the present work fits into the context of other current work done by persons other than the
authors themselves. Beyond meeting these obligations, the
writing should be as concise as practicable.
The introduction must give the authors’ best arguments
as to why the work is original and significant. This is customarily done via a knowledgeable discussion of current and
prior literature. The authors should envision typical readers or
typical reviewers, and this should be a set of people that is not
inordinately small, and the authors must write so as to convince
them. In some cases, both originality and significance will be
immediately evident to all such persons, and the arguments
can be brief. In other cases, the authors may have a daunting
task. It must not be assumed that readers and reviewers will
give the authors the benefit of the doubt.
B. Main body of text
The writing in the main body of the paper must follow a
consistent logical order. It should contain only material that
pertains to the main premise of the paper, and that premise
should have been stated in the introduction. While tutorial
discussions may in some places be appropriate, such should
be kept to a minimum and should be only to the extent necessary to keep the envisioned readers from becoming lost.
The writing throughout the text, including the introduction, must be in the present tense. It may be tempting to refer
to subsequent sections and passages in the manuscript in the
future tense, but the authors must assiduously avoid doing
so, using instead phrases such as “is discussed further below.”
Whenever pertinent results, primary or secondary, are
reached in the progress of the paper, the writing should point
out that these are pertinent results in such a manner that it
would get the attention of a reader who is rapidly scanning
the paper.
The requirement of a consistent logical order implies
that the logical steps appear in consecutive order. Readers
must not be referred to subsequent passages or to appendixes
to fill in key elements of the logical development. The fact
that any one such key element is lengthy or awkward is
insufficient reason to relegate it to an appendix. Authors can,
however, flag such passages giving the casual reader the option of skipping over them on first reading. The writing nevertheless must be directed toward the critical reader—a person who accepts no aspect of the paper on faith. (If the paper
has some elements that are primarily speculative, then that
should be explicitly stated, and the development should be
directed toward establishing the plausibility of the speculation for the critical reader.)
To achieve clarity and readability, the authors must explicitly state the purposes of lengthy descriptions or of
lengthy derivations at the beginning of the relevant passages.
There should be no mysteries throughout the manuscript as
to the direction in which the presentation is going.
Authors must take care that no reader becomes needlessly lost because of the use of lesser-known terminology.
All terms not in standard dictionaries must be defined when
they are first used. Acronyms should be avoided, but, when
they are necessary, they must be explicitly defined when first
used. The terminology must be consistent; different words
should not be used to represent the same concept.
Efforts must be taken to avoid insulting the reader with
the use of gratuitous terms or phrases such as obvious, wellknown, evident, or trivial. If the adjectives are applicable,
then they are unnecessary. If not, then the authors risk incurring the ill-will of the readers.
If it becomes necessary to bring in externally obtained
results, then the reader must be apprised, preferably by an
explicit citation to accessible literature, of the source of
such results. There must be no vague allusions, such as “It
has been found that...” or “It can be shown that...” If the allusion is to a mathematical derivation that the authors have
themselves carried out, but which they feel is not worth describing in detail, then they should briefly outline how the
derivation can be carried out, with the implication that a
competent reader can fill in the necessary steps without difficulty.
For an archival journal such as JASA, reproducibility of
reported results is of prime importance. Consequently, authors must give a sufficiently detailed account, so that all
results, other than anecdotal, can be checked by a competent
reader with comparable research facilities. If the results are
numerical, then the authors must give estimates of the probable errors and state how they arrived at such estimates. (Anecdotal results are typically results of field experiments or
unique case studies; such are often worth publishing as they
can stimulate further work and can be used in conjunction
with other results to piece together a coherent understanding
of broader classes of phenomena.)
C. Concluding section
The last principal section of the article is customarily
labeled “Conclusions” or “Concluding Remarks.” This
should not repeat the abstract, and it should not restate the
subject of the paper. The wording should be directed toward
a person who has some, if not thorough, familiarity with the
main body of the text and who knows what the paper is all
about. The authors should review the principal results of the
paper and should point out just where these emerged in the
body of the text. There should be a frank discussion of the
limitations, if any, of the results, and there should be a broad
discussion of possible implications of these results.
Often the concluding section gracefully ends with speculations on what research might be done in the future to build
upon the results of the present paper. Here the authors must
write in a collegial tone. There should be no remarks stating
what the authors themselves intend to do next. They must be
careful not to imply that the future work in the subject matter
of the paper is the exclusive domain of the authors, and there
should be no allusions to work in progress or to work whose
publication is uncertain. It is conceivable that readers stimulated to do work along the lines suggested by the paper will
contact the authors directly to avoid a duplication of effort,
but that will be their choice. The spirit expressed in the paper
itself should be that anyone should be free to follow up on the
suggestions made in the concluding section. A successful paper
is one that does incite such interest on the part of the readers
and one which is extensively cited in future papers written by
persons other than the authors themselves.
D. Appendixes
The Journal prefers that articles not include appendixes
unless there are strong reasons for their being included.
Details of mathematical developments or of experimental
procedures that are critical to the understanding of the
substance of a paper must not be relegated to an appendix.
(Authors must bear in mind that readers can easily skim over
difficult passages in their first reading of a paper.) Lengthy
proofs of theorems may be placed in appendixes, provided the theorems, as stated in the main body of the text, are manifestly plausible. Short appendixes are generally unnecessary and impede comprehension of the paper.
Appendixes may be used for lengthy tabulations of data, of
explicit formulas for special cases, and of numerical results.
Editors and reviewers, however, may question whether their
inclusion is necessary.
E. Selection of references
References are typically cited extensively in the introduction, and the selection of such references can play an important role in the potential usefulness of the paper to future readers and in the opinions that readers and reviewers form of the paper. No hard and fast rules can be set down as to how authors can best select references and how they should discuss them, but some suggestions can be found in an editorial6 published in the May 2000 issue. If a paper falls within the scope of the Journal, one would ordinarily expect to find several references to papers previously published in JASA. Demonstration of the relevance of the work is often accomplished via citations, with accompanying discussion, to recent articles in JASA and analogous journals. The implied claims to originality can be strengthened via citations, with accompanying discussion, to prior work related to the subject of the paper, sufficient to establish credibility that the authors are familiar with the literature and are not duplicating previously published work. Unsupported assertions that the authors are familiar with all applicable literature and that they have carried out an exhaustive literature survey are generally unconvincing to the critical reader.
Authors must not make large block citations of many references (e.g., four or more). There must be a stated reason for the citation of each reference, although the same reason can sometimes apply simultaneously to a small number of references. The total number of references should be kept as small as is consistent with the principal purposes of the paper (45 references is a suggested upper limit for a regular research article). Although nonspecialist readers may find a given paper to be informative in regard to the general state of a given field, the authors must not consciously write a research paper so that it will fulfill a dual function of being a review paper or a tutorial paper.
Less literate readers often form and propagate erroneous opinions concerning priority of ideas and discoveries based on the reading of recent papers, so authors must make a conscious attempt to cite original sources. Secondary sources can also be cited, if they are identified as such and especially if they are more accessible or if they provide more readable accounts. In such cases, reasons must be given as to why the secondary sources are being cited. References to individual textbooks for results that can be found in a large number of analogous textbooks should not be given, unless the cited textbook gives a uniquely clear or detailed discussion of the result. Authors should assume that any reader has access to some such textbook, and the authors should tacitly treat the result as well known and not requiring a reference citation.
Authors must not cite any reference that the authors have not explicitly seen, unless the paper has a statement to that effect, accompanied by a statement of how the authors became aware of the reference. Such citations should be limited to crediting priority, and there must be no implied recommendation that readers should read literature which the authors themselves have not read.
XI. SUGGESTIONS REGARDING STYLE
A. Quality of writing and word usage
The Journal publishes articles in the English language
only. There are very few differences of substance between
British English style (as codified in the Oxford English
Dictionary7) and US English style, but authors frequently
must make choices in this respect, such as between alternative spellings of words that end in either -or or -our, in either -ized or -ised, or in either -er or -re. Although now a de facto international journal, JASA, because of its historical origins, requires manuscripts to follow US English style conventions.
Articles published in JASA are expected to adhere to high
standards of scholarly writing. A formal writing style free of
slang is required. Good conversational skills do not necessarily
translate to good formal writing skills. Authors are expected
to make whatever use is necessary of standard authoritative
references in regard to English grammar and writing style in
preparing their manuscripts. Many good references exist—
among those frequently used by professional writers are
Webster’s Third New International Dictionary, Unabridged,8
Merriam-Webster’s Collegiate Dictionary, 11th Edition,9
Strunk and White's Elements of Style,10 and the Chicago
Manual of Style.11 (The Third New International is AIP
Publishing’s standard dictionary.) All authors are urged to
do their best to produce a high quality readable manuscript,
consistent with the best traditions of scholarly and erudite
writing. Occasional typographical errors and lapses of
grammar can be taken care of in the copy-editing phase of
the production process, and the instructions given here are intended to ensure that there is ample white space in the printed-out manuscript so that such copy-editing can be carried out. Receipt of a paper whose grammatical and style errors are so excessive that they cannot be easily fixed by copy-editing will generally result in the authors being notified that the submission is not acceptable. Receipt of such a notification should not be construed as a rejection of the manuscript; the authors should take steps, possibly with external help, to revise the manuscript so that it overcomes these deficiencies. (Authors needing help or advice on scientific writing in the English language are encouraged to contact colleagues, both within and outside their own institutions, to critique the writing in their manuscripts. Unfortunately, the staff of the Journal does not have the time to do this on a routine basis.)
There are some minor discrepancies in the stylistic rules
that are prescribed in various references—these generally arise
because of the differences in priorities that are set in different
publication categories. Newspapers, for example, put high
emphasis on the efficient use of limited space for conveying
the news and for catching the interest of their readers. For
scholarly journals, on the other hand, the overwhelming
priority is clarity. In the references cited above, this is the
basis for most of the stated rules. In following this tradition,
the Journal, for example, requires a rigorous adherence to the
serial comma rule (Strunk’s rule number 2): In a series of three
or more terms with a single conjunction, use a comma after
each term except the last. Thus a JASA manuscript would refer
to the “theory of Rayleigh, Helmholtz, and Kirchhoff” rather
than to the “theory of Rayleigh, Helmholtz and Kirchhoff.”
The priority of clarity requires that authors use only words that are likely to be understood by a large majority of
potential readers. Usable words are those whose definitions
may be found either in a standard unabridged English dictionary (such as the Webster’s Third New International mentioned above), in a standard scientific dictionary such as the
Academic Press Dictionary of Science and Technology,12 or
in a dictionary specifically devoted to acoustics such as the
Dictionary of Acoustics13 by C. L. Morfey. In some cases,
words and phrases that are not in any dictionary may be
in vogue among some workers in a given field, especially
among the authors and their colleagues. Authors must give
careful consideration to whether use of such terms in their
manuscript is necessary; and if the authors decide to use
them, precise definitions must be stated within the manuscript. Unilateral coinage of new terms by the authors is
discouraged. In some cases, words with different meanings
and with different spellings are pronounced exactly the same,
and authors must be careful to choose the right spelling.
Common errors are to interchange principal and principle
and to interchange role and roll.
B. Grammatical pitfalls
There are only a relatively small number of categories
of errors that authors frequently make in the preparation of
manuscripts. Authors should be aware of these common pitfalls and double-check that their manuscripts contain no errors in these categories. Some errors will be evident when
the manuscript is read aloud; others, depending on the background of the writers, may not be. Common categories are
(1) dangling participles, (2) lack of agreement in number
(plural versus singular) of verbs with their subjects, (3) omission of necessary articles (such as a, an, and the) that precede
nouns, (4) the use of incorrect case forms (subjective, objective,
possessive) for pronouns (e.g., who versus whom), and (5) use
of the incorrect form (present, past, past participle, and future)
in regard to tense for a verb. Individual authors may have their
own peculiar pitfalls, and an independent casual reading of
the manuscript by another person will generally pinpoint
such pitfalls. Given the recognition that such pitfalls exist, a diligent author should be able to go through the manuscript and find all instances where errors of the identified types occur.
C. Active voice and personal pronouns
Many authorities on good writing emphasize that authors should use the active rather than the passive voice.
Doing so in scholarly writing, especially when mathematical
expressions are present, is often infeasible, but the advice has
merit. In mathematical derivations, for example, some authors use the tutorial we to avoid using the passive voice, so
that one writes: “We substitute the expression on the right
side of Eq. (5) into Eq. (2) and obtain ...,” rather than: “The
right side of Eq. (5) is substituted into Eq. (2), with the result
being ... .” A preferable construction is to avoid the use of
the tutorial we and to use transitive verbs such as yields,
generates, produces, and leads to. Thus one would write
the example above as: “Substitution of Eq. (5) into Eq. (2)
yields ... .” Good writers frequently go over an early draft of a
manuscript, examine each sentence and phrase written using
the passive voice, and consider whether they can improve the
sentence by rewriting it.
In general, personal pronouns, including the “tutorial we,”
are preferably avoided in scholarly writing, so that the tone is
impersonal and dispassionate. In a few cases, it is appropriate
that an opinion be given or that a unique personal experience
be related, and personal pronouns are unavoidable. What
should be assiduously avoided are any egotistical statements
using personal pronouns. If a personal opinion needs to be
expressed, a preferred construction is to refer to the author in
the third person, such as: “the present writer believes that ... .”
D. Acronyms
Acronyms have the inconvenient feature that a reader unfamiliar with them has no way of knowing their meaning. Articles in scholarly journals should ideally
be intelligible to many generations of future readers, and
formerly common acronyms such as RCA (Radio Corporation of America, recently merged into the General Electric
Corporation) and REA (Rural Electrification Administration) may
have no meaning to such readers. Consequently, authors are
requested to use acronyms sparingly and generally only
when not using them would result in exceedingly awkward
prose. Acronyms, such as SONAR and LASER (currently
written in lower case, sonar and laser, as ordinary words),
that have become standard terms in the English language and
that can be readily found in abridged dictionaries, are exceptions. If the authors use acronyms not in this category, then
the meaning of the individual letters should be spelled out at
the time such an acronym is first introduced. An article containing, say, three or more acronyms in every paragraph will
be regarded as pretentious and deliberately opaque.
E. Computer programs
In some cases the archival reporting of research suggests
that authors give the names of specific computer programs
used in the research. If the computation or data processing
could just as well have been carried out with the aid of any
one of a variety of such programs, then the name should be
omitted. If the program has unique features that are used in the
current research, then the stating of the program name must be
accompanied by a brief explanation of the principal premises
and functions on which the relevant features are based. One
overriding consideration is that the Journal wishes to avoid
implied endorsements of any commercial product.
F. Code words
Large research projects and large experiments that involve several research groups are frequently referred to by
code words. Research articles in the Journal must be intelligible to a much broader group of readers, both present and
future, than those individuals involved in the projects with
which such a code word is associated. If possible, such code
words should either not be used or else be referred to only in a parenthetical sense. If attempting to do this leads to exceptionally awkward writing, then the authors must take special
care to explicitly explain the nature of the project early in the
paper. They must avoid any impression that the paper is specifically directed toward members of some in-group.
REFERENCES
1. AIP Publication Board (R. T. Beyer, chair), AIP Style Manual, 4th ed. (American Institute of Physics, 2 Huntington Quadrangle, Suite 1NO1, Melville, NY 11747, 1990). This is available online at <http://www.aip.org/pubservs/style/4thed/toc.html>.
2. M. Mellody and G. H. Wakefield, "The time-frequency characteristics of violin vibrato: Modal distribution analysis and synthesis," J. Acoust. Soc. Am. 107, 598–611 (2000).
3. See, for example, the paper: B. Møhl, M. Wahlberg, P. Madsen, L. A. Miller, and A. Surlykke, "Sperm whale clicks: Directionality and source level revisited," J. Acoust. Soc. Am. 107, 638–648 (2000).
4. American Institute of Physics, Physics and Astronomy Classification Scheme 2003. A paper copy is available from AIP Publishing LLC, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300; it is also available online at the site <http://www.aip.org/pacs/index.html>.
5. A. D. Pierce, "Current criteria for selection of articles for publication," J. Acoust. Soc. Am. 106, 1613–1616 (1999).
6. A. D. Pierce, "Literate writing and collegial citing," J. Acoust. Soc. Am. 107, 2303–2311 (2000).
7. The Oxford English Dictionary, 2nd ed., edited by J. Simpson and E. Weiner (Oxford University Press, 1989), 20 volumes. Also published as Oxford English Dictionary (Second Edition) on CD-ROM, version 2.0 (Oxford University Press, 1999); an online version is available by subscription at the Internet site <http://www.oed.com/public/welcome>.
8. Webster's Third New International Dictionary of the English Language, Unabridged, Philip Babcock Gove, Editor-in-Chief (Merriam-Webster Inc., Springfield, MA, 1993; principal copyright 1961). This is the eighth in a series of dictionaries that has its beginning in Noah Webster's American Dictionary of the English Language (1828).
9. Merriam-Webster's Collegiate Dictionary, 11th ed. (Merriam-Webster, Springfield, MA, 2003; principal copyright 1993). (A freshly updated version is issued annually.)
10. W. Strunk, Jr. and E. B. White, The Elements of Style, 4th ed., with foreword by Roger Angell (Allyn and Bacon, 1999).
11. The Chicago Manual of Style: The Essential Guide for Writers, Editors, and Publishers, 14th ed., with preface by John Grossman (University of Chicago Press, 1993).
12. Academic Press Dictionary of Science and Technology, edited by Christopher Morris (Academic Press, Inc., 1992).
13. C. L. Morfey, Dictionary of Acoustics (Academic Press, Inc., 2000).
ASSOCIATE EDITORS IDENTIFIED WITH PACS CLASSIFICATION ITEMS
The Classification listed here is based on the Appendix to Section 43, "Acoustics," of the current edition of the Physics and Astronomy Classification Scheme (PACS) of AIP Publishing LLC. The full and most current listing of PACS can be found at the internet site <http://www.aip.org/pubservs/pacs.html>. In the full PACS listing, all of the acoustics items are preceded by the primary classification number 43. The listing here omits the prefatory 43; a listing in the AIP Publishing document such as 43.10.Ce will appear here as 10.Ce.
The present version of the Classification scheme is intended as a guide to authors of manuscripts submitted to the Journal who are asked at the time of
submission to suggest an Associate Editor who might handle the processing of their manuscript. Authors should note that they can also have their manuscripts
processed from any of the special standpoints of (i) Applied Acoustics, (ii) Computational Acoustics, (iii) Mathematical Acoustics, or (iv) Education in
Acoustics, and that there are special Associate Editors who have the responsibility for processing manuscripts from each of these standpoints.
The initials that appear in brackets following most of the listings correspond to the names of persons on the Editorial Board (i.e., Associate Editors) who customarily edit material that falls within that classification. A listing of full names and institutional affiliations of members of the Editorial Board can be
found on the back cover of each issue of the Journal. A more detailed listing can be found at the internet site <http://asadl.org/jasa/for_authors_jasa>. The
most current version of the present document can also be found at that site.
[05] Acoustical Society of America
05.Bp Constitution and bylaws [EM]
05.Dr History [ADP]
05.Ft Honorary members [EM]
05.Gv Publications: ARLO, Echoes, ASA Web page, electronic archives and references [ADP]
05.Hw Meetings [EM]
05.Ky Members and membership lists, personal notes, fellows [EM]
05.Ma Administrative committee activities [EM]
05.Nb Technical committee activities; Technical Council [EM]
05.Pc Prizes, medals, and other awards [EM]
05.Re Regional chapters [EM]
05.Sf Obituaries
[10] General
10.Ce Conferences, lectures, and announcements (not of the Acoustical Society of America) [EM]
10.Df Other acoustical societies and their publications; online journals and other electronic publications [ADP]
10.Eg Biographical, historical, and personal notes (not of the Acoustical Society of America) [EM]
10.Gi Editorials, Forum [ADP], [NX]
10.Hj Books and book reviews [PLM]
10.Jk Bibliographies [EM], [ADP]
10.Km Patents [DLR], [SAF]
10.Ln Surveys and tutorial papers relating to acoustics research, tutorial papers on applied acoustics [ADP], [NX]
10.Mq Tutorial papers of historical and philosophical nature [ADP], [NX], [WA]
10.Nq News with relevance to acoustics; nonacoustical theories of interest to acoustics [EM], [ADP]
10.Pr Information technology, internet, nonacoustical devices of interest to acoustics [ADP], [NX]
10.Qs Notes relating to acoustics as a profession [ADP], [NX]
10.Sv Education in acoustics, tutorial papers of interest to acoustics educators [LLT], [WA], [BEA], [VWS], [PSW]
10.Vx Errata [ADP]
[15] Standards [SB], [PDS]
[20] General linear acoustics
20.Bi Mathematical theory of wave propagation [MD], [SFW], [ANN], [RM], [RKS], [KML], [CAS]
20.Dk Ray acoustics [JES], [SFW], [ANN], [JAC], [KML], [TFD], [TK]
20.El Reflection, refraction, diffraction of acoustic waves [JES], [OU], [SFW], [RM], [KML], [GH], [TFD], [TK]
20.Fn Scattering of acoustic waves [LLT], [JES], [OU], [SFW], [RM], [KML], [GH], [TK]
20.Gp Reflection, refraction, diffraction, interference, and scattering of elastic and poroelastic waves [OU], [RM], [DF], [RKS], [JAT], [DSB], [GH]
20.Hq Velocity and attenuation of acoustic waves [MD], [OU], [SFW], [TRH], [RAS], [NPC], [JAT], [GH]
20.Jr Velocity and attenuation of elastic and poroelastic waves [ANN], [NPC], [RKS], [GH]
20.Ks Standing waves, resonance, normal modes [LLT], [SFW], [RM], [JDM]
20.Mv Waveguides, wave propagation in tubes and ducts [OU], [LH], [RK], [JBL]
20.Px Transient radiation and scattering [LLT], [JES], [ANN], [MDV], [DDE]
20.Rz Steady-state radiation from sources, impedance, radiation patterns, boundary element methods [SFW], [RM], [FCS]
20.Tb Interaction of vibrating structures with surrounding medium [LLT], [RM], [FCS], [LH]
20.Wd Analogies [JDM]
20.Ye Measurement methods and instrumentation [SFW], [TRH], [JDM], [GH]
[25] Nonlinear acoustics
25.Ba Parameters of nonlinearity of the medium [MD], [OAS], [ROC]
25.Cb Macrosonic propagation, finite amplitude sound; shock waves [OU], [MDV], [PBB], [OAS], [ROC]
25.Dc Nonlinear acoustics of solids [MD], [ANN], [OAS]
25.Ed Effect of nonlinearity on velocity and attenuation [MD], [OAS], [ROC]
25.Fe Effect of nonlinearity on acoustic surface waves [MD], [MFH], [OAS]
25.Gf Standing waves; resonance [OAS], [MFH]
25.Hg Interaction of intense sound waves with noise [OAS], [PBB]
25.Jh Reflection, refraction, interference, scattering, and diffraction of intense sound waves [OU], [MDV], [PBB]
25.Lj Parametric arrays, interaction of sound with sound, virtual sources [TRH]
25.Nm Acoustic streaming [JDM], [OAS], [LH]
25.Qp Radiation pressure [ROC]
25.Rq Solitons, chaos [MFH]
25.Ts Nonlinear acoustical and dynamical systems [MFH], [ROC]
25.Uv Acoustic levitation [MFH]
25.Vt Intense sound sources [ROC], [TRH]
25.Yw Nonlinear acoustics of bubbly liquids [TGL], [SWY]
25.Zx Measurement methods and instrumentation for nonlinear acoustics [ROC]
[28] Aeroacoustics and atmospheric sound
28.Bj Mechanisms affecting sound propagation in air, sound speed in the air [DKW], [VEO], [KML]
28.Dm Infrasound and acoustic-gravity waves [DKW], [PBB]
28.En Interaction of sound with ground surfaces, ground cover and topography, acoustic impedance of outdoor surfaces [OU], [KVH], [VEO], [KML]
28.Fp Outdoor sound propagation through a stationary atmosphere, meteorological factors [DKW], [KML], [TK]
28.Gq Outdoor sound propagation and scattering in a turbulent atmosphere, and in non-uniform flow fields [VEO], [PBB], [KML]
28.Hr Outdoor sound sources [JWP], [PBB], [TK]
28.Js Numerical models for outdoor propagation [VEO], [NAG], [DKW]
28.Kt Aerothermoacoustics and combustion acoustics [AH], [JWP], [LH]
28.Lv Statistical characteristics of sound fields and propagation parameters [DKW], [VEO]
28.Mw Shock and blast waves, sonic boom [VWS], [ROC], [PBB]
28.Py Interaction of fluid motion and sound; Doppler effect and sound in flow ducts [JWP], [AH], [LH]
28.Ra Generation of sound by fluid flow, aerodynamic sound, and turbulence [JWP], [AH], [PBB], [TK], [LH]
28.Tc Sound-in-air measurements, methods and instrumentation for location, navigation, altimetry, and sound ranging [JWP], [KVH], [DKW]
28.Vd Measurement methods and instrumentation to determine or evaluate atmospheric parameters, winds, turbulence, temperatures, and pollutants in air [JWP], [DKW]
28.We Measurement methods and instrumentation for remote sensing and for inverse problems [DKW]
[30] Underwater sound
30.Bp Normal mode propagation of sound in water [BTH], [AMT], [MS], [NPC], [TFD]
30.Cq Ray propagation of sound in water [JES], [BTH], [JAC], [TFD]
30.Dr Hybrid and asymptotic propagation theories, related experiments [BTH], [JAC], [TFD]
30.Es Velocity, attenuation, refraction, and diffraction in water, Doppler effect [BTH], [DRD], [JAC], [TFD]
30.Ft Volume scattering [BTH], [APL]
30.Gv Backscattering, echoes, and reverberation in water due to combinations of boundaries [BTH], [APL]
30.Hw Rough interface scattering [BTH], [JES], [APL]
30.Jx Radiation from objects vibrating under water, acoustic and mechanical impedance [BTH], [DSB], [DF], [EGW], [DDE]
30.Ky Structures and materials for absorbing sound in water; propagation in fluid-filled permeable material [BTH], [NPC], [FCS], [TRH]
30.Lz Underwater applications of nonlinear acoustics; explosions [BTH], [NAG], [OAS], [SWY]
30.Ma Acoustics of sediments; ice covers, viscoelastic media; seismic underwater acoustics [BTH], [NAG], [MS], [DSB]
30.Nb Noise in water; generation mechanisms and characteristics of the field [BTH], [KGS], [MS], [JAC], [SWY]
30.Pc Ocean parameter estimation by acoustical methods; remote sensing; imaging, inversion, acoustic tomography [BTH], [KGS], [AMT], [MS], [JAC], [ZHM], [HCS], [SED], [TFD], [APL]
30.Qd Global scale acoustics; ocean basin thermometry, transbasin acoustics [BTH], [JAC]
30.Re Signal coherence or fluctuation due to sound propagation/scattering in the ocean [BTH], [KGS], [HCS], [TFD]
30.Sf Acoustical detection of marine life; passive and active [BTH], [CF], [DKM], [AMT], [MS], [MCH], [APL]
30.Tg Navigational instruments using underwater sound [BTH], [HCS], [JAC]
30.Vh Active sonar systems [BTH], [JES], [TRH], [ZHM], [DDE]
30.Wi Passive sonar systems and algorithms, matched field processing in underwater acoustics [BTH], [KGS], [HCS], [AMT], [MS], [SED], [ZHM]
30.Xm Underwater measurement and calibration instrumentation and procedures [BTH], [JAC], [TRH], [DDE]
30.Yj Transducers and transducer arrays for underwater sound; transducer calibration [BTH], [TRH], [DDE]
30.Zk Experimental modeling [BTH], [JES], [MS], [TFD]
[35] Ultrasonics, quantum acoustics, and physical effects of sound
35.Ae Ultrasonic velocity, dispersion, scattering, diffraction, and attenuation in gases [VMK], [MRH], [AGP], [GH], [TK]
35.Bf Ultrasonic velocity, dispersion, scattering, diffraction, and attenuation in liquids, liquid crystals, suspensions, and emulsions [VMK], [MRH], [AGP], [DSB], [NAG], [JDM], [GH]
35.Cg Ultrasonic velocity, dispersion, scattering, diffraction, and attenuation in solids; elastic constants [VMK], [MRH], [AGP], [MD], [MFH], [JDM], [JAT], [RKS], [GH], [TK]
35.Dh Pretersonics (sound of frequency above 10 GHz); Brillouin scattering [VMK], [MRH], [AGP], [MFH], [RLW]
35.Ei Acoustic cavitation, vibration of gas bubbles in liquids [VMK], [MRH], [AGP], [TGL], [NAG], [DLM]
35.Fj Ultrasonic relaxation processes in gases, liquids, and solids [VMK], [MRH], [AGP], [NAG]
35.Gk Phonons in crystal lattices, quantum acoustics [VMK], [MRH], [AGP], [DF], [LPF], [JDM]
35.Hl Sonoluminescence [VMK], [MRH], [AGP], [NAG], [TGL]
35.Kp Plasma acoustics [VMK], [MRH], [AGP], [MFH], [JDM]
35.Lq Low-temperature acoustics, sound in liquid helium [VMK], [MRH], [AGP], [JDM]
35.Mr Acoustics of viscoelastic materials [VMK], [MRH], [AGP], [LLT], [MD], [OU], [FCS], [KVH], [GH]
35.Ns Acoustical properties of thin films [VMK], [MRH], [AGP], [ADP], [TK]
35.Pt Surface waves in solids and liquids [VMK], [MRH], [AGP], [MD], [ANN], [GH], [TK]
35.Rw Magnetoacoustic effect; oscillations and resonance [VMK], [MRH], [AGP], [DAB], [DF], [LPF]
35.Sx Acoustooptical effects, optoacoustics, acoustical visualization, acoustical microscopy, and acoustical holography [VMK], [MRH], [AGP], [JDM], [TK]
35.Ty Other physical effects of sound [VMK], [MRH], [AGP], [MFH], [NAG]
35.Ud Thermoacoustics, high temperature acoustics, photoacoustic effect [VMK], [MRH], [AGP], [JDM], [TB]
35.Vz Chemical effects of ultrasound [VMK], [MRH], [AGP], [TGL]
35.Wa Biological effects of ultrasound, ultrasonic tomography [VMK], [MRH], [AGP], [DLM], [MCH], [SWY]
35.Xd Nuclear acoustical resonance, acoustical magnetic resonance [VMK], [MRH], [AGP], [JDM]
35.Yb Ultrasonic instrumentation and measurement techniques [VMK], [MRH], [AGP], [ROC], [GH], [KAW], [TK]
35.Zc Use of ultrasonics in nondestructive testing, industrial processes, and industrial products [VMK], [MRH], [AGP], [MD], [JAT], [ANN], [BEA], [GH], [TK]
[38] Transduction; acoustical devices for the generation and reproduction of sound
38.Ar Transducing principles, materials, and structures: general [MS], [DAB], [TRH], [DDE]
38.Bs Electrostatic transducers [MS], [KG], [DAB], [TRH], [MRB], [DDE]
38.Ct Magnetostrictive transducers [DAB], [TRH], [DDE]
38.Dv Electromagnetic and electrodynamic transducers [MS], [DAB], [TRH], [DDE]
38.Ew Feedback transducers [MS]
38.Fx Piezoelectric and ferroelectric transducers [DAB], [KG], [TRH], [MRB], [DDE]
38.Gy Semiconductor transducers [MS], [MRB]
38.Hz Transducer arrays, acoustic interaction effects in arrays [DAB], [TRH], [MS], [BEA], [MRB], [DDE]
38.Ja Loudspeakers and horns, practical sound sources [MS], [MRB], [DDE]
38.Kb Microphones and their calibration [MS], [MRB]
38.Lc Amplifiers, attenuators, and audio controls [MS]
38.Md Sound recording and reproducing systems, general concepts [MAH], [MRB]
38.Ne Mechanical, optical, and photographic recording and reproducing systems [MS]
38.Pf Hydroacoustic and hydraulic transducers [DAB]
38.Qg Magnetic and electrostatic recording and reproducing systems [MS]
38.Rh Surface acoustic wave transducers [MS], [TK]
38.Si Telephones, earphones, sound power telephones, and intercommunication systems [MS]
38.Tj Public address systems, sound-reinforcement systems [ADP]
38.Vk Stereophonic reproduction [ADP], [MRB]
38.Wl Broadcasting (radio and television) [ADP]
38.Yn Impulse transducers [MS]
38.Zp Acoustooptic and photoacoustic transducers [DAB], [MS]
[40] Structural acoustics and vibration
40.At Experimental and theoretical studies of vibrating systems [AJH], [NJK], [EAM], [KML], [EGW], [DDE], [DF], [DAB], [FCS], [LC]
40.Cw Vibrations of strings, rods, and beams [AJH], [NJK], [EAM], [DDE], [EGW], [DAB], [LPF], [JAT], [LC], [BEA]
40.Dx Vibrations of membranes and plates [AJH], [NJK], [EAM], [LLT], [MD], [EGW], [DAB], [DF], [LPF], [LC], [JBL], [DDE]
40.Ey Vibrations of shells [AJH], [NJK], [EAM], [DAB], [DF], [LPF], [EGW], [LC], [DDE]
40.Fz Acoustic scattering by elastic structures [AJH], [NJK], [EAM], [LLT], [KML], [ANN], [DSB], [TK], [EGW], [DDE]
40.Ga Nonlinear vibration [AJH], [NJK], [EAM], [JAT]
40.Hb Random vibration [AJH], [NJK], [EAM]
40.Jc Shock and shock reduction and absorption [AJH], [NJK], [EAM], [OU]
40.Kd Impact and impact reduction, mechanical transients [AJH], [NJK], [EAM], [FCS]
40.Le Techniques for nondestructive evaluation and monitoring, acoustic emission [AJH], [NJK], [EAM], [JAT], [BEA], [TK]
40.Ng Effects of vibration and shock on biological systems, including man [AJH], [NJK], [EAM], [MCH]
40.Ph Seismology and geophysical prospecting; seismographs [AJH], [NJK], [EAM], [MFH], [RKS], [ANN]
40.Qi Effect of sound on structures, fatigue; spatial statistics of structural vibration [AJH], [NJK], [EAM], [JAT], [DDE]
40.Rj Radiation from vibrating structures into fluid media [AJH], [NJK], [EAM], [LLT], [KML], [FCS], [EGW], [LC], [LH], [DDE]
40.Sk Inverse problems in structural acoustics and vibration [AJH], [NJK], [EAM], [KML], [EGW], [LC], [DDE]
40.Tm Vibration isolators, attenuators, and dampers [AJH], [NJK], [EAM], [LC]
40.Vn Active vibration control [AJH], [NJK], [EAM], [BSC], [LC]
40.Yq Instrumentation and techniques for tests and measurement relating to shock and vibration, including vibration pickups, indicators, and generators, mechanical impedance [AJH], [NJK], [EAM], [LC]
[50] Noise: its effects and control
50.Ba Noisiness: rating methods and criteria [GB], [SF], [BSF]
50.Cb Noise spectra, determination of sound power [GB], [KVH]
50.Ed Noise generation [KVH], [RK]
50.Fe Noise masking systems [BSF]
50.Gf Noise control at source: redesign, application of absorptive materials and reactive elements, mufflers, noise silencers, noise barriers, and attenuators, etc. [OU], [SFW], [RK], [FCS], [AH], [LC], [JBL], [LH]
50.Hg Noise control at the ear [FCS], [BSF]
50.Jh Noise in buildings and general machinery noise [RK], [KVH], [KML]
50.Ki Active noise control [BSC], [LC]
50.Lj Transportation noise sources: air, road, rail, and marine vehicles [GB], [SFW], [SF], [JWP], [KVH], [KML]
50.Nm Aerodynamic and jet noise [SF], [JWP], [AH], [LH]
50.Pn Impulse noise and noise due to impact [GB], [KVH], [SF]
50.Qp Effects of noise on man and society [GB], [BSF], [SF]
50.Rq Environmental noise, measurement, analysis, statistical characteristics [GB], [BSF], [SF]
50.Sr Community noise, noise zoning, by-laws, and legislation [GB], [BSF], [SF]
50.Vt Topographical and meteorological factors in noise propagation [PBB], [VEO]
50.Yw Instrumentation and techniques for noise measurement and analysis [GB], [KVH], [RK]
[55] Architectural acoustics
55.Br Room acoustics: theory and experiment; reverberation, normal modes, diffusion, transient and steady-state response [NX], [MV], [JES], [FCS]
55.Cs Stationary response of rooms to noise; spatial statistics of room response; random testing [NX], [MV], [JES]
55.Dt Sound absorption in enclosures: theory and measurement; use of absorption in offices, commercial and domestic spaces [NX], [MV], [JES], [FCS]
55.Ev Sound absorption properties of materials: theory and measurement of sound absorption coefficients; acoustic impedance and admittance [NX], [MV], [OU], [FCS]
168th Meeting: Acoustical Society of America
2339
55.Fw
55.Gx
55.Hy
55.Jz
55.Ka
55.Lb
55.Mc
55.Nd
55.Pe
55.Rg
55.Ti
55.Vj
55.Wk
[58]
58.Bh
58.Dj
58.Fm
58.Gn
58.Hp
58.Jq
58.Kr
58.Ls
58.Mt
58.Pw
58.Ry
58.Ta
58.Vb
58.Wc
Auditorium and enclosure design [NX],
[MV], [JES], [NX]
Studies of existing auditoria and enclosures
[NX], [MV], [JES]
Subjective effects in room acoustics, speech
in rooms [NX], [MV], [JES], [MAH]
Sound-reinforcement systems for rooms and
enclosures [NX], [MV], [MAH]
Computer simulation of acoustics in
enclosures, modeling [NX], [LLT], [MV],
[JES], [SFW], [NAG]
Electrical simulation of reverberation [NX],
[MV], [MAH]
Room acoustics measuring instruments,
computer measurement of room properties
[NX], [MV], [JES]
Reverberation room design: theory, applications
to measurements of sound absorption,
transmission loss, sound power [NX], [MV]
Anechoic chamber design, wedges [NX],
[ADP]
Sound transmission through walls and
through ducts: theory and measurement [NX],
[LLT], [FCS], [LC], [BEA]
Sound-isolating structures, values of
transmission coefficients [NX], [LLT], [LC]
Vibration-isolating supports in building
acoustics [NX], [ADP]
Damping of panels [NX], [LLT]
Acoustical measurements and
instrumentation
Acoustic impedance measurement [DAB],
[FCS]
Sound velocity [DKW], [TB], [GH], [TK]
Sound level meters, level recorders, sound
pressure, particle velocity, and sound
intensity measurements, meters, and
controllers [MS], [DKW], [TB], [KAW]
Acoustic impulse analyzers and
measurements [ADP]
Tuning forks, frequency standards; frequency
measuring and recording instruments; time
standards and chronographs [MS]
Wave and tone synthesizers [MAH]
Spectrum and frequency analyzers and
filters; acoustical and electrical oscillographs;
photoacoustic spectrometers; acoustical delay
lines and resonators [ADP]
Acoustical lenses and microscopes [ADP]
Phase meters [ADP]
Rayleigh disks [ADP]
Distortion: frequency, nonlinear, phase, and
transient; measurement of distortion [MS]
Computers and computer programs in
acoustics [FCS], [DSB], [VWS]
Calibration of acoustical devices and systems
[DAB]
Electrical and mechanical oscillators [ADP]
60.Hj
60.Jn
60.Kx
60.Lq
60.Mn
60.Np
60.Pt
60.Qv
60.Rw
60.Sx
60.Tj
60.Uv
60.Vx
60.Wy
[64]
64.Bt
64.Dw
64.Fy
64.Gz
64.Ha
64.Jb
64.Kc
64.Ld
64.Me
64.Nf
64.Pg
64.Qh
[60]
60.Ac
60.Bf
60.Cg
60.Dh
60.Ek
60.Fg
60.Gk
2340
Acoustic signal processing
Theory of acoustic signal processing [KGS],
[MAH]
Acoustic signal detection and classification,
applications to control systems [JES], [MRB],
[PJL], [ZHM], [MAH], [JAC]
Statistical properties of signals and noise
[KGS], [MAH], [TFD]
Signal processing for communications:
telephony and telemetry, sound pickup and
reproduction, multimedia [MAH], [HCS],
[MRB]
Acoustic signal coding, morphology, and
transformation [MAH]
Acoustic array systems and processing,
beam-forming [JES], [ZHM], [HCS], [AMT],
[MRB], [BEA], [TFD]
Space-time signal processing other than
matched field processing [JES], [ZHM],
[JAC], [MRB]
64.Ri
64.Sj
64.Tk
64.Vm
64.Wn
64.Yp
Time-frequency signal processing, wavelets
[KGS], [ZHM], [CAS], [PJL]
Source localization and parameter estimation
[JES], [KGS], [MAH], [ZHM], [MRB],
[SED]
Matched field processing [AIT], [AMT], [SED]
Acoustic imaging, displays, pattern
recognition, feature extraction [JES], [KGS],
[BEA], [MRB]
Adaptive processing [DKW], [MRB]
Acoustic signal processing techniques for
neural nets and learning systems [MAH],
[AMT]
Signal processing techniques for acoustic
inverse problems [ZHM], [MRB], [SED]
Signal processing instrumentation, integrated
systems, smart transducers, devices and
architectures, displays and interfaces for
acoustic systems [MAH], [MRB]
Remote sensing methods, acoustic tomography
[DKW], [JAC], [ZHM], [AMT]
Acoustic holography [JDM], [OAS], [EGW],
[MRB]
Wave front reconstruction, acoustic timereversal, and phase conjugation [OAS],
[HCS], [EGW], [BEA], [MRB]
Model-based signal processing [ZHM],
[MRB], [PJL]
Acoustic sensing and acquisition [MS],
[DKW]
Non-stationary signal analysis, non-linear
systems, and higher order statistics [PJL]
Physiological acoustics
Models and theories of the auditory system
[BLM], [ICB], [FCS], [CAS], [CA], [ELP]
Anatomy of the cochlea and auditory nerve
[BLM], [AMS], [ANP], [SFW], [RRF],
[CAS], [CA]
Anatomy of the auditory central nervous
system [BLM], [AMS], [ANP], [RRF],
[CAS], [CA]
Biochemistry and pharmacology of the
auditory system [BLM], [CAS], [CA]
Acoustical properties of the outer ear; middleear mechanics and reflex [BLM], [FCS],
[CAS], [CA], [ELP]
Otoacoustic emissions [BLM], [MAH],
[CAS], [CA], [ELP]
Cochlear mechanics [BLM], [KG], [CAS],
[CA], [ELP]
Physiology of hair cells [BLM], [KG], [CAS],
[CA], [ELP]
Effects of electrical stimulation, cochlear
implant [BLM], [ICB], [CAS], [CA], [ELP]
Cochlear electrophysiology [BLM], [ICB],
[KG], [CAS], [CA], [ELP]
Electrophysiology of the auditory nerve
[BLM], [AMS], [ICB], [CAS], [CA], [ELP]
Electrophysiology of the auditory central
nervous system [BLM], [AMS], [ICB],
[CAS], [ELP]
Evoked responses to sounds [BLM], [ICB],
[CAS], [CA], [ELP]
Neural responses to speech [BLM], [ICB],
[CAS], [ELP]
Physiology of sound generation and detection
by animals [BLM], [AMS], [MCH], [CAS]
Physiology of the somatosensory system
[BLM], [MCH]
Effects of noise and trauma on the auditory
system [BLM], [ICB], [CAS], [ELP]
Instruments and methods [BLM], [KG],
[MAH], [CAS]
66.Dc
66.Ed
66.Fe
66.Gf
66.Hg
66.Jh
66.Ki
66.Lj
66.Mk
66.Nm
66.Pn
66.Qp
66.Rq
66.Sr
66.Ts
66.Vt
66.Wv
66.Yw
[70]
70.Aj
70.Bk
70.Dn
70.Ep
70.Fq
70.Gr
70.Jt
70.Kv
70.Mn
[71]
71.An
71.Bp
71.Es
71.Ft
71.Gv
71.Hw
71.Ky
[66]
66.Ba
66.Cb
Psychological acoustics
Models and theories of auditory processes
[EB], [CAS], [ELP], [JFC]
Loudness, absolute threshold [MAS], [ELP]
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
71.Lz
71.Qr
Masking [VMR], [EAS], [FJG], [LRB], [EB],
[ELP], [JFC]
Auditory fatigue, temporary threshold shift
[EAS], [MAS], [ELP], [EB]
Discrimination: intensity and frequency
[VMR], [FJG], [EB]
Detection and discrimination of sound by
animals [ADP]
Pitch [ADP]
Timbre, timbre in musical acoustics [DD]
Subjective tones [JFC]
Perceptual effects of sound [VMR], [VB],
[DB], [EB], [JFC]
Temporal and sequential aspects of hearing;
auditory grouping in relation to music [EAS],
[FJG], [DB], [EB], [DD]
Phase effects [EB], [JFC]
Binaural hearing [VB], [LRB], [EB], [ELP],
[NAG], [JFC]
Localization of sound sources [VB], [FJG],
[LRB], [EB], [ELP], [JFC]
Dichotic listening [FJG], [LRB], [EB], [DD],
[ELP], [JFC]
Deafness, audiometry, aging effects [DS],
[FJG], [ICB], [MAS], [ELP], [JFC]
Auditory prostheses, hearing aids [DB], [VB],
[FJG], [ICB], [MAS], [JFC], [EB], [ELP]
Hearing protection [FCS]
Vibration and tactile senses [MCH]
Instruments and methods related to hearing
and its measurement [ADP]
Speech production
Anatomy and physiology of the vocal tract,
speech aerodynamics, auditory kinetics [ZZ],
[CYE], [CHS], [SSN], [LK]
Models and theories of speech production
[ZZ], [CYE], [CHS]
Disordered speech [ZZ], [CYE], [LK][CHS],
[DAB]
Development of speech production [CYE],
[DAB], [CHS], [ZZ], [LK]
Acoustical correlates of phonetic segments
and suprasegmental properties: stress,
timing, and intonation [CYE], [SSN],
[DAB], [CGC],
Larynx anatomy and function; voice
production characteristics [CYE], [CHS],
[LK], [ZZ]
Instrumentation and methodology for
speech production research [DAB], [CHS],
[LK], [ZZ]
Cross-linguistics speech production and
acoustics [DAB], [LK]
Relations between speech production and
perception [CYE], [DAB], [CHS], [CGC],
[ZZ]
Speech perception
Models and theories of speech perception
[TCB], [MSS], [ICB], [MAH], [CGC]
Perception of voice and talker characteristics
[TCB], [MSS], [CGC], [JHM], [MSV],
[MAH]
Vowel and consonant perception; perception
of words, sentences, and fluent speech [TCB],
[MSS], [DB], [CGC], [MAH]
Development of speech perception [TCB],
[MSS], [CA], [MAH], [DB]
Measures of speech perception (intelligibility
and quality) [TCB], [MSS], [VB], [ICB],
[CGC], [MAH], [MAS]
Cross-language perception of speech [TCB],
[MSS], [MAH], [CGC]
Speech perception by the hearing impaired
[TCB], [MSS], [DB], [VB], [FJG], [ICB], [EB]
Speech perception by the aging [TCB],
[MSS], [DB], [MAH]
Neurophysiology of speech perception
[TCB], [MSS], [ICB], [MAH]
168th Meeting: Acoustical Society of America
2340
71.Rt
71.Sy
[72]
72.Ar
72.Bs
72.Ct
72.Dv
72.Fx
72.Gy
72.Ja
72.Kb
72.Lc
72.Ne
72.Pf
72.Qr
72.Uv
2341
Sensory mechanisms in speech perception
[TCB], [MSS], [ICB], [MAH], [DB]
Spoken language processing by humans
[TCB], [MSS], [DB], [MSV], [MAH],
[CGC]
Speech processing and communication
systems
Speech analysis and analysis techniques;
parametric representation of speech [CYE],
[SSN]
Neural networks for speech recognition
[CYE], [SSN]
Acoustical methods for determining vocal
tract shapes [CYE], [SSN], [ZZ]
Speech-noise interaction [CYE], [SSN]
Talker identification and adaptation
algorithms [CYE], [SSN]
Narrow, medium, and wideband speech
coding [CYE], [SSN]
Speech synthesis and synthesis techniques
[CYE], [SSN], [SAF]
Speech communication systems and dialog
systems [CYE]
Time and frequency alignment procedures for
speech [CYE], [SSN]
Automatic speech recognition systems
[CYE], [SSN]
Automatic talker recognition systems [CYE],
[SSN]
Auditory synthesis and recognition [CYE],
[SSN]
Forensic acoustics [CYE]
[75]
75.Bc
75.Cd
75.De
75.Ef
75.Fg
75.Gh
75.Hi
75.Kk
75.Lm
75.Mn
75.Np
75.Pq
75.Qr
75.Rs
75.St
75.Tv
75.Wx
75.Xz
75.Yy
75.Zz
Music and musical instruments
Scales, intonation, vibrato, composition
[DD], [MAH]
Music perception and cognition [DD], [MAH],
[DB]
Bowed stringed instruments [TRM], [JW]
Woodwinds [TRM], [JW], [AH]
Brass instruments and other lip vibrated
instruments [TRM], [JW], [ZZ]
Plucked stringed instruments [TRM], [JW]
Drums [TRM]
Bells, gongs, cymbals, mallet percussion and
similar instruments [TRM]
Free reed instruments [TRM], [JW], [AH], [ZZ]
Pianos and other struck stringed instruments
[TRM]
Pipe organs [TRM], [JW]
Reed woodwind instruments [AH], [TRM],
[JW], [ZZ]
Flutes and similar instruments [AH], [TRM],
[JW]
Singing [DD], [TRM], [JW]
Musical performance, training, and analysis
[DD], [DB]
Electroacoustic and electronic instruments
[DD]
Electronic and computer music [MAH]
Automatic music recognition, classification
and information retrieval [DD], [SSN]
Instrumentation measurement methods for
musical acoustics [TRM], [JW]
Analysis, synthesis, and processing of
musical sounds [DD], [MAH]
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
[80]
80.Cs
80.Ev
80.Gx
80.Jz
80.Ka
80.Lb
80.Nd
80.Pe
80.Qf
80.Sh
80.Vj
Bioacoustics
Acoustical characteristics of biological media:
molecular species, cellular level tissues
[MLD], [RRF], [DLM], [TK], [SWY], [GH],
[KAW], [TK]
Acoustical measurement methods in
biological systems and media [CCC], [DLM],
[MLD], [RRF], [SWY], [GH], [KAW]
Mechanisms of action of acoustic energy
on biological systems: physical processes,
sites of action [MLD], [ANP], [RRF], [GH],
[SWY], [KAW]
Use of acoustic energy (with or without other
forms) in studies of structure and function
of biological systems [MLD], [TJR], [ANP],
[RRF], [DLM], [GH], [SWY], [KAW]
Sound production by animals: mechanisms,
characteristics, populations, biosonar [MLD],
[WWA], [CT], [AMS], [ANP], [DKM], [JJF],
[AMT], [ZZ]
Sound reception by animals: anatomy,
physiology, auditory capacities, processing
[MLD], [AMS], [ANP], [DKM], [JJF]
Effects of noise on animals and associated
behavior, protective mechanisms [MLD],
[AMS], [ANP], [DKM], [JJF], [AMT]
Agroacoustics [RRF], [WA], [MCH]
Medical diagnosis with acoustics [MDV],
[DLM], [GH], [SWY], [KAW]
Medical use of ultrasonics for tissue
modification (permanent and temporary)
[DLM], [ROC], [MDV], [GH], [SWY], [KAW]
Acoustical medical instrumentation and
measurement techniques [DLM], [MCH],
[MDV], [GH], [SWY], [KAW]
168th Meeting: Acoustical Society of America
2341
ETHICAL PRINCIPLES OF THE ACOUSTICAL SOCIETY OF AMERICA
FOR RESEARCH INVOLVING HUMAN AND NON-HUMAN
ANIMALS IN RESEARCH AND PUBLISHING AND PRESENTATIONS
The Acoustical Society of America (ASA) has endorsed the following ethical principles associated with the use of human and non-human
animals in research, and for publishing and presentations. The principles endorsed by the Society follow the form of those adopted by the American
Psychological Association (APA), along with excerpts borrowed from the Council for International Organizations of Medical Sciences (CIOMS). The
ASA acknowledges the difficulty in making ethical judgments, but the ASA wishes to set minimum socially accepted ethical standards for publishing
in its journals and presenting at its meetings. These Ethical Principles are based on the principle that the individual author or presenter bears the
responsibility for the ethical conduct of their research and is publication or presentation.
Authors of manuscripts submitted for publication in a journal of the Acoustical Society of America or presenting a paper at a meeting of the
Society are obligated to follow the ethical principles of the Society. Failure to accept the ethical principles of the ASA shall result in the immediate
rejection of manuscripts and/or proposals for publication or presentation. False indications of having followed the Ethical Principles of the ASA
may be brought to the Ethics and Grievances Committee of the ASA.
APPROVAL BY APPROPRIATE GOVERNING
AUTHORITY
The ASA requires all authors to abide by the principles of ethical
research as a prerequisite for participation in Society-wide activities (e.g.,
publication of papers, presentations at meetings, etc.). Furthermore, the Society endorses the view that all research involving human and non-human
vertebrate animals requires approval by the appropriate governing authority
(e.g., institutional review board [IRB], or institutional animal care and use
committee [IACUC], Health Insurance Portability and Accountability Act
[HIPAA], or by other governing authorities used in many countries) and
adopts the requirement that all research must be conducted in accordance
with an approved research protocol as a precondition for participation in
ASA programs. If no such governing authority exists, then the intent of the
ASA Ethical Principles described in this document must be met. All research
involving the use of human or non-human animals must have met the ASA
Ethical Principles prior to the materials being submitted to the ASA for
publication or presentation.
USE OF HUMAN SUBJECTS IN RESEARCH-Applicable
when human subjects are used in the research
Research involving the use of human subjects should have been approved by an existing appropriate governing authority (e.g., an institutional
review board [IRB]) whose policies are consistent with the Ethical Principles
of the ASA or the research should have met the following criteria:
Informed Consent
When obtaining informed consent from prospective participants in a
research protocol that has been approved by the appropriate and responsiblegoverning body, authors must have clearly and simply specified to the participants beforehand:
1. The purpose of the research, the expected duration of the study, and
all procedures that were to be used.
2. The right of participants to decline to participate and to withdraw
from the research in question after participation began.
3. The foreseeable consequences of declining or withdrawing from a
study.
4. Anticipated factors that may have influenced a prospective participant’s willingness to participate in a research project, such as potential risks,
discomfort, or adverse effects.
5. All prospective research benefits.
6. The limits of confidentially.
7. Incentives for participation.
8. Whom to contact for questions about the research and the rights
of research participants. The office/person must have willingly provided an
atmosphere in which prospective participants were able to ask questions and
receive answers.
Authors conducting intervention research involving the use of experimental treatments must have clarified, for each prospective participant, the
following issues at the outset of the research:
1. The experimental nature of the treatment;
2. The services that were or were not to be available to the control
group(s) if appropriate;
2342
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
3. The means by which assignment to treatment and control groups
were made;
4. Available treatment alternatives if an individual did not wish to
participate in the research or wished to withdraw once a study had begun;
and
5. Compensation for expenses incurred as a result of participating in a
study including, if appropriate, whether reimbursement from the participant
or a third-party payer was sought.
Informed Consent for Recording Voices and Images in
Research
Authors must have obtained informed consent from research participants prior to recording their voices or images for data collection unless:
1. The research consisted solely of naturalistic observations in public
places, and it was not anticipated that the recording would be used in a
manner that could have caused personal identification or harm, or
2. The research design included deception. If deceptive tactics
were a necessary component of the research design, consent for the use of
recordings was obtained during the debriefing session.
Client/Patient, Student, and Subordinate
Research Participants
When authors conduct research with clients/patients, students, or subordinates as participants, they must have taken steps to protect the prospective
participants from adverse consequences of declining or withdrawing from
participation.
Dispensing With Informed Consent for
Research
Authors may have dispensed with the requirement to obtain informed
consent when:
1. It was reasonable to assume that the research protocol in question did
not create distress or harm to the participant and involves:
a. The study of normal educational practices, curricula, or classroom
management methods that were conducted in educational settings
b. Anonymous questionnaires, naturalistic observations, or archival
research for which disclosure of responses would not place participants at
risk of criminal or civil liability or damage their financial standing, employability, or reputation, and confidentiality
c. The study of factors related to job or organization effectiveness
conducted in organizational settings for which there was no risk to participants’ employability, and confidentiality.
2. Dispensation is permitted by law.
3. The research involved the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these
sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through
identifiers linked to the subjects.
Offering Inducements for Research
Participation
(a) Authors must not have made excessive or inappropriate financial
or other inducements for research participation when such inducements are
likely to coerce participation.
168th Meeting: Acoustical Society of America
2342
(b) When offering professional services as an inducement for research
participation, authors must have clarified the nature of the services, as well as
the risks, obligations, and limitations.
Deception in Research
(a) Authors must not have conducted a study involving deception unless they had determined that the use of deceptive techniques was justified by
the study’s significant prospective scientific, educational, or applied value and
that effective non-deceptive alternative procedures were not feasible.
(b) Authors must not have deceived prospective participants about
research that is reasonably expected to cause physical pain or severe emotional distress.
(c) Authors must have explained any deception that was an integral
feature of the design and conduct of an experiment to participants as early as
was feasible, preferably at the conclusion of their participation, but no later
than at the conclusion of the data collection period, and participants were
freely permitted to withdraw their data.
Debrieing
(a) Authors must have provided a prompt opportunity for participants
to obtain appropriate information about the nature, results, and conclusions
of the research project for which they were a part, and they must have taken
reasonable steps to correct any misconceptions that participants may have had
of which the experimenters were aware.
(b) If scientific or humane values justified delaying or withholding
relevant information, authors must have taken reasonable measures to
reduce the risk of harm.
(c) If authors were aware that research procedures had harmed a participant, they must have taken reasonable steps to have minimized the harm.
HUMANE CARE AND USE OF NON-HUMAN
VERTEBRATE ANIMALS IN RESEARCH-Applicable when
non-human vertebrate animals are used in the
research
The advancement of science and the development of improved means
to protect the health and well being both of human and non-human vertebrate animals often require the use of intact individuals representing a wide
variety of species in experiments designed to address reasonable scientific
questions. Vertebrate animal experiments should have been undertaken only
after due consideration of the relevance for health, conservation, and the
advancement of scientific knowledge. (Modified from the Council for International Organizations of Medical Science (CIOMS) document: “International Guiding Principles for Biomedical Research Involving Animals1985”).
Research involving the use of vertebrate animals should have been approved
by an existing appropriate governing authority (e.g., an institutional animal
care and use committee [IACUC]) whose policies are consistent with the
Ethical Principles of the ASA or the research should have met the following
criteria:
The proper and humane treatment of vertebrate animals in research
demands that investigators:
1. Acquired, cared for, used, interacted with, observed, and disposed
of animals in compliance with all current federal, state, and local laws and
regulations, and with professional standards.
2. Are knowledgeable of applicable research methods and are experienced in the care of laboratory animals, supervised all procedures involving
animals, and assumed responsibility for the comfort, health, and humane
treatment of experimental animals under all circumstances.
3. Have insured that the current research is not repetitive of previously
published work.
4. Should have used alternatives (e.g., mathematical models, computer
simulations, etc.) when possible and reasonable.
5. Must have performed surgical procedures that were under appropriate anesthesia and followed techniques that avoided infection and minimized
pain during and after surgery.
6. Have ensured that all subordinates who use animals as a part of their
employment or education received instruction in research methods and in the
care, maintenance, and handling of the species that were used, commensurate
with the nature of their role as a member of the research team.
2343
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
7. Must have made all reasonable efforts to minimize the number of
vertebrate animals used, the discomfort, the illness, and the pain of all animal
subjects.
8. Must have made all reasonable efforts to minimize any harm to the
environment necessary for the safety and well being of animals that were
observed or may have been affective as part of a research study.
9. Must have made all reasonable efforts to have monitored and then
mitigated any possible adverse affects to animals that were observed as a
function of the experimental protocol.
10. Who have used a procedure subjecting animals to pain, stress, or
privation may have done so only when an alternative procedure was unavailable; the goal was justified by its prospective scientific, educational, or
applied value; and the protocol had been approved by an appropriate review
board.
11. Proceeded rapidly to humanely terminate an animal’s life when it
was necessary and appropriate, always minimizing pain and always in accordance with accepted procedures as determined by an appropriate review
board.
PUBLICATION and PRESENTATION ETHICS-For
publications in ASA journals and presentations at ASA
sponsored meetings
Plagiarism
Authors must not have presented portions of another’s work or data as
their own under any circumstances.
Publication Credit
Authors have taken responsibility and credit, including authorship
credit, only for work they have actually performed or to which they have
substantially contributed. Principal authorship and other publication credits
accurately reflect the relative scientific or professional contributions of the
individuals involved, regardless of their relative status. Mere possession of
an institutional position, such as a department chair, does not justify authorship credit. Minor contributions to the research or to the writing of the paper
should have been acknowledged appropriately, such as in footnotes or in an
introductory statement.
Duplicate Publication of Data
Authors did not publish, as original data, findings that have been previously published. This does not preclude the republication of data when they
are accompanied by proper acknowledgment as defined by the publication
policies of the ASA.
Reporting Research Results
If authors discover significant errors in published data, reasonable steps
must be made in as timely a manner as possible to rectify such errors. Errors
can be rectified by a correction, retraction, erratum, or other appropriate
publication means.
DISCLOSURE OF CONFLICTS OF INTEREST
If the publication or presentation of the work could directly benefit the
author(s), especially financially, then the author(s) must disclose the nature
of the conflict:
1) The complete affiliation(s) of each author and sources of funding for
the published or presented research should be clearly described in the paper
or publication abstract.
2) If the publication or presentation of the research would directly lead
to the financial gain of the authors(s), then a statement to this effect must
appear in the acknowledgment section of the paper or presentation abstract or
in a footnote of a paper.
3) If the research that is to be published or presented is in a controversial area and the publication or presentation presents only one view in
regard to the controversy, then the existence of the controversy and this view
must be provided in the acknowledgment section of the paper or presentation abstract or in a footnote of a paper. It is the responsibility of the author
to determine if the paper or presentation is in a controversial area and if the
person is expressing a singular view regarding the controversy.
168th Meeting: Acoustical Society of America
2343
Sustaining Members of the Acoustical Society of America
The Acoustical Society is grateful for the financial assistance being given by the Sustaining Members listed below and invites applications
for sustaining membership from other individuals or corporations who are interested in the welfare of the Society.
Application for membership may be made to the Executive Director of the Society and is subject to the approval of the Executive Council.
Dues of $1000.00 for small businesses (annual gross below $100 million) and $2000.00 for large businesses (annual gross above $100
million or staff of commensurate size) include a subscription to the Journal as well as a yearly membership certificate suitable for
framing. Small businesses may choose not to receive a subscription to the Journal at reduced dues of $500/year.
Additional information and application forms may be obtained from Elaine Moran, Office Manager, Acoustical Society of America,
1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Telephone: (516) 576-2360; E-mail: asa@aip.org
Acentech Incorporated
JBL Professional
www.acentech.com
Cambridge, Massachusetts
Consultants in Acoustics, Audiovisual and Vibration
www.jblpro.com
Northridge, California
Loudspeakers and Transducers of All Types
ACO Paciic Inc.
Knowles Electronics, Inc.
www.acopacific.com
Belmont, California
Measurement Microphones, the ACOustic Interface™ System
www.knowlesinc.com
Itasca, Illinois
Manufacturing Engineers: Microphones, Recording, and Special
Audio Products
Applied Physical Sciences Corp.
www.aphysci.com
Groton, Connecticut
Advanced R&D and Systems Solutions for Complex National Defense Needs

BBN Technologies
www.bbn.com
Cambridge, Massachusetts
R&D company providing custom advanced research-based solutions

Boeing Commercial Airplane Group
www.boeing.com
Seattle, Washington
Producer of Aircraft and Aerospace Products

Bose Corporation
www.bose.com
Framingham, Massachusetts
Loudspeaker Systems for Sound Reinforcement and Reproduction

D’Addario & Company, Inc.
www.daddario.com
Farmingdale, New York
D’Addario strings for musical instruments, Evans drumheads, Rico woodwind
reeds, and Planet Waves accessories

G.R.A.S. Sound & Vibration ApS
www.gras.dk
Vedbaek, Denmark
Measurement microphones, Intensity probes, Calibrators

Industrial Acoustics Company
www.industrialacoustics.com
Bronx, New York
Research, Engineering and Manufacturing–Products and Services for Noise
Control and Acoustically Conditioned Environments

InfoComm International Standards
www.infocomm.org
Fairfax, Virginia
Advancing Audiovisual Communications Globally

International Business Machines Corporation
www.ibm.com/us/
Yorktown Heights, New York
Manufacturer of Business Machines

Massa Products Corporation
www.massa.com
Hingham, Massachusetts
Design and Manufacture of Sonar and Ultrasonic Transducers;
Computer-Controlled OEM Systems

Meyer Sound Laboratories, Inc.
www.meyersound.com
Berkeley, California
Manufacture of Loudspeakers and Acoustical Test Equipment

National Council of Acoustical Consultants
www.ncac.com
Indianapolis, Indiana
An Association of Independent Firms Consulting in Acoustics

Raytheon Company, Integrated Defense Systems
www.raytheon.com
Portsmouth, Rhode Island
Sonar Systems and Oceanographic Instrumentation: R&D
in Underwater Sound Propagation and Signal Processing

Science Applications International Corporation
Acoustic and Marine Systems Operation
Arlington, Virginia
Underwater Acoustics; Signal Processing; Physical Oceanography; Hydrographic Surveys; Seismology; Undersea and Seismic Systems

Shure Incorporated
www.shure.com
Niles, Illinois
Design, development, and manufacture of cabled and wireless microphones
for broadcasting, professional recording, sound reinforcement, mobile communications, and voice input–output applications; audio circuitry equipment;
high-fidelity phonograph cartridges and styli; automatic mixing systems; and
related audio components and accessories. The firm was founded in 1925.

Sperian Hearing Protection, LLC
www.howardleight.com
San Diego, California
Howard Leight hearing protection, intelligent protection for military environments, in-ear dosimetry, real-world verification of attenuation, and education
supported by the NVLAP-accredited Howard Leight Acoustical Testing Laboratory

2344    J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014    168th Meeting: Acoustical Society of America
Thales Underwater Systems
www.tms-sonar.com
Somerset, United Kingdom
Prime contract management, customer support services, sonar design and
production, masts and communications systems design and production

Wenger Corporation
www.wengercorp.com
Owatonna, Minnesota
Design and Manufacturing of Architectural Acoustical Products including
Absorbers, Diffusers, Modular Sound Isolating Practice Rooms, Acoustical
Shells and Clouds for Music Rehearsal and Performance Spaces

3M Occupational Health & Environmental Safety Division
www.3m.com/occsafety
Minneapolis, Minnesota
Products for personal and environmental safety, featuring E·A·R and Peltor
brand hearing protection and fit testing, Quest measurement instrumentation,
audiological devices, materials for control of noise, vibration, and mechanical
energy, and the E·A·RCAL℠ laboratory for research, development, and
education, NVLAP-accredited since 1992.
Hearing conservation resource center: www.e-a-r.com/hearingconservation

Wyle Laboratories
www.wyle.com/services/arc.html
Arlington, Virginia
The Wyle Acoustics Group provides a wide range of professional services
focused on acoustics, vibration, and their allied technologies, including services to the aviation industry
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ACOUSTICAL SOCIETY OF AMERICA
APPLICATION FOR SUSTAINING MEMBERSHIP
The Bylaws provide that any person, corporation, or organization contributing annual dues as fixed by the Executive
Council shall be eligible for election to Sustaining Membership in the Society.
Dues have been fixed by the Executive Council as follows: $1000 for small businesses (annual gross below $100
million); $2000 for large businesses (annual gross above $100 million or staff of commensurate size). Dues include a one-year
subscription to The Journal of the Acoustical Society of America and programs of Meetings of the Society. Please
do not send dues with application. Small businesses may choose not to receive a subscription to the Journal at
reduced dues of $500/year. If elected, you will be billed.
Name of Company
Address
Size of Business:
[ ] Small business
[ ] Small business—No Journal
[ ] Large business
Type of Business
Please enclose a copy of your organization’s brochure.
In listing of Sustaining Members in the Journal we should like to indicate our products or services as follows:
(please do not exceed fifty characters)
Name of company representative to whom journal should be sent:
It is understood that a Sustaining Member will not use the membership for promotional purposes.
Signature of company representatives making application:
Please send completed applications to: Executive Director, Acoustical Society of America, 1305 Walt Whitman Road,
Suite 300, Melville, NY 11747-4300, (516) 576-2360
MEMBERSHIP INFORMATION AND APPLICATION INSTRUCTIONS
Applicants may apply for one of four grades of membership, depending on their qualifications: Student Member, Associate Member,
Corresponding Electronic Associate Member or full Member. To apply for Student Membership, fill out Parts I and II of the application; to
apply for Associate, Corresponding Electronic Associate, or full Membership, or to transfer to these grades, fill out Parts I and III.
BENEFITS OF MEMBERSHIP
The following benefits are provided by grade of membership (Full Member, Associate, Corresponding Electronic Associate, and Student); the grade-by-grade table in the print edition indicates, for each grade, whether a benefit is included, available online only, or not included:
JASA Online–Vol. 1 (1929) to present
JASA tables of contents e-mail alerts
JASA, printed or CD ROM
JASA Express Letters–online
Acoustics Today–the quarterly magazine
Proceedings of Meetings on Acoustics
Noise Control and Sound: Its Uses and Control–online archival magazines
Acoustics Research Letters Online (ARLO)–online archive
Programs for Meetings
Meeting Calls for Papers
Reduced Meeting Registration Fees
5 free ASA standards per year–download only
Standards Discounts
Society Membership Directory
Electronic Announcements
Physics Today
Eligibility to vote and hold office in ASA
Eligibility to be elected Fellow
Participation in ASA Committees
QUALIFICATIONS FOR EACH GRADE OF MEMBERSHIP AND ANNUAL DUES
Student: Any student interested in acoustics who is enrolled in an accredited college or university for half time or more (at least eight
semester hours). Dues: $45 per year.
Associate: Any individual interested in acoustics. Dues: $95 per year. After five years, the dues of an Associate increase to those of a full
Member.
Corresponding Electronic Associate: Any individual residing in a developing country who wishes to have access only to ASA’s online
publications, including The Journal of the Acoustical Society of America and Meeting Programs [see http://acousticalsociety.org/
membership/membership_and_benefits]. Dues: $45 per year.
Member: Any person active in acoustics, who has an academic degree in acoustics or in a closely related field or who has had the
equivalent of an academic degree in scientific or professional experience in acoustics, shall be eligible for election to Membership in the
Society. A nonmember applying for full Membership will automatically be made an interim Associate Member, and must submit $95 with
the application for the first year’s dues. Election to full Membership may require six months or more for processing; dues as a full Member
will be billed for subsequent years.
JOURNAL OPTIONS AND COSTS FOR FULL MEMBERS AND ASSOCIATE MEMBERS ONLY
• ONLINE JOURNAL. All members will receive access to The Journal of the Acoustical Society of America (JASA) at no charge in
addition to dues.
• PRINT JOURNAL. Twelve monthly issues of The Journal of the Acoustical Society of America. Cost: $35 in addition to dues.
• CD-ROM. The CD-ROM, mailed bimonthly, includes all of the material published in the Journal. Cost: $35 in
addition to dues.
• COMBINATION OF THE CD-ROM AND PRINTED JOURNAL. The CD-ROM mailed bimonthly and the printed journal mailed
monthly. Cost: $70 in addition to dues.
• EFFECTIVE DATE OF MEMBERSHIP. If your application for membership and dues payment are received by 15 September, your
membership and Journal subscription will begin during the current year, and you will receive all back issues for the year if you select the
print journal option. If your application is received after 15 September, however, your dues payment will be applied to the following year and
your Journal subscription will begin the following year.
OVERSEAS AIR DELIVERY OF JOURNALS
Members outside North, South, and Central America can choose to have print journals sent by air freight at a cost of $165 in addition to dues.
JASA on CD-ROM is sent by air mail at no charge in addition to dues.
ACOUSTICAL SOCIETY OF AMERICA
1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300, asa@aip.org
APPLICATION FOR MEMBERSHIP
Applicants may apply for one of four grades of membership, depending on their qualifications: Student Member, Associate Member,
Corresponding Electronic Associate Member or full Member. To apply for Student Membership, fill out Parts I and II of this form; to apply
for Associate, Corresponding Electronic Associate, or full Membership, or to transfer to these grades, fill out Parts I and III.
PART I. TO BE COMPLETED BY ALL APPLICANTS (Please print or type all entries)
CHECK ONE BOX IN EACH COLUMN ON THE RIGHT:
NON-MEMBER APPLYING FOR: / MEMBER REQUESTING TRANSFER TO:
[ ] STUDENT MEMBERSHIP
[ ] ASSOCIATE MEMBERSHIP
[ ] CORRESPONDING ELECTRONIC ASSOCIATE MEMBERSHIP
[ ] FULL MEMBERSHIP
Note that your choice of
journal option may increase or decrease the
amount you must remit.
SELECT JOURNAL OPTION:
Student members will automatically receive access to The Journal of the Acoustical Society of America online at no charge in addition to
dues. Remit $45. (Note: Student members may also receive the Journal on CD ROM at an additional charge of $35.)
Corresponding Electronic Associate Members will automatically receive access to The Journal of the Acoustical Society of America and
Meeting Programs online at no charge in addition to dues. Remit $45.
Applicants for Associate or full Membership must select one Journal option from those listed below. Note that your selection of journal
option determines the amount you must remit.
[ ] Online access only—$95
[ ] Online access plus print Journal—$130
[ ] Online access plus CD ROM—$130
[ ] Online access plus print Journal and CD ROM combination—$165
Applications received after 15 September: Membership and Journal subscriptions begin the following year.
OPTIONAL AIR DELIVERY: Applicants from outside North, South, and Central America may choose air freight delivery of print journals
for an additional charge of $165. If you wish to receive journals by air, remit the additional amount owed with your dues. JASA on CD-ROM
is sent by air mail at no charge in addition to dues.
MOBILE PHONE: AREA CODE/NUMBER
CHECK PREFERRED ADDRESS FOR MAIL:
HOME
ORGANIZATION
PART I CONTINUED: ACOUSTICAL AREAS OF INTEREST TO APPLICANT. Indicate your three main areas of interest below, using
1 for your main interest, 2 for your second, and 3 for your third interest. (DO NOT USE CHECK MARKS.)
ACOUSTICAL OCEANOGRAPHY M
ANIMAL BIOACOUSTICS L
ARCHITECTURAL ACOUSTICS A
BIOMEDICAL ACOUSTICS K
ENGINEERING ACOUSTICS B
MUSICAL ACOUSTICS C
NOISE & NOISE CONTROL D
PHYSICAL ACOUSTICS E
PSYCHOLOGICAL &
PHYSIOLOGICAL ACOUSTICS F
SIGNAL PROCESSING IN ACOUSTICS N
SPEECH COMMUNICATION H
STRUCTURAL ACOUSTICS
& VIBRATION G
UNDERWATER ACOUSTICS J
PART II: APPLICATION FOR STUDENT MEMBERSHIP
PART III: APPLICATION FOR ASSOCIATE MEMBERSHIP, CORRESPONDING ELECTRONIC ASSOCIATE
MEMBERSHIP OR FULL MEMBERSHIP (and interim Associate Membership)
SUMMARIZE YOUR MAJOR PROFESSIONAL EXPERIENCE on the lines below: list employers, duties and position titles, and dates,
beginning with your present position. Attach additional sheets if more space is required.
SPONSORS AND REFERENCES: An application for full Membership requires the names, addresses, and signatures of two references who
must be full Members or Fellows of the Acoustical Society. Names and signatures are NOT required for Associate Membership, Corresponding Electronic Associate Membership or Student Membership applications.
MAIL THIS COMPLETED APPLICATION, WITH APPROPRIATE PAYMENT TO: ACOUSTICAL SOCIETY OF AMERICA,
1305 WALT WHITMAN ROAD, SUITE 300, MELVILLE, NY 11747-4300.
METHOD OF PAYMENT
[ ] Check or money order enclosed for $__________ (U.S. funds/drawn on U.S. bank)
[ ] American Express   [ ] VISA   [ ] MasterCard
Account Number: ____________________
Expiration Date: Mo. ____  Yr. ____
Security Code: ____
Signature: ____________________ (Credit card orders must be signed)
Due to security risks and Payment Card Industry (PCI) data security standards, e-mail is NOT an acceptable way to transmit credit card
information. Please return this form by fax (631-923-2875) or by postal mail.
Regional Chapters and Student Chapters
Anyone interested in becoming a member of a regional chapter or in learning if a meeting of the chapter will be held while he/she is
in the local area of the chapter, either permanently or on travel, is welcome to contact the appropriate chapter representative. Contact
information is listed below for each chapter representative.
Anyone interested in organizing a regional chapter in an area not covered by any of the chapters below is invited to contact the
Cochairs of the Committee on Regional Chapters for information and assistance: Catherine Rogers, University of South Florida,
Tampa, FL, crogers@cas.usf.edu and Evelyn M. Hoglund, Ohio State University, Columbus, OH 43204, hoglund1@osu.edu
AUSTIN STUDENT CHAPTER
Benjamin C. Treweek
10000 Burnet Rd.
Austin, TX 78758
Email: btreweek@utexas.edu

BRIGHAM YOUNG UNIVERSITY
STUDENT CHAPTER
Kent L. Gee
Dept. of Physics & Astronomy
Brigham Young Univ.
N283 ESC
Provo, UT 84602
Tel: 801-422-5144
Email: kentgee@byu.edu
www.acoustics.byu.edu

CENTRAL OHIO
Angelo Campanella
Campanella Associates
3201 Ridgewood Dr.
Hilliard, OH 43026-2453
Tel: 614-876-5108
Email: a.campanella@att.net

GEORGIA INSTITUTE OF TECHNOLOGY STUDENT CHAPTER
Charlise Lemons
Georgia Institute of Technology
Atlanta, GA 30332-0405
Tel: 404-822-4181
Email: clemons@gatech.edu

GREATER BOSTON
Eric Reuter
Reuter Associates, LLC
10 Vaughan Mall, Ste. 201A
Portsmouth, NH 03801
Tel: 603-430-2081
Email: ereuter@reuterassociates.com

MID-SOUTH
Tiffany Gray
NCPA
Univ. of Mississippi
University, MS 38677
Tel: 662-915-5808
Email: midsouthASAchapter@gmail.com

UNIVERSITY OF NEBRASKA
STUDENT CHAPTER
Hyun Hong
Architectural Engineering
Univ. of Nebraska
Peter Kiewit Institute
1110 S. 67th St.
Omaha, NE 68182-0681
Tel: 402-305-7997
Email: unoasa@gmail.com
UNIVERSITY OF HARTFORD
STUDENT CHAPTER
Robert Celmer
Mechanical Engineering Dept., UT-205
Univ. of Hartford
200 Bloomfield Ave.
West Hartford, CT 06117
Tel: 860-768-4792
Email: celmer@hartford.edu
NARRAGANSETT
David A. Brown
Univ. of Massachusetts, Dartmouth
151 Maritime St.
Fall River, MA 02723
Tel: 508-910-9852
Email: dbacoustics@cox.net
CHICAGO
Lauren Ronsse
Columbia College Chicago
33 E. Congress Pkwy., Ste. 601
Chicago, IL 60605
Email: lronsse@colum.edu
UNIVERSITY OF CINCINNATI
STUDENT CHAPTER
Kyle T. Rich
Biomedical Engineering
Univ. of Cincinnati
231 Albert Sabin Way
Cincinnati, OH 45267
Email: richkt@mail.uc.edu
UNIVERSITY OF KANSAS
STUDENT CHAPTER
Robert C. Coffeen
Univ. of Kansas
School of Architecture, Design, and Planning
Marvin Hall
1465 Jayhawk Blvd.
Lawrence, KS 66045
Tel: 785-864-4376
Email: coffeen@ku.edu
LOS ANGELES
Neil A. Shaw
www.asala.org
COLUMBIA COLLEGE CHICAGO
STUDENT CHAPTER
Sandra Guzman
Dept. of Audio Arts and Acoustics
Columbia College Chicago
33 E. Congress Pkwy., Rm. 6010
Chicago, IL 60605
Email: sguzman@colum.edu

METROPOLITAN NEW YORK
Richard F. Riedel
Riedel Audio Acoustics
443 Potter Blvd.
Brightwaters, NY 11718
Tel: 631-968-2879
Email: riedelaudio@optonline.net
FLORIDA
Richard J. Morris
Communication Science and Disorders
Florida State Univ.
201 W. Bloxham
Tallahassee, FL 32306-1200
Email: richard.morris@cci.fsu.edu

MEXICO CITY
Sergio Beristain
Inst. Mexicano de Acustica AC
PO Box 12-1022
Mexico City 03001, Mexico
Tel: 52-55-682-2830
Email: sberista@hotmail.com
NORTH CAROLINA
Noral Stewart
Stewart Acoustical Consultants
7330 Chapel Hill Rd., Ste.101
Raleigh, NC
Email: noral@sacnc.com
NORTH TEXAS
Peter F. Assmann
School of Behavioral and Brain Sciences
Univ. of Texas-Dallas
Box 830688 GR 4.1
Richardson, TX 75083
Tel: 972-883-2435
Email: assmann@utdallas.edu
NORTHEASTERN UNIVERSITY
STUDENT CHAPTER
Victoria Suha
Email: suha.v@husky.neu.edu
ORANGE COUNTY
David Lubman
14301 Middletown Ln.
Westminster, CA 92683
Tel: 714-373-3050
Email: dlubman@dlacoustics.com
PENNSYLVANIA STATE
UNIVERSITY STUDENT CHAPTER
Anand Swaminathan
Pennsylvania State Univ.
201 Applied Science Bldg.
University Park, PA 16802
Tel: 848-448-5920
Email: azs563@psu.edu
www.psuasa.org
PHILADELPHIA
Kenneth W. Good, Jr.
Armstrong World Industries, Inc.
2500 Columbia Ave.
Lancaster, PA 17603
Tel: 717-396-6325
Email: kwgoodjr@armstrong.com
SAN DIEGO
Paul A. Baxley
SPAWAR Systems Center, Pacific
49575 Gate Road, Room 170
San Diego, CA 92152-6435
Tel: 619-553-5634
Email: paul.baxley@navy.mil

UPPER MIDWEST
David Braslau
David Braslau Associates, Inc.
6603 Queen Ave. South, Ste. N
Richfield, MN 55423
Tel: 612-331-4571
Email: david@braslau.com
SEATTLE STUDENT CHAPTER
Camilo Perez
Applied Physics Lab.
Univ. of Washington
1013 N.E. 40th St.
Seattle, WA 98105-6698
Email: camipiri@uw.edu

WASHINGTON, DC
Matthew V. Golden
Scantek, Inc.
6430 Dobbin Rd., Ste. C
Columbia, MD 21045
Tel: 410-290-7726
Email: m.golden@scantek.com
PURDUE UNIVERSITY
STUDENT CHAPTER
Kao Ming Li
Purdue Univ.
585 Purdue Mall
West Lafayette, IN 47907
Tel: 765-494-1099
Email: mmkmli@purdue.edu
Email: purdueASA@gmail.com
ACOUSTICAL SOCIETY OF AMERICA
BOOKS, CDS, DVD, VIDEOS ON ACOUSTICS
ACOUSTICAL DESIGN OF MUSIC EDUCATION
FACILITIES. Edward R. McCue and Richard H. Talaske,
Eds. Plans, photographs, and descriptions of 50 facilities with
explanatory text and essays on the design process. 236 pp, paper,
1990. Price: $23. Item # 0-88318-8104
ACOUSTICAL DESIGN OF THEATERS FOR DRAMA
PERFORMANCE: 1985–2010. David T. Bradley, Erica E.
Ryherd, & Michelle C. Vigeant, Eds. Descriptions, color images,
and technical and acoustical data of 130 drama theatres from
around the world, with an acoustics overview, glossary, and
essays reflecting on the theatre design process. 334 pp, hardcover
2010. Price: $45. Item # 978-0-9846084-5-4
ACOUSTICAL DESIGNING IN ARCHITECTURE. Vern O.
Knudsen and Cyril M. Harris. Comprehensive, non-mathematical
treatment of architectural acoustics; general principles of
acoustical designing. 408 pp, paper, 1980 (original published
1950). Price: $23. Item # 0-88318-267X
ACOUSTICAL MEASUREMENTS. Leo L. Beranek. Classic
text with more than half revised or rewritten. 841 pp, hardcover
1989 (original published 1948). Available on Amazon.com
ACOUSTICS. Leo L. Beranek. Source of practical acoustical
concepts and theory, with information on microphones,
loudspeakers and speaker enclosures, and room acoustics. 491
pp, hardcover 1986 (original published 1954). Available on
Amazon.com
ACOUSTICS—AN INTRODUCTION TO ITS PHYSICAL
PRINCIPLES AND APPLICATIONS. Allan D. Pierce.
Textbook introducing the physical principles and theoretical
basis of acoustics, concentrating on concepts and points of view
that have proven useful in applications such as noise control,
underwater sound, architectural acoustics, audio engineering,
nondestructive testing, remote sensing, and medical ultrasonics.
Includes problems and answers. 678 pp, hardcover 1989 (original
published 1981). Price: $33. Item # 0-88318-6128
ACOUSTICS, ELASTICITY AND THERMODYNAMICS
OF POROUS MEDIA: TWENTY-ONE PAPERS BY M. A.
BIOT. Ivan Tolstoy, Ed. Presents Biot’s theory of porous media
with applications to acoustic wave propagation, geophysics,
seismology, soil mechanics, strength of porous materials, and
viscoelasticity. 272 pp, hardcover 1991. Price: $28. Item #
1-56396-0141
ACOUSTICS OF AUDITORIUMS IN PUBLIC BUILDINGS.
Leonid I. Makrinenko; John S. Bradley, Ed. Presents developments
resulting from studies of building physics. 172 pp, hardcover
1994 (original published 1986). Price: $38. Item # 1-56396-3604
ACOUSTICS OF WORSHIP SPACES. David Lubman
and Ewart A. Wetherill, Eds. Drawings, photographs, and
accompanying data of worship houses provide information on the
acoustical design of chapels, churches, mosques, temples, and
synagogues. 91 pp, paper 1985. Price: $23. Item # 0-88318-4664
AEROACOUSTICS OF FLIGHT VEHICLES: THEORY
AND PRACTICE. Harvey H. Hubbard, Ed. Two volumes
oriented toward flight vehicles, emphasizing the underlying
concepts of noise generation, propagation, prediction, and control.
Vol. 1, 589 pp; Vol. 2, 426 pp, hardcover 1994 (original published
1991). Price per 2-vol. set: $58. Item # 1-56396-404X
ASA EDITION OF SPEECH AND HEARING IN
COMMUNICATION. Harvey Fletcher; Jont B. Allen, Ed. A
summary of Harvey Fletcher’s 33 years of acoustics work at Bell
Labs. A new introduction, index, and complete bibliography of
Fletcher’s work are important additions to this classic volume.
487 pp, hardcover 1995 (original published 1953). Price: $40.
Item # 1-56396-3930
COLLECTED PAPERS ON ACOUSTICS. Wallace Clement
Sabine. Classic work on acoustics for architects and acousticians.
304 pp, hardcover 1993 (originally published 1921). Price: $28.
Item # 0-932146-600
CONCERT HALLS AND OPERA HOUSES. Leo L. Beranek.
Over 200 photos and architectural drawings of 100 concert halls
and opera houses in 31 countries with rank-ordering of 79 halls
and houses according to acoustical quality. 653 pp. hardcover
2003. Price: $50. Item # 0-387-95524-0
CRYSTAL ACOUSTICS. M.J.P. Musgrave. For physicists
and engineers who study stress-wave propagation in anisotropic
media and crystals. 406 pp. hardcover (originally published
1970). Price: $34. Item # 0-9744067-0-8
DEAF ARCHITECTS & BLIND ACOUSTICIANS? Robert
E. Apfel. A primer for the student, the architect and the planner.
105 pp. paper 1998. Price: $22. Item #0-9663331-0-1
THE EAR AS A COMMUNICATION RECEIVER. Eberhard
Zwicker & Richard Feldtkeller. Translated by Hannes Müsch,
Søren Buus, Mary Florentine. Translation of the classic Das Ohr
Als Nachrichtenempfänger. Aimed at communication engineers
and sensory psychologists. Comprehensive coverage of the
excitation pattern model and loudness calculation schemes. 297
pp, hardcover 1999 (original published 1967). Price: $50. Item
# 1-56396-881-9
ELECTROACOUSTICS: THE ANALYSIS OF TRANSDUCTION, AND ITS HISTORICAL BACKGROUND.
Frederick V. Hunt. Analysis of the conceptual development
of electroacoustics including origins of echo ranging, the
crystal oscillator, evolution of the dynamic loudspeaker, and
electromechanical coupling, 260 pp, paper 1982 (original
published 1954). Available on Amazon.com
ELEMENTS OF ACOUSTICS. Samuel Temkin. Treatment of
acoustics as a branch of fluid mechanics. Main topics include
propagation in uniform fluids at rest, transmission and reflection
phenomena, attenuation and dispersion, and emission. 515 pp.
hardcover 2001 (original published 1981). Price: $30. Item #
1-56396-997-1
EXPERIMENTS IN HEARING. Georg von Békésy. Classic
on hearing containing vital roots of contemporary auditory
knowledge. 760 pp, paper 1989 (original published 1960). Price:
$23. Item # 0-88318-6306
FOUNDATIONS OF ACOUSTICS. Eugen Skudrzyk. An
advanced treatment of the mathematical and physical foundations
of acoustics. Topics include integral transforms and Fourier
analysis, signal processing, probability and statistics, solutions
to the wave equation, radiation and diffraction of sound. 790 pp.
hardcover 2008 (originally published 1971). Price: $60. Item #
3-211-80988-0
HALLS FOR MUSIC PERFORMANCE: TWO DECADES
OF EXPERIENCE, 1962–1982. Richard H. Talaske, Ewart A.
Wetherill, and William J. Cavanaugh, Eds. Drawings, photos,
and technical and physical data on 80 halls; examines standards
of quality and technical capabilities of performing arts facilities.
192 pp, paper 1982. Price: $23. Item # 0-88318-4125
HALLS FOR MUSIC PERFORMANCE: ANOTHER TWO
DECADES OF EXPERIENCE 1982–2002. Ian Hoffman,
Christopher Storch, and Timothy Foulkes, Eds. Drawings,
color photos, technical and physical data on 142 halls. 301 pp,
hardcover 2003. Price: $56. Item # 0-9744067-2-4
HANDBOOK OF ACOUSTICAL MEASUREMENTS
AND NOISE CONTROL, THIRD EDITION. Cyril M.
Harris. Comprehensive coverage of noise control and measuring
instruments containing over 50 chapters written by top experts
in the field. 1024 pp, hardcover 1998 (original published 1991).
Price: $56. Item # 1-56396-774
HEARING: ITS PSYCHOLOGY AND PHYSIOLOGY.
Stanley Smith Stevens & Hallowell Davis. Volume leads readers
from the fundamentals of the psycho-physiology of hearing to a
complete understanding of the anatomy and physiology of the
ear. 512 pp, paper 1983 (originally published 1938). OUT-OF-PRINT
NONLINEAR ACOUSTICS. Mark F. Hamilton and David T.
Blackstock. Research monograph and reference for scientists
and engineers, and textbook for a graduate course in nonlinear
acoustics. 15 chapters written by leading experts in the field. 455
pp, hardcover, 2008 (originally published in 1996). Price: $45.
Item # 0-97440-6759
NONLINEAR ACOUSTICS. Robert T. Beyer. A concise
overview of the depth and breadth of nonlinear acoustics with
an appendix containing references to new developments. 452 pp,
hardcover 1997 (originally published 1974). Price: $45. Item #
1-56396-724-3
NONLINEAR UNDERWATER ACOUSTICS. B. K. Novikov,
O. V. Rudenko, V. I. Timoshenko. Translated by Robert T. Beyer.
Applies the basic theory of nonlinear acoustic propagation
to directional sound sources and receivers, including design
nomographs and construction details of parametric arrays. 272
pp, paper 1987. Price: $34. Item # 0-88318-5229
OCEAN ACOUSTICS. Ivan Tolstoy and Clarence S. Clay.
Presents the theory of sound propagation in the ocean and
compares the theoretical predictions with experimental data.
Updated with reprints of papers by the authors supplementing
and clarifying the material in the original edition. 381 pp, paper
1987 (original published 1966). Available on Amazon.com
ORIGINS IN ACOUSTICS. Frederick V. Hunt. History of
acoustics from antiquity to the time of Isaac Newton. 224 pp,
hardcover 1992. Price: $19. Item # 0-300-022204
PAPERS IN SPEECH COMMUNICATION. Papers charting
four decades of progress in understanding the nature of human
speech production, and in applying this knowledge to problems of
speech processing. Contains papers from a wide range of journals
from such fields as engineering, physics, psychology, and speech
and hearing science. 1991, hardcover.
Speech Production. Raymond D. Kent, Bishnu S. Atal, Joanne
L. Miller, Eds. 880 pp. Item # 0-88318-9585
Speech Processing. Bishnu S. Atal, Raymond D. Kent, Joanne
L. Miller, Eds. 672 pp. Item # 0-88318-9607
Price: $38 ea.
PROPAGATION OF SOUND IN THE OCEAN. Contains
papers on explosive sounds in shallow water and long-range
sound transmission by J. Lamar Worzel, C. L. Pekeris, and
Maurice Ewing. Hardcover 2000 (original published 1948). Price:
$37. Item # 1-56396-9688
RESEARCH PAPERS IN VIOLIN ACOUSTICS 1975–1993.
Carleen M. Hutchins, Ed.; Virginia Benade, Assoc. Ed. Contains
120 research papers with an annotated bibliography of over 400
references. Introductory essay relates the development of the
violin to the scientific advances from the early 15th Century to
the present. Vol. 1, 656 pp; Vol. 2, 656 pp, hardcover 1996. Price:
$120 for the two-volume set. Item # 1-56396-6093
RIDING THE WAVES. Leo L. Beranek. A life in sound, science,
and industry. 312 pp, hardcover 2008. Price: $20. Item # 978-0-262-02629-1
THE SABINES AT RIVERBANK. John W. Kopec. History
of Riverbank Laboratories and the role of the Sabines (Wallace
Clement, Paul Earls, and Hale Johnson) in the science of
architectural acoustics. 210 pp. hardcover 1997. Price: $19. Item
# 0-932146-61-9
SONICS, TECHNIQUES FOR THE USE OF SOUND
AND ULTRASOUND IN ENGINEERING AND SCIENCE.
Theodor F. Hueter and Richard H. Bolt. Work encompassing the
analysis, testing, and processing of materials and products by
the use of mechanical vibratory energy. 456 pp, hardcover 2000
(original published 1954). Price: $30. Item # 1-56396-9556
SOUND IDEAS. Deborah Melone and Eric W. Wood. Early
days of Bolt Beranek and Newman Inc. to the rise of Acentech
Inc. 363 pp. hardcover 2005. Price: $25. Item # 200-692-0681
SOUND, STRUCTURES, AND THEIR INTERACTION.
Miguel C. Junger and David Feit. Theoretical acoustics, structural
vibrations, and interaction of elastic structures with an ambient
acoustic medium. 451 pp, hardcover 1993 (original published
1972). Price: $23. Item # 0-262-100347
THEATRES FOR DRAMA PERFORMANCE: RECENT
EXPERIENCE IN ACOUSTICAL DESIGN. Richard H.
Talaske and Richard E. Boner, Eds. Plans, photos, and descriptions
of theatre designs, supplemented by essays on theatre design and
an extensive bibliography. 167 pp, paper 1987. Price: $23. Item
# 0-88318-5164
THERMOACOUSTICS. Gregory W. Swift. A unifying
thermoacoustic perspective to heat engines and refrigerators.
Includes a CD ROM with animations and DELTAE and its User’s
Guide. 300 pp, paper, 2002. Price: $50. Item # 0-7354-0065-2
VIBRATION AND SOUND. Philip M. Morse. Covers the broad
spectrum of acoustics theory, including wave motion, radiation
problems, propagation of sound waves, and transient phenomena.
468 pp, hardcover 1981 (originally published 1936). Price: $28.
Item # 0-88318-2874
VIBRATION OF PLATES. Arthur W. Leissa. 353 pp, hardcover
1993 (original published 1969). Item # 1-56396-2942
VIBRATION OF SHELLS. Arthur W. Leissa. 428 pp, hardcover
1993 (original published 1973). Item # 1-56396-2934
SET ITEM # 1-56396-KIT. Monographs dedicated to the
organization and summarization of knowledge existing in the
field of continuum vibrations. Price: $28 ea.; $50 for 2-volume
set.
CDs, DVD, VIDEOS, STANDARDS
Auditory Demonstrations (CD). Teaching adjunct for lectures or courses on hearing and auditory effects. Provides signals for teaching
laboratories. Contains 39 sections demonstrating various characteristics of hearing. Includes booklet containing introductions and
narrations of each topic and bibliographies for additional information. Issued in 1989. Price: $23. Item # AD-CD-BK
Measuring Speech Production (DVD). Demonstrations for use in teaching courses on speech acoustics, physiology, and instrumentation.
Includes booklet describing the demonstrations and bibliographies for more information. Issued 1993. Price: $52. Item # MS-DVD
Scientific Papers of Lord Rayleigh (CD ROM). Over 440 papers covering topics on sounds, mathematics, general mechanics,
hydrodynamics, optics and properties of gases by Lord Rayleigh (John William Strutt), author of The Theory of Sound. Price: $40.
Item # 0-9744067-4-0
Proceedings of the Sabine Centennial Symposium (CD ROM). Held June 1994. Price: $50. Item # INCE25-CD
Fifty Years of Speech Communication (VHS). Lectures presented by distinguished researchers at the ASA/ICA meeting in June 1998
covering development of the field of Speech Communication. Lecturers: G. Fant, K.N. Stevens, J.L. Flanagan, A.M. Liberman, L.A.
Chistovich—presented by R.J. Porter, Jr., K.S. Harris, P. Ladefoged, and V. Fromkin. Issued in 2000. Price: $30. Item # VID-Halfcent
Speech Perception (VHS). Presented by Patricia K. Kuhl. Segments include: I. General introduction to speech/language processing;
Spoken language processing; II. Classic issues in speech perception; III. Phonetic perception; IV. Model of developmental speech
perception; V. Cross-modal speech perception: Links to production; VI. Biology and neuroscience connections. Issued 1997. Price:
$30. Item # SP-VID
Standards on Acoustics. Visit http://scitation.aip.org/content/asa/standards to purchase for download National (ANSI) and International
(ISO) Standards on topics ranging from measuring environmental sound to standards for calibrating microphones.
Order the following from ASA, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300; asa@aip.org; Fax: 631-923-2875.
Telephone orders not accepted. Prepayment required by check (drawn on a U.S. bank) or by VISA, MasterCard, or American Express.
Study of Speech and Hearing at Bell Telephone Laboratories (CD). Nearly 10,000 pages of internal documents from AT&T archives
including historical documents, correspondence files, and laboratory notebooks on topics from equipment requisitions to discussions of
project plans, and experimental results. Price: $20.
Collected Works of Distinguished Acousticians: Isadore Rudnick (CD + DVD). Three-disc set includes reprints of papers by
Isadore Rudnick from scientific journals, a montage of photographs with colleagues and family, and video recordings of the Memorial
Session held at the 135th meeting of the ASA. Price: $50.
Technical Memoranda issued by Acoustics Research Laboratory-Harvard University (CD). The Harvard Research Laboratory
was established in 1946 to support basic research in acoustics. Includes 61 reports issued between 1946 and 1971 on topics such as
radiation, propagation, scattering, bubbles, cavitation, and properties of solids, liquids, and gases. Price: $25.
ORDER FORM FOR ASA BOOKS, CDS, DVDS, VIDEOS
1. Payment must accompany order. Payment may be made by check or
international money order in U.S. funds drawn on U.S. bank or by VISA,
MasterCard, or American Express credit card.
2. Send orders to: Acoustical Society of America, Publications, P.O. Box 1020,
Sewickley, PA 15143-9998; Tel.: 412-741-1979; Fax: 412-741-0609.
3. All orders must include shipping costs (see below).
4. A 10% discount applies on orders of 5 or more copies of the same title only.
5. Returns are not accepted.
Item # | Quantity | Title | Price | Total
Subtotal
Shipping costs for all orders are based on weight and distance.
For quote visit http://www.abdi-ecommerce10.com/asa,
email: asapubs@abdintl.com, or call 412-741-1979
10% discount on orders of 5 or more of the same title
Total
Name _______________________________________________________________________________________________________
Address _____________________________________________________________________________________________________
_____________________________________________________________________________________________________________
City ________________________ State ______________________ ZIP/Postal _________________ Country ____________________
Tel.: ________________________ Fax: ________________________ Email: ____________________________________________
Method of Payment
[ ] Check or money order enclosed for $__________ (U.S. funds drawn on a U.S. bank, made payable to the Acoustical Society of America)
[ ] VISA
[ ] MasterCard
[ ] American Express
Cardholder's signature _________________________________________________________________
(Credit card orders must be signed)
Card # ____________________________________________________________ Expires Mo. __________________ Yr._______________
THANK YOU FOR YOUR ORDER!
Due to security risks and Payment Card Industry (PCI) data security standards e-mail is NOT an acceptable way to transmit credit card
information. Please use our secure web page to process your credit card payment (http://www.abdi-ecommerce10.com/asa) or securely fax
this form to 516-576-2377.
The Scientific Papers of Lord Rayleigh are now available on CD ROM from the Acoustical Society
of America. The CD contains over 440 papers covering topics on sound, mathematics, general
mechanics, hydrodynamics, optics, and properties of gases. Files are in PDF format and readable
with Adobe Acrobat® Reader.
Lord Rayleigh was indisputably the single most significant contributor to the world's literature in
acoustics. In addition to his epochal two-volume treatise, The Theory of Sound, he wrote some 440
articles on acoustics and related subjects during the fifty years of his distinguished research career. He
is generally regarded as one of the best and clearest writers of scientific articles of his generation, and
his papers continue to be read and extensively cited by modern researchers in acoustics.
ISBN 0-9744067-4-0
Price: $40 ASA members; $70 nonmembers
AUTHOR INDEX
to papers presented at
168th Meeting: Acoustical Society of America
Abadi, Shima H.–2092
Abawi, Ahmad T.–2086
Abbasi, Mustafa Z.–2166, 2219
Abdelaziz, Mohammed–2243
Abel, Markus–2163
Abel, Markus W.–2163
Abell, Alexandra–2127
Abercrombie, Clemeth–2090
Abkowitz, Paul M.–2268
Abraham, Douglas A.–2225
Abuhabshah, Rami–2125
Acquaviva, Andrew A.–2252
Adelman-Larsen, Niels W.–2116
Adibi, Yasaman–2280
Agarwal, Amal–2084
Agnew, Zarinah–2243
Aguirre, Sergio L.–2097, 2282
Ahmad, Syed A.–2125
Ahn, SangKeun–2209
Ahnert, Wolfgang–2089
Aho, Katherine–2256
Ahroon, William A.–2165
Ahuja, K.–2169
Ainslie, Michael A.–2217, 2247,
2297, Cochair Session 3aUW
(2216)
Akamatsu, Tomonari–2152, 2155
Akrofi, Kwaku–2309
Albert, Donald G.–2139
Alberts, W. C. Kirkpatrick–2139,
2169
Albin, Aaron L.–2082
Alexander, Jennifer–2106
Alexander, Joshua–2311
Alexander, Joshua M.–2310
Ali, Hussnain–2083
Alizad, Azra–2159
Alkayed, Nabil J.–2280
Allen, Jont B.–2251
Allgood, Daniel C.–2136
Almekkawy, Mohamed Khaled–2280
Alù, Andrea–2099, 2281
Alvarez, Alberto–2155
Alvord, David–2169
Alwan, Abeer–2259, 2295, Cochair
Session 4aSCa (2259)
Alzqhoul, Esam A.–2083
Amador, Carolina–2124
Amano, Shigeaki–2175
Ammi, Azzdine Y.–2280
Amon, Dan–2112
Amundin, Mats–2248
Anderson, Brian E.–2252, 2265,
Cochair Session 4aSPb (2265)
Anderson, Paul–2308, 2310
Anderson, Paul W.–2242
Andrews, Mark–2226
Andrews, Russel D.–2091
Andriolo, Artur–2073, 2277
Anikin, Igor I.–2318
Antoni, Jérôme–2171
Antoniak, Maria–2175
Archangeli, Diana–2082, 2104
Archangeli, Diana B.–2105
Arena, David A.–2219
Argo, Theodore F.–2165
Aristizabal, Sara–2124
Arnhold, Anja–2173
Aronov, Boris–2131
Arora, Manish–2256
Arrieta, Rodolf–2268
Ashida, Hiroki–2168
Assous, Said–2255, Cochair Session
4aPAa (2254)
Astolfi, Arianna–2294
Atagi, Eriko–2109
Athanasopoulou, Angeliki–2176
Attenborough, Keith–2078, Cochair
Session 1aNS (2076), Cochair
Session 1pNS (2098)
Au, Jenny–2256
Au, Whitlow–2075
Au, Whitlow W.–2246
Au, Whitlow W. L.–2154
Aubert, Allan–2079
Auchere, Jean-Christophe–2253
August, Tanya–2212
Aumann, Aric R.–2286
Aunsri, Nattapol–2085
Avendano, Alex–2279
Awuor, Ivy–2302
Azad, Hassan–2218
Azbaid El Ouahabi, Abdelhalim–
2076
Azusawa, Aki–2168
Barbieri, Renato–2282, 2305
Barbone, Paul E.–2141, 2159, Chair
Session 3pID (2222)
Barbosa, Adriano–2310
Barbosa, Adriano V.–2105
Barcenas, Teresa–2212
Barclay, David–2317
Barkley, Yvonne M.–2154
Barlow, Jay–2117, 2245
Barthe, Peter G.–2125
Bartram, Nina–2261
Bash, Rachel E.–2307
Basile, David–2192
Bassuet, Alban–2218
Batchelor, Heidi A.–2277
Battaglia, Paul–2218
Baumann-Pickering, Simone–2073,
Cochair Session 3aAB (2184)
Baumgartner, Mark F.–2093, 2116
Baxter, Christopher D. P.–2156
Beauchamp, James–2202
Beauchamp, James W.–2150
Becker, Kara–2295
Beckman, Mary E.–2174
Belding, Heather–2291
Bell, Joel–2246
Belmonte, Andrew–2207
Benech, Nicolas–2196
Benke, Harald–2091, 2248
Benoit-Bird, Kelly J.–2186
Bent, Tessa–2109, 2199, 2212, 2273,
Chair Session 1pSCb (2106)
Beranek, Leo L.–2130, 2162
Berg, Katelyn–2311
Baars, Woutijn J.–2101
Babaniyi, Olalekan A.–2159
Bader, Kenneth B.–2095, 2199
Bader, Rolf–2132, 2163, Chair Session 2pMU (2163)
Badiey, Mohsen–2119, 2148, 2317
Baelde, Maxime–2284
Baese-Berk, Melissa M.–2146
Baggeroer, Arthur–2148
Baggeroer, Arthur B.–2187, Cochair Session 3aAO (2187)
Bai, Mingsian R.–2084
Bailakanavar, Mahesh–2195
Bailey, Michael–2192, 2193, 2278, 2301
Bailey, Michael R.–2191, 2193, 2249, 2250, 2251
Balestriero, Randall–2217
Ballard, Megan S.–2120, 2178, 2252, 2317, Chair Session 2pUW (2178), Cochair Session 2aAO (2119)
Balletto, Emilio–2184
Bang, Hye-Young–2262
Banks, Russell–2294
Barbar, Steve–2115, 2151
Barbero, Francesca–2074, 2184
Barbieri, Nilson–2282, 2305
Berger, Elliott H.–2134, 2135, 2165, Cochair Session 2aNSa (2133), Cochair Session 2pNSa (2165)
Bergeson-Dana, Tonya R.–2262
Bergler, Kevin–2279
Beristain, Sergio–2118, 2182
Bernadin, Shonda–2293
Bernal, Ximena–2184
Berry, David–2259
Berry, Matthew G.–2100
Bharadwaj, Hari–2258
Bhatta, Ambika–2140
Bhojani, Naeem–2191
Bigelow, Timothy–2096, 2157, 2279
Bigelow, Timothy A.–2279, 2280
Binder, Alexander–2129
Binder, Carolyn–2074
Birkett, Stephen–2132
Blaeser, Susan B.–Cochair Session 3aUW (2216)
Blanc-Benon, Philippe–2289
Blanchard, Nathan–2215
Blanco, Cynthia P.–2109
Blasingame, Michael–2263
Bleifnick, Jay–2128
Blevins, Matthew G.–2126, 2200
Blomgren, Philip M.–2191
Blotter, Jonathan D.–2199
Blumsack, Judith–2307
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Bocko, Mark–2202
Boebinger, Dana–2243
Bohn, Alexander C.–2093
Boisvert, Jeffrey–2195
Bolshakova, Ekaterina S.–2290
Bolton, J. S.–2141, 2183
Bolton, J. Stuart–2197
Bomberger Brown, Mary–2073
Bonadies, Marguerite–2144
Bonelli, Simona–2074, 2184
Boning, Willem–2218
Bonnel, Julien–2119
Bonomo, Anthony L.–2179
Borchers, David–2245
Bottalico, Pasquale–2294
Boubriak, Olga–2281
Bouchard, Kristofer E.–2104
Bouchoux, Guillaume–2095
Boutin, Claude–2077
Boutoussov, Dmitri–2279
Boyce, Suzanne–2261
Boyce, Suzanne E.–2082, 2105
Boyd, Brenna N.–2126, 2274
Boyle, John K.–2112
Braasch, Jonas–2150, 2198
Bradley, David–2188
Bradley, David T.–2243, Chair
Session 4aAAb (2243)
Bradlow, Ann–2201, 2241
Bradlow, Ann R.–2263
Brady, Michael C.–2215
Brady, Steven–2277
Brand, Thomas–2273
Brandão, Eric–2141
Brandao, Alexandre–2282
Brandewie, Eugene–2242
Bridal, Lori–2123
Bridgewater, Ben–2089, 2090
Brigham, John C.–2124
Brill, Laura C.–2126
Britton, Deb–2115
Broda, Andrew L.–2128
Brooks, Todd–2198
Brouard, Bruno–2077
Brown, David A.–2131, 2189
Brown, Michael–2156
Brown, Michael G.–2156
Brown, Stephen–2254
Brule, Stephane–2077
Brum, Ricardo–2097, 2282, 2305
Brundiers, Katharina–2248
Brungart, Timothy A.–2208
B T Nair, Balamurali–2083
Bucaro, Joseph–2086, 2112
Bucaro, Joseph A.–2111, 2112, 2194
Buck, John–2189
Buck, John R.–2093, 2147, 2154
Buckingham, Michael J.–2276
Bueno, Odair C.–2074
Bui, Thanh Minh–2123
Bunting, Gregory–2141
Burdin, Rachel S.–2172
Burgess, Alison–2300
Burnett, David–2086, 2140
Burns, Dan–2254
Burov, Valentin–2220
Bush, Dane R.–2214
Buss, Emily–2242
Bustamante, Omar A.–2118
Butko, Daniel–2127, 2151
Butler, Kevin–Cochair Session
1pAA (2088)
Byrne, David C.–2134
Byun, Gi Hoon–2148
Cacace, Anthony T.–2258
Cade, David–2186
Cai, Tingli–2309
Cain, Charles–2250
Cain, Charles A.–2193, 2248, 2250,
2251, 2280
Cain, Jericho E.–2139
Calandruccio, Lauren–2242
Çalışkan, Mehmet–2219
Calvo, David C.–2252
Campanella, Angelo J.–2131, 2207
Campbell, Richard L.–2120
Canchero, Andres–2167
Canney, Michael–2220, 2301
Cao, Rui–2141
Capone, Dean E.–2208
Carbotte, Suzanne M.–2092
Cardinale, Matthew R.–2306
Cariani, Peter–2164
Carignan, Christopher–2104
Carlén, Ida–2248
Carlisle, Robert–2300, 2302
Carlos, Amanda A.–2074
Carlson, Lindsey C.–2124
Carlström, Julia–2248
Carpenter-Thompson, Jake–2309
Carter, J. Parkman–2090
Carthel, Craig–2075
Casacci, Luca P.–2184
Casali, John–2166
Case, Alexander U.–2130, 2151,
2271, Cochair Session 2aAA
(2114), Cochair Session 2pAA
(2150), Cochair Session 4pAAa
(2270)
Cash, Brandon J.–2307
Cassaci, Luca P.–2074
Casserly, Elizabeth D.–2313
Cataldo, Edson–2282
Catheline, Stefan–2196, 2279
Cavanaugh, William J.–2162,
Cochair Session 2pID (2161)
Cechetto, Clement–2185
Celano, Joseph W.–2204
Celis Murillo, Antonio–2276
Celmer, Robert–2219
Cesar, Lima–2243
Chéenne, Dominique J.–2128
Cha, Yongwon–2244
Chabassier, Juliette–2133
Chan, Julian–2175
Chan, Weiwei–2256
Chandra, Kavitha–2140, 2256, 2289
Chandrasekaran, Bharath–2263,
2264, 2314
Chandrika, Unnikrishnan K.–2268
Chang, Andrea Y.–2178, 2179
Chang, Edward F.–2104
Chang, Yueh-chin–2145, 2173
Chang, Yung-hsiang Shawn–2175
Chapelon, Jean-Yves–2220, 2279
Chapin, William L.–2286
Chapman, Ross–2188, 2316
Chavali, Vaibhav–2147
Che, Xiaohua–2254
Cheinet, Sylvain–2138
Chelliah, Kanthasamy–2172
Chen, Chi-Fang–2074, 2178
Chen, Chifang–2316
Chen, Ching-Cheng–2084
Chen, Gang–2295
Chen, Hsin-Hung–2179
Chen, Jessica–2154
Chen, Jun–2205
Chen, Li-mei–2312
Chen, Shigao–2159
Chen, Sinead H.–2243
Chen, Tianrun–2317
Chen, Weirong–2145
Chen, Wei-rong–2145
Chen, Yi-Tong–2266
Chen, Yongjue–2215
Chesnais, Céline–2077
Chevillet, John R.–2278
Cheyne, Harold A.–2117
Chhetri, Dinesh–2294
Chiaramello, Emma–2313
Chien, Yu-Fu–2177
Chirala, Mohan–2280
Chiu, Chen–2185
Chiu, Ching-Sang–2178, 2316
Chiu, Linus–2179, 2316
Chiu, Linus Y.–2178
Cho, Sungho–2298
Cho, Sunghye–2108
Cho, Tongjun–2209
Choi, Inyong–2258
Choi, James–2300
Choi, Jee W.–2149
Choi, Jee Woong–2298
Choi, Jeung-Yoon–2174
Choi, Wongyu–2281
Cholewiak, Danielle–2277
Choo, Andre–2289
Choo, Youngmin–2180
Chotiros, Nicholas–2268
Chotiros, Nicholas P.–2179, 2268,
2269
Chou, Lien-Siang–2074, 2186
Christensen, Benjamin Y.–2081
Christian, Andrew–2285, 2287
Chu, Chung-Ray–2179
Chuen, Lorraine–2307
Church, Charles C.–2249
Cipolla, Jeffrey–2195
Civale, John–2301
Clark, Brad–2129
Clark, Cathy Ann–2178
Clark, Grace A.–2084, Chair Session
4aSPa (2264)
Clayards, Meghan–2262
Clement, Gregory T.–2159, 2160
Cleveland, Robin–2281
Clopper, Cynthia G.–2172
Coburn, Michael–2192
Coffeen, Robert C.–2090, Cochair
Session 1pAA (2088)
Coiado, Olivia C.–2096
Colbert, Sadie B.–2096
Colin, Mathieu E.–2297
Collier, Sandra–2138
Collier, Sandra L.–2139
Collin, Jamie–2300
Collin, Samantha–2132
Colonius, Tim–2080, 2081, 2192,
Cochair Session 3aBA (2191)
Colosi, John A.–Cochair Session
5aUW (2315)
Colosi, John A.–2149, 2155, 2316,
Chair Session 2pAO (2155)
Colson, Brendan–2261
Conant, David F.–2104
Connick, Robert–2089
Connors, Bret–2192
Connors, Bret A.–2191
Cook, Sara–2312
Coralic, Vedran–2192
Coraluppi, Stefano–2075
Corke, Thomas C.–2200
Corkeron, Peter–2277
Coron, Alain–2123
Corrêa, Fernando–2282
Costa, Marcia–2301
Costley, R. Daniel–2252
Costley, Richard D.–2178, Chair
Session 4aEA (2251)
Cottingham, James P.–2201, 2202,
2283
Coulouvrat, François–2279
Coussios, Constantin–2281, 2302
Coussios, Constantin C.–2300
Coviello, Christian–2300, 2302
Coyle, Whitney L.–2283, Cochair
Session 3aID (2197)
Craig, Adam–2305
Cray, Benjamin A.–2196
Cremaldi, Lucien–2252
Cremer, Marta J.–2277
Crone, Timothy J.–2092
Crowley, Alex–2308
Crum, Lawrence–2301
Crum, Lawrence A.–2249, Cochair
Session 3pBA (2219)
Csapó, Tamás G.–2128
Culver, R. L.–2213
Culver, R. Lee–2222, Chair Session
3aSPa (2213), Chair Session
3aSPb (2214), Cochair Session
4aSPb (2265)
Cummins, Phil R.–2085
Cunitz, Bryan–2193
Cunitz, Bryan W.–2192, 2193
Cuppoletti, Dan–2101
Curley, Devyn P.–2285
Czarnota, Gregory–2123
Czech, Joseph J.–2079
Davis, Catherine M.–2280
Davis, Gabriel–2279
Davis, Genevieve–2277
Dayeh, Maher A.–2223
de Graaff, Boris–2097
De Jesus Diaz, Luis–2178
de Jong, Kenneth–2106
Dele-Oni, Purity–2256
de Moustier, Christian–2267
Denis, Max–2159, 2256, 2289
Deppe, Jill–2276
DeRuiter, Stacy L.–2247
Desa, Keith–2285
De Saedeleer, Jessica–2284
Deshpande, Shruti B.–2306
de Souza, Olmiro C.–2304
Dettmer, Jan–2085, 2268, 2269,
2298
Dey, Saikat–2086, 2194
D’Hondt, Steven–2156
Diaz-Alvarez, Henry–2266
Dichter, Ben–2104
Diedesch, Anna C.–Chair Session
5aPPb (2308)
Diedesch, Anna C.–2198, 2308
Dighe, Manjiri–2193
Dilley, Laura–2176, 2312
Dimitrijevic, Andrew–2306
D’Mello, Sydney–2215
Dmitrieva, Olga–2174, Chair
Session 2pSC (2172)
Doc, Jean-Baptiste–2283
Dodsworth, Robin–2104
Doedens, Ric–2183
Doerschuk, Peter–2144
Dong, David W.–2181
Dong, Qi–2106
Dong, Weijia–2148
Dooley, Wesley L.–2130
Dosso, Stan–2268, 2269
Dosso, Stan E.–2085, 2298
Dostal, Jack–2284, Chair Session
3aMU (2201)
Dou, Chunyan–2301
Dowling, David R.–2148, 2158,
2188
Downing, Micah–2079
Doyley, Marvin M.–2302
D’Spain, Gerald–2092
D’Spain, Gerald L.–2118, 2277
Dubno, Judy R., President–Chair Session (2228)
Duda, Timothy–2315, 2316
Duda, Timothy F.–2316, 2317,
Cochair Session 2aAO (2119)
Dudley, Christopher–2088
Dudley Ward, Nicholas F.–2289
Dumont, Alain–2255
Dunmire, Barbrina–2192, 2193
Dunn, Floyd–2219
Duryea, Alex–2301
Duryea, Alexander–2302
Duryea, Alexander P.–2193, 2280
Duvanenko, Natalie E.–2260
Dziak, Robert P.–2154
Dzieciuch, Matthew A.–2149
Dahl, Peter H.–2187, 2206, 2216,
2226, 2227, 2297
Dalby, Jonathan–2212
Dall’Osto, David R.–2226, 2227,
2297
Danielson, D. Kyle–2263
Danilewicz, Daniel–2277
Darcy, Isabelle–2109
da Silva, Andrey R.–2141, 2305
David, Bonnett E.–2217
Davidson, Lisa–2103
Davies, Patricia–2197, 2287, Cochair Session 4pNS (2285)
Davis, Andrea K.–2261
Eastland, Grant C.–2088
Ebbini, Emad S.–2124, 2280
Eccles, David–2255, Cochair Session 4aPAa (2254)
Eddins, Ann C.–2291
Eddins, David A.–2291, 2293,
2295
Edelmann, Geoffrey F.–2214
Elam, W. T.–2297
Elbes, Delphine–2281
Eligator, Ronald–2088
Elkington, Peter–2255
Elko, Gary W.–2130
Eller, Anthony I.–2296
Ellis, Dale D.–2297
Ellis, Donna A.–2182
Enoch, Stefan–2077
Ensberg, David–2225
Erdol, Nurgun–2073
Esfahanian, Mahdi–2073
Espana, Aubrey–2087, 2110
Espana, Aubrey L.–2087, 2111,
Chair Session 1pUW (2110)
Espy-Wilson, Carol–2082, 2312
Etchenique, Nikki–2132
Evan, Andrew–2192
Evan, Andrew P.–2191
Evans, Neal–2223
Evans, Samuel–2243
Ezekoye, Ofodike A.–2166, 2219
Fackler, Cameron J.–2084, 2162,
Cochair Session 1aSP (2084)
Falvey, Dan–2185
Fan, Lina–2148
Farahani, Mehrdad H.–2224
Farmer, Casey–2219
Farmer, Casey M.–2166
Farr, Navid–2249, 2278
Farrell, Daniel–2160
Farrell, Dara M.–2206
Fatemi, Mostafa–2124, 2159
Faulkner, Kathleen F.–2314
Fazzio, Robert–2159
Fehler, Michael–2254
Feistel, Stefan–2089
Feleppa, Ernest J.–2123, 2157
Feltovich, Helen–2124
Ferguson, Elizabeth–2246
Ferguson, Sarah H.–2210
Ferracane, Elisa–2109
Ferrier, John–2127
Fink, Mathias–2282
Fischell, Erin M.–2110
Fischer, Jost–2163
Fischer, Jost L.–2163
Fisher, Daniel–2129
Fleury, Romain–2099, 2281
Florêncio, Dinei A.–2265
Fogerty, Daniel–2211
Folmer, Robert–2291
Foote, Kenneth G.–2217
Forssén, Jens–2286
Forsythe, Hannah–2176
Fosnight, Tyler R.–2096, 2125
Fournet, Michelle–2153, Cochair
Session 1pAB (2091)
Fowlkes, J. B.–2251
Fowlkes, Jeffrey B.–Cochair Session
4aBA (2248), Cochair Session
4pBA (2278)
Fox, Robert A.–2312, 2313
Foye, Michelle–2143
Francis, Alexander L.–Chair Session
5aSC (2310)
Francis, Alexander L.–2107, 2145
Frankford, Saul–2176
Franklin, Thomas D.–2220
Frazer, Brittany–2296
Frederickson, Carl–2126, 2127
Frederickson, Nicholas L.–2126
Freeman, Lauren A.–2276
Freeman, Robin–2276
Freeman, Simon E.–2276
Freeman, Valerie–2175
Fregosi, Selene–2119
Freiheit, Ronald–2115
Frisk, George V.–Cochair Session
3aUW (2216)
Frush Holt, Rachael–2263
Fu, Pei-Chuan–2256
Fu, Yanqing–2075
Fuhrman, Robert A.–2310
Fujita, Kiyotaka–2168
Fukushima, Takeshi–2255
Fullan, Ryan–2129
Gaffney, Rebecca G.–2306
Galatius, Anders–2248
Gallagher, Hilary–2133
Gallagher, Hilary L.–2079, 2134
Gallot, Thomas–2254
Gallun, Frederick–2291
Gallun, Frederick J.–2242, 2311,
Cochair Session 4aPP (2257),
Cochair Session 4pPP (2291)
Gao, Shunji–2300
Gao, Ximing–2136
Garcı́a-Chocano, Victor M.–2076
Garcia, Paula B.–2211
Garcia, Tiffany S.–2074
Gardner, Michael–2129
Garellek, Marc–2295
Garello, René–2266
Garrett, Steven L.–Chair Session
2aID (2129)
Gassmann, Martin–2092
Gaudette, Jason E.–2093
Gauthier, Marianne–2095
Gavrilov, Leonid–2220
Gawarkiewicz, Glen–2315, 2316
Gee, Kent–2079
Gee, Kent L.–2079, 2081, 2100,
2101, 2102, 2128, 2135, 2167,
2169, 2171, 2199, Cochair
Session 1aPA (2079), Cochair
Session 1pPA (2100), Cochair
Session 2aNSb (2135)
Gendron, Paul–2189
Gerard, Odile–2075
Gerges-Naisef, Haidy–2122
Gerken, LouAnn–2261
Gerratt, Bruce–2295
Gerratt, Bruce R.–2295
Gerstoft, Peter–2304
Ghassemi, Marzyeh–2260
Ghoshal, Goutam–2125
Giacomoni, Clothilde–2136
Giammarinaro, Bruno–2279
Giard, Jennifer–2197
Giard, Jennifer L.–2156
Giegold, Carl–2114, 2244, 2274
Giguere, Christian–2165
Gilbert, Keith–2265
Gillani, Uzair–2075
Gillespie, Doug–2093
Gillespie, Douglas–2277
Gillespie, Douglas M.–2117
Giordano, Nicholas–2284, Chair
Session 2aMU (2132)
Giorli, Giacomo–2246
Gipson, Karen–2202
Giraldez, Maria D.–2278
Gjebic, Julia–2202
Gkikopoulou, Kalliopi–2117
Gladden, Joseph R.–2290
Gladden, Josh R.–2207, Cochair
Session 4aPAb (2256), Cochair
Session 4pPA (2288)
Glauser, Mark N.–2100
Glean, Aldo A.–2194
Glosemeyer Petrone, Robin S.–2089
Glotin, Hervé–2217
Goad, Heather–2262
Godin, Oleg–2156
Godin, Oleg A.–2156
Goerlitz, Holger R.–2185
Gogineni, Sivaram–2100
Goldberg, Hannah–2258
Goldhor, Richard–2265
Goldman, Geoffrey H.–2213, 2266
Goldsberry, Benjamin M.–2156
Goldstein, Julius L.–2309
Goldstein, Louis–2143
Golubev, V.–2168
Gomez, Antonio–2281
Gong, Zheng–2093, 2147, 2226,
2317
Gopala, Anumanchipalli K.–2104
Gordon, Jonathan–2093
Gordon, Samuel–2242
Götze, Simone–2091
Graetzer, Simone–2294
Graham, Susan–2302
Granlund, Sonia–2262, 2313
Grass, Kotoko N.–2177
Gray, Michael D.–2159
Greenleaf, James F.–2124
Greenwood, L. Ashleigh–2306
Greuel, Alison J.–2263
Griesinger, David H.–2242, Cochair
Session 4aAAa (2241), Cochair
Session 4pAAb (2273)
Griffiths, Emily–2117
Grigorieva, Natalie S.–2180
Groby, Jean-Philippe–2077
Grogirev, Valery–2155
Guan, Shane–2186
Guarino, Joe–2209
Guazzo, Regina A.–2153
Guenneau, Sebastien R.–2077
Guerrero, Quinton–2124
Guild, Matthew D.–2076, 2099
Guillemain, Philippe–2283
Guillemin, Bernard J.–2083
Guilloteau, Alexis–2283
Guiu, Pierre–2135
Gunderson, Aaron M.–2087, 2088
Guo, Mingfei–2113
Guo, Yuanming–2215
Gupta, Anupam K.–2075
Guri, Dominic–2127
Gutiérrez-Jagüey, Joaquı́n–2118
Gutmark, Ephraim–2101, 2126,
2144
Guttag, John V.–2260
Gyongy, Miklos–2300
Haberman, Michael R.–2098, 2099,
2200
Hackert, Chris–2223
Hahn-Powell, Gustave V.–2082,
2104
Haley, Patrick–2316
Hall, Hubert S.–2209
Hall, Neal A.–2200
Hall, Timothy–2122
Hall, Timothy J.–2124, 2159
Hall, Timothy L.–2193, 2250, 2251,
2280, 2301, 2302
Halvorsen, Michele B.–2205, 2217
Hambric, Stephen A.–2142
Hamilton, Mark F.–2099, 2158,
2188, 2200
Hamilton, Robert–2209
Hamilton, Sarah–2261
Hamilton, Sarah M.–2261
Han, Aiguo–2158
Han, Jeong-Im–2106
Han, Sungwoo–2145
Hanan, Zachary A.–2285
Handa, Rajash–2192
Handa, Rajash K.–2191
Handzy, Nestor–2207
Hanna, Kristin E.–2274
Hans, Stéphane–2077
Hansen, Uwe J.–Cochair Session
5aED (2303)
Hansen, Colin–2221
Hansen, Colin H.–2136
Hansen, Jonh H.–2083
Hansen, Uwe J.–2113, Chair Session
1eID (2113), Chair Session
2aED (2126), Chair Session
2pEDa (2160), Chair Session
2pEDb (2161), Chair Session
3pED (2221), Cochair Session
2pPA (2170)
Hanson, Helen–2260
Hao, Yen-Chen–2106
Harada, Tetsuo–2108
Hardage, Haven–2127
Hardwick, Jonathan R.–2285, 2287
Hariram, Varsha–2311
Harker, Blaine–2101
Harker, Blaine M.–2100, 2102
Harms, Andrew–2214
Harne, Ryan L.–2196
Harper, Jonathan–2193
Harper, Jonathan D.–2192, 2193
Harris, Catriona M.–2247
Harris, Danielle–2117, 2245, 2275,
2277, Cochair Session 4aAB
(2245), Cochair Session 4pAB
(2275)
Hartmann, Lenz–2164
Hartmann, William–2309
Hashemi, Hedieh–2312
Hasjim, Bima–2302
Haslam, Mara–2108
Hastings, Mardi C.–2206
Hathaway, Kent K.–2178
Hawkins, Anthony D.–2205
Haworth, Kevin J.–Cochair Session
5aBA (2300)
Haworth, Kevin J.–2199, 2303
Hazan, Valerie–2262, 2313
He, Ruoying–2117
He, Xiao–2254
Headrick, Robert H.–2188
Heald, Shannon L.–2202, 2261
Heaney, Kevin D.–2120, 2296
Heeb, Nicholas–2101
Hefner, Brian T.–2225, 2267, Chair
Session 4aUW (2267)
Hegland, Erica L.–2306
Heitmann, Kristof–2206
Helble, Tyler A.–2092, 2277
Hellweg, Robert D.–Cochair Session
3aNS (2203)
Henderson, Brenda S.–2080
Henessee, Spencer–2201
Henke, Christian–2288
Henyey, Frank S.–2297
Herbert, Sean T.–2153
Hermand, Jean-Pierre–2284
Hessler, George–2204
Heutschi, Kurt–2286
H Farahani, Mehrdad–2294
Hickey, Craig J.–2139
Hicks, Ashley J.–2098, 2099
Hicks, Keaton T.–2226
Hildebrand, John–2092, 2118
Hildebrand, John A.–2148, 2153
Hildebrand, Matthew S.–2115
Hill, James–2129
Hillman, Robert E.–2260
Hines, Paul–2268, 2269
Hines, Paul C.–2074, 2226
Hirayama, Makoto J.–2143
Hitchcock, Elaine R.–2262
Hobbs, Christopher M.–2079
Hoch, Matthew–2307
Hochmuth, Sabine–2273
Hodgkiss, William–2091
Hodgkiss, William S.–2225
Hodgson, Murray–2151
Holden, Andrew–2217
Holderied, Marc W.–2185
Holland, Charles W.–2085, 2121,
2268, 2269, 2296, 2298
Holland, Christy K.–2095, 2199,
2303
Holland, Mark R.–2122
Holliday, Jeffrey J.–2108
Holliday, Nicole–2173
Holt, R. Glynn–2256
Holthoff, Ellen L.–2256
Holz, Annelise C.–2277
Hong, Hyun–2200, 2274
Hong, Suk-Yoon–2141
Hooi, Fong Ming–2096, 2125
Hoover, K. Anthony–2114, Cochair
Session 2aAA (2114), Cochair
Session 2pAA (2150)
Hord, Samuel–2128
Hori, Hiroshi–2213
Horie, Seichi–2167
Horn, Andrew G.–2184
Horner, Terry G.–2314
Hossen, Jakir–2085
Høst-Madsen, Anders–2094
Houpt, Joseph W.–2307
Houston, Brian–2086, 2112
Houston, Brian H.–2111, 2112, 2194
Houston, Janice–2136
Howarth, Thomas R.–2131
Howe, Bruce–2118
Howe, Thomas–2110
Howell, Mark–2159
Howson, Phil–2144, 2145
Hsi, Ryan–2192
Hsieh, Feng-fan–2173
Hsu, Timothy Y.–2133
Hsu, Wei Chen–2312
Huang, Bin–2124
Huang, Ming-Jer–2247
Huang, Ting–2173
Huang, Wei–2093, 2246, 2298
Hughes, Michael–2095, 2264, 2282
Hu, Huijing–2294
Hull, Andrew J.–2196, Cochair
Session 3aEA (2194)
Hulva, Andrew M.–2128
Humes, Larry E.–2213, 2257, 2311,
2314
Hunter, Eric J.–2294, Cochair
Session 4pAAa (2270)
Huntzicker, Steven–2302
Husain, Fatima T.–2309
Hutter, Michele–2291
Huttunen, Tomi–2289
Hwang, Joo Ha–2249, 2278
Hwang, Joo-Ha–2301
Hynynen, Kullervo–2300
Hu, Zhong–2279, 2280
Ierley, Glenn–2092
Ignisca, Anamaria–2226
Ilinskii, Yurii A.–2158
Imaizumi, Tomohito–2152, 2155
Imran, Muhammad–2151, 2244
Ing, Ros Kiri–2282
Ingersoll, Brooke–2312
Inoue, Jinro–2167
Isakson, Marcia–2268
Isakson, Marcia J.–2156, 2179,
2188, 2200, 2268, 2269
Ishida, Yoshihisa–2214, 2215
Ishii, Tatsuya–2137
Islam, Upol–2304
Ito, Masanori–2155
Itoh, Miho–2074
Izuhara, Wataru–2255
Jacewicz, Ewa–2312, 2313
Jain, Ankita D.–2226
Jakien, Kasey M.–2242
Jakits, Thomas–2218
James, Michael–2079
James, Michael M.–2079, 2081,
2100, 2101, 2102, 2167, 2169
Jang, Hyung Suk–2274
Janssen, Sarah–2128
Jardin, Paula P.–2097
Järvikivi, Juhani–2173
Jasinski, Christopher–2200
Jeger, Nathan–2282
Jeon, Eunbeom–2209
Jeon, Jin Yong–2151, 2244, 2274
Jeske, Andrew–2107
Jia, Kun–2283
Jiang, Yong-Min–2155
Jiang, Yue–2082
Jig, Kyrie–2112
Jig, Kyrie K.–2111
Joaj, Dilip S.–2285
Joh, Cheeyoung–2253
Johnsen, Eric–2158, 2280
Johnson, Chip–2246
Johnson, Cynthia–2192
Johnson, Cynthia D.–2191
Johnson, Keith–2173, Cochair
Session 1pSCa (2103)
Johnson, Mark–2117
Johnston, William–2203
Jones, Chris–2118
Jones, Gareth–2185
Jones, Kate E.–2276
Jones, Ryan M.–2300
Jones, Zack–2174
Jongman, Allard–2107, 2173, 2174
Joseph, John E.–2247
Ju, Xiaodong–2254
Judge, John–2209
Judge, John A.–2186, 2194
Jüssi, Ivar–2248
Kaipio, Jari P.–2289
Kaliski, Kenneth–2203, Cochair
Session 3aNS (2203)
Kallay, Jeffrey–2263
Kampel, Sean D.–2242
Kamrath, Matthew–2304, Cochair
Session 2aSAa (2140)
Kan, Weiwei–2076
Kanada, Sunao–2143
Kandhadai, Padmapriya–2263
Kang, Jian–2164
Kang, Yoonjnung–2145
Kanter, Shane J.–2090, 2244
Kaplan, Maxwell B.–2153
Kapusta, Matthew–2101
Karami, Mohsen–2152
Kargl, Steven G.–2087, 2110, 2111
Karlin, Robin–2144
Karlin, Robin P.–2176
Karunakaran, Chandra–2125
Karzova, Maria M.–2193, 2289
Katayama, Makito–2255
Katsnelson, Boris–2155, 2156, 2317
Kaul, Sanjiv–2280
Kausel, Wilfried–2284
Kawai, Shin–2152
Ke, Fangyu–2202
Keck, Casey–2261
Kedrinskiy, Valeriy–2290
Keen, Sara–2275
Keil, Martin–2164
Keil, Ryan D.–2096, 2125
Keith, Robert W.–2306
Kellison, Todd–2117
Kelly, Jamie R.–2095
Kemmerer, Jeremy P.–2125
Kemp, John N.–2149
Kenny, R. Jeremy–Cochair Session
2pNSb (2167)
Key, Charles R.–2117
Khan, Sameer ud Dowla–2295
Kho, Hyo-in–2209
Khokhlova, Tatiana–2251, 2278,
2301
Khokhlova, Tatiana D.–2249,
2250
Khokhlova, Vera–2220, 2251, 2278,
2301
Khokhlova, Vera A.–2191, 2193,
2249, 2250, 2289, Cochair
Session 4aBA (2248), Cochair
Session 4pBA (2278)
Khosla, Sid–2126, 2144
Kidd, Gary R.–2308, 2311, 2314
Kiefte, Michael–2081, Chair Session
1aSC (2081)
Kiel, Barry V.–2100
Kieper, Ronald W.–2134
Kil, Hyun-Gwon–2141
Kim, Hak-sung–2209
Kim, Hui-Kwan–2206
Kim, Jea Soo–2148
Kim, Junghun–2149
Kim, Kang–2302
Kim, Kyung-Ho–2106
Kim, Nicholas–2183
Kim, Noori–2251
Kim, Yong-Joe–2095
Kim, Yong Tae–2094
Kim, Yousok–2209
King, Eoin A.–2219
Kinjo, Atsushi–2155
Kinnick, Randall R.–2124
Kitahara, Mafuyu–2146
Kitterman, Susan–2113
Klaseboer, Evert–2289
Klegerman, Melvin E.–2095
Klinck, Holger–2074, 2118, 2119,
2154, Cochair Session 2aAB
(2116)
Klinck, Karolin–2154
Kloepper, Laura–2156
Kloepper, Laura N.–2093, 2154,
2160
Klos, Jacob–2223
Kluender, Keith R.–2082
Kniffin, Gabriel P.–2112
Knobles, David P.–2297, 2317
Knopik, Valerie S.–2264, 2314
Knorr, Hannah D.–2128
Knox, Don–2305
Koblitz, Jens C.–2091, 2248
Koch, Rachelle–2202
Koch, Robert A.–2297
Koch, Robert M.–Cochair Session
2aSAa (2140)
Kochetov, Alexei–2145
Koenig, Laura L.–2173, 2262
Kolar, Miriam A.–2270
Kolios, Michael C.–2096
Kollmeier, Birger–2273
Komova, Ekaterina–2144
Konarski, Stephanie G.–2099
Kondaurova, Maria V.–2262, Chair
Session 4aSCb (2261)
Kong, Eunjong–2145
Kong, Eun Jong–2174
Kopechek, Jonathan A.–2302
Kopf, Lisa M.–2293
Korakas, Alexios–2121
Korkowski, Kristi R.–2096
Korman, Murray S.–2128, 2129,
2170, Cochair Session 2pPA
(2170)
Korzyukov, Oleg–2176
Kosawat, Krit–2315
Kosecka, Monika–2248
Kowalok, Ian–2243
Koza, Radek–2248
Kozlov, Alexander I.–2196
Kraft, Barbara J.–2267
Kreider, Wayne–2157, 2191, 2193,
2249, 2250, 2278, 2301, Cochair
Session 3aBA (2191)
Kreiman, Jody–2295
Kripfgans, Oliver D.–Cochair
Session 5aBA (2300)
Kripfgans, Oliver–2301
Krolik, Jeffrey–2214
Krylov, Victor V.–2076
Krysl, Petr–2086
Kujawa, Sharon G.–2258
Kumar, Anu–2246
Kumar, Viksit–2157
Kuperman, William–2091
Kuperman, William A.–2189
Kurbatski, K.–2168
Küsel, Elizabeth T.–2275
Kwak, Yunsang–2210
Kwan, James–2302
Kwon, Bomjun J.–2271
Kyhn, Line–2248
La Follett, Jon R.–2110
Lafon, Cyril–2220, 2279
Lagarrigue, Clément–2077
Lahiri, Aditi–2175
Lähivaara, Timo–2289
Laidre, Kristin–2091
Lalonde, Kaylah–2263
Lam, Boji–2263
Lambaré, Hadrien–2168
Lammers, Marc O.–2276
Lan, Yu–2318
Laney, Jonathan–2114
Lang, William W.–2161
Langer, Matthew D.–2094
Lapotre, Céline–2134
La Rivière, Patrick J.–2157
Larson, Charles R.–2176, 2294
Lavery, Andone C.–2190, 2200,
Cochair Session 3aAO (2187)
Law, Wai Ling–2145
Lawless, Martin S.–2272
Layman, Jr., Christopher N.–2252
Le Bas, Pierre-Yves–2252, 2265
Le Cocq, Cecile–2135
Lee, Adrian KC–Cochair Session
4aPP (2257), Cochair Session
4pPP (2291)
Lee, Chan–2141
Lee, Chao-Yang–2315
Lee, Dohyung–2137
Lee, Franklin–2192, 2193
Lee, Franklin C.–2193
Lee, Goun–2173
Lee, Greg–2143
Lee, Hunki–2137
Lee, Hyunjung–2108
Lee, Jaewook–2083
Lee, Joonhee–2183, 2200
Lee, Kevin M.–2207, 2252, Cochair
Session 3aPA (2205)
Lee, Kwang H.–2112
Lee, Sunwoong–2147
Leek, Marjorie–2291
Lee-Kim, Sang-Im–2310
Lehrman, Paul D.–2285
Leib, Stewart J.–2080
Leibold, Lori J.–2242
Leishman, Timothy W.–2199
Le Magueresse, Thibaut–2171
Lembke, Chad–2117
Lendvay, Thomas S.–2193
Lengeris, Angelos–2107
Leonard, Martha L.–2184
Lermusiaux, Pierre F.–2316
Lester, Rosemary A.–2293
Leta, Fabiana R.–2282
Levow, Gina-Anne–2175
Levy, Roger–2107
Lewis, George K.–2094
Lewis, M. Samantha–2291
Li, Chunxiao–2113
Li, Fenfang–2289
Li, Fenghua–2148
Li, Guangyan–2191
Li, Haisen–2318
Li, Kai Ming–2138, 2139, 2197,
2205, Cochair Session 2aPA
(2138)
Li, Mingxing–2175
Li, Ruo–2318
Li, Shihuai–2288
Li, TianYun–2224
Li, Wei–2215
Li, Xinyan–2288
Li, Xiukun–2189
Li, Xu–2224, 2318
Li, Xy–2288
Li, Yang–2189
Lim, Hansol–2151
Lim, Raymond–2087, 2112
Lima, Key F.–2282, 2305
Lin, Chyi-Her–2312
Lin, Kuang-Wei–2250
Lin, Shen-Jer–2266
Lin, Susan–Cochair Session 1pSCa
(2103)
Lin, Tzu-Hao–2074, 2186
Lin, Ying-Tsong–2093, 2121, 2315,
2316, 2317
Lin, Yu Ching–2312
Lin, Yuh-Jyh–2312
Lin, Yung-Chieh–2312
Lindemuth, Michael–2117
Lindsey, Stephen–2182
Lingeman, James–2192
Lingeman, James E.–2191, 2192
Lippert, Stephan–2206
Lippert, Tristan–2206
Liu, Chang–2106, 2211
Liu, Dalong–2124
Liu, Emily–2178
Liu, GuoQing–2224, 2318
Liu, Hanjun–2294
Liu, Peng–2254
Liu, Tengxiao–2159
Liu, Ying–2294
Liu, Zhongzheng–2095
Liu, Ziyue–2192, 2193
Llanos, Fernando–2082, 2107
Lodhavia, Anjli–2176
Loebach, Jeremy–2307, 2311
Lof, John–2300
Logan, Roger M.–2160
Logawa, Banda–2151
Loisa, Olli–2248
Lomotey, Charlotte F.–2177
Long, Gayle–2313
Lopes, Joseph L.–2268
López Arzate, Diana C.–2247
Lopez Prego, Beatriz–2107
Lotto, Andrew–2108
Lotto, Andrew J.–2293, 2307
Loubeau, Alexandra–2223, Cochair
Session 3pNS (2223)
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
LoVerde, John J.–2181, 2219
Lowenstein, Joanna H.–2262
Lowrie, Allen–2269
Lozupone, David–2203
Lu, Huancai–2113, 2142
Lu, Jia–2148
Lu, Junqiang–2254
Lu, Wei–2318
Luan, Yi–2175
Lubert, Caroline P.–2126, 2137
Lucas, Tim C. D.–2276
Luchies, Adam–2158
Luczkovich, Joseph J.–2276
Luegmair, Georg–2294
Luh, Wenming–2144
Lulich, Meredith D.–2144
Lulich, Steven–2199, 2259
Lulich, Steven M.–2104, 2127,
2128, 2144, 2260, Cochair
Session 4aSCa (2259)
Lunsford, Chris–2091
Luo, Dan–2285
Lynch, James–Cochair Session
5aUW (2315)
Lynch, James–2155, 2315
Lynch, James F.–2093
Lyons, Gregory W.–2139
Lyrintzis, A.–2168
Lysoivanov, Yuri–2150
MacAulay, Jamie–2091
Macaulay, Jamie D.–2093
MacAuslan, Joel–2265
MacConaghy, Brian–2095
MacGillivray, Alexander O.–2206
Machi, Junji–2123, 2157
Mack, Gregory–2168
Maddox, Alexandra–2126
Maddox, W. T.–2264
Maddox, W. Todd–2314
Magliula, Elizabeth A.–2194
Magstadt, Andrew S.–2100
Mahdavi Mazdeh, Mohsen–2105
Mahon, Merle–2313
Majdinasab, Fatemeh–2312
Maki, C. T.–2256
Maki, Daniel P.–2308
Makris, Nicholas C.–2093, 2147,
2226, 2246, 2317
Malcolm, Alison–2254
Maling, George C.–2161
Malla, Bhupatindra–2101
Malphurs, David E.–2112
Malykhin, Andrey–2317
Mamou, Jonathan–2123, 2157,
Cochair Session 2aBA (2122)
Mamou-Mani, Adrien–2132, 2284
Mankbadi, Reda–2168
Manley, David–2089
Mann, David–2117
Maraghechi, Borna–2096
Marcus, Logan S.–2256
Mareze, Paulo–2141
Margolina, Tetyana–2247
Market, Jennifer–2255
Markham, Benjamin–2089, 2162
Marques, Tiago A.–2245
Marsh, Christopher A.–2153
Marsh, Jon–2095, 2264, 2282
Marshall, Andrew–2223
Marsteller, Marisa–2308
Marston, Philip L.–2087, 2088,
2110, 2111, 2172, 2298
Marston, Timothy M.–2110, 2172
Martin, James S.–2159
Martin, Stephen–2092
Mast, T. Douglas–2096, 2125, 2199,
2302
Masud, Salwa–2258
Mathias, Delphine–2091
Matias, Luis–2275
Matsumoto, Haru–2118, 2119
Matsuo, Ikuo–2152, 2155
Mattson, Steve–2286
Matula, Thomas J.–2095, 2279
Maussang, Frédéric–2266
Maxwell, Adam–2193, 2278, 2301
Maxwell, Adam D.–2157, 2249,
2250, 2251
Mayell, Marcus–2089
Maynard, Julian D.–2131
Mayoral, Salvador–2080
Mazzocco, Elizabeth–2128
McAteer, James A.–2191
McCammon, Diana–2226
McCarthy, John–2095, 2264, 2282
McComas, Sarah–2266
McCullough, Elizabeth A.–2109
McDaniel, J. Gregory–2194, Cochair
Session 3aEA (2194)
McDannold, Nathan–2221
McDonald, Mark A.–2148
McDougal, Forrest–2127
McFarland, Dennis J.–2258
McGeary, John E.–2264, 2314
McGee, JoAnn–2073
McGettigan, Carolyn–2243
McGough, Robert–2096, 2125, 2128,
2159, Chair Session 1pBA
(2094)
McKay, Scotty–2127
McKenna, Elizabeth A.–2134
McKenna, Mihan–2266
McKinley, Richard–2079
McKinley, Richard L.–2079, 2133,
2134, 2166, Cochair Session
1aPA (2079), Cochair Session
1pPA (2100)
McKinnon, Daniel–2203
McLaughlin, Dennis K.–2101
McMullen, Andrew–2287
McNeese, Andrew–2178
McNeese, Andrew R.–2207, 2252
McPhee, Peter–2203
McPherson, David D.–2095
Means, Steve L.–2214
Meegan, G. Douglas–2165
Meekings, Sophie–2243
Mehrmohammadi, Mohammad–2159
Mehraei, Golbarg–2258
Mehta, Daryush–2260
Meixner, Duane–2159
Mellinger, David K.–2118, 2119,
2153, 2154, 2275, Cochair
Session 1pAB (2091), Cochair
Session 2aAB (2116)
Melodelima, David–2220
Menard, Lucie–2105
Meng, Qingxin–2265
Mental, Rebecca–2143
Merkens, Karlina–2153
Mi, Lin–2106
168th Meeting: Acoustical Society of America
2361
Michalopoulou, Zoi-Heleni–2085
Mielke, Jeff–2104
Mikhalevsky, Peter–2148
Mikhaylova, Daria A.–2180
Mikkelsen, Lonnie–2248
Miksis-Olds, Jennifer L.–2186
Miller, Amanda L.–2103
Miller, Douglas–2301
Miller, Douglas L.–2158
Miller, Greg–2114
Miller, Gregory A.–2244, 2274
Miller, James D.–2212, 2308, 2311
Miller, James G.–2122
Miller, James H.–2156, 2178, 2190,
2197, 2206
Miller, Rita J.–2125
Miller, Taylor L.–2176
Mirabito, Chris–2316
Mishima, Yuka–2074
Mitra, Vikramjit–2082
Mitran, Sorin M.–2191, 2192
Miyamoto, Yoshinori–2074
Miyashita, Takuya–2303
Mizoguchi, Ai–2103
Mizumachi, Mitsunori–2314
Moeller, Niklas–2183
Molinari, Michael–2281
Molis, Michelle R.–2311
Mollashahi, Maryam–2312
Monson, Brian B.–2272, 2307
Moon, Wonkyu–2253
Mooney, T. A.–2153
Moorcroft, Elizabeth–2276
Moore, David–2305
Moore, David R.–2257
Moore, Keith A.–2203
Moore, Thomas R.–2132, 2284
Moquin, Philippe–2265
Mora, Pablo–2101
Moran, John–2091
Morasutti, Jon–2129
Morgan, Andrew–2090
Morgan, Mallory–2243
Moriconi, Stefano–2313
Morisaka, Tadamichi–2074
Morlet, Thierry–2306
Moron, Juliana R.–2073
Morrill, Tuuli–2146, 2176
Morris, Philip–2101
Morris, Richard J.–2293, Chair
Session 4pSC (2293)
Morrison, Andrew C.–2170
Morrison, Andrew C. H.–Chair
Session 4pMU (2283), Cochair
Session 5aED (2303)
Morshed, Mir Md M.–2136
Moss, Cynthia F.–2185, Chair
Session 2pAB (2152)
Moss, Geoffrey R.–2131
Mott, Brian–2280
Mousel, John–2224
Moyal, Olivier–2253
Muehleisen, Ralph T.–2172
Muellner, Herbert–2218
Muenchow, Andreas–2317
Muenster, Malte–2164
Muhlestein, Michael B.–2098
Muir, Thomas G.–2178, 2252
Mukae, Junpei–2214
Müller, Rolf–2075
Mullins, Lindsay–2261
Munthuli, Adirek–2315
Munyazikwiye, Gerard–2127
Murakami, Takahiro–2214, 2215
Murata, Taichi–2134
Murphy, Stefan M.–2226
Murphy, William J.–2134, 2165,
Cochair Session 2aNSa (2133),
Cochair Session 2pNSa (2165)
Murray, Alastair R.–2077
Murray, Nathan E.–2101, 2139,
2167
Murray, Patrick–2195
Murta, Bernardo H.–2097
Muzi, Lanfranco–2155
Myers, Kyle R.–2209
Myers, Rachel–2302
Nachtigall, Paul E.–2093
Naderyan, Vahid–2139
Nagao, Kyoko–2306
Naghshineh, Koorosh–2209
Nagle, Anna S.–2125
Nakamura, Aya–2167
Nam, Hosung–2082
Namjoshi, Jui–2176
Nandamudi, Srihimaja–2295
Narayanan, Shrikanth S.–2143
Nariyoshi, Pedro–2125
Nault, Isaac–2192
Neal, Matthew T.–2091
Nearey, Terrance M.–2081
Neel, Amy T.–2210, Cochair
Session 3aSC (2210)
Neely, Stephen T.–2211
Neilsen, Tracianne B.–2079, 2081,
2100, 2101, 2102, 2128, 2135,
2167, 2169, 2171, 2199, Cochair
Session 2pNSb (2167)
Nelson, Danielle V.–2074
Nennig, Benoit–2077
Netchitailo, Vladimir–2279
Neubauer, Juergen–2259
Newhall, Arthur–2315
Newhall, Arthur E.–2093
Nguon, Chrisna–2256, 2289
Nguyen, Man M.–2302
Nguyen, Vincent–2140
Nicholas, Michael–2252
Nicolaidis, Katerina–2107
Nielsen, Peter L.–2155
Nieukirk, Sharon L.–2154
Nightingale, Kathryn–2249
Nijhof, Marten J.–2111
Nishi, Kanae–2211
Nishimiya, Kojiro–2213
Nissen, Jene–2246
Nittrouer, Susan–2262
Noble, John M.–2139, 2169
Nohara, Timothy–2129
Norris, Andrew–2078
Norris, Andrew N.–2098
Norris, Thomas F.–2246
Northridge, Simon–2093
Nosal, Eva-Marie–2094
Nottoli, Chris S.–2142
Nozawa, Takeshi–2108
Nusbaum, Howard C.–2202,
2261
Nusierat, Ola–2252
Nystrand, Martin–2215
Oberai, Assad A.–2159
O’Boy, Daniel J.–2076
O’Brien, William–2123, 2219
O’Brien, William D.–2095, 2158
O’Brien Jr., William D.–2096
O’Connell, Victoria–2185
Odom, Jonathan–2214
Odom, Robert I.–2187, 2199
Oelschlaeger, Karl–2223
Oelze, Michael–2158, Chair Session
2pBA (2157), Cochair Session
2aBA (2122)
Oelze, Michael L.–2123, 2125
Ogata, Kazuto–2314
Oh, Byung Kwan–2209
Oh, Seongmin–2274
Ohl, Claus-Dieter–2256, 2289
Ohl, Siew-Wan–2289
Ohm, Won-Suk–2137
Okerlund, David–2293
Okutsu, Kenji–2074
Oleson, Erin–2118, 2153, 2154
Oliphant, Michelle–2127
Olivier, Dazel–2077
Ollivier, Benjamin–2266
Ollivier, Sebastien–2289
Olney, Andrew–2215
Olson, Bruce C.–2090
O’Neal, Robert–2203
Onsuwan, Chutamanee–2315
O’Reilly, Meaghan A.–2300
Oren, Liran–2126, 2144
Orr, Marshall H.–2121
Osman, E.–2168
Ostarek, Markus–2243
Ostashev, Vladimir E.–2138, 2139
Ostendorf, Mari–2175
Ostrovskii, Igor–2252
Ostrovskiy, Dmitriy B.–2180
Oswald, Julie N.–2154
Ota, Anri–2215
Otero, Sebastian–2150
Ounadjela, Abderrhamane–2253
Ouyang, Huajiang–2224, 2318
Ow, Dave–2289
Owen, Kylie–2185
Owens, Gabe E.–2301
Ozmeral, Erol J.–2291
Pace, Mike–2266
Pack, Adam A.–2154
Page, Juliet A.–2079
Paillasseur, Sébastien–2171
Pajak, Bozena–2107
Pallayil, Venugopalan–2268
Palmer, William K.–2204
Palumbo, Daniel L.–2285
Papamoschou, Dimitri–2080
Papesh, Melissa–2291
Parizet, Etienne–2309
Park, Hanyong–2174
Park, Hyo Seon–2209
Park, Junhong–2209, 2210
Park, Taeyoung–2137
Parks, Susan E.–2185
Partan, Jim–2153
Partanen, Ari–2249, 2278
Pate, Michael B.–2281
Patel, Sona–2176
Patterson, Brandon–2158
Paul, Adam L.–2219
Paul, Stephan–2097, 2253, 2282,
2304, 2305
Paustian, Iris–2268
Pavese, Lorenzo–2294
Pawliczka, Iwona–2248
Payton, Karen–2189, 2265
Pearson, Heidi–2185
Pearson, Michael F.–2169
Pecknold, Sean–2226, 2297, 2298
Peddinti, Vijay Kumar–2164
Pedro, Rebecca–2128
Pedrycz, Adam–2213
Pellegrino, Paul M.–2256
Peng, Tao–2095
Peng, Yuan–2205
Peng, Zhao–2126, 2200, 2274,
Cochair Session 3aID (2197)
Penny, Christopher W.–2285
Penrod, Clark S.–2188
Perez, Camilo–2095, 2279
Pestorius, Frederick M.–2188
Petchpong, Patchariya–2094
Petillo, Stephanie–2110
Pettersen, Michael S.–2129
Pettinato, Michèle–2262
Pettit, Chris L.–2138
Pettyjohn, Steve–2182, 2208
Pfeiffer, Scott–2114, 2244
Pfeiffer, Scott D.–2089, 2244
Pfeifle, Florian–2132, 2164
Philipp, Norman H.–Chair Session
3pAA (2218)
Phillips, James E.–2208
Piao, Shengchun–2265
Piccinini, Page–2107
Pichora-Fuller, Margaret K.–2292
Pierce, Allan D.–2179
Pineda, Nick–2268
Pinson, Samuel–2121, 2268, 2296
Pinton, Gianmarco–2279
Piovesan, Tenile–2305
Piperkova, Rossitza–2202
Pisoni, David B.–2314
Plath, Niko–2132
Plotkin, Kenneth–2286
Plotkowski, Andrea R.–2310
Plotnick, Daniel–2087, 2172
Plotnick, Daniel S.–2088, 2110,
2111
Plsek, Thomas J.–2115
Pol, Graland-Mongrain–2279
Poncot, Remi–2135
Ponte, Aurelien–2316
Pope, Hsin-Ping C.–2101
Popper, Arthur N.–2205
Porta-Gándara, Miguel A.–2118
Porter, Thomas R.–2300
Possing, Miles–2215
Potty, Gopu–2178
Potty, Gopu R.–2156, 2190, 2197,
2206
Powell, Larkin A.–2073
Powers, Jeffry–2300
Powers, Russell–2101
Pozzer, Talita–2253
Prakash, Arun–2141
Prater, James L.–2112
Preisig, James C.–2266
Preminger, Jill E.–2198, 2308
Preston, John R.–2225, 2297
Price, John C.–2203
Pritz, Tamas–2256
Probert Smith, Penny–2302
Qiang, Bo–2124
Qiao, Shan–2281
Qiao, Wenxiao–2254
Qin, Jixing–2156
Qin, Zhen–2107
Quick, Nicola J.–2247
Quijano, Jorge E.–2147, Chair
Session 2aUW (2147)
Radhakrishnan, Kirthi–2199, 2303
Rafferty, Tom–2151
Raghukumar, Kaustubha–2155
Raghunathan, Shreyas B.–2097
Rakerd, Brad–2309
Raman, Ganesh–2172
Ramanarayanan, Vikram–2143
Ramdas, Kumaresan–2164
Ranft, Richard–2182
Rankin, Shannon–2117, 2245
Rankinen, Wil A.–2082
Rao, Marepalli B.–2125
Rasmussen, Per–2080, 2102
Raspet, Richard–2139
Rathsam, Jonathan–2224, Cochair
Session 3pNS (2223)
Ratilal, Purnima–2093, 2147, 2226,
2246, 2298, 2317
Rawlings, Samantha–2181
Read, Andrew J.–2277
Reba, Ramons A.–2080
Redford, Melissa A.–2263
Reed, Heather–2195
Reeder, Ben–2316
Reeder, D. Benjamin–2120
Reeder, Davis B.–2178
Reese, Marc C.–2281
Reetz, Henning–2175
Reetzke, Rachel–2263
Reganti, Namratha–2159
Regier, Kirsten T.–2082
Reichman, Brent–2079
Reichman, Brent O.–2102, 2169
Reidy, Patrick–2262
Reiter, Sebastian–2202
Remillieux, Marcel C.–2252, 2265
Ren, Gang–2202
Ren, Xiaoping–2125
Rennies, Jan–2273
Reuter, Eric L.–2271
Riahi, Nima–2304
Rich, Kyle T.–2302
Richards, Angela–2243
Richards, Roger T.–Chair Session
4pEA (2281)
Richie, Carolyn–2212
Riddle, Jason–2276
Rideout, Brendan P.–2094
Riegel, Kimberly A.–2243
Rietdijk, Frederik–2286
Rimington, Dennis–2277
Riquimaroux, Hiroshi–2152
Rivens, Ian–2250, 2301
Rivera-Campos, Ahmed–2105
Rivers, Julie–2246
Rizzi, Stephen A.–2285, 2286, 2287,
Cochair Session 4pNS (2285)
Roberts, Bethany L.–2277
Roberts, Joshua J.–2129
Roberts, Philip J.–2175
Roberts, William W.–2193, 2251,
2280
Robinette, Martin–2166
Robinson, Stephen P.–2216, 2217
Roch, Marie A.–2073, 2153
Rodriguez, Christopher F.–2285
Rodriguez, Peter–2256
Rogers, Catherine L.–2198, 2211,
2273, Cochair Session 3aSC
(2210)
Rogers, Chris B.–2285
Rogers, Jeffrey S.–2214
Rogers, Lydia R.–2210
Rogers, Peter H.–2159
Rohrbach, Daniel–2123, 2157
Romero-Vivas, Eduardo–2118
Rone, Brenda K.–2246
Ronsse, Lauren M.–2129
Rosado-Mendez, Ivan–2122
Rosado-Mendez, Ivan M.–2124
Rosado Rogers, Lydia–2211
Rosen, Stuart–2243
Rosenberg, Carl–2162, Cochair
Session 2pID (2161)
Rosenfield, Jonathan R.–2157
Ross, Susan–2192
Rossing, Thomas D.–2170
Rossi-Santos, Marcos–2073
Roth, Ethan–2118
Rourke, Christopher S.–2174
Rouse, Jerry W.–2223
Rowan-West, Carol–2203
Rowcliffe, Marcus J.–2276
Rowland, Elizabeth–2275
Roy, Kenneth–2181
Roy, Kenneth P.–Chair Session
3aAA (2181)
Rudisill, Chase J.–2128
Ruf, Joseph–2168
Ruf, Joseph H.–2167
Ruhnau, Marcel–2206
Rupp, Martin–2202
Ruscher, Christopher J.–2100
Russell, Daniel A.–2197, 2200
Ryerson, Erik J.–2151
Sabra, Karim G.–2111, 2149, 2190
Sacks, Jonah–2089
Sadeghi-Naini, Ali–2123
Sadykova, Dina–2247
Saegusa-Beecroft, Emi–2123, 2157
Sagers, Jason D.–2178, 2317
Sahu, Saurabh–2312
Sakaguchi, Aiko–2074
Sakamoto, Nicholas–2141
Sakata, Yoshino–2213
Sakiyama, Naoki–2255
Salter, Ethan–2181
Salton, Alexandria R.–2079, 2167,
2169
Saltzman, Elliot–2082
Sambles, Roy–2077
Sammelmann, Gary S.–2112
Samson, David J.–2150
Sanchez-Dehesa, Jose–2076
Sandhu, Jaswinder S.–2157
Sanghvi, Narendra T.–2220, Cochair
Session 3pBA (2219)
Sankin, Georgy–2191, 2192
Sannachi, Lakshmanan–2123
Sapozhnikov, Oleg–2193, 2301
Sapozhnikov, Oleg A.–2191, 2193,
2249, 2250
Sapozhnikov, Oleg–2251
Sarkar, Jit–2091, 2185
Sarkissian, Angie–2086, 2112, 2194
Satter, Michael J.–2117
Scanlon, Michael V.–2252
Scanlon, Patricia–2182
Scarborough, Rebecca–2083
Scarbrough, Paul–2115, 2151
Schade, George–2278
Schade, George R.–2249, 2251,
2278
Scharenbroch, Gina–2311
Scherer, Ronald–2296
Scherer, Ronald C.–2295, 2296
Schertz, Jessamyn L.–2108
Schlinker, Robert H.–2080
Schmid, Charles E.–2161
Schmidt, Anna M.–2146
Schmidt, Henrik–2110
Schnitzler, Hans-Ulrich–2091
Schomer, Paul D.–2204, Cochair
Session 3aNS (2203)
Schrader, Matthew K.–2128
Schreiber, Nolan–2143
Schulte-Fortkamp, Brigitte–2204
Schutz, Michael–2307
Schwan, Logan–2098
Scott, E. K. Ellington–2202
Scott, Michael P.–2306
Scott, Sophie K.–2243
Scott-Hayward, Lindesay A.–2247
Segala, David–2195
Seger, Kerri–2247
Seibert, Anna-Maria–2091
Selep, Andrew–2158
Seo, Seonghoon–2141
Seong, Woojae–2180
Sepulveda, Frank–2305
Sereno, Joan–2177
Setter, Jane–2312
Shade, Neil T.–2115
Shafer, Benjamin–Chair Session
3aSAb (2208)
Shafer, Benjamin M.–Chair Session
3aSAa (2207)
Shah, Apurva–2302
Shannon, Dan–2080
Sharma, Ariana–2243
Shattuck-Hufnagel, Stefanie–2174,
2260
Shekhar, Himanshu–2199, 2302
Shen, Jing–2314
Shen, Junyuan–2215
Sheng, Li–2263
Sheng, Xueli–2148
Shepherd, Micah R.–2142, 2284
Sherren, Richard S.–2209
Sheth, Raj C.–2306
Shi, Lu-Feng–2173
Shi, William T.–2300
Shih, Chilin–2145
Shin, Ho-Chul–2078
Shin, Kumjae–2253
Shinn-Cunningham, Barbara–2258,
2271
Shiu, Yu–2275
Shofner, William–2199, 2308
Shrivastav, Rahul–2293, 2295
Shrotriya, Pranav–2279
Siderius, Martin–2155, 2189
Sieck, Caleb F.–2099
Siegmann, William L.–2179, 2190
Signorello, Rosario–2295
Sikarwar, Nidhi–2101
Silbert, Noah H.–2174, 2307, Chair
Session 5aPPa (2306)
Siliceo, Oscar E.–2268
Sillings, Roy–2212
Simmen, Jeffrey A.–2187
Simmons, James A.–2154, 2272,
Chair Session 1aAB (2073)
Simon, Julianna–2301
Simon, Julianna C.–2249
Simonis, Anne–2153
Simons, Theodore–2276
Simpson, Brian D.–2166
Simpson, Harry–2112
Simpson, Harry J.–2111, 2112
Sirovic, Ana–2148, Cochair Session
3aAB (2184)
Sivaraman, Ganesh–2082
Sivriver, Alina–2279
Skordilis, Zisis Iason–2143
Skowronski, Mark D.–2293
Slaton, William–2127, 2160, 2288,
Cochair Session 4aPAb (2256),
Cochair Session 4pPA (2288)
Smaragdakis, Costas–2120
Smiljanic, Rajka–2109, 2241
Smirnov, Dmitry–2078
Smith, Adam B.–2093
Smith, Anthony R.–2087, 2088
Smith, Chad–2268, 2269
Smith, Cory J.–2208
Smith, Eric–2178
Smith, Jennifer A.–2073
Smith, John D.–2077
Smith, Nathan D.–2128
Smith, Sherri L.–2292
Smith, Silas–2145
Smith, Valerie–2181
Snell, Colton D.–2091
Soles, Lindsey–2307
Sommerfeldt, Scott D.–2199
Sommers, Mitchell–2259, Cochair
Session 4aSCa (2259)
Son, Su-Uk–2298
Song, Aijun–2148
Song, H. C.–2148
Song, Hee-Chun–2148
Song, Heechun–2180
Song, Zhongchang–2075
Sorensen, Mathew–2192
Sorensen, Mathew D.–2192, 2193
Sorenson, Matthew–2193
Souchon, Remi–2279
Soule, Dax C.–2092
Sounas, Dimitrios–2099, 2281
Southall, Brandon L.–2247
Souza, Pamela–2314
Sparrow, Victor–2188, 2197
Sparrow, Victor W.–2200
Speights, Marisha–2082
Spincemaille, Pascal–2144
Spivack, Arthur J.–2156
Sponheim, Nils–2097
Sprague, Mark W.–2276
Srinivasan, Nirmal–2311
Srinivasan, Nirmal K.–2242
Srinivasan, Nirmal Kumar–2242
Stansell, Megan–2242
Stanton, Timothy K.–2187, 2222
Stauffer, Stauffer A.–2186
Stecker, G. Christopher–2198, 2308
Steininger, Gavin–2085, 2269, 2298
Sterling, John–2194
Stewart, Kenneth–2128
Stiles, Timothy–2158
Stilp, Christian–2308, 2310, 2311
Stilp, Christian E.–2198
Stilz, Peter–2091
Stimpert, Alison K.–2247
Stockman, Ida–2312
Stojanovik, Vesna–2312
Stokes, Michael A.–2083, 2314
Story, Brad H.–2272, 2293, 2307
Stott, Alex–2129
Stotts, Steven A.–2297
Stout, Trevor A.–2081, 2100
Straley, Janice–2091, 2185
Stratton, Kelly–2094
Strickland, Elizabeth A.–2306
Strong, John–2244
Strong, William J.–2199
Sturm, Frédéric–2119, 2121
Styler, Will–2083
Subramanian, Swetha–2125
Sucunza, Federico–2277
Sugiyama, Hitoshi–2213
Sü Gül, Zühre–2219
Suits, Joelle I.–2166
Sullivan, Edmund–2213
Summers, Jason E.–2214
Sun, Lin–2318
Sung, Min–2253
Surve, Ratnaprabha F.–2285
Suzuki, Ryota–2074
Svegaard, Signe–2248
Swaim, Zach–2277
Swalwell, Jarred–2095
Swearingen, Michelle E.–2139
Sweeney, James F.–2281
Swift, Hales S.–2081
Szabo, Andrew R.–2153
Szabo, Thomas L.–2249, 2254
Szymczak, William G.–2086
Tabata, Kyohei–2215
Tabatabai, Ameen–2157
Tadayyon, Hadi–2123
Taft, Benjamin N.–2152
Taggart, Rebecca–2094
Taguchi, Kei–2303
Taherzadeh, Shahram–2078, Cochair
Session 2aPA (2138)
Tajima, Keiichi–2146
Takada, Mieko–2174
Takeyama, Yousuke–2168
Talbert, Coretta M.–2127
Talesnick, Lily–2313
Tamaddoni, Hedieh–2301
Tamaddoni, Hedieh A.–2302
Tanaka, Ryo–2215
Tandiono, Tandiono–2289
Tang, Dajun–2225, 2226, 2267,
Chair Session 3pUW (2225)
Tang, Sai Chun–2160
Tanizawa, Yumi–2167
Tanji, Hiroki–2215
Tantibundhit, Charturong–2315
Tao, Sha–2106
Taroudakis, Michael–2120
Tarr, Eric–2262
Tatara, Eric–2172
Tavakkoli, Jahan–2096
Tavossi, Hasson M.–2208
Taylor, Chris–2117
Tebout, Michelle–2128
Teilmann, Jonas–2248
Tennakoon, Sumudu P.–2290
Tenney, Stephen M.–2169
ter Haar, Gail–2220, 2250, 2301
ter Hofstede, Hannah M.–2185
Tewari, Muneesh–2278
Thaden, Joseph J.–2102
Thangawng, Abel L.–2252
Theis, Melissa A.–2133, 2134
Themann, Christa L.–2134
Theobald, Pete D.–2216, 2217
Thiel, Jeff–2192
Thode, Aaron–2091, 2185, 2216
Thode, Aaron M.–2247
Thomas, Derek C.–2081
Thomas, Jean-Hugh–2171
Thomas, Len–2245, 2247, 2248, 2275
Thompson, Charles–2140, 2256,
2289
Thompson, Eric R.–2166
Thompson, Stephen C.–2131, 2252
Thorsos, Eric I.–2297
Tiberi Ljungqvist, Cinthia–2248
Tilsen, Sam–2144, 2176, Chair
Session 2aSC (2143)
Timmerman, Nancy S.–Cochair
Session 3aNS (2203)
Tinney, Charles E.–2101, 2167,
2168
Titovich, Alexey–2078
Titovich, Alexey S.–2098
Titze, Ingo R.–2163, 2259
Tognola, Gabriella–2313
Tohid, Usama–2305
Tokudome, Shinichiro–2137
Tollefsen, Cristina–2298
Tolstoy, Maya–2092
Tong, Bao N.–2138
Too, Gee-Pinn J.–2266
Tougaard, Jakob–2248
Tournat, Vincent–2077
Towne, Aaron–2080, 2081
Tracy, Erik C.–2173
Tran, Duong D.–2093
Tran, Trang–2175
Tregenza, Nick–2248
Trevino, Andrea C.–2211
Treweek, Benjamin C.–2158
Trickey, Jennifer S.–2073
Trone, Marie–2217
Troyes, Julien–2168
Tsutsumi, Seiji–2137, Cochair
Session 2aNSb (2135)
Tsysar, Sergey A.–2191, 2250
Tu, Juan–2095
Tune, Johnathan–2192
Tuomainen, Outi–2262
Turgeon, Christine–2105
Turgut, Altan–2121, 2122
Turnbull, Rory–2172, 2313
Turner, Cathleen–2156
Turo, Diego–2209
Tuttle, Brian C.–2287
Tyack, Tyack L.–2247
Tyson, Cassandra–2300
Tyson, Thomas–2089
Ueberfuhr, Margarete A.–2306
Ui, Kyoichi–2137
Ulrich, Timothy J.–2252, 2265
Umemura, Shin-ichiro–2303
Umnova, Olga–2077, 2078, 2098,
Cochair Session 1aNS (2076),
Cochair Session 1pNS (2098)
Urbán, Jorge–2247
Urban, Jocelyn–2281
Urban, Matthew–2159
Urban, Matthew W.–2124
Valero, Henri Pierre–2253
Valero, Henri-Pierre–2213
Vali, Mansour–2312
Van Engen, Kristin–2241
Van Hedger, Stephen C.–2202
Vannier, Michaël–2309
Van Parijs, Sofie–2277
Van Stan, Jarrad H.–2260
Van Uffelen, Lora J.–2118
van Vossen, Robbert–2297
Vasilyeva, Lena–2173
Vatikiotis-Bateson, Eric–2105, 2310
Vavrikova, Marlen–2202
Vecherin, Sergey N.–2138
Venalainen, Kevin–2265
Vergez, Christophe–2283
Verlinden, Chris–2091
Verweij, Martin D.–2097
Vick, Jennell–2143
Vigeant, Michelle C.–2091, 2272,
2304
Vigmostad, Sarah–2224
Vignola, Joseph–2209
Vignola, Joseph F.–2186, 2194
Vignon, Francois–2300
Villa Médina, Francisco–2118
Villanueva, Flordeliza S.–2302
Villermaux, E.–2207
Visser, Fleur–2247
Vitorino, Clebe T.–2305
Vlaisavljevich, Eli–2250
Vogel, Irene–2176
Voix, Jeremie–2134, 2135
Volk, Roger–2112
Volk, Roger R.–2111
von Benda-Beckmann, Alexander
M.–2247
Von Borstel-Luna, Fernando D.–
2118
von Estorff, Otto–2206
Vuillot, François–2168
Wada, Kei–2137
Wage, Kathleen E.–2147
Wahlberg, Magnus–2091
Walden, David–2162
Walker, Bruce E.–2204
Wall, Alan T.–2079, 2100, 2102,
2171, Chair Session 5aNS (2304),
Cochair Session 1aPA (2079),
Cochair Session 1pPA (2100)
Wall, Carrie–2117
Waller, Steven J.–2116, 2270
Wallin, Brenton–2129, 2197
Walsh, Edward J.–2073
Walsh, Timothy–2141
Walton, Joseph P.–2258
Wan, Lin–2317
Wang, Chau-Chang–2179
Wang, Chenghui–2095
Wang, Chunhui–2148
Wang, Delin–2093, 2246, 2298
Wang, Ding–2075
Wang, Jingyan–2148
Wang, Kon-Well–2196
Wang, Lily–2126
Wang, Lily M.–2126, 2183, 2200,
2274, Cochair Session 4aAAa
(2241), Cochair Session 4pAAb
(2273)
Wang, Qi–2159
Wang, Ruijia–2254
Wang, Wenjing–2106
Wang, Xiuming–2254, 2255
Wang, Yak-Nam–2157, 2249, 2251,
2278, 2279
Wang, Yang–2125
Wang, Yen-Chih–2084
Wang, Yi–2144
Wang, Yijie–2133
Wang, Yiming–2139
Wang, Yue–2106
Wang, Zhitao–2075
Ward, Gareth P.–2077
Ward, Michael P.–2276
Warnecke, Michaela–2185
Warnez, Matthew–2280
Warren, Joseph–2185
Warzybok, Anna–2273
Washington, Jonathan N.–2105
Waters, Zachary J.–2112
Waters, Zack–2112
Waters, Zackary J.–2111
Watson, Charles S.–2212, 2308
Webster, Jeremy–2139
Wei, Chong–2075
Weinrich, Till–2164
Weirathmueller, Michelle–2092
Welton, Patrick J.–2179
Wennerberg, Daniel–2248
Werker, Janet F.–2263
Werner, Lynne–2309
Wessells, Hunter–2192
West, James E.–2130
Whalen, Cara–2073
Whalen, Douglas H.–2103
White, Charles E.–2178, 2197
White, Ean–2271
White, Robert D.–2127, 2285
Whiting, Jonathon–2270
Wickline, Samuel–2095, 2264,
2282
Wiggins, Sean M.–2073, 2092,
2118, 2148, 2153
Wilcock, Tom–2090
Wilcock, William S. D.–2092
Wild, Lauren–2185
Williams, James C.–2191
Williams, Kevin–2111, 2268
Williams, Kevin L.–2087, 2110,
2111, 2225, Chair Session
1aUW (2086), Chair Session
4pUW (2296)
Williams, Michael–2286
Williams, Neil–2156
Williams, Neil J.–2156
Wilson, D. Keith–2138
Wilson, David K.–2139
Wilson, Ian–2143
Wilson, Keith–2203
Wilson, Michael B.–2222
Wilson, Preston S.–2098, 2099,
2166, 2188, 2200, 2207, 2219,
2305, Cochair Session 3aAO
(2187), Cochair Session 3aID
(2197)
Wiseman, Suzi–2305
Withnell, Robert–2199
Withnell, Robert H.–2144, 2306
Wittum, Gabriel–2202
Wixom, Andrew–2194
Wochner, Mark S.–2207, Cochair
Session 3aPA (2205)
Wolff, Daniel M.–2201
Woodstock, Zev C.–2126
Woodworth, Michael–2195
Woolfe, Katherine F.–2149
Woolworth, David S.–2090, 2114,
2271
Worcester, Peter F.–2149, 2155
Worthmann, Brian–2148
Wrege, Peter H.–2275
Wright, Andrew–2248
Wright, Beverly A.–2292
Wright, Lindsay B.–2293
Wright, Neil A.–2262
Wright, Richard–2175, 2314
Wu, Chenhuei–2145
Wu, Juefei–2300
Wu, Kuangcheng–2140
Wu, Sean F.–2171, Chair Session
2aSAb (2142), Chair Session
2pSA (2171)
Wylie, Jennifer–2122
Wylie, Jennifer L.–2121
Xian, Wei–2185
Xiang, Ning–2084, 2162, 2198,
2214, 2219, 2222, Cochair
Session 1aSP (2084)
Xie, Feng–2300
Xie, Zilong–2263, 2314
Xin, Penglai–2255
Xu, Bo–2144
Xu, Jin–2279, 2280
Xu, Zhen–2250, 2251
Xue, Yutong–2183
Yack, Tina M.–2246, Cochair
Session 4aAB (2245), Cochair
Session 4pAB (2275)
Yamaguchi, Tadashi–2123
Yamakawa, Kimiko–2175
Yamamoto, Hiroaki–2255
Yan, Hanbo–2174
Yan, Qingyang–2109
Yanagihara, Eugene–2123, 2157
Yang, Byunggon–2146
Yang, Chung-Lin–2109
Yang, Desen–2189
Yang, Jie–2225, 2297
Yang, Ming–2164
Yang, Shie–2265
Yang, Tsih C.–2147
Yang, Yiing Jang–2316
Yang, Yiqun–2159
Yang, Yuanxiang–2256
Yasuda, Jun–2303
Yeh, Meng-Hsin–2312
Yellepeddi, Atulya–2266
Yi, Dong Hoon–2226
Yi, Han-Gyol–2264
Yi, Hao–2144
Yi, Hoyoung–2109
Yoder, Timothy–2112
Yoder, Timothy J.–2111, 2112
Yoneyama, Kiyoko–2146, 2174
Yonovitz, Al–2145
Yoshioka, Yutoku–2159
Yoshizawa, Shin–2303
Younk, Darrel–2286
Yu, Hsin-Yi–2074
Yuldashev, Petr–2249
Yuldashev, Petr V.–2193, 2250,
2289
Zabolotskaya, Evgenia A.–2158
Zabotin, Nikolai–2156
Zabotin, Nikolay A.–2156
Zabotina, Liudmila–2156
Zagzebski, James–2122
Zaher, Eesha A.–2294
Zahorik, Pavel–2198, 2242
Zanartu, Matias–2260
Zander, Anthony C.–2136
Zang, Xiaoqin–2156
Zartman, David J.–2172
Zayats, Victoria–2175
Zeale, Matt–2185
Zerbini, Alexandre N.–2246,
2277
Zhang, Fawen–2306
Zhang, Mingfeng–2202
Zhang, Tao–2224, 2318
Zhang, Weifeng G.–2316
Zhang, Weifeng Gordon–2317
Zhang, Xiaoming–2124
Zhang, Xiumei–2254
Zhang, Ying–2191
Zhang, YongOu–2224, 2318
Zhang, Yu–2075, 2315
Zhang, Zhaoyan–2259, 2293, 2294,
2295
Zhao, Dan–2288
Zhao, Xiaofeng–2096
Zheng, Fei–2280
Zhong, Pei–2191, 2192, 2221
Zhou, Nina–2205
Zhou, Yinqiu–2255
Zhu, Hongxiao–2075
Zhuang, Hanqi–2073
Ziaei, Ali–2083
Zimman, Lal–2295
Zimmerman, John–2203
Zorgani, Ali–2196, 2279
Zou, Bo–2318
Zurk, Lisa M.–2112, 2147
INDEX TO ADVERTISERS
Acoustics First Corporation . . . . . Cover 2
www.acousticsfirst.com
AFMG Technologies . . . . . A1
www.AFMG.eu
Brüel & Kjær . . . . . Cover 4
www.bksv.com
G.R.A.S. Sound & Vibration . . . . . A3
www.gras.dk
Meyer Sound . . . . . A9
meyersound.com
PCB Piezotronics Inc. . . . . . Cover 3
www.pcb.com
Scantek, Inc. . . . . . A7
www.Scantekinc.com
ADVERTISING SALES OFFICE
JOURNAL ADVERTISING SALES
Robert G. Finnegan, Director, Journal Advertising
AIP Publishing, LLC
1305 Walt Whitman Road, Suite 300
Melville, NY 11747-4300
Telephone: 516-576-2433
Fax: 516-576-2481
Email: rfinnegan@aip.org
SR. ADVERTISING PRODUCTION MANAGER
Christine DiPasca
Telephone: 516-576-2434
Fax: 516-576-2481
Email: cdipasca@aip.org
THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA
Postmaster: If undeliverable, send notice on Form 3579 to:
ACOUSTICAL SOCIETY OF AMERICA
1305 Walt Whitman Road, Suite 300,
Melville, NY 11747-4300
ISSN: 0001-4966
CODEN: JASMAN
Periodicals Postage Paid at
Huntington Station, NY and
Additional Mailing Offices
INFORMATION REGARDING THE JOURNAL
Publication of the Journal is jointly financed by the dues of members of
the Society, by contributions from Sustaining Members, by nonmember
subscriptions, and by publication charges contributed by the authors’
institutions. A peer-reviewed archival journal, its actual overall value includes extensive voluntary commitments of time by the Journal's Associate Editors and reviewers. The Journal has been published continuously
since 1929 and is a principal means by which the Acoustical Society
seeks to fulfill its stated mission—to increase and diffuse the knowledge
of acoustics and to promote its practical applications.
Submission of Manuscripts: Detailed instructions are given in the
latest version of the “Information for Contributors” document, which is
printed in the January and July issues of the Journal; the most current
version can be found online at http://asadl.org/jasa/for_authors_jasa.
This document gives explicit instructions regarding the content of the
transmittal letter and specifies completed forms that must accompany
each submission. All research articles and letters to the editor should
be submitted electronically via an online process at the site <http://
jasa.peerx-press.org/>. The uploaded files should include the complete
manuscript and the figures. The authors should identify, on the cover
page of the article, the principal PACS classification. A listing of PACS
categories is printed with the index in the final issues (June and December) of each volume of the Journal; the listing can also be found at the
online site of the Acoustical Society. The PACS (physics and astronomy
classification scheme) listing also identifies, by means of initials enclosed
in brackets, just which associate editors have the primary responsibility
for the various topics that are listed. The initials correspond to the names
listed on the back cover of each issue of the Journal and on the title page
of each volume. Authors are requested to consult these listings and to
identify which associate editor should handle their manuscript; the decision regarding the acceptability of a manuscript will ordinarily be made
by that associate editor. The Journal also has special associate editors
who deal with applied acoustics, education in acoustics, computational
acoustics, and mathematical acoustics. Authors may suggest one of these
associate editors, if doing so is consistent with the content or emphasis of
their paper. Review and tutorial articles are ordinarily invited; submission
of unsolicited review articles or tutorial articles (other than those which
can be construed as papers on education in acoustics) without prior discussion with the Editor-in-Chief is discouraged. Authors are also encouraged to discuss contemplated submissions with appropriate members of
the Editorial Board before submission. Submission of papers is open to
everyone, and one need not be a member of the Society to submit a
paper.
JASA Express Letters: The Journal includes a special section
which has a submission process separate from that for the rest of the
Journal. Details concerning the nature of this section and information for
contributors can be found at the online site http://scitation.aip.org/content/
asa/journal/jasael/info/authors.
Publication Charge: To support the cost of wide dissemination of
acoustical information through publication of journal pages and production of a database of articles, the author’s institution is requested to pay
a page charge of $80 per page (with a one-page minimum). Acceptance
of a paper for publication is based on its technical merit and not on the
acceptance of the page charge. The page charge (if accepted) entitles the
author to 100 free reprints. For Errata the minimum page charge is $10,
with no free reprints. Although regular page charges commonly accepted
by authors’ institutions are not mandatory for articles that are 12 or fewer
pages, payment of the page charges for articles exceeding 12 pages is
mandatory. Payment of the publication fee for JASA Express Letters is
also mandatory.
Selection of Articles for Publication: All submitted articles are peer
reviewed. Responsibility for selection of articles for publication rests with
the Associate Editors and with the Editor-in-Chief. Selection is ordinarily
based on the following factors: adherence to the stylistic requirements of
the Journal, clarity and eloquence of exposition, originality of the contribution, demonstrated understanding of previously published literature
pertaining to the subject matter, appropriate discussion of the relationships of the reported research to other current research or applications,
appropriateness of the subject matter to the Journal, correctness of the
content of the article, completeness of the reporting of results, the reproducibility of the results, and the significance of the contribution. The Journal reserves the right to refuse publication of any submitted article without
giving extensively documented reasons. Associate Editors and reviewers
are volunteers and, while prompt and rapid processing of submitted
manuscripts is of high priority to the Editorial Board and the Society, there
is no a priori guarantee that such will be the case for every submission.
Supplemental Material: Authors may submit material that is supplemental to a paper. Deposits must be in electronic media, and can
include text, figures, movies, computer programs, etc. Retrieval instructions
are footnoted in the related published paper. Direct requests to the JASA
office at jasa@aip.org; for additional information, see http://publishing.aip.
org/authors.
Role of AIP Publishing: AIP Publishing LLC has been under contract
with the Acoustical Society of America (ASA) continuously since 1933
to provide administrative and editorial services. The providing of these
services is independent of the fact that the ASA is one of the member
societies of AIP Publishing. Services provided in relation to the Journal
include production editing, copyediting, composition of the monthly issues
of the Journal, and the administration of all financial tasks associated with
the Journal. AIP Publishing’s administrative services include the billing and
collection of nonmember subscriptions, the billing and collection of page
charges, and the administration of copyright-related services. In carrying
out these services, AIP Publishing acts in accordance with guidelines
established by the ASA. All further processing of manuscripts, once they
have been selected by the Associate Editors for publication, is handled by
AIP Publishing. In the event that a manuscript, in spite of the prior review
process, still does not adhere to the stylistic requirements of the Journal,
AIP Publishing may notify the authors that processing will be delayed until
a suitably revised manuscript is transmitted via the appropriate Associate
Editor. If it appears that the nature of the manuscript is such that processing
and eventual printing of a manuscript may result in excessive costs, AIP
Publishing is authorized to directly bill the authors. Publication of papers is
ordinarily delayed until all such charges have been paid.
Copyright © 2014, Acoustical Society of America. All rights reserved.
Copying: Single copies of individual articles may be made for private use or research. Authorization is given to copy
articles beyond the free use permitted under Sections 107 and 108 of the U.S. Copyright Law, provided that the copying
fee of $30.00 per copy per article is paid to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923,
USA, www.copyright.com. (Note: The ISSN for this journal is 0001-4966.)
Authorization does not extend to systematic or multiple reproduction, to copying for promotional purposes, to electronic
storage or distribution, or to republication in any form. In all such cases, specific written permission from AIP Publishing
LLC must be obtained.
Note: Copies of individual articles may also be purchased online via AIP’s DocumentStore service.
Permission for Other Use: Permission is granted to quote from the Journal with the customary acknowledgment of
the source. Republication of an article or portions thereof (e.g., extensive excerpts, figures, tables, etc.) in original form
or in translation, as well as other types of reuse (e.g., in course packs), requires formal permission from AIP Publishing
and may be subject to fees. As a courtesy, the author of the original journal article should be informed of any request for
republication/reuse.
Obtaining Permission and Payment of Fees: Using Rightslink®: AIP Publishing has partnered with the Copyright
Clearance Center to offer Rightslink, a convenient online service that streamlines the permissions process. Rightslink
allows users to instantly obtain permissions and pay any related fees for reuse of copyrighted material, directly from AIP’s
website. Once licensed, the material may be reused legally, according to the terms and conditions set forth in each unique
license agreement.
To use the service, access the article you wish to license on our site and simply click on the Rightslink icon/ “Permissions
for Reuse” link in the abstract. If you have questions about Rightslink, click on the link as described, then click the “Help”
button located in the top right-hand corner of the Rightslink page.
Without using Rightslink: Address requests for permission for republication or other reuse of journal articles or portions
thereof to: Office of Rights and Permissions, AIP Publishing LLC, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300, USA; FAX: 516-576-2450; Tel.: 516-576-2268; E-mail: rights@aip.org
[Advertisement: G.R.A.S. 46BL microphone set for low SPL applications. gras.dk]
The Journal
of the
Acoustical Society of America
Acoustical Society of America Editor-in-Chief: Allan D. Pierce
ASSOCIATE EDITORS OF JASA
General Linear Acoustics: J.B. Lawrie, Brunel Univ.; A.N. Norris, Rutgers
University; O. Umnova, Univ. Salford; R.M. Waxler, Natl. Ctr. for Physical
Acoustics; S.F. Wu, Wayne State Univ.
Nonlinear Acoustics: R.O. Cleveland, Univ. of Oxford; M. Destrade, Natl. Univ.
Ireland, Galway; L. Huang, Univ. of Hong Kong; V.E. Ostashev, Natl. Oceanic
and Atmospheric Admin; O.A. Sapozhnikov, Moscow State Univ.
Atmospheric Acoustics and Aeroacoustics: P. Blanc-Benon, Ecole Centrale
de Lyon; A. Hirschberg, Eindhoven Univ. of Technol.; J.W. Posey, NASA Langley
Res. Ctr. (ret.); D.K. Wilson, Army Cold Regions Res. Lab.
Underwater Sound: J.I. Arvelo, Johns Hopkins Univ.; N.P. Chotiros, Univ. of
Texas; J.A. Colosi, Naval Postgraduate School; S.E. Dosso, Univ. of Victoria; T.F.
Duda, Woods Hole Oceanographic Inst.; K.G. Foote, Woods Hole Oceanographic
Inst.; A.P. Lyons, Pennsylvania State Univ.; Martin Siderius, Portland State
Univ.; H.C. Song, Scripps Inst. of Oceanography; A.M. Thode, Scripps Inst. of
Oceanography
Ultrasonics and Physical Acoustics: T. Biwa, Tohoku Univ.; M.F. Hamilton,
Univ. Texas, Austin; T.G. Leighton, Inst. for Sound and Vibration Res.
Southampton; J.D. Maynard, Pennsylvania State Univ.; R. Raspet, Univ. of
Mississippi; R.K. Snieder, Colorado School of Mines; J.A. Turner, Univ. of
Nebraska—Lincoln; M.D. Verweij, Delft Univ. of Technol.
Transduction, Acoustical Measurements, Instrumentation, Applied Acoustics: M.R. Bai, Natl. Tsinghua Univ.; D.A. Brown, Univ. of Massachusetts-Dartmouth; D.D. Ebenezer, Naval Physical and Oceanographic Lab., India; T.R.
Howarth, NAVSEA, Newport; M. Sheplak, Univ. of Florida
Structural Acoustics and Vibration: L. Cheng, Hong Kong Polytechnic Univ.;
D. Feit, Applied Physical Sciences Corp.; L.P. Franzoni, Duke Univ.; J.H. Ginsberg,
Georgia Inst. of Technol. (emeritus); T. Kundu, Univ. of Arizona; K.M. Li, Purdue
Univ.; J.G. McDaniel, Boston Univ.; E.G. Williams, Naval Research Lab.
Noise: Its Effects and Control: G. Brambilla, Natl. Center for Research
(CNR), Rome; B.S. Cazzolato, Univ. of Adelaide; S. Fidell, Fidell Assoc.; K.V.
Horoshenkov, Univ. of Bradford; R. Kirby, Brunel Univ.; B. Schulte-Fortkamp,
Technical Univ. of Berlin
Architectural Acoustics: F. Sgard, Quebec Occupational Health and Safety
Res. Ctr.; J.E. Summers, Appl. Res. Acoust., Washington; M. Vorlaender, Univ.
Aachen; L.M. Wang, Univ. of Nebraska—Lincoln
Acoustic Signal Processing: S.A. Fulop, California State Univ., Fresno; P.J.
Loughlin, Univ. of Pittsburgh; Z-H. Michalopoulou, New Jersey Inst. Technol.;
K.G. Sabra, Georgia Inst. Tech.
Physiological Acoustics: C. Abdala, House Research Inst.; I.C. Bruce, McMaster
Univ.; K. Grosh, Univ. of Michigan; C.A. Shera, Harvard Medical School
Psychological Acoustics: L.R. Bernstein, Univ. Conn.; V. Best, Natl. Acoust.
Lab., Australia; E. Buss, Univ. of North Carolina, Chapel Hill; J.F. Culling,
Cardiff Univ.; F.J. Gallun, Dept. of Veterans Affairs, Portland; Enrique Lopez-Poveda, Univ. of Salamanca; V.M. Richards, Univ. California, Irvine; M.A.
Stone, Univ. of Cambridge; E.A. Strickland, Purdue Univ.
Speech Production: D.A. Berry, UCLA School of Medicine; L.L. Koenig, Long
Island Univ. and Haskins Labs.; C.H. Shadle, Haskins Labs.; B.H. Story, Univ. of
Arizona; Z. Zhang, Univ. of California, Los Angeles
Speech Perception: D. Baskent, Univ. Medical Center, Groningen; C.G. Clopper,
Ohio State Univ.; B.R. Munson, Univ. of Minnesota; P.B. Nelson, Univ. of Minnesota
Speech Processing: C.Y. Espy-Wilson, Univ. of Maryland, College Park; M.A.
Hasegawa-Johnson, Univ. of Illinois; S.S. Narayanan, Univ. of Southern California
Musical Acoustics: D. Deutsch, Univ. of California, San Diego; T.R. Moore,
Rollins College; J. Wolfe, Univ. of New South Wales
Bioacoustics: W.W.L. Au, Hawaii Inst. of Marine Biology; C.C. Church, Univ.
of Mississippi; R.R. Fay, Loyola Univ., Chicago; J.J. Finneran, Navy Marine
Mammal Program; M.C. Hastings, Georgia Inst. of Technol.; G. Haïat, Natl.
Ctr. for Scientific Res. (CNRS); D.K. Mellinger, Oregon State Univ.; D.L.
Miller, Univ. of Michigan; M.J. Owren, Georgia State Univ.; A.N. Popper, Univ.
Maryland; A.M. Simmons, Brown Univ.; K.A. Wear, Food and Drug Admin; Suk
Wang Yoon, Sungkyunkwan Univ.
Computational Acoustics: D.S. Burnett, Naval Surface Warfare Ctr., Panama
City; N.A. Gumerov, Univ. of Maryland; L.L. Thompson, Clemson Univ.
Mathematical Acoustics: R. Martinez, Applied Physical Sciences
Education in Acoustics: B.E. Anderson, Los Alamos National Lab.; V.W.
Sparrow, Pennsylvania State Univ.; P.S. Wilson, Univ. of Texas at Austin
Reviews and Tutorials: W.W.L. Au, Univ. Hawaii
Forum and Technical Notes: N. Xiang, Rensselaer Polytechnic Univ.
Acoustical News: E. Moran, Acoustical Society of America
Standards News, Standards: S. Blaeser, Acoustical Society of America; P.D.
Schomer, Schomer & Assoc., Inc.
Book Reviews: P.L. Marston, Washington State Univ.
Patent Reviews: S.A. Fulop, California State Univ., Fresno; D.L. Rice,
Computalker Consultants (ret.)
ASSOCIATE EDITORS OF JASA EXPRESS LETTERS
Editor: J.F. Lynch, Woods Hole Oceanographic Inst.
General Linear Acoustics: A.J.M. Davis, Univ. California, San Diego; O.A.
Godin, NOAA-Earth System Research Laboratory; S.F. Wu, Wayne State Univ.
Nonlinear Acoustics: M.F. Hamilton, Univ. of Texas at Austin
Aeroacoustics and Atmospheric Sound: V.E. Ostashev, Natl. Oceanic and
Atmospheric Admin.
Underwater Sound: G.B. Deane, Univ. of California, San Diego; D.R. Dowling,
Univ. of Michigan; A.C. Lavery, Woods Hole Oceanographic Inst.; J.F. Lynch,
Woods Hole Oceanographic Inst.; W.L. Siegmann, Rensselaer Polytechnic Institute
Ultrasonics, Quantum Acoustics, and Physical Effects of Sound: P.E.
Barbone, Boston Univ.; T.D. Mast, Univ. of Cincinnati; J.S. Mobley, Univ. of
Mississippi
Transduction: Acoustical Devices for the Generation and Reproduction
of Sound; Acoustical Measurements and Instrumentation: M.D. Sheplak,
Univ. of Florida
Structural Acoustics and Vibration: J.G. McDaniel, Boston Univ.
Noise: S.D. Sommerfeldt, Brigham Young Univ.
Architectural Acoustics: N. Xiang, Rensselaer Polytechnic Inst.
Acoustic Signal Processing: D.H. Chambers, Lawrence Livermore Natl. Lab.;
C.F. Gaumond, Naval Research Lab.
Physiological Acoustics: B.L. Lonsbury-Martin, Loma Linda VA Medical Ctr.
Psychological Acoustics: Q.-J. Fu, House Ear Inst.
Speech Production: A. Lofqvist, Univ. Hospital, Lund, Sweden
Speech Perception: A. Cutler, Univ. of Western Sydney; S. Gordon-Salant, Univ.
of Maryland
Speech Processing and Communication Systems and Speech Perception:
D.D. O’Shaughnessy, INRS-Telecommunications
Music and Musical Instruments: D.M. Campbell, Univ. of Edinburgh;
D. Deutsch, Univ. of California, San Diego; T.R. Moore, Rollins College; T.D.
Rossing, Stanford Univ.
Bioacoustics—Biomedical: C.C. Church, Natl. Ctr. for Physical Acoustics
Bioacoustics—Animal: W.W.L. Au, Univ. Hawaii; C.F. Moss, Univ. of Maryland
Computational Acoustics: D.S. Burnett, Naval Surface Warfare Ctr., Panama City;
L.L. Thompson, Clemson Univ.
CONTENTS
page
Technical Program Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A8
Schedule of Technical Session Starting Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A10
Map of Meeting Rooms at Marriott . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A11
Map of Indianapolis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A12
Calendar—Technical Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A13
Schedule—Other Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A16
Meeting Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A17
Guidelines for Presentations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A23
Dates of Future Meetings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A25
Technical Sessions (1a__), Monday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2073
Technical Sessions (1p__), Monday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2088
Tutorial Session (1eID), Monday Evening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2113
Technical Sessions (2a__), Tuesday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2114
Technical Sessions (2p__), Tuesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2150
Technical Sessions (3a__), Wednesday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2181
Technical Sessions (3p__), Wednesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2218
Plenary Session and Awards Ceremony, Wednesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . 2228
Pioneers of Underwater Acoustics Medal encomium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2229
Silver Medal in Speech Communication encomium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2232
Wallace Clement Sabine Medal encomium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2237
Technical Sessions (4a__), Thursday Morning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2241
Technical Sessions (4p__), Thursday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2270
Technical Sessions (5a__), Friday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2300
Sustaining Members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2344
Application Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2348
Regional Chapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2349
Author Index to Abstracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
Index to Advertisers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2366
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America was founded in 1929 to increase and diffuse the knowledge of acoustics and promote
its practical applications. Any person or corporation interested in acoustics is eligible for membership in this Society. Further
information concerning membership, together with application forms, may be obtained by addressing Elaine Moran, ASA
Office Manager, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300, T: 516-576-2360, F: 631-923-2875; E-mail:
asa@aip.org; Web: http://acousticalsociety.org
Officers 2014-2015

Judy R. Dubno, President
  Department of Otolaryngology–Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC5500, Charleston, SC 29425-5500; (843) 792-7978; dubnojr@musc.edu

Christy K. Holland, President-Elect
  University of Cincinnati, ML 0586, 231 Albert Sabin Way, Cincinnati, OH 45267-0586; (513) 558-5675; christy.holland@uc.edu

Barbara G. Shinn-Cunningham, Vice President
  Cognitive and Neural Systems, Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA 02215; (617) 353-5764; shinn@cns.bu.edu

Lily M. Wang, Vice President-Elect
  Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, 1110 South 67th Street, Omaha, NE 68182-0816; (402) 554-2065; lwang4@unl.edu

David Feit, Treasurer
  Acoustical Society of America, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300; (516) 576-2360; dfeit@aip.org

Allan D. Pierce, Editor-in-Chief
  Acoustical Society of America, P.O. Box 274, West Barnstable, MA 02668; (508) 362-1200; allanpierce@verizon.net

Paul D. Schomer, Standards Director
  Schomer & Associates Inc., 2117 Robert Drive, Champaign, IL 61821; (217) 359-6602; schomer@schomerandassociates.com

Susan E. Fox, Executive Director
  Acoustical Society of America, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300; (516) 576-2360; sfox@aip.org

Members of the Executive Council

Peter H. Dahl
  Applied Physics Laboratory and Department of Mechanical Engineering, University of Washington, 1013 N.E. 40th Street, Seattle, WA 98105; (206) 543-2667; dahl@apl.washington.edu

Michael R. Bailey
  Applied Physics Laboratory, Center for Industrial and Medical Ultrasound, 1013 N.E. 40th St., Seattle, WA 98105; (206) 685-8618; bailey@apl.washington.edu

Vera A. Khokhlova
  Center for Industrial and Medical Ultrasound, Applied Physics Laboratory, University of Washington, 1013 N.E. 40th Street, Seattle, WA 98105; (206) 221-6585; vera@apl.washington.edu

Christine H. Shadle
  Haskins Laboratories, 300 George Street, Suite 900, New Haven, CT 06511; (203) 865-6163 x228; shadle@haskins.yale.edu

Ann R. Bradlow
  Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, IL 60208; (847) 491-8054; abradlow@northwestern.edu

Marcia J. Isakson
  Applied Research Laboratories, The University of Texas at Austin, P.O. Box 8029, Austin, TX 78713-8029; (512) 835-3790; misakson@arlut.utexas.edu

Michael V. Scanlon
  U.S. Army Research Laboratory, RDRL-SES-P, 2800 Powder Mill Road, Adelphi, MD 20783-1197; (301) 394-3081; michael.v.scanlon2.civ@mail.mil

James H. Miller
  Department of Ocean Engineering, University of Rhode Island, Narragansett Bay Campus, Narragansett, RI 02882; (401) 874-6540; miller@uri.edu

Members of the Technical Council

B.G. Shinn-Cunningham, Vice President
L.M. Wang, Vice President-Elect
P.H. Dahl, Past Vice President
A.C. Lavery, Acoustical Oceanography
C.F. Moss, Animal Bioacoustics
K.W. Good, Jr., Architectural Acoustics
N. McDannold, Biomedical Acoustics
R.T. Richards, Engineering Acoustics
A.C.H. Morrison, Musical Acoustics
S.D. Sommerfeldt, Noise
J.R. Gladden, Physical Acoustics
M. Wojtczak, Psychological and Physiological Acoustics
N. Xiang, Signal Processing in Acoustics
C.L. Rogers, Speech Communication
J.E. Phillips, Structural Acoustics and Vibration
D. Tang, Underwater Acoustics

Organizing Committee

K.J. de Jong, P. Davies, General Cochairs
R.F. Port, Technical Program Chair
K.M. Li, T. Lorenzen, Audio/Visual and WiFi
D. Kewley-Port, T. Bent, Food and Beverage
C. Richie, Volunteer Coordination
W.J. Murphy, Technical Tour
U.J. Hansen, Educational Activities
D. Kewley-Port, Special Events
M. Kondaurova, G. Li, M. Hayward, Indianapolis Visitor Information
T. Bent, Student Activities
M.C. Morgan, Meeting Administrator

Subscription Prices, 2014

                               U.S.A. & Poss.   N., Central & S. America   Europe     Mideast, Africa,
                                                (Air Freight)                         Asia & Oceania
ASA Members (on membership)                                                $160.00    $160.00
Institutions (print + online)  $2155.00         $2315.00                   $2315.00   $2315.00
Institutions (online only)     $1990.00         $1990.00                   $1990.00   $1990.00
The Journal of the Acoustical Society of America (ISSN: 0001-4966) is published monthly by the Acoustical Society of America through the AIP Publishing
LLC. POSTMASTER: Send address changes to The Journal of the Acoustical
Society of America, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Periodicals postage paid at Huntington Station, NY 11746 and additional
mailing offices.
Editions: The Journal of the Acoustical Society of America is published simultaneously in print and online. Journal articles are available online from Volume
1 (1929) to the present. Abstracts of journal articles published by ASA, AIP
Publishing and its Member Societies (and several other publishers) are available
from AIP Publishing’s SPIN database, via AIP Publishing’s Scitation Service
(http://scitation.aip.org).
Back Numbers: All back issues of the Journal are available online. Some,
but not all, print issues are also available. Prices will be supplied upon request
to Elaine Moran, ASA Office Manager, 1305 Walt Whitman Road, Suite 300,
Melville, NY 11747-4300. Telephone: (516) 576-2360; FAX: (631) 923-2875;
E-mail: asa@aip.org.
Subscriptions, renewals, and address changes should be addressed to AIP
Publishing LLC - FMS, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Allow at least six weeks' advance notice. For address changes please send
both old and new addresses and, if possible, include a mailing label from a recent
issue.
Claims, Single Copy Replacement and Back Volumes: Missing issue
requests will be honored only if received within six months of publication date
(nine months for Australia and Asia). Single copies of a journal may be ordered
and back volumes are available. Members—contact AIP Publishing Member
Services at (516) 576-2288; (800) 344-6901. Nonmember subscribers—contact
AIP Publishing Subscriber Services at (516) 576-2270; (800) 344-6902; E-mail:
subs@aip.org.
Page Charge and Reprint Billing: Contact: AIP Publishing Publication Page
Charge and Reprints—CFD, 1305 Walt Whitman Road, Suite 300, Melville, NY
11747-4300; (516) 576-2234; (800) 344-6909; E-mail: prc@aip.org.
Document Delivery: Copies of journal articles can be purchased for immediate download at www.asadl.org.
[Advertisement: Scantek, Inc., sound and vibration instrumentation: sales, calibration, and rental of sound level meters, microphones and accelerometers, calibrators, sound sources, impedance tubes, tapping machines, intensity systems, acoustic cameras, vibration meters, multi-channel systems, prediction software, and construction noise monitoring. www.Scantekinc.com; 410.290.7726]
TECHNICAL PROGRAM SUMMARY
*Indicates Special Session
Monday morning
1aAB Topics in Animal Bioacoustics I
*1aNS Metamaterials for Noise Control I
*1aPA Jet Noise Measurements and Analyses I
1aSC Speech Processing and Technology (Poster Session)
*1aSP Sampling Methods for Bayesian Signal Processing
*1aUW Understanding the Target/Waveguide System–Measurement and Modeling I

Monday afternoon
*1pAA Computer Auralization as an Aid to Acoustically Proper Owner/Architect Design Decisions
*1pAB Array Localization of Vocalizing Animals
1pBA Medical Ultrasound
*1pNS Metamaterials for Noise Control II
*1pPA Jet Noise Measurements and Analyses II
*1pSCa Findings and Methods in Ultrasound Speech Articulation Tracking
1pSCb Issues in Cross Language and Dialect Perception (Poster Session)
*1pUW Understanding the Target/Waveguide System–Measurement and Modeling II

Monday evening
*1eID Tutorial Lecture on Musical Acoustics: Science and Performance

Tuesday morning
*2aAA Architectural Acoustics and Audio I
*2aAB Mobile Autonomous Platforms for Bioacoustic Sensing
*2aAO Parameter Estimation in Environments That Include Out-of-Plane Propagation Effects
*2aBA Quantitative Ultrasound I
*2aED Undergraduate Research Exposition (Poster Session)
*2aID Historical Transducers
*2aMU Piano Acoustics
*2aNSa New Frontiers in Hearing Protection I
*2aNSb Launch Vehicle Acoustics I
2aPA Outdoor Sound Propagation
*2aSAa Computational Methods in Structural Acoustics and Vibration
*2aSAb Vehicle Interior Noise
2aSC Speech Production and Articulation (Poster Session)
2aUW Signal Processing and Ambient Noise

Tuesday afternoon
*2pAA Architectural Acoustics and Audio II
2pAB Topics in Animal Bioacoustics II
2pAO General Topics in Acoustical Oceanography
*2pBA Quantitative Ultrasound II
2pEDa General Topics in Education in Acoustics
*2pEDb Take’s 5
*2pID Centennial Tribute to Leo Beranek’s Contributions in Acoustics
*2pMU Synchronization Models in Musical Acoustics and Psychology
*2pNSa New Frontiers in Hearing Protection II
*2pNSb Launch Vehicle Acoustics II
*2pPA Demonstrations in Acoustics
*2pSA Nearfield Acoustical Holography
2pSC Segments and Suprasegmentals (Poster Session)
2pUW Propagation and Scattering

Wednesday morning
*3aAA Design and Performance of Office Workspaces in High Performance Buildings
*3aAB Predator–Prey Relationships
*3aAO Education in Acoustical Oceanography and Underwater Acoustics
3aBA Kidney Stone Lithotripsy
*3aEA Mechanics of Continuous Media
*3aID Graduate Studies in Acoustics (Poster Session)
3aMU Topics in Musical Acoustics
*3aNS Wind Turbine Noise
*3aPA Acoustics of Pile Driving: Models, Measurements, and Mitigation
*3aSAa Vibration Reduction in Air-Handling Systems
3aSAb General Topics in Structural Acoustics and Vibration
*3aSC Vowels = Space + Time, and Beyond: A Session in Honor of Diane Kewley-Port
3aSPa Beamforming and Source Tracking
3aSPb Spectral Analysis, Source Tracking, and System Identification (Poster Session)
*3aUW Standardization of Measurement, Modeling, and Terminology of Underwater Sound
Wednesday afternoon
3pAA Architectural Acoustics Medley
*3pBA History of High Intensity Focused Ultrasound
*3pED Acoustics Education Prize Lecture
*3pID Hot Topics in Acoustics
3pNS Sonic Boom and Numerical Methods
*3pUW Shallow Water Reverberation I

Thursday morning
*4aAAa Room Acoustics Effects on Speech Comprehension and Recall I
*4aAAb Uses, Measurements, and Advancements in the Use of Diffusion and Scattering Devices
*4aAB Use of Passive Acoustics for Estimation of Animal Population Density I
*4aBA Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and Clinical Applications I
4aEA Acoustic Transduction: Theory and Practice I
*4aPAa Borehole Acoustic Logging and Micro-Seismics for Hydrocarbon Reservoir Characterization
4aPAb Topics in Physical Acoustics I
*4aPP Physiological and Psychological Aspects of Central Auditory Processing Dysfunction I
*4aSCa Subglottal Resonances in Speech Production and Perception
4aSCb Learning and Acquisition of Speech (Poster Session)
4aSPa Imaging and Classification
4aSPb Beamforming, Spectral Estimation, and Sonar Design
*4aUW Shallow Water Reverberation II

Thursday afternoon
*4pAAa Acoustic Trick-or-Treat: Eerie Noises, Spooky Speech, and Creative Masking
*4pAAb Room Acoustics Effects on Speech Comprehension and Recall II
*4pAB Use of Passive Acoustics for Estimation of Animal Population Density II
*4pBA Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and Clinical Applications II
4pEA Acoustic Transduction: Theory and Practice II
*4pMU Assessing the Quality of Musical Instruments
*4pNS Virtual Acoustic Simulation
4pPA Topics in Physical Acoustics II
*4pPP Physiological and Psychological Aspects of Central Auditory Processing Dysfunction II
4pSC Voice (Poster Session)
*4pUW Shallow Water Reverberation III

Friday morning
*5aBA Cavitation Control and Detection Techniques
*5aED Hands-On Acoustics: Demonstrations for Indianapolis Area Students
5aNS Transportation Noise, Soundscapes, and Related Topics
5aPPa Psychological and Physiological Acoustics Potpourri (Poster Session)
5aPPb Perceptual and Physiological Mechanisms, Modeling, and Assessment
5aSC Speech Perception and Production in Challenging Conditions (Poster Session)
*5aUW Acoustics, Ocean Dynamics, and Geology of Canyons
[Advertisement: Meyer Sound, sonic solutions for venues and artists worldwide; www.meyersound.com. Original image made available by NASA, ESA, and the Hubble Heritage Team (STScI/AURA); digital montage by Deborah O'Grady.]
SCHEDULE OF STARTING TIMES FOR TECHNICAL SESSIONS AND TECHNICAL COMMITTEE (TC) MEETINGS

Meeting rooms: Indiana A/B, Indiana C/D, Indiana E, Indiana F, Indiana G, Lincoln, Marriott 1/2, Marriott 3/4, Marriott 5, Marriott 6, Marriott 7/8, Marriott 9/10, Santa Fe, Hilbert Theater

Monday morning: 1aAB 8:25, 1aNS 7:55, 1aPA 8:15, 1aSC 9:30, 1aSP 8:40, 1aUW 8:45
Monday afternoon: 1pAA 1:00, 1pAB 1:00, 1pBA 1:15, 1pNS 12:55, 1pPA 1:15, 1pSCa 1:00, 1pSCb 1:00, 1pUW 1:25
Monday evening: 1eID 7:00
Tuesday morning: 2aAA 7:55, 2aAB 8:25, 2aAO 8:25, 2aBA 7:55, 2aED 9:00, 2aID 8:00, 2aMU 9:00, 2aNSa 9:25, 2aNSb 8:15, 2aPA 8:30, 2aSAa 8:00, 2aSAb 10:30, 2aSC 8:00, 2aUW 8:00
Tuesday afternoon: 2pAA 1:00, 2pAB 1:25, 2pAO 1:45, 2pBA 1:30, 2pEDa 2:45, 2pEDb 3:30, 2pID 1:55, 2pMU 1:00, 2pNSa 1:25, 2pNSb 1:00, 2pPA 1:00, 2pSA 2:00, 2pSC 1:00, 2pUW 1:00
Wednesday morning: 3aAA 8:20, 3aAB 8:25, 3aAO 8:00, 3aBA 8:00, 3aEA 8:00, 3aID 9:00, 3aMU 9:00, 3aNS 8:45, 3aPA 8:20, 3aSAa 8:00, 3aSAb 10:00, 3aSC 8:00, 3aSPa 8:30, 3aSPb 10:15, 3aUW 9:00
Wednesday afternoon: 3pAA 1:00, 3pBA 1:00, 3pED 2:00, 3pID 1:00, 3pNS 1:00, 3pUW 1:00
Thursday morning: 4aAAa 8:40, 4aAAb 10:35, 4aAB 8:00, 4aBA 7:55, 4aEA 8:30, 4aPAa 8:00, 4aPAb 10:30, 4aPP 8:30, 4aSCa 8:00, 4aSCb 8:00, 4aSPa 9:00, 4aSPb 10:15, 4aUW 8:00
Thursday afternoon: 4pAAa 1:10, 4pAAb 1:15, 4pAB 1:15, 4pBA 1:30, 4pEA 1:30, 4pMU 1:00, 4pNS 1:15, 4pPA 1:30, 4pPP 1:30, 4pSC 1:00, 4pUW 1:00
Friday morning: 5aBA 8:00, 5aED 10:00, 5aNS 9:45, 5aPPa 8:00, 5aPPb 10:15, 5aSC 8:00, 5aUW 8:00
Technical Committee (TC) meetings: TCAA 8:00, TCAB 7:30, TCAO 8:00, TCBA 7:30, TCEA 4:30, TCMU 7:30, TCNS 7:30, TCPA 8:00, TCPP 7:30, TCSA 8:00, TCSC 8:00, TCSP 7:30, TCUW 7:30
Acoustical Society of America / Indianapolis Marriott Downtown
[Floor plans. 1st Floor: Indiana Ballroom (Indiana A-G), Lincoln, Clubhouse, and the Colorado, Missouri, Florida, Illinois, Texas, Michigan, Utah, and Phoenix rooms. 2nd Floor: Marriott Ballroom (Marriott 1-10), Santa Fe, Denver, Albany, Atlanta, Austin, Boston, Columbus, the Indy Board Room, and the registration desks in the Marriott Foyer.]
TECHNICAL PROGRAM CALENDAR
168th Meeting
Indianapolis, Indiana
27–31 October 2014
MONDAY MORNING
8:25
1aAB
Animal Bioacoustics: Topics in Animal
Bioacoustics I. Lincoln
7:55
1aNS
Noise, Physical Acoustics, Structural
Acoustics and Vibration, and Engineering
Acoustics: Metamaterials for Noise Control I.
Marriott 3/4
8:15
1aPA
Physical Acoustics and Noise: Jet Noise
Measurements and Analyses I. Indiana C/D
9:30
1aSC
Speech Communication: Speech
Processing and Technology (Poster
Session). Marriott 5
8:40
1aSP
Signal Processing in Acoustics: Sampling
Methods for Bayesian Signal Processing.
Indiana G
8:45
1aUW
Underwater Acoustics: Understanding the
Target/Waveguide System-Measurement and
Modeling I. Indiana F
TUESDAY MORNING
7:55
2aAA
Architectural Acoustics and Engineering
Acoustics: Architectural Acoustics and
Audio I. Marriott 7/8
8:25
2aAB
Animal Bioacoustics, Acoustical
Oceanography, and Signal Processing in
Acoustics: Mobile Autonomous Platforms
for Bioacoustic Sensing. Lincoln
8:25
2aAO
Acoustical Oceanography, Underwater
Acoustics, and Signal Processing in
Acoustics: Parameter Estimation in
Environments that Include Out-of-Plane
Propagation Effects. Indiana G
7:55
2aBA
Biomedical Acoustics: Quantitative
Ultrasound I. Indiana A/B
9:00
2aED
Education in Acoustics: Undergraduate
Research Exposition (Poster Session).
Marriott 6
8:00
2aID
Archives and History and Engineering
Acoustics: Historical Transducers. Marriott
9/10
9:00
2aMU
Musical Acoustics: Piano Acoustics.
Santa Fe
9:25
2aNSa
Noise and Psychological and Physiological
Acoustics: New Frontiers in Hearing
Protection I. Marriott 3/4
8:15
2aNSb
Noise and Structural Acoustics and
Vibration: Launch Vehicle Acoustics I.
Indiana E
8:30
2aPA
Physical Acoustics: Outdoor Sound
Propagation. Indiana C/D
8:00
2aSAa
Structural Acoustics and Vibration and
Noise: Computational Methods in Structural
Acoustics and Vibration. Marriott 1/2
10:30 2aSAb Structural Acoustics and Vibration and Noise: Vehicle Interior Noise. Marriott 1/2
8:00 2aSC Speech Communication: Issues in Cross Language and Dialect Perception (Poster Session). Marriott 5
8:00 2aUW Underwater Acoustics: Understanding the Target/Waveguide System-Measurement and Modeling II. Indiana F

MONDAY AFTERNOON
1:00 1pAA Architectural Acoustics: Computer Auralization as an Aid to Acoustically Proper Owner/Architect Design Decisions. Marriott 7/8
1:00 1pAB Animal Bioacoustics and Signal Processing in Acoustics: Array Localization of Vocalizing Animals. Lincoln
1:15 1pBA Biomedical Acoustics: Medical Ultrasound. Indiana A/B
12:55 1pNS Noise and Physical Acoustics: Metamaterials for Noise Control II. Marriott 3/4
1:15 1pPA Physical Acoustics and Noise: Jet Noise Measurements and Analyses II. Indiana C/D
1:00 1pSCa Speech Communication and Biomedical Acoustics: Findings and Methods in Ultrasound Speech Articulation Tracking. Marriott 1/2
1:00 1pSCb Speech Communication: Speech Production and Articulation (Poster Session). Marriott 5
1:25 1pUW Underwater Acoustics: Signal Processing and Ambient Noise. Indiana F

MONDAY EVENING
7:00 1eID Interdisciplinary: Tutorial Lecture on Musical Acoustics: Science and Performance. Hilbert Theater

TUESDAY AFTERNOON
1:00 2pAA Architectural Acoustics and Engineering Acoustics: Architectural Acoustics and Audio II. Marriott 7/8
1:25 2pAB Animal Bioacoustics: Topics in Animal Bioacoustics II. Lincoln
1:45 2pAO Acoustical Oceanography: General Topics in Acoustical Oceanography. Indiana G
1:30 2pBA Biomedical Acoustics: Quantitative Ultrasound II. Indiana A/B
2:45 2pEDa Education in Acoustics: General Topics in Education in Acoustics. Indiana C/D
3:30 2pEDb Education in Acoustics: Take 5's. Indiana C/D
1:55 2pID Interdisciplinary: Centennial Tribute to Leo Beranek's Contributions in Acoustics. Indiana E
1:00 2pMU Musical Acoustics: Synchronization Models in Musical Acoustics and Psychology. Santa Fe
1:25 2pNSa Noise and Psychological and Physiological Acoustics: New Frontiers in Hearing Protection II. Marriott 3/4
1:00 2pNSb Noise and Structural Acoustics and Vibration: Launch Vehicle Acoustics II. Marriott 9/10
1:00 2pPA Physical Acoustics and Education in Acoustics: Demonstrations in Acoustics. Indiana C/D
2:00 2pSA Structural Acoustics and Vibration, Signal Processing in Acoustics, and Engineering Acoustics: Nearfield Acoustical Holography. Marriott 1/2
1:00 2pSC Speech Communication: Segments and Suprasegmentals (Poster Session). Marriott 5
1:00 2pUW Underwater Acoustics: Propagation and Scattering. Indiana F

WEDNESDAY MORNING
8:20 3aAA Architectural Acoustics and Noise: Design and Performance of Office Workspaces in High Performance Buildings. Marriott 7/8
8:25 3aAB Animal Bioacoustics: Predator-Prey Relationships. Lincoln
8:00 3aAO Acoustical Oceanography, Underwater Acoustics, and Education in Acoustics: Education in Acoustical Oceanography and Underwater Acoustics. Indiana E
8:00 3aBA Biomedical Acoustics: Kidney Stone Lithotripsy. Indiana A/B
8:00 3aEA Engineering Acoustics and Structural Acoustics and Vibration: Mechanics of Continuous Media. Marriott 9/10
9:00 3aID Student Council, Education in Acoustics, and Acoustical Oceanography: Graduate Studies in Acoustics (Poster Session). Marriott 6
9:00 3aMU Musical Acoustics: Topics in Musical Acoustics. Santa Fe
8:45 3aNS Noise and ASA Committee on Standards: Wind Turbine Noise. Marriott 3/4
8:20 3aPA Physical Acoustics, Underwater Acoustics, Structural Acoustics and Vibration, and Noise: Acoustics of Pile Driving: Models, Measurements, and Mitigation. Indiana C/D
8:00 3aSAa Structural Acoustics and Vibration, Architectural Acoustics, and Noise: Vibration Reduction in Air-Handling Systems. Marriott 1/2
10:00 3aSAb Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration. Marriott 1/2
8:00 3aSC Speech Communication: Vowels = Space + Time, and Beyond: A Session in Honor of Diane Kewley-Port. Marriott 5
8:30 3aSPa Signal Processing in Acoustics: Beamforming and Source Tracking. Indiana G
10:15 3aSPb Signal Processing in Acoustics: Spectral Analysis, Source Tracking, and System Identification (Poster Session). Indiana G
9:00 3aUW Underwater Acoustics, Acoustical Oceanography, Animal Bioacoustics, and ASA Committee on Standards: Standardization of Measurement, Modeling, and Terminology of Underwater Sound. Indiana F
WEDNESDAY AFTERNOON
1:00
3pAA
Architectural Acoustics: Architectural
Acoustics Medley. Marriott 7/8
1:00
3pBA
Biomedical Acoustics: History of High
Intensity Focused Ultrasound. Indiana A/B
2:00
3pED
Education in Acoustics: Acoustics
Education Prize Lecture. Indiana C/D
1:00
3pID
Interdisciplinary: Hot Topics in Acoustics.
Indiana E
1:00
3pNS
Noise: Sonic Boom and Numerical
Methods. Marriott 3/4
1:00
3pUW
Underwater Acoustics: Shallow Water
Reverberation I. Marriott 9/10
THURSDAY MORNING
8:40
4aAAa Architectural Acoustics, Speech
Communication, and Noise:
Room Acoustics Effects on Speech
Comprehension and Recall I. Marriott 7/8
10:35 4aAAb Architectural Acoustics: Uses,
Measurements, and Advancements in the Use
of Diffusion and Scattering Devices. Santa Fe
8:00 4aAB Animal Bioacoustics and Acoustical Oceanography: Use of Passive Acoustics for Estimation of Animal Population Density I. Lincoln
7:55 4aBA Biomedical Acoustics: Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and Clinical Applications I. Indiana A/B
8:30
4aEA
Engineering Acoustics: Acoustic
Transduction: Theory and Practice I.
Marriott 9/10
8:00
4aPAa
Physical Acoustics, Underwater
Acoustics, Signal Processing in Acoustics,
Structural Acoustics and Vibration, and
Noise: Borehole Acoustic Logging and
Micro-Seismics for Hydrocarbon Reservoir
Characterization. Indiana C/D
10:30 4aPAb
Physical Acoustics: Topics in Physical
Acoustics I. Indiana C/D
8:30 4aPP Psychological and Physiological Acoustics: Physiological and Psychological Aspects of Central Auditory Processing Dysfunction I. Marriott 1/2
8:00 4aSCa Speech Communication: Subglottal Resonances in Speech Production and Perception. Santa Fe
8:00 4aSCb Speech Communication: Learning and Acquisition of Speech (Poster Session). Marriott 5
9:00 4aSPa Signal Processing in Acoustics: Imaging and Classification. Indiana G
10:15 4aSPb Signal Processing in Acoustics: Beamforming, Spectral Estimation, and Sonar Design. Indiana G
8:00 4aUW Underwater Acoustics: Shallow Water Reverberation II. Indiana F

THURSDAY AFTERNOON
1:10 4pAAa Architectural Acoustics and Speech Communication: Acoustic Trick-or-Treat: Eerie Noises, Spooky Speech, and Creative Masking. Indiana G
1:15 4pAAb Architectural Acoustics, Speech Communication, and Noise: Room Acoustics Effects on Speech Comprehension and Recall II. Marriott 7/8
1:15 4pAB Animal Bioacoustics and Acoustical Oceanography: Use of Passive Acoustics for Estimation of Animal Population Density II. Lincoln
1:30 4pBA Biomedical Acoustics: Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and Clinical Applications II. Indiana A/B
1:30 4pEA Engineering Acoustics: Acoustic Transduction: Theory and Practice II. Marriott 9/10
1:00 4pMU Musical Acoustics: Assessing the Quality of Musical Instruments. Santa Fe
1:15 4pNS Noise: Virtual Acoustic Simulation. Marriott 3/4
1:30 4pPA Physical Acoustics: Topics in Physical Acoustics II. Indiana C/D
1:30 4pPP Psychological and Physiological Acoustics: Physiological and Psychological Aspects of Central Auditory Processing Dysfunction II. Marriott 1/2
1:00 4pSC Speech Communication: Voice (Poster Session). Marriott 5
1:00 4pUW Underwater Acoustics: Shallow Water Reverberation III. Indiana F

FRIDAY MORNING
8:00 5aBA Biomedical Acoustics: Cavitation Control and Detection Techniques. Indiana A/B
10:00 5aED Education in Acoustics: Hands-On Acoustics: Demonstrations for Indianapolis Area Students. Indiana E
9:45 5aNS Noise: Transportation Noise, Soundscapes, and Related Topics. Marriott 7/8
8:00 5aPPa Psychological and Physiological Acoustics: Psychological and Physiological Acoustics Potpourri (Poster Session). Marriott 5
10:15 5aPPb Psychological and Physiological Acoustics: Perceptual and Physiological Mechanisms, Modeling, and Assessment. Marriott 1/2
8:00 5aSC Speech Communication: Speech Perception and Production in Challenging Conditions (Poster Session). Marriott 5
8:00 5aUW Underwater Acoustics: Acoustics, Ocean Dynamics, and Geology of Canyons. Indiana F
SCHEDULE OF COMMITTEE MEETINGS AND OTHER EVENTS
COUNCIL AND ADMINISTRATIVE COMMITTEES AND OTHER GROUPS
Mon, 27 Oct, 7:30 a.m.  Executive Council  Denver
Mon, 27 Oct, 3:30 p.m.  Technical Council  Denver
Tue, 28 Oct, 7:00 a.m.  ASA Press Editorial Board  Illinois
Tue, 28 Oct, 7:00 a.m.  POMA Editorial Board  Denver
Tue, 28 Oct, 7:30 a.m.  Panel on Public Policy  Michigan
Tue, 28 Oct, 7:30 a.m.  Translation of Chinese Journals  Indy Boardroom
Tue, 28 Oct, 11:45 a.m.  Editorial Board  Circle City Bar & Grille
Tue, 28 Oct, 12:00 noon  Activity Kit  Illinois
Tue, 28 Oct, 12:00 noon  Prizes & Special Fellowships  Utah
Tue, 28 Oct, 12:00 noon  Student Council  Atlanta
Tue, 28 Oct, 1:30 p.m.  Meetings  Denver
Tue, 28 Oct, 4:00 p.m.  Books+  Illinois
Tue, 28 Oct, 4:00 p.m.  Education in Acoustics  Indiana C/D
Tue, 28 Oct, 4:30 p.m.  Newman Fund Advisory  Utah
Tue, 28 Oct, 5:00 p.m.  Women in Acoustics  Denver
Wed, 29 Oct, 6:45 a.m.  International Research & Education  Michigan
Wed, 29 Oct, 7:00 a.m.  College of Fellows  Florida
Wed, 29 Oct, 7:00 a.m.  Publication Policy  Illinois
Wed, 29 Oct, 7:00 a.m.  Regional Chapters  Denver
Wed, 29 Oct, 11:00 a.m.  Medals and Awards  Denver
Wed, 29 Oct, 11:15 a.m.  Public Relations  Michigan
Wed, 29 Oct, 12:00 noon  Membership  Florida
Wed, 29 Oct, 1:30 p.m.  AS Foundation Board  Illinois
Wed, 29 Oct, 5:30 p.m.  Health Care Acoustics  Utah
Thu, 30 Oct, 7:00 a.m.  Archives & History  Denver
Thu, 30 Oct, 7:00 a.m.  Tutorials  Florida
Thu, 30 Oct, 7:30 a.m.  Investment  Utah
Thu, 30 Oct, 11:00 a.m.  Acoustics Today Advisory  Illinois
Thu, 30 Oct, 2:00 p.m.  Publishing Services  Florida
Thu, 30 Oct, 4:30 p.m.  External Affairs  Michigan
Thu, 30 Oct, 4:30 p.m.  Internal Affairs  Illinois
Fri, 31 Oct, 7:00 a.m.  Technical Council  Denver
Fri, 31 Oct, 11:00 a.m.  Executive Council  Denver
TECHNICAL COMMITTEE OPEN MEETINGS
Tue, 28 Oct, 4:30 p.m.  Engineering Acoustics  Santa Fe
Tue, 28 Oct, 8:00 p.m.  Acoustical Oceanography  Indiana G
Tue, 28 Oct, 8:00 p.m.  Architectural Acoustics  Marriott 7/8
Tue, 28 Oct, 8:00 p.m.  Physical Acoustics  Indiana C/D
Tue, 28 Oct, 8:00 p.m.  Speech Communication  Marriott 3/4
Tue, 28 Oct, 8:00 p.m.  Structural Acoustics and Vibration  Marriott 1/2
Wed, 29 Oct, 7:30 p.m.  Biomedical Acoustics  Indiana A/B
Thu, 30 Oct, 7:30 p.m.  Animal Bioacoustics  Lincoln
Thu, 30 Oct, 7:30 p.m.  Musical Acoustics  Santa Fe
Thu, 30 Oct, 7:30 p.m.  Noise  Marriott 3/4
Thu, 30 Oct, 7:30 p.m.  Psychological and Physiological Acoustics  Marriott 1/2
Thu, 30 Oct, 7:30 p.m.  Signal Processing in Acoustics  Indiana G
Thu, 30 Oct, 7:30 p.m.  Underwater Acoustics  Indiana F
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
STANDARDS COMMITTEES AND WORKING GROUPS
Mon, 27 Oct, 1:00 p.m.  S12/WG11-Hearing Protectors  Atlanta
Mon, 27 Oct, 7:00 p.m.  ASACOS Steering  Atlanta
Tue, 28 Oct, 7:00 a.m.  S1/WG4-Sound Pressure Levels  Atlanta
Tue, 28 Oct, 7:00 a.m.  ASACOS  Boston/Austin
Tue, 28 Oct, 4:00 p.m.  S1/WG20-Ground Impedance  Atlanta
MEETING SERVICES, SPECIAL EVENTS, SOCIAL EVENTS
Registration: Mon-Thu, 27-30 Oct, 7:30 a.m.-5:00 p.m.; Fri, 31 Oct, 7:30 a.m.-12:00 noon.  Marriott Foyer
E-mail/Internet Café: Mon-Thu, 27-30 Oct, 7:00 a.m.-5:00 p.m.; Fri, 31 Oct, 7:00 a.m.-12:00 noon.  Marriott 6
A/V Preview: Mon-Thu, 27-30 Oct, 7:00 a.m.-5:00 p.m.; Fri, 31 Oct, 7:00 a.m.-12:00 noon.  Albany
Accompanying Persons: Mon-Thu, 27-30 Oct, 8:00 a.m.-10:00 a.m.  Texas
Coffee Break: Mon-Fri, 27-31 Oct, 9:40 a.m.-10:40 a.m.  Marriott 6
Tue-Thu, 28-30 Oct:  Indiana Ballroom Foyer
Short Course: Sun, 26 Oct, 1:00 p.m.-5:00 p.m.; Mon, 27 Oct, 7:30 a.m.-12:30 p.m.  Santa Fe
Gallery of Acoustics: Mon-Thu, 27-30 Oct, 9:00 a.m.-5:00 p.m.  Marriott 6
Resume Help Desk: Tue-Thu, 28-30 Oct, 12:00 noon-1:00 p.m.  Marriott Foyer
Student Orientation: Mon, 27 Oct, 5:00 p.m.-5:30 p.m.  Marriott 9/10
Student Meet and Greet: Mon, 27 Oct, 5:30 p.m.-6:45 p.m.  Marriott 6
Pre-Tutorial Tour of Hilbert Circle Theater: Mon, 27 Oct, 6:00 p.m.-7:00 p.m.  Hilbert Circle Theater
Tutorial Lecture: Mon, 27 Oct, 7:00 p.m.-9:00 p.m.  Hilbert Circle Theater
Tour: Center for the Performing Arts: Tue, 28 Oct, 10:00 a.m.-12:00 noon.  Missouri Street Entrance
Social at Eiteljorg Museum: Tue, 28 Oct, 6:00 p.m.-9:00 p.m.  Eiteljorg Museum
Women in Acoustics Luncheon: Wed, 29 Oct, 11:30 a.m.-1:30 p.m.  Circle City Bar and Grille
Annual Membership Meeting: Wed, 29 Oct, 3:30 p.m.  Marriott 5
Plenary Session and Awards Ceremony: Wed, 29 Oct, 3:30 p.m.-4:30 p.m.  Marriott 5
Student Reception: Wed, 29 Oct, 6:45 p.m.-8:15 p.m.  Indiana E
ASA Jam: Wed, 29 Oct, 8:00 p.m.-12:00 midnight.  Marriott 6
Society Luncheon and Lecture: Thu, 30 Oct, 12:00 noon-2:00 p.m.  Indiana E
Tour: 3M Acoustics Facilities: Thu, 30 Oct, 3:00 p.m.-6:00 p.m.  Missouri Street Entrance
Social: Thu, 30 Oct, 6:00 p.m.-7:30 p.m.  Marriott 5/6
168th Meeting of the Acoustical Society of America
The 168th meeting of the Acoustical Society of America will
be held Monday through Friday, 27–31 October 2014, at the
Indianapolis Marriott Downtown Hotel, Indianapolis, Indiana,
USA.
SECTION HEADINGS
1. HOTEL INFORMATION
2. TRANSPORTATION AND TRAVEL DIRECTIONS
3. STUDENT TRANSPORTATION SUBSIDIES
4. MESSAGES FOR ATTENDEES
5. REGISTRATION
6. ASSISTIVE LISTENING DEVICES
7. TECHNICAL SESSIONS
8. TECHNICAL SESSION DESIGNATIONS
9. HOT TOPICS SESSION
10. ROSSING PRIZE IN ACOUSTICS EDUCATION AND
ACOUSTICS EDUCATION PRIZE LECTURE
11. TUTORIAL LECTURE
12. SHORT COURSE
13. UNDERGRADUATE RESEARCH POSTER
EXPOSITION
14. RESUME DESK
15. TECHNICAL COMMITTEE OPEN MEETINGS
16. TECHNICAL TOURS
17. GALLERY OF ACOUSTICS
18. ANNUAL MEMBERSHIP MEETING
19. PLENARY SESSION AND AWARDS CEREMONY
20. ANSI STANDARDS COMMITTEES
21. COFFEE BREAKS
22. A/V PREVIEW ROOM
23. PROCEEDINGS OF MEETINGS ON ACOUSTICS
24. E-MAIL ACCESS, INTERNET CAFÉ AND BREAK
ROOM
25. SOCIALS
26. SOCIETY LUNCHEON AND LECTURE
27. STUDENTS MEET MEMBERS FOR LUNCH
28. STUDENT EVENTS: NEW STUDENT ORIENTATION, MEET AND GREET, STUDENT RECEPTION
29. WOMEN IN ACOUSTICS LUNCHEON
30. JAM SESSION
31. ACCOMPANYING PERSONS PROGRAM
32. WEATHER
33. TECHNICAL PROGRAM ORGANIZING COMMITTEE
34. MEETING ORGANIZING COMMITTEE
35. PHOTOGRAPHING AND RECORDING
36. ABSTRACT ERRATA
37. GUIDELINES FOR ORAL PRESENTATIONS
38. SUGGESTIONS FOR EFFECTIVE POSTER PRESENTATIONS
39. GUIDELINES FOR USE OF COMPUTER PROJECTION
40. DATES OF FUTURE ASA MEETINGS
1. HOTEL INFORMATION
The Indianapolis Marriott Downtown Hotel is the
headquarters hotel where all meeting events will be held.
Note that there are three Marriott hotels in Indianapolis,
so please specify the Downtown location as your destination
when traveling.
The cut-off date for reserving rooms at special rates has
passed. Please contact the Indianapolis Marriott Downtown
Hotel for reservation information: 350 West Maryland Street,
Indianapolis, IN 46225, Tel: 317-822-3500.
2. TRANSPORTATION AND TRAVEL DIRECTIONS
Indianapolis is served by many major airlines through
Indianapolis International Airport (IND). Information is
available at www.indianapolisairport.com. The airport
terminal consists of one centralized check-in area, with gates
on two concourses, A and B, which are connected via walkways
to each other and to the check-in and reception areas of the
airport. You can easily walk from the terminal via an elevated
walkway to the car rental desks in the Ground Transportation
Center; the limousine and ground-transportation information
desks are located in the same area.
TAXI. Taxis depart from just outside the baggage claim area on
the ground floor of the terminal. There is a minimum charge of
USD $15 for all taxis, whatever the distance travelled. Typical
costs to downtown Indianapolis are USD $15 to USD $20.
Driving time to reach the Downtown Marriott is about 30
minutes.
GO GREEN LINE AIRPORT SHUTTLE. The shuttle leaves the
airport on the hour and the half hour from Zone #7 on the
road just outside the Ground Transportation Center. The cost
is USD $10 (drivers accept debit/credit cards only), and the
ride takes about 36 minutes to reach the Marriott complex, which
includes the downtown Marriott (as well as Springhill Suites,
the JW Marriott, Courtyard Marriott and Fairfield Inn). You
can book online at goexpresstravel.com.
BUS SERVICE TO AND FROM THE AIRPORT. IndyGo’s Route 8 provides
non-express, fixed-route service from the airport to downtown
via stops along Washington Street. The cost is USD $1.75 per
ride; for further information and route maps, visit
www.indygo.net/maps-schedules/airport-service. The buses
stop close to all the major downtown hotels. The stop at West
and Washington is just northwest of the hotel. Pick-up stops
are slightly different but still nearby. The stop at the airport is
at Zone #6 on the road just outside the Ground Transportation
Center.
SHARED-RIDE AND PERSONAL LUXURY LIMOUSINE SERVICES. These
transportation services are available. Information desks are
located in the Ground Transportation Center. A list of limousine companies can be found at www.indianapolisairport.com.
RENTAL CAR. Renting a car is not recommended unless you are
planning trips out of town; nearly everything you need is
within walking distance of the hotel, and many very nice
restaurants, museums, and shops are reasonably close by. If
you do need a rental car, the desks are located in the
Ground Transportation Center on the 1st floor (ground level)
of the parking garage. Alamo, Avis, Budget, Dollar, Enterprise, Hertz, National, and Thrifty all have desks at the airport
and ACE has an off-airport location with a shuttle service to
and from the airport, pick up just outside the Ground Transportation Center.
Amtrak and Greyhound both serve Indianapolis, and the
train and bus stations are within walking distance of the
conference hotel. However, trains do not run very often:
there is one train a day from Chicago to Indianapolis, versus
seven Greyhound buses a day. The Amtrak station is at 350 S.
Illinois Street, a 10-minute walk (0.5 miles) from the
Marriott, and the Greyhound station is next to the Amtrak
station at 154 W. South St. See www.greyhound.com and
tickets.amtrak.com for more information.
3. STUDENT TRANSPORTATION SUBSIDIES
To encourage student participation, limited funds are
available to partially defray the travel expenses of students
attending Acoustical Society meetings. Instructions
for applying for travel subsidies are given in the Call for
Papers which can be found online at http://acousticalsociety.
org. The deadline for the present meeting has passed but this
information may be useful in the future.
4. MESSAGES FOR ATTENDEES
Messages for attendees may be left by calling the
Indianapolis Marriott Downtown Hotel, 317-822-3500, and
asking for the ASA Registration Desk during the meeting,
where a message board will be located. This board may also
be used by attendees who wish to contact one another.
5. REGISTRATION
Registration is required for all attendees and accompanying
persons. Registration badges must be worn in order to
participate in technical sessions and other meeting activities.
Registration will open on Monday, 27 October, at 7:30 a.m.
in the Marriott Ballroom Foyer on the second floor (see floor
plan on page A11).
Checks or travelers checks in U.S. funds drawn on U.S.
banks and Visa, MasterCard and American Express credit
cards will be accepted for payment of registration. Meeting
attendees who have pre-registered may pick up their badges
and registration materials at the pre-registration desk.
The registration fees (in USD) are $545 for members of
the Acoustical Society of America; $645 for non-members,
$150 for Emeritus members (Emeritus status pre-approved
by ASA), $275 for ASA Early Career members (for ASA
members within three years of their most recent degrees
– proof of date of degree required), $90 for ASA Student
members, $130 for students who are not members of ASA,
$115 for Undergraduate Students, and $150 for accompanying
persons.
One-day registration is available at $275 for members and
$325 for nonmembers (one-day registration covers attendance
on only one day, whether to present a paper and/or to attend
sessions). A nonmember who pays the $645 nonmember
registration fee and simultaneously applies for Associate
Membership in the Acoustical Society of America will receive
a $50 discount on their 2015 dues.
Invited speakers who are members of the Acoustical
Society of America are expected to pay the registration fee, but
nonmember invited speakers who participate in the meeting
only on the day of their presentation may register without
charge. The registration fee for nonmember invited speakers
who wish to participate for more than one day is $110 and
includes a one-year Associate Membership in the ASA upon
completion of an application form.
Special note to students who pre-registered online: you
will also be required to show your student ID card when
picking up your registration materials at the meeting.
6. ASSISTIVE LISTENING DEVICES
The ASA has purchased assistive listening devices (ALDs)
for the benefit of meeting attendees who need them at
technical sessions. Any attendee who will require an assistive
listening device should advise the Society in advance of the
meeting by writing to: Acoustical Society of America, 1305
Walt Whitman Road, Suite 300, Melville, NY 11747-4300;
asa@aip.org
7. TECHNICAL SESSIONS
The technical program includes 92 sessions with 948 papers
scheduled for presentation during the meeting.
A floor plan of the Marriott Hotel appears on page A11.
Session Chairs have been instructed to adhere strictly to the
printed time schedule, both to be fair to all speakers and to
permit attendees to schedule moving from one session to
another to hear specific papers. If an author is not present to
deliver a lecture-style paper, the Session Chairs have been
instructed either to call for additional discussion of papers
already given or to declare a short recess so that subsequent
papers are not given ahead of the designated times.
Several sessions are scheduled in poster format, with the
display times indicated in the program schedule.
8. TECHNICAL SESSION DESIGNATIONS
The first character is a number indicating the day the session
will be held, as follows:
1-Monday, 27 October
2-Tuesday, 28 October
3-Wednesday, 29 October
4-Thursday, 30 October
5-Friday, 31 October
The second character is a lower case “a” for a.m., “p” for
p.m., or “e” for evening corresponding to the time of day the
session will take place. The third and fourth characters are
capital letters indicating the primary Technical Committee
that organized the session using the following abbreviations
or codes:
AA Architectural Acoustics
AB Animal Bioacoustics
AO Acoustical Oceanography
BA Biomedical Acoustics
EA Engineering Acoustics
ED Education in Acoustics
ID Interdisciplinary
MU Musical Acoustics
NS Noise
PA Physical Acoustics
PP Psychological and Physiological Acoustics
SA Structural Acoustics and Vibration
SC Speech Communication
SP Signal Processing in Acoustics
UW Underwater Acoustics
In sessions where the same group is the primary organizer
of more than one session scheduled in the same morning or
afternoon, a fifth character, either lower-case “a” or “b” is
used to distinguish the sessions. Each paper within a session is
identified by a paper number following the session-designating
characters in the conventional manner. As hypothetical examples:
paper 2pEA3 would be the third paper in a session on Tuesday
afternoon organized by the Engineering Acoustics Technical
Committee; 3pSAb5 would be the fifth paper in the second
of two sessions on Wednesday afternoon sponsored by the
Structural Acoustics and Vibration Technical Committee.
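As an illustration, the decoding rules above can be sketched in a few lines of Python (a hypothetical helper for readers, not part of any official ASA tooling; the committee names follow the abbreviation list above):

```python
import re

# Technical Committee codes, from the abbreviation list above.
COMMITTEES = {
    "AA": "Architectural Acoustics", "AB": "Animal Bioacoustics",
    "AO": "Acoustical Oceanography", "BA": "Biomedical Acoustics",
    "EA": "Engineering Acoustics", "ED": "Education in Acoustics",
    "ID": "Interdisciplinary", "MU": "Musical Acoustics",
    "NS": "Noise", "PA": "Physical Acoustics",
    "PP": "Psychological and Physiological Acoustics",
    "SA": "Structural Acoustics and Vibration",
    "SC": "Speech Communication", "SP": "Signal Processing in Acoustics",
    "UW": "Underwater Acoustics",
}
DAYS = {"1": "Monday", "2": "Tuesday", "3": "Wednesday",
        "4": "Thursday", "5": "Friday"}
PERIODS = {"a": "a.m.", "p": "p.m.", "e": "evening"}

def parse_designation(code):
    """Split a designation such as '3pSAb5' into its parts."""
    m = re.fullmatch(r"([1-5])([ape])([A-Z]{2})([ab]?)(\d*)", code)
    if not m:
        raise ValueError("not a valid session designation: " + code)
    day, period, tc, sub, paper = m.groups()
    return {
        "day": DAYS[day],
        "period": PERIODS[period],
        "committee": COMMITTEES[tc],
        "session": sub or None,   # 'a'/'b' when a TC runs two sessions
        "paper": int(paper) if paper else None,
    }

# '2pEA3': third paper, Tuesday afternoon, Engineering Acoustics.
print(parse_designation("2pEA3"))
```

For example, `parse_designation("3pSAb5")` identifies the fifth paper in the second Wednesday-afternoon session of the Structural Acoustics and Vibration Technical Committee.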
Note that technical sessions are listed both in the calendar
and the body of the program in the numerical and alphabetical
order of the session designations rather than the order of their
starting times. For example, session 3aAA would be listed
ahead of session 3aAO even if the latter session began earlier
in the same morning.
9. HOT TOPICS SESSION
Hot Topics session 3pID will be held on Wednesday, 29
October, at 1:00 p.m. in Indiana E. Papers will be presented on
current topics in the fields of Education in Acoustics, Signal
Processing in Acoustics, and Acoustical Oceanography.
10. ROSSING PRIZE IN ACOUSTICS EDUCATION
AND ACOUSTICS EDUCATION PRIZE LECTURE
The 2014 Rossing Prize in Acoustics Education will be
awarded to Colin Hansen, University of Adelaide, at the
Plenary Session on Wednesday, 29 October. Colin Hansen
will present the Acoustics Education Prize Lecture titled
“Educating mechanical engineers in the art of noise control”
on Wednesday, 29 October, at 2:00 p.m. in Session 3pED in
Indiana C/D.
11. TUTORIAL LECTURE: MUSICAL ACOUSTICS:
SCIENCE AND PERFORMANCE
A tutorial presentation on “Musical Acoustics: Science and
Performance” will be given by Professor Uwe J. Hansen of
Indiana State University, and the New World Youth Symphony,
directed by Susan Kitterman, on Monday, 27 October at 7:00
p.m. in the Hilbert Circle Theater.
The Tutorial Concert will be preceded by a tour of Hilbert
Circle Theater, home of the Indianapolis Symphony. Originally
a movie house, the theater underwent major renovations to
make it suitable as a concert hall. Since the last ASA meeting
in Indianapolis in 1996, the concert hall has undergone
additional major remodeling, mainly in the stage area but
also in the hall itself. The tour will begin at 6:00 p.m.
Hilbert Circle Theater is within easy walking distance of the
hotel (allow 15 minutes to get there); however, in the event
of inclement weather, and for those with additional needs,
limited bus transportation will be available from 5:30 p.m.
onwards.
Lecture notes will be available at the meeting in limited
supply; only preregistrants will be guaranteed receipt of a set
of notes.
All students (K–12 through graduate school) will be admitted
free of charge. General admission for both the general public
and ASA members is USD $20.00. ASA members who included
attendance at this tutorial concert in their pre-registration
by 22 September pay a reduced fee of USD $15.00.
12. SHORT COURSE ON ELECTROACOUSTIC
TRANSDUCERS
A short course on Electroacoustic Transducers:
Fundamentals and Applications will be given in two parts:
Sunday, 26 October, from 1:00 p.m. to 5:00 p.m. and Monday,
27 October, from 7:30 a.m. to 12:30 p.m. in the Santa Fe Room.
The objectives are (1) to introduce the physical principles,
basic performance, and system design aspects required for
effective application of receiving and transmitting transducers
and (2) to present common problems and potential solutions.
The instructor is Thomas Gabrielson, a Senior Scientist and
Professor of Acoustics at Penn State University, who previously
worked in underwater-acoustic transducer design, modeling,
and measurement for 22 years at the Naval Air Warfare Center
in Warminster, PA.
The registration fee is USD $300.00 (USD $125 for
students) and covers attendance, instructional materials, and
coffee breaks. Onsite registration at the meeting will be on a
space-available basis.
13. UNDERGRADUATE RESEARCH POSTER
EXPOSITION
The Undergraduate Research Exposition will be held
Tuesday morning, 28 October, 9:00 a.m. to 11:00 a.m. in
session 2aED in Marriott 6. The 2014 Undergraduate Research
Exposition is a forum for undergraduate students to present
their research in any area of acoustics; it may also include
overview papers on undergraduate research programs,
designed to inspire and foster the growth of undergraduate
research throughout the Society. The Exposition is intended
to encourage undergraduates to share their knowledge of and
interest in acoustics and to foster their participation in the Society. Four
awards, up to $500 each, will be made to help undergraduates
with travel costs associated with attending the meeting and
presenting a poster.
14. RESUME HELP DESK
Are you interested in applying for graduate school, a
postdoctoral opportunity, a research scientist position, a
faculty opening, or other position involving acoustics? If
you are, please stop by the ASA Resume Help Desk in the
Marriott Ballroom Foyer near the registration desk. Members
of the ASA experienced in hiring will be available to look
at your CV, cover letter, and research & teaching statements
to provide tips and suggestions to help you most effectively
present yourself in today’s competitive job market. The ASA
Resume Help Desk will be staffed on Tuesday, Wednesday,
and Thursday during the lunch hour for walk-up meetings.
Appointments during these three lunch hours will also be
available via a sign-up sheet.
168th Meeting: Acoustical Society of America
A19
15. TECHNICAL COMMITTEE OPEN MEETINGS
Technical Committees will hold open meetings on Tuesday,
Wednesday, and Thursday at the Indianapolis Marriott
Downtown. The meetings on Tuesday and Thursday will be
held in the evenings after the socials, except Engineering
Acoustics, which will meet at 4:30 p.m. on Tuesday. The
schedule and rooms for each Committee meeting are given
on page A16.
These are working, collegial meetings. Much of the work
of the Society is accomplished by actions that originate and
are taken in these meetings, including proposals for special
sessions, workshops, and technical initiatives. All meeting
participants are cordially invited to attend these meetings and
to participate actively in the discussions.
16. TECHNICAL TOURS
Note: Tour buses leave from the Marriott’s Missouri Street
Exit.
Monday, 27 October, 6:00 p.m.-8:30 p.m. Tour and
Tutorial Lecture at the Hilbert Circle Theater, 45 Monument
Circle, Indianapolis. Tour fees: USD $15 preregistration
and USD $20 on-site for non-students; no fee for students.
Prior to becoming the home of the Indianapolis Symphony,
Hilbert Circle Theater was a movie house. It underwent major
revisions to make it suitable as a concert hall. The tour starts at
6:00 p.m. at the theater and the tutorial presentation starts at 7:00
p.m. The theater is a half-mile walk from the hotel (about 15
minutes, so leave at 5:45 p.m. at the latest). See
the Tutorial Lecture section above for full details.
Tuesday, 28 October: 10:00 a.m.-12:00 noon. Tour of the
Center for the Performing Arts, 355 City Center Drive,
Carmel. Tour limited to 30 participants. Tour fee: USD $25.
The Center for the Performing Arts houses the Palladium (a 1,600-seat
concert hall), the Tarkington Theater (a 500-seat proscenium
stage), and the Studio Theater (a small, flexible black-box space).
This is a recently completed facility north of Indianapolis.
The Palladium is a space that rivals the world's great concert
halls. David M. Schwarz Architects, a Washington, DC-based
architectural firm, drew inspiration for the Palladium from the
famous Villa Capra “La Rotunda” (or Villa Rotunda) built in
1566 in Italy and designed by Italian Renaissance architect
Andrea Palladio (1508–1580). For more information about the
Center visit www.thecenterfortheperformingarts.org.
Tuesday, 28 October: 3:00 p.m.-6:00 p.m. Indiana
University School of Medicine, 699 Riley Hospital Drive,
Indianapolis. Tour limited to 30 participants. Tour fee:
USD $25. The Department of Otolaryngology-Head and
Neck Surgery was organized as an independent department
within the Indiana School of Medicine in 1909 by John
Barnhill, M.D., an internationally recognized head and neck
surgeon and anatomist. Since then the specialty has undergone
tremendous expansion in managing disorders of the ear,
nose, throat, head and neck. The DeVault Otologic Research
Laboratory is the primary behavioral research venue for the
Department. Occupying approximately 3000 square feet on
two floors of the research wing of the James Whitcomb Riley
Hospital for Children, the laboratory is named for its principal
early benefactor, Dr. Virgil T. DeVault (1901-2000), a native
Hoosier and alumnus of Indiana University. In the laboratory
researchers examine the short-term and long-term effects of
cochlear implantation and/or therapeutic amplification in deaf
and hard-of-hearing infants, children, and adults, as well as
the factors underlying variability in behavioral outcomes of
cochlear implantation and/or therapeutic amplification.
Thursday, 30 October: 3:00 p.m.-6:00 p.m. Tour of 3M
Acoustics Facilities, 7911 Zionsville Road, Indianapolis.
Tour limited to 30 participants. Tour fee: USD $25
Elliott Berger and Steve Sorenson will give tours of 3M’s
E•A•RCAL hearing protection laboratory and Acoustic
Technology Center (ATC) laboratory for noise control research
and application. The E•A•RCAL facility consists of an NVLAP-accredited
113-m³ reverberation chamber instrumented for
real-ear attenuation testing, an 18-m³ electroacoustic sound
lab supporting high-level tests up to 120 dB SPL, and a
300-m³ hemi-anechoic facility used for impulse testing via a shock
tube that generates blasts up to 168 dB SPL for measuring
the level-dependent performance of hearing protectors. The
ATC includes a 900-m³ hemi-anechoic chamber with an
inbuilt chassis dynamometer ideal for testing heavy trucks
under real-world load conditions, a smaller hemi-anechoic
chamber for product sound power testing, and two reverberation
chambers for a wide variety of sound transmission loss and
sound absorption testing and development. Note that no photographs
may be taken on this tour. Conference attendees who
work for 3M/Aearo/E-A-R competitors may not be allowed to
participate in the tour; the registration fee will be refunded in
full should the request to participate (through preregistration)
not be approved.
Start times are when the bus leaves the hotel, so plan on
being there ahead of time.
On-site registration will be on a space-available basis.
17. GALLERY OF ACOUSTICS
The Technical Committee on Signal Processing in
Acoustics will sponsor the 15th Gallery of Acoustics at the
Acoustical Society of America meeting in Indianapolis. Its
purpose is to enhance ASA meetings by providing a setting
for researchers to display their work to all meeting attendees
in a forum emphasizing the diverse, interdisciplinary, and
artistic nature of acoustics. The Gallery of Acoustics provides
a means by which we can all share and appreciate the natural
beauty, aesthetic, and artistic appeal of acoustic phenomena:
This is a forum where science meets art.
The Gallery will be held in the Marriott Ballroom 6,
Monday through Thursday, 27-30 October, from 9:00 a.m. to
5:00 p.m.
18. ANNUAL MEMBERSHIP MEETING
The Annual Membership Meeting of the Acoustical Society
of America will be held at 3:30 p.m. on Wednesday, 29 October
2014, in Marriott 5 at the Indianapolis Marriott Downtown
Hotel, 350 West Maryland Street, Indianapolis, IN
46225.
19. PLENARY SESSION AND AWARDS CEREMONY
A plenary session will be held Wednesday, 29 October, at
3:30 p.m. in Marriott 5.
The Rossing Prize in Acoustics Education will be presented
to Colin Hansen. The Pioneers of Underwater Acoustics
Medal will be presented to Michael B. Porter, the Silver
Medal in Speech Communication to Sheila E. Blumstein,
and the Wallace Clement Sabine Medal to Ning Xiang.
Certificates will be presented to
Fellows elected at the Providence meeting of the Society. See
page 2228 for a list of fellows.
20. ANSI STANDARDS COMMITTEES
Meetings of ANSI Accredited Standards Committees will
not be held at the Indianapolis meeting.
Meetings of selected advisory working groups are often
held in conjunction with Society meetings and are listed in the
Schedule of Committee Meetings and Other Events on page
A16 or on the standards bulletin board in the registration area,
e.g., S12/WG18-Room Criteria.
People interested in attending and in becoming involved in
working group activities should contact the ASA Standards
Manager for further information about these groups, or about
the ASA Standards Program in general, at the following
address: Susan Blaeser, ASA Standards Manager, Standards
Secretariat, Acoustical Society of America, 1305 Walt
Whitman Road, Suite 300, Melville, NY 11747-4300; T: 631-390-0215; F: 631-923-2875; E: asastds@aip.org
21. COFFEE BREAKS
Morning coffee breaks will be held each day from 9:40 a.m.
to 10:40 a.m. in Marriott 6.
22. A/V PREVIEW ROOM
The Albany Room on the second floor will be set up as
an A/V preview room for authors’ convenience, and will be
available on Monday through Thursday from 7:00 a.m. to 5:00
p.m. and Friday from 7:00 a.m. to 12:00 noon.
23. PROCEEDINGS OF MEETINGS ON ACOUSTICS
(POMA)
The Indianapolis meeting will have a published proceedings;
submission is optional. The proceedings will be a separate
volume of the online journal, “Proceedings of Meetings on
Acoustics” (POMA). This is an open-access journal, so its
articles are available in PDF format, without charge, to anyone
in the world for downloading. Authors who are scheduled
to present papers at the meeting are encouraged to prepare a
suitable version in PDF format that will appear in POMA. The
format requirements for POMA are somewhat more stringent
than those for posting on the ASA Online Meetings Papers Site,
but the two versions could be the same. The posting at the Online
Meetings Papers Site, however, is not archival, and posted
papers will be taken down six months after the meeting. The
POMA online site for submission of papers from the meeting
will open about one month after authors are notified
that their papers have been accepted for presentation. It is not
necessary to wait until after the meeting to submit one’s paper
to POMA. Further information regarding POMA can be found
at http://asadl.org/poma/for_authors_poma. Published
papers from previous meetings can be seen at http://asadl.org/poma.
24. E-MAIL ACCESS, INTERNET CAFÉ, AND BREAK
ROOM
Computers providing e-mail access will be available 7:00
a.m. to 5:00 p.m., Monday to Thursday and 7:00 a.m. to 12:00
noon on Friday in Marriott 6.
A unique feature of this ASA meeting is that a ballroom
located directly opposite the registration area will be dedicated
as a central gathering area for discussion, wi-fi, coffee breaks,
the Gallery of Acoustics, and more. Join your colleagues in the
Break Room every day to discuss the latest ASA topics and
news.
Wi-Fi will be available in all ASA meeting rooms and spaces.
25. SOCIALS
The Eiteljorg Museum of American Indian and Western
Art will be the site of the social on Tuesday, 28 October, from
6:00 p.m. to 9:00 p.m. Galleries will be open for viewing art
that promotes understanding of the history and cultures of
North American peoples, including the museum's contemporary
Native art collection, which has been ranked among the world's best.
The collections are housed in a striking building, located
along the White River Canal within easy walking distance
(about 6 minutes) of the Indianapolis Marriott Downtown. For
those who prefer not to walk, shuttle service will be available
throughout the evening to and from the Missouri Street exit
of the hotel. In keeping with the Museum, the reception will
feature a delectable array of food selections having a slightly
Southwestern flair.
A Halloween Social for all, even noisy spirits or eerie
creatures, will take place on Thursday, October 30 in the
Marriott Ballroom from 6:00 p.m. to 7:30 p.m. Costumes
are positively encouraged, so don’t forget to pack one. Get
ready for a few fun surprises, organized by a team of young
acousticians, that are sure to provide some great photo ops. To
set the stage for Thursday night’s activities, Halloween fun is
included in a Thursday afternoon technical session sponsored
by Architectural Acoustics and Speech Communication. In
this session otherworldly minds offer 13 talks from 1:00 p.m.
to 5:00 p.m. in Marriott Ballroom 5/6. Come to learn about
the acoustics of supernatural spirits, bumps in the night, eerie
voices and other sorts of spooky audition.
The ASA hosts these social hours to provide a relaxing
setting for meeting attendees to meet and mingle with their
friends and colleagues as well as an opportunity for new
members and first-time attendees to meet and introduce
themselves to others in the field. A second goal of the socials
is to provide a sufficient meal so that meeting attendees
can attend the Technical Committee meetings that begin
immediately after the socials.
26. SOCIETY LUNCHEON AND LECTURE
The Society Luncheon and Lecture will be held on
Thursday, 30 October, at 12:00 noon in Indiana E. The
luncheon is open to all attendees and their guests. The speaker
is Larry E. Humes, Distinguished Professor and Department
Chair, Department of Speech and Hearing Sciences, Indiana
University. Purchase your tickets at the Registration Desk
before 10:00 a.m. on Wednesday, 29 October. The cost is
$30.00 per ticket.
27. STUDENTS MEET MEMBERS FOR LUNCH
The ASA Education Committee arranges for a student to
meet one-on-one with a member of the Acoustical Society
over lunch. The purpose is to make it easier for students to
meet and interact with members at ASA Meetings. Each lunch
pairing is arranged separately. Students who are interested
should contact Dr. David Blackstock, University of Texas
at Austin, by email at dtb@mail.utexas.edu. Please provide
your name, university, department, degree you are seeking
(BS, MS, or PhD), research field, acoustical interests, your
supervisor’s name, days you are free for lunch, and abstract
number (or title) of any paper(s) you are presenting. The sign-up deadline is 12 days before the start of the Meeting, but an
earlier sign-up is strongly encouraged. Each participant pays
for his/her own meal.
28. STUDENT EVENTS: NEW STUDENTS
ORIENTATION, MEET AND GREET, STUDENT
RECEPTION
Follow the student Twitter account @ASAStudents
throughout the meeting.
A New Students Orientation will be held from 5:00 p.m.
to 5:30 p.m. on Monday, 27 October, in Marriott 9/10 for
all students to learn about the activities and opportunities
available for students at the Indianapolis ASA meeting. This
will be followed by the Student Meet and Greet from 5:30
p.m. to 6:45 p.m. in Marriott 6. Refreshments and a cash
bar will be available. Students are encouraged to attend the
tutorial lecture, which begins at 7:00 p.m. in the Hilbert
Circle Theater. Student registration for this event is free.
The Students’ Reception will be held on Wednesday,
29 October, from 6:45 p.m. to 8:15 p.m. in Indiana E. This
reception, sponsored by the Acoustical Society of America and
supported by the National Council of Acoustical Consultants,
will provide an opportunity for students to meet informally
with fellow students and other members of the Acoustical
Society. All students are encouraged to attend, especially
students who are first time attendees or those from smaller
universities.
Students will find in their registration envelopes a sticker to
place on their name tags identifying them as students.
Although wearing the sticker is not mandatory, it will allow
for easier networking between students and other meeting
attendees.
Students are encouraged to refer to the student guide, also
found in their envelopes, for important program and meeting
information pertaining only to students attending the ASA
meeting.
They are also encouraged to visit the official ASA Student
Home Page at www.acosoc.org/student/ to learn more about
student involvement in ASA.
29. WOMEN IN ACOUSTICS LUNCHEON
The Women in Acoustics luncheon will be held at 11:30
a.m. on Wednesday, 29 October, in the Circle City Bar and
Grille on the first floor of the Marriott. Those who wish to
attend must purchase their tickets in advance by 10:00 a.m. on
Tuesday, 28 October. The fee is USD$30 for non-students and
USD$15 for students.
30. JAM SESSION
You are invited to Marriott 6 on Wednesday night, 29
October, from 8:00 p.m. to midnight for the JAM
SESSION. Bring your axe, horn, sticks, voice, or anything
else that makes music. Musicians and non-musicians are all
welcome to attend. A full PA system, backline equipment,
guitars, bass, keyboard, and drum set will be provided. All
attendees will enjoy live music, a cash bar with snacks, and
all-around good times. Don’t miss out.
31. ACCOMPANYING PERSONS PROGRAM
Spouses and other visitors are welcome at the Indianapolis
meeting. The on-site registration fee for accompanying persons
is USD$150. A hospitality room for accompanying persons
will be open in the Texas Room at the Indianapolis Marriott
Downtown Hotel from 8:00 a.m. to 10:00 a.m. Monday through
Thursday. For updates about the accompanying persons
program, please check the ASA website at AcousticalSociety.org/meetings.html.
Visit: http://visitindy.com to learn about what is going on
in Indianapolis. Good places to visit within walking distance
include the Eiteljorg Museum (Tuesday night social venue),
the Indiana State Museum (with IMAX theater), the NCAA
Hall of Champions, the Indianapolis Zoo, and White River
State Park (you can hire bikes there). Farther away, requiring
transportation (taxi or bus; see http://www.indygo.net/pages/
system-map), are the Indianapolis Speedway Museum at
the Indy 500 track, the Children’s Museum, and the Indianapolis
Museum of Art. Close to the hotel is the Circle Centre
Mall, a great place for shopping.
32. WEATHER
Weather in Indianapolis in the last week of October can
vary a lot from year to year. Make sure you are prepared
for rain so you can take full advantage of nearby restaurants
and attractions. See http://visitindy.com and the hotel
website http://www.marriott.com/hotels/travel-guide/indcc-indianapolis-marriott for more information about Indianapolis.
There is a 35% chance of some sort of precipitation (rain);
snow is very rare at that time of year. Average low and high
temperatures at that time of year are 41 and 60 degrees F,
respectively.
33. TECHNICAL PROGRAM ORGANIZING
COMMITTEE
Robert F. Port, Chair; David R. Dowling, Acoustical
Oceanography; Roderick J. Suthers, Animal Bioacoustics;
Norman H. Philipp, Architectural Acoustics; Robert J.
McGough, Biomedical Acoustics; Uwe J. Hansen, Education
in Acoustics; Roger T. Richards, Engineering Acoustics;
Andrew C.H. Morrison, Musical Acoustics; William J.
Murphy, Noise; Kai Ming Li, Physical Acoustics; Jennifer
Lentz, Psychological and Physiological Acoustics; R. Lee
Culver, Cameron Fackler, Signal Processing in Acoustics;
Diane Kewley-Port, Alexander L. Francis, Speech
Communication; Benjamin M. Shafer, Structural Acoustics
and Vibration; Kevin L. Williams, Underwater Acoustics.
34. MEETING ORGANIZING COMMITTEE
Kenneth de Jong and Patricia Davies, Cochairs; Robert F.
Port, Technical Program Chair; Diane Kewley-Port, Tessa
Bent, Mary C. Morgan, Food and Beverage; Mary C. Morgan,
Kai Ming Li, Tom Lorenzen, Audio-Visual and WiFi; Caroline
Richie, Volunteer Coordination; William J. Murphy, Technical
Tours; Uwe Hansen, Educational Activities, Tutorials; Diane
Kewley-Port, Special Events; Maria Kondaurova, Guanguan
Li, Michael Hayward, Indianapolis Visitor Information;
Tessa Bent, Student Activities; Mary C. Morgan, Meeting
Administrator.
35. PHOTOGRAPHING AND RECORDING
Photographing and recording during regular sessions are
not permitted without prior permission from the Acoustical
Society.
36. ABSTRACT ERRATA
This meeting program is Part 2 of the October 2014 issue of
The Journal of the Acoustical Society of America. Corrections,
for printer’s errors only, may be submitted for publication in
the Errata section of the Journal.
37. GUIDELINES FOR ORAL PRESENTATIONS
Preparation of Visual Aids
See the enclosed guidelines for computer projection.
• Allow at least one minute of your talk for each slide (e.g.,
PowerPoint slide). Use no more than 12 slides for a 15-minute talk
(with 3 minutes for questions and answers).
• Minimize the number of lines of text on each visual aid; 12
lines of text should be the maximum. Include no more than 2
graphs/plots/figures on a single slide. Generally, too little
information is better than too much.
• Presentations should contain simple, legible text that is
readable from the back of the room.
• Characters should be at least 0.25 inches (6.5 mm) in
height to be legible when projected. A good rule of thumb
is that text should be 20 point or larger (including labels
in inserted graphics). Anything smaller is difficult to read.
• Make symbols at least 1/3 the height of a capital letter.
• For computer presentations, use all of the available screen
area, using landscape orientation with very thin margins. If
your institution’s logo must be included, place it at the bottom of the slide.
• Sans serif fonts (e.g., Arial, Calibri, and Helvetica) are
much easier to read than serif fonts (e.g., Times New Roman), especially from afar. Avoid thin fonts (e.g., the horizontal bar of an e may be lost at low resolution, thereby
registering as a c).
• Do not use underlining to emphasize text. It makes the text
harder to read.
• All axes on figures should be labeled and the text size for
labels and axis numbers or letters should be large enough
to read.
• No more than 3–5 major points per slide.
• Consistency across slides is desirable. Use the same background, font, font size, etc. across all slides.
• Use appropriate colors. Avoid complicated backgrounds
and do not exceed four colors per slide. Backgrounds that
change from dark to light and back again are difficult to
read. Keep it simple.
• If using a dark background (dark blue works best), use
white or yellow lettering. If you are preparing slides that
may be printed to paper, a dark background is not appropriate.
• If using light backgrounds (white, off-white), use dark
blue, dark brown or black lettering.
• DVDs should be in standard format.
Presentation
• Organize your talk with introduction, body, and summary
or conclusion. Include only ideas, results, and concepts that
can be explained adequately in the allotted time. Four elements to include are:
(1) Statement of research problem
(2) Research methodology
(3) Review of results
(4) Conclusions
• Generally, no more than 3–5 key points can be covered adequately in a 15-minute talk so keep it concise.
• Rehearse your talk so you can confidently deliver it in the
allotted time. Session Chairs have been instructed to adhere
to the time schedule and to stop your presentation if you
run over.
• An A/V preview room will be available for viewing computer presentations before your session starts. It is advisable to preview your presentation because in most cases
you will be asked to load your presentation onto a computer, which may have different software or a different
configuration from your own computer.
• Arrive early enough so that you can meet the session chair,
load your presentation on the computer provided, and familiarize yourself with the microphone, computer slide
controls, laser pointer, and other equipment that you will
use during your presentation. There will be many presenters loading their materials just prior to the session so it is
very important that you check that all multi-media elements (e.g., sounds or videos) play accurately prior to the
day of your session.
• Each time you display a visual aid the audience needs time
to interpret it. Describe the abscissa, ordinate, units, and the
legend for each figure. If the shape of a curve or some other
feature is important, tell the audience what they should observe to grasp the point. They won’t have time to figure it
out for themselves. A popular myth is that a technical audience requires a lot of technical details. Less can be more.
• Turn off your cell phone prior to your talk and put it away
from your body. Cell phones can interfere with the speakers and the wireless microphone.
38. SUGGESTIONS FOR EFFECTIVE POSTER
PRESENTATIONS
Content
• The poster should be centered around two or three key
points supported by the title, figures, and text.
• The poster should be able to “stand alone.” That is, it
should be understandable even when you are not present
to explain, discuss, and answer questions. This quality is
highly desirable since you may not be present the entire
time posters are on display, and when you are engaged in
discussion with one person, others may want to study the
poster without interrupting an ongoing dialogue.
• To meet the “stand alone” criterion, it is suggested that the
poster include the following elements, as appropriate:
○ Background
○ Objective, purpose, or goal
○ Hypotheses
○ Methodology
○ Results (including data, figures, or tables)
○ Discussion
○ Implications and future research
○ References and Acknowledgments
Design and layout
• A board approximately 8 ft. wide × 4 ft. high will be provided for the display of each poster. Supplies will be available for attaching the poster to the display board. Each
board will be marked with an abstract number.
• Typically posters are arranged from left to right and top
to bottom. Numbering sections or placing arrows between
sections can help guide the viewer through the poster.
• Centered at the top of the poster, include a section with
the abstract number, paper title, and author names and affiliations. An institutional logo may be added. Keep the design relatively simple and uncluttered. Avoid glossy paper.
Lettering and text
• Font size for the title should be large (e.g., 70-point font).
• Font size for the main elements should be large enough
to facilitate readability from 2 yards away (e.g., 32 point
font). The font size for other elements, such as references,
may be smaller (e.g., 20–24 point font).
• Sans serif fonts (e.g., Arial, Calibri, Helvetica) are much
easier to read than serif fonts (e.g., Times New Roman).
• Text should be brief and presented in a bullet-point list as
much as possible. Long paragraphs are difficult to read in a
poster presentation setting.
Visuals
• Graphs, photographs, and schematics should be large
enough to see from 2 yards (e.g., 8 × 10 inches).
• Figure captions or bulleted annotation of major findings
next to figures are essential. To ensure that all visual elements are “stand alone,” axes should be labeled and all
symbols should be explained.
• Tables should be used sparingly and presented in a simplified format.
Presentation
• Prepare a brief oral summary of your poster and short answers to likely questions in advance.
• The presentation should cover the key points of the poster
so that the audience can understand the main findings. Further details of the work should be left for discussion after
the initial poster presentation.
• It is recommended that authors practice their poster presentation in front of colleagues before the meeting. Authors
should request feedback about the oral presentation as well
as poster content and layout.
Other suggestions
• You may wish to prepare reduced-size copies of the poster
(e.g., 8 1/2 × 11 sheets) to distribute to interested audience
members.
39. GUIDELINES FOR USE OF COMPUTER
PROJECTION
• A PC with audio playback capability and a projector will
be provided in each meeting room; all authors who plan to
use computer projection should load their presentations onto
this computer.
• Authors should bring computer presentations on a USB
drive to load onto the provided computer and should arrive
at the meeting rooms at least 30 minutes before the start of
their sessions.
• Assistance in loading presentations onto the computers
will be provided.
• Note that only PC format will be supported, so authors using Macs to prepare their presentations must save them so
that the projection works when the presentation is run from
the PC in the session room. Also, authors who plan to play
audio or video clips during their presentations should ensure
that their sound (or other) files are also saved on the USB
drive and uploaded to the PC in the session room. Presenters
should also check that the links to the sound (and other) files
in the presentation still work after everything has been loaded
onto the session room computer.
Using your own computer (only if you really need to!)
It is essential that each speaker who plans to use his/her
own laptop connect to the computer projection system in the
A/V preview room prior to session start time to verify that
the presentation will work properly. Technical assistance is
available in the A/V preview room at the meeting, but not in
session rooms. Presenters whose computers fail to project for
any reason will not be granted extra time.
General Guidelines
• Set your computer’s screen resolution to 1024×768 pixels
or to the resolution indicated by the A/V technical support.
If your presentation looks OK at that resolution, it will probably
look OK to your audience during your presentation.
• Remember that graphics can be animated or quickly toggled among several options: Comparisons between figures
may be made temporally rather than spatially.
• Animations often run more slowly on laptops connected
to computer video projectors than when not so connected.
Test the effectiveness of your animations before your assigned presentation time on a similar projection system
(e.g., in the A/V preview room). Avoid real-time calculations in favor of pre-calculation and saving of images.
• If you will use your own laptop instead of the computer provided, connect your laptop to the projector during the question/answer period of the previous speaker. It is good protocol
to initiate your slide show (e.g., run PowerPoint) immediately
once connected, so the audience doesn’t have to wait. If there
are any problems, the session chair will endeavor to assist
you, but it is your responsibility to ensure that the technical
details have been worked out ahead of time.
• During the presentation, run your laptop on main (AC)
power instead of battery power to ensure that it runs at full
CPU speed. This will also guarantee that your laptop does
not run out of power during your presentation.
SPECIFIC HARDWARE CONFIGURATIONS
Macintosh
• Older Macs require a special adapter to connect the video
output port to the standard male 15-pin DE-15 (VGA) connector.
Make sure you have one with you.
• Hook everything up before powering anything on. (Connect the computer to the RGB input on the projector).
• Turn the projector on and boot up the Macintosh. If this
doesn’t work immediately, make sure that your monitor
resolution is set to 1024×768 for an XGA projector
or at least 640×480 for an older VGA projector (1024×768
will almost always work). You should also make sure that
your monitor controls are set to mirroring. An older
PowerBook may not have video mirroring but instead a feature
called SimulScan, which is essentially the same.
• Depending upon the vintage of your Mac, you may have
to reboot once it is connected to the computer projector
or switcher. Hint: you can reboot while connected to the
computer projector in the A/V preview room in advance of
your presentation, then put your computer to sleep. Macs
thus booted will retain the memory of this connection when
awakened from sleep.
• Depending upon the vintage of your system software, you
may find that the default video mode is a side-by-side configuration of monitor windows (the test for this will be that
you see no menus or cursor on your desktop; the cursor will
slide from the projected image onto your laptop’s screen as
it is moved). Go to Control Panels, Monitors, configuration, and drag the larger window onto the smaller one. This
produces a mirror-image of the projected image on your
laptop’s screen.
• Also depending upon your system software, either the Control Panels will automatically detect the video projector’s
resolution and frame rate, or you will have to set it manually. If it is not set at a commensurable resolution, the projector may not show an image. Experiment ahead of time
with resolution and color depth settings in the A/V preview
room (please don’t waste valuable time adjusting the Control Panel settings during your allotted session time).
PC
• Make sure your computer has the standard female 15-pin
DE-15 video output connector. Some computers require an
adapter.
• Once your computer is physically connected, you will need
to toggle the video display on. Most PCs use either ALT-F5 or F6, as indicated by a small video monitor icon on
the appropriate key. Some systems require more elaborate
keystroke combinations to activate this feature. Verify your
laptop's compatibility with the projector in the A/V preview room. Likewise, you may have to set your laptop's
resolution and color depth via the monitor's Control Panel
to match those of the projector; verify these settings prior to your session.
Linux
• Most Linux laptops have a function key marked CRT/LCD
or with two symbols representing the computer versus the projector.
Often that key toggles the VGA output of the computer on and off,
but in some cases doing so will cause the computer to crash. One fix for this is to boot into the BIOS setup and
look for a field marked CRT/LCD (or similar). This field
can be set to Both, in which case the laptop's video signal
is always presented to the VGA output jack on the back
of the computer. Once connected to a computer projector,
the signal will appear automatically, without toggling the
function key. Once you get it working, don't touch it and it
should continue to work, even after a reboot.
40. DATES OF FUTURE ASA MEETINGS
For further information on any ASA meeting, or to obtain
instructions for the preparation and submission of meeting
abstracts, contact the Acoustical Society of America, 1305
Walt Whitman Road, Suite 300, Melville, NY 11747-4300;
Telephone: 516-576-2360; Fax: 631-923-2875; E-mail: asa@
aip.org
169th Meeting, Pittsburgh, Pennsylvania, 18–22 May 2015
170th Meeting, Jacksonville, Florida, 2–6 November 2015
171st Meeting, Salt Lake City, Utah, 23–27 May 2016
172nd Meeting, Honolulu, Hawaii, 28 November–2 December 2016
MONDAY MORNING, 27 OCTOBER 2014
LINCOLN, 8:25 A.M. TO 11:45 A.M.
1a MON. AM
Session 1aAB
Animal Bioacoustics: Topics in Animal Bioacoustics I
James A. Simmons, Chair
Neuroscience, Brown University, 185 Meeting St., Box GL-N, Providence, RI 02912
Chair’s Introduction—8:25
Contributed Papers
8:30
1aAB1. Spinner dolphin (Stenella longirostris Gray, 1828) acoustic parameters recorded in the Western South Atlantic Ocean. Juliana R. Moron, Artur Andriolo (Instituto de Ciências Biológicas, Universidade Federal de Juiz de Fora, Rua Batista de Oliveira 1110 apto 404 B, Juiz de Fora 36010520, Brazil, julianamoron@hotmail.com), and Marcos Rossi-Santos (Centro de Ciências Agrárias, Ambientais e Biológicas, Universidade Federal do Recôncavo da Bahia, Cruz das Almas, Brazil)
Spinner dolphin bioacoustics had previously been studied in the Western South Atlantic Ocean only in the Fernando de Noronha Archipelago region. Our study aimed to describe the acoustic parameters of this species recorded approximately 3500 km south of the Fernando de Noronha Archipelago. A one-element hydrophone was towed 250 m behind the vessel R/V Atlântico Sul over the continental shelf break. Continuous mono recordings were made by passing the hydrophone signal to a digital Fostex® FR-2 LE recorder at 96 kHz/24 bits. A group of approximately 400 dolphins was recorded on June 3, 2013, at 168.9 km from shore (27°24′29″S, 46°50′05″W). The wav files were analyzed through spectrograms generated by the software Raven Pro 1.4 (DFT 512 samples, 50% overlap, and a Hamming window of 1024 points). The preliminary results from 10 min of recording allowed the extraction of 693 whistles, which were classified by contour shape as upsweep (42%), chirp (17.3%), downsweep (14%), sinusoidal (10.5%), convex (5.9%), constant (5.4%), and concave (4.9%). Minimum frequencies ranged from 3.32 kHz to 23.30 kHz (mean = 10.88 kHz); maximum frequencies ranged from 6.61 kHz to 35.34 kHz (mean = 15.77 kHz); whistle duration ranged from 0.03 s to 2.58 s (mean = 0.68 s). These results are important for understanding populations and/or species distributed in different ocean basins.
8:45
1aAB2. A new method for detection of North Atlantic right whale upcalls. Mahdi Esfahanian, Hanqi Zhuang, and Nurgun Erdol (Comput. and Elec. Eng. and Comput. Sci., Florida Atlantic Univ., 777 Glades Rd., Bldg. EE 96, Rm. 409, Boca Raton, FL 33431, mesfahan@fau.edu)
A study of detecting North Atlantic Right Whale (NARW) up-calls has been conducted with measurements from passive acoustic monitoring devices. Denoising and normalization algorithms are applied to remove local variance and narrowband noise in order to isolate the NARW up-calls in spectrograms. The resulting spectrograms, after binarization, are treated with a region detection procedure, the Moore-Neighbor algorithm, to find continuous objects that are candidate up-call contours. After selected properties of each detected object are computed, they are compared with a pair of low and high empirical thresholds to estimate the probability that the detected object is an up-call; objects that are determined with certainty to be non-up-calls are discarded. The final stage in the proposed call detection method is to separate true up-calls from the remaining candidates with classifiers such as linear discriminant analysis (LDA), Naïve Bayes, and decision trees. Experimental results using the data set obtained by Cornell University show that the proposed method can achieve an accuracy of up to 96%.
9:00
1aAB3. Spatio-temporal distribution of beaked whales in southern California waters. Simone Baumann-Pickering, Jennifer S. Trickey (Scripps Inst. of Oceanogr., Univ. of California San Diego, 9500 Gilman Dr., La Jolla, CA 92093, sbaumann@ucsd.edu), Marie A. Roch (Dept. of Comput. Sci., San Diego State Univ., San Diego, CA), and Sean M. Wiggins (Scripps Inst. of Oceanogr., Univ. of California San Diego, La Jolla, CA)
Cuvier's beaked whales are the dominant beaked whales offshore of southern California. Their abundance, distribution, and seasonality are poorly understood. Insights into the spatio-temporal distribution of both Cuvier's beaked whales and a rare beaked whale with signal type BW43, likely Perrin's beaked whale, have been derived from long-term autonomous recordings of beaked whale echolocation clicks. Acoustic recordings have been collected at 18 sites offshore of southern California since 2006, resulting in a total of ~26 years of recordings. About 23,000 acoustic encounters with Cuvier's beaked whales were detected. In contrast, there were ~100 acoustic encounters with the BW43 signal type. Cuvier's beaked whales were predominantly detected at deeper, more southern, and farther offshore sites, and there appears to be a seasonal pattern to their presence, with a lower probability of detection during summer and early fall. The BW43 signal type had higher detection rates in the central basins, indicating a possible difference in habitat preference and niche separation between the two species. Further investigation is needed to reveal whether these distribution patterns are purely based on bathymetric preference, driven by water masses that determine prey species composition and distribution, or possibly shaped by anthropogenic activity.
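As an illustrative aside (not the authors' code), the spectrogram-based whistle analysis described in 1aAB1 can be sketched with scipy. The synthetic "whistle" below is a hypothetical stand-in for a real recording; the 512-sample Hamming window with 50% overlap follows the settings reported in the abstract, and the chirp endpoints are assumptions chosen near the reported mean minimum/maximum frequencies.

```python
# Hedged sketch: spectrogram analysis of a synthetic upsweep "whistle".
import numpy as np
from scipy import signal

fs = 96_000                     # sample rate reported in 1aAB1 (96 kHz)
t = np.arange(0, 0.68, 1 / fs)  # ~0.68 s, the reported mean whistle duration

# Hypothetical upsweep: linear chirp from ~10.9 kHz to ~15.8 kHz
whistle = signal.chirp(t, f0=10_900, f1=15_800, t1=t[-1])

# Spectrogram with a 512-point Hamming window and 50% overlap
f, tt, Sxx = signal.spectrogram(whistle, fs=fs, window="hamming",
                                nperseg=512, noverlap=256)

# Contour = peak frequency per time slice; for an upsweep it rises over time
contour = f[np.argmax(Sxx, axis=0)]
print(contour[0], contour[-1])
```

Contour-shape classification (upsweep, downsweep, constant, etc.) can then be performed on the extracted peak-frequency track.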
9:15
1aAB4. The acoustic characteristics of greater prairie-chicken vocalizations. Cara Whalen, Mary Bomberger Brown (School of Natural Resources,
Univ. of Nebraska - Lincoln, 3310 Holdrege St., Lincoln, NE 68583, carawhalen@gmail.com), JoAnn McGee (Developmental Auditory Physiol.
Lab., Boys Town National Res. Hospital, Omaha, NE), Larkin A. Powell,
Jennifer A. Smith (School of Natural Resources, Univ. of Nebraska Lincoln, Lincoln, NE), and Edward J. Walsh (Developmental Auditory
Physiol. Lab., Boys Town National Res. Hospital, Omaha, NE)
Male Greater Prairie-Chickens (Tympanuchus cupido pinnatus) congregate in groups known as “leks” each spring to perform vocal and visual displays to attract females. Four widely recognized vocalization types
produced by males occupying leks are referred to as “booms,” “cackles,”
“whines,” and “whoops.” As part of a larger effort to determine the influence of wind turbine farm noise on lek vocal behavior, we studied the
acoustic properties of vocalizations recorded between March and June in
2013 and 2014 at leks near Ainsworth, Nebraska. Although all four calls are
produced by males occupying leks, the boom is generally regarded as the
dominant call type associated with courtship behavior. Our findings suggest
that the bulk of the acoustic power carried by boom vocalizations is in a relatively narrow, low-frequency band, approximately 100 Hz wide at 20 dB
below the peak, centered on approximately 0.3 kHz. The boom
vocalization is harmonic in character, has a fundamental frequency of
approximately 0.30 ± 0.01 kHz, and lasts approximately 1.81 ± 0.18 s.
Understanding Greater Prairie-Chicken vocal attributes is an essential element in the effort to understand the influence of environmental sound, prominently including anthropogenic sources like wind turbine farms, on vocal
communication success.
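As a hedged illustration (not the authors' analysis), the two boom measurements quoted in 1aAB4, peak frequency and bandwidth at 20 dB below the peak, can be sketched from a power spectrum. The signal below is a synthetic 300-Hz tone with a weak second harmonic, a hypothetical stand-in for a recorded boom.

```python
# Hedged sketch: peak frequency and -20 dB bandwidth of a synthetic "boom".
import numpy as np

fs = 8_000
t = np.arange(0, 1.81, 1 / fs)   # ~1.81 s, the reported mean boom duration
f0 = 300.0                       # ~0.3 kHz fundamental, as reported

# Weak second harmonic (-26 dB), kept below the -20 dB measurement threshold
boom = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)

spec = np.abs(np.fft.rfft(boom * np.hanning(len(boom)))) ** 2
freqs = np.fft.rfftfreq(len(boom), 1 / fs)

peak = np.argmax(spec)
peak_freq = freqs[peak]          # should land near 300 Hz

# Bandwidth at -20 dB: span of frequencies within 20 dB of the peak power
above = spec >= spec[peak] / 100.0   # -20 dB == factor of 100 in power
band = freqs[above]
bandwidth = band.max() - band.min()
print(peak_freq, bandwidth)
```

On real recordings, the same measurement would be applied to averaged spectra of segmented boom calls rather than a single synthetic tone.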
9:30
1aAB5. Bioacoustics of Trachymyrmex fuscus, Trachymyrmex tucumanus, and Atta sexdens rubropilosa (Hymenoptera: Formicidae). Amanda A. Carlos, Francesca Barbero, Luca P. Casacci, Simona Bonelli (Life Sci. and System Biology, Univ. of Turin, Dipartimento di Biologia Animale e dell'Uomo, Via Accademia Albertina 13, Turin 10123, Italy, amandacarlos@yahoo.com.br), and Odair C. Bueno (Centro de Estudos de Insetos Sociais (CEIS), Universidade Estadual Paulista Júlio de Mesquita Filho (UNESP), Rio Claro, Brazil)
The capability to produce species-specific sounds is common among
ants. Ants of the genus Trachymyrmex occur in an intermediate phylogenetic position within the Attini tribe, between the leafcutters, such as Atta
sexdens rubropilosa, and more basal species. The study of stridulations
would provide important cues on the evolution of the tribe’s diverse biological aspects. Therefore, in the present study, we described the stridulation
signals produced by Trachymyrmex fuscus, Trachymyrmex tucumanus, and
A. sexdens rubropilosa workers. Ant workers were recorded, and their stridulatory organs were measured. The following parameters were analyzed:
chirp length [ms], inter-chirp (pause) [ms], cycle (chirp + inter-chirp) [ms],
cycle repetition rate [Hz], and the peak frequency [Hz], as well as the number of ridges on the pars stridens. During the inter-chirp, there is no measurable signal for A. sexdens rubropilosa, whereas for Trachymyrmex fuscus
and Trachymyrmex tucumanus, a low intensity signal was detected. In other
words, the plectrum and the pars stridens of A. sexdens rubropilosa have no
contact during the lowering of the gaster. Principal component analysis, to
which mainly the duration of chirps contributed, showed that stridulation is
an efficient tool for differentiating ant species, at least in the case of the Attini
tribe.
9:45
1aAB6. Robustness of perceptual features used for passive acoustic classification of cetaceans to the ocean environment. Carolyn Binder (Oceanogr. Dept., Dalhousie Univ., LSC Ocean Wing, 1355 Oxford St., PO Box
15000, Halifax, NS B3H 4R2, Canada, carolyn.binder@dal.ca) and Paul C.
Hines (Dept. of Elec. and Comput. Eng., Dalhousie Univ., Halifax, NS,
Canada)
Passive acoustic monitoring (PAM) is used to study cetaceans in their
habitats, which cover diverse underwater environments. It is well known
that properties of the ocean environment can be markedly different between
regions, which can result in distinct propagation characteristics. These can
in turn lead to differences in the time-frequency characteristics of a recorded
signal and may impact the accuracy of PAM systems. To develop an automatic PAM system capable of operating under numerous environmental
conditions, one must account for the impact of propagation conditions. A
prototype aural classifier developed at Defence R&D Canada has successfully been used for inter-species discrimination of cetaceans. The aural classifier achieves accurate results by using perceptual signal features that
model the features employed by the human auditory system. The current
work uses a combination of at-sea experiments and pulse propagation modeling to examine the robustness of the perceptual features with respect to
propagation effects. Preliminary results will be presented from bowhead and
humpback vocalizations that were transmitted over 1–20 km ranges during a
two-day sea trial in the Gulf of Mexico. Insight gained from experimental
results will be augmented with model results. [Work supported by the U.S.
Office of Naval Research.]
10:00–10:15 Break
10:15
1aAB7. Passive acoustic monitoring on the seasonal species composition
of cetaceans from a marine observatory. Tzu-Hao Lin, Hsin-Yi Yu (Inst.
of Ecology and Evolutionary Biology, National Taiwan Univ., No. 1, Sec.
4, Roosevelt Rd., Taipei 10617, Taiwan, schonkopf@gmail.com), Chi-Fang
Chen (Dept. of Eng. Sci. and Ocean Eng., National Taiwan Univ., Taipei,
Taiwan), and Lien-Siang Chou (Inst. of Ecology and Evolutionary Biology,
National Taiwan Univ., Taipei, Taiwan)
Information on the species diversity of cetaceans can help us to understand the community ecology of marine top predators. Passive acoustic
monitoring has been widely applied in cetacean research; however, species identification based on tonal sounds remains challenging. In order to
examine the seasonal pattern of species diversity, we applied an
automatic detection and classification algorithm to acoustic recordings collected from the marine cable-hosted observatory off northeastern Taiwan. Representative frequencies of cetacean tonal sounds were detected.
Statistical features were extracted based on the distribution of representative
frequency and were used to classify four cetacean groups. The correct classification rate was 72.2% based on the field recordings collected from
onboard surveys. Analysis on one-year recordings revealed that the species
diversity was highest in winter and spring. Short-finned pilot whales and
Risso's dolphins were the most common species; they mainly occurred in
winter and summer. False killer whales were mostly detected in winter and
spring. Spinner dolphins, spotted dolphins, and Fraser’s dolphins were
mainly detected in summer. Bottlenose dolphins were the least common
species. In the future, the biodiversity, species-specific habitat use, and
inter-specific interaction of cetaceans can be investigated through an underwater acoustic monitoring network.
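Purely as an illustrative aside (not the algorithm of 1aAB7), classifying cetacean groups from statistical features of detected tonal-sound frequencies might be sketched as below. The groups, feature set, synthetic data, and nearest-centroid classifier are all hypothetical assumptions for the sketch.

```python
# Hedged sketch: statistical features of detected tonal frequencies,
# classified with a simple nearest-centroid rule (stdlib only).
import random
import statistics

random.seed(0)

def features(freqs_hz):
    """Two simple statistics of one detection's frequency distribution."""
    return (statistics.mean(freqs_hz), statistics.stdev(freqs_hz))

def synth(center, spread, n=40):
    """Hypothetical detections: n lists of 30 tonal frequencies each."""
    return [[random.gauss(center, spread) for _ in range(30)] for _ in range(n)]

# Two hypothetical cetacean "groups" with different typical whistle bands
data = [(features(d), "group_a") for d in synth(9_000, 500)] + \
       [(features(d), "group_b") for d in synth(14_000, 700)]

def centroid(label):
    rows = [f for f, lab in data if lab == label]
    return tuple(statistics.mean(col) for col in zip(*rows))

cents = {lab: centroid(lab) for lab in ("group_a", "group_b")}

def classify(freqs_hz):
    f = features(freqs_hz)
    return min(cents, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(f, cents[lab])))

print(classify([random.gauss(9_000, 500) for _ in range(30)]))
```

A real system would use many more features and a trained classifier, and would report a correct-classification rate against field-verified recordings, as the abstract does.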
10:30
1aAB8. The effects of road noise on the calling behavior of Pacific chorus frogs. Danielle V. Nelson (Dept. of Forest Ecosystems and Society, Oregon State Univ., 321 Richardson Hall, Corvallis, OR 97331, danielle.nelson@oregonstate.edu), Holger Klinck (Fisheries
and Wildlife, Oregon State Univ., Newport, OR), and Tiffany S. Garcia
(Fisheries and Wildlife, Oregon State Univ., Corvallis, OR)
Fitness consequences of anthropogenic noise on organisms that have
chorus-dependent breeding requirements, such as frogs, are not well understood. While frogs were thought to have innate and fixed call structure, species-specific vocal plasticity has been observed in populations experiencing
high noise conditions. Adjustment to call structure, however, can have negative fitness implications in terms of energy expenditure and female choice.
The Pacific chorus frog (Pseudacris regilla), a common vocal species
broadly distributed throughout the Pacific Northwest, often breeds in waters
impacted by road noise. We compared Pacific chorus frog call structure
from breeding populations at 11 high- and low-traffic sites in the Willamette
Valley, Oregon. We used passive acoustic monitoring and directional
recordings to determine the mean dominant frequency, amplitude, and call rate
of breeding populations and individual frogs, and to quantify ambient road
noise levels. Preliminary results indicate that while individuals do not differ
in call rate or structure across noisy and quiet sites, high road noise levels
decrease the effective communication distance of both the chorus and the
individual. This research enhances our understanding of acoustic habitat in
the Willamette Valley and the impacts of anthropogenic noise on a native
amphibian species.
10:45
1aAB9. Inter-individual difference of one type of pulsed sounds produced by beluga whales (Delphinapterus leucas). Yuka Mishima (Tokyo Univ. of Marine Sci. and Technol., Konan 4-5-7, Minato-ku, Tokyo 108-8477, Japan, thank_you_for_email_5yuka@yahoo.co.jp), Tadamichi Morisaka (Tokai Univ. Inst. of Innovative Sci. and Technol., Shizuoka-shi, Japan), Miho Itoh (The Port of Nagoya Public Aquarium, Nagoya-shi, Japan), Ryota Suzuki, Kenji Okutsu (Yokohama Hakkeijima Sea Paradise, Yokohama-shi, Japan), Aiko Sakaguchi, and Yoshinori Miyamoto (Tokyo Univ. of Marine Sci. and Technol., Minato-ku, Japan)
Belugas often exchange one type of broadband pulsed sound (termed PS1 calls), which possibly functions as a contact call (Morisaka et al., 2013). Here we investigate how belugas embed their signature information into the PS1 calls. PS1 calls were recorded from each of five belugas, including both sexes and various ages, at the Port of Nagoya Public Aquarium using a broadband recording system when in isolation. Temporal and spectral acoustic parameters of PS1 calls were measured and compared among individuals. A Kruskal-Wallis test revealed that inter-pulse intervals (IPIs), the number of pulses, and pulse rates of PS1 calls had significant differences among individuals, but duration did not (χ² = 76.7, p < 0.0001; χ² = 26.2, p < 0.0001; χ² = 45.3, p < 0.0001; and χ² = 4.7, p = 0.316, respectively). The contours depicted by the IPIs as a function of pulse order were also individually different, and only the contours of a calf fluctuated over time. The four belugas other than the juvenile had individually distinctive power spectra. These results suggest that several acoustic parameters of PS1 calls may hold individual information. We also found PS1-like calls from other captive belugas (Yokohama Hakkeijima Sea Paradise), suggesting that the PS1 call is not specific to one captive population but is a basic call type for belugas.
11:00
1aAB10. Numerical study of biosonar beam forming in finless porpoise (Neophocaena asiaeorientalis). Chong Wei (College of Ocean & Earth Sci., Xiamen Univ., 1502 Spreckels St. Apt 302A, Honolulu, Hawaii 96822, weichong3310@foxmail.com), Zhitao Wang (Key Lab. of Aquatic Biodiversity and Conservation of the Chinese Acad. of Sci., Inst. of Hydrobiology of the Chinese Acad. of Sci., Wuhan, China), Zhongchang Song (College of Ocean & Earth Sci., Xiamen Univ., Xiamen, China), Whitlow Au (Hawaii Inst. of Marine Biology, Univ. of Hawaii at Manoa, Kaneohe, HI), Ding Wang (Key Lab. of Aquatic Biodiversity and Conservation of the Chinese Acad. of Sci., Inst. of Hydrobiology of the Chinese Acad. of Sci., Wuhan, China), and Yu Zhang (Key Lab. of Underwater Acoust. Commun. and Marine Information Technol. of the Ministry of Education, Xiamen Univ., Xiamen, China)
The finless porpoise (Neophocaena asiaeorientalis), which lives in the Yangtze River and in the adjoining Poyang and Dongting Lakes in China, is known to use narrowband signals for echolocation. In this study, the sound velocity and density of different tissues in the porpoise's head (including melon, muscle, bony structure, connective tissues, blubber, and mandibular fat) were obtained by measurement. The sound velocity and density were found to have a linear relationship with the Hounsfield units (HU) obtained from a CT scan. The acoustic properties of the porpoise's head were reconstructed from the HU distribution. Numerical simulations of the acoustic propagation through the finless porpoise's head were performed with a finite element approach. The beam formation was compared with those of the baiji, Indo-Pacific humpback dolphin, and bottlenose dolphin. The role of the different structures in the head, such as air sacs, melon, muscle, bony structure, connective tissues, blubber, and mandibular fat, in biosonar beam formation was investigated. The results might provide useful information for a better understanding of sound propagation in the finless porpoise's head.
11:15
1aAB11. Evidence for a possible functional significance of horseshoe bat biosonar dynamics. Rolf Müller, Anupam K. Gupta (Mech. Eng., Virginia Tech, 1075 Life Sci. Cir, Blacksburg, VA 24061, rolf.mueller@vt.edu), Uzair Gillani (Elec. and Comput. Eng., Virginia Tech, Blacksburg, VA), Yanqing Fu (Eng. Sci. and Mech., Virginia Tech, Blacksburg, VA), and Hongxiao Zhu (Dept. of Statistics, Virginia Tech, Blacksburg, VA)
The periphery of the biosonar system of horseshoe bats is characterized by conspicuous dynamics in which the shapes of the noseleaves (structures that surround the nostrils) and the outer ears (pinnae) undergo fast changes that can coincide with pulse emission and echo reception. These changes in the geometries of the sound-reflecting surfaces affect the device characteristics, e.g., as represented by beampatterns. Hence, these dynamics could give horseshoe bats an opportunity to view their environments through a set of different device characteristics. It is not clear at present whether horseshoe bats make use of this opportunity, but there is evidence from various sources, namely, anatomy, behavior, evolution, and information theory. Anatomical studies have shown the existence of specialized muscular actuation systems that are clearly directed toward geometrical changes. Behavioral observations indicate that these changes are linked to contexts where the bat is confronted with a novel or otherwise demanding situation. Evolutionary evidence comes from the occurrence of qualitatively similar ear deformation patterns in mustached bats (Pteronotus), which have independently evolved a biosonar for Doppler-shift detection. Finally, an information-theoretic analysis demonstrates that the capacity of the biosonar system for encoding sensory information is enhanced by these dynamic processes.
11:30
1aAB12. Analysis of some special buzz clicks. Odile Gerard (DGA, Ave. de la Tour Royale, Toulon 83000, France, odigea@gmail.com), Craig Carthel, and Stefano Coraluppi (Systems & Technol. Res., Woburn, MA)
Toothed whales are known to click regularly to find prey. Once a prey has been detected, the repetition rate of the clicks increases; these sequences are called buzzes. Previous work shows that the buzz click spectrum varies slowly from click to click for various species. This spectral similarity allows buzz clicks to be associated into sequences using multi-hypothesis tracking (MHT) algorithms; buzz classification then follows automatic click tracking. The use of MHT reveals that in some rare cases a variant of this property occurs, whereby sub-sequences of clicks exhibit slowly varying characteristics. In 2010 and 2011, the NATO Undersea Research Centre (NURC, now CMRE, the Centre for Maritime Research and Experimentation) conducted sea trials with the CPAM (Compact Passive Acoustic Monitoring), a volumetric towed array comprising four or six hydrophones. This configuration allows a rough estimate of the clicking animal's localization. Some buzzes with sub-sequences of slowly varying characteristics were recorded with the CPAM. Localization may help in understanding this new finding from a physiological point of view. The results of this analysis will be presented.
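As a hedged illustration of the statistical comparison described in 1aAB9 (not the study's data or code), a Kruskal-Wallis test of one acoustic parameter across individuals can be sketched with scipy. The inter-pulse-interval values below are synthetic assumptions standing in for the five belugas' measurements.

```python
# Hedged sketch: Kruskal-Wallis test of a parameter across five individuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical IPI samples (ms) for five belugas, with distinct typical values
ipis = [rng.normal(loc, 2.0, size=50) for loc in (20, 24, 28, 21, 30)]

h, p = stats.kruskal(*ipis)
print(f"chi2 = {h:.1f}, p = {p:.4g}")

# A very small p-value, as reported for IPIs in the abstract (p < 0.0001),
# indicates significant inter-individual differences in this parameter.
print(p < 0.0001)
```

The abstract runs the same test separately for each parameter (IPIs, number of pulses, pulse rate, duration); only duration fails to reach significance.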
MONDAY MORNING, 27 OCTOBER 2014
MARRIOTT 3/4, 7:55 A.M. TO 12:00 NOON
Session 1aNS
Noise, Physical Acoustics, Structural Acoustics and Vibration, and Engineering Acoustics: Metamaterials
for Noise Control I
Keith Attenborough, Cochair
DDEM, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom
Olga Umnova, Cochair
University of Salford, The Crescent, Salford M5 4WT, United Kingdom
Chair’s Introduction—7:55
Invited Papers
8:00
1aNS1. Recent results on sonic crystals for sound guiding and acoustic absorption. Jose Sanchez-Dehesa, Victor M. Garcia-Chocano, and Matthew D. Guild (Dept. of Electron. Eng., Universitat Politecnica de Valencia, Camino de vera s.n., Edificio 7F,
Valencia, Valencia ES-46022, Spain, jsdehesa@upv.es)
We report on different aspects of the behavior of sonic crystals with finite size. First, at wavelengths on the order of the lattice period
we have observed the excitation of deaf modes, i.e., modes with symmetry orthogonal to that of the exciting beam. Numerical simulations and experiments performed with samples made of three rows of cylindrical scatterers demonstrate the excitation of sound waves
guided along a direction perpendicular to the incident beam. Moreover, the wave propagation inside the sonic crystal is strongly dependent on the porosity of the building units. This finding can be used to enhance the absorbing properties of the crystal. Also, we will discuss
the properties of finite sonic crystals at low frequencies, where we have observed small-period oscillations superimposed on the well-known Fabry-Perot resonances appearing in the reflectance and transmittance spectra. It will be shown that the additional oscillations
are due to diffraction in combination with the excitation of the transverse modes associated with the finite size of the samples. [Work
supported by ONR.]
8:20
1aNS2. Acoustic metamaterial absorbers based on multi-scale sonic crystals. Matthew D. Guild, Victor M. Garcia-Chocano (Dept. of Electronics Eng., Universitat Politecnica de Valencia, Camino de vera s/n (Edificio 7F), Valencia 46022, Spain, mdguild@utexas.edu), Weiwei Kan (Dept. of Phys., Nanjing Univ., Nanjing, China), and Jose Sanchez-Dehesa (Dept. of Electronics Eng., Universitat Politecnica de Valencia, Valencia, Spain)
In this work, thermoviscous losses in single- and multi-scale sonic crystal arrangements are examined, enabling the fabrication and
characterization of acoustic metamaterial absorbers. It will be shown that higher filling fraction arrangements can be used to provide a
large enhancement in the complex mass density and loss factor, and can be combined with other sonic crystals of different sizes to create
multi-scale structures that further enhance these effects. To realize these enhanced properties, different sonic crystal lattices are examined and arranged as a layered structure or a slab with large embedded inclusions. The inclusions are made from either a single solid cylinder or symmetrically arranged clusters of cylinders, known as magic clusters, which behave as an effective fluid. Theoretical results
are obtained using a two-step homogenization process, by first homogenizing each sonic crystal to obtain the complex effective properties of each length scale, and then homogenizing the effective fluid structures to determine the properties of the ensemble structure. Experimental data from acoustic impedance tube measurements will be presented and shown to be in excellent agreement with the
expected results. [Work supported by the US ONR and Spanish MINECO.]
8:40
1aNS3. Quasi-flat acoustic absorber enhanced by metamaterials. Abdelhalim Azbaid El Ouahabi, Victor V. Krylov, and Daniel J. O'Boy (Dept. of Aeronautical and Automotive Eng., Loughborough Univ., Loughborough, Leicestershire LE11 3TU, United Kingdom, A.Azbaid-El-Ouahabi@lboro.ac.uk)
In this paper, the design of a new quasi-flat acoustic absorber (QFAA) enhanced by the presence of a graded metamaterial layer is
described, and the results of the experimental investigation into the reflection of sound from such an absorber are reported. The matching
metamaterial layer is formed by a quasi-periodic array of brass cylindrical tubes with the diameters gradually increasing from the external row of tubes facing the open air towards the internal row facing the absorbing layer made of a porous material. The QFAA is placed
in a wooden box with dimensions of 569 × 250 × 305 mm. All brass tubes are of the same length (305 mm) and fixed between the
opposite sides of the wooden box. Measurements of the sound reflection coefficients from the empty wooden box, from the box with an
inserted porous absorbing layer, and from the full QFAA containing both the porous absorbing layer and the array of brass tubes have
been carried out in an anechoic chamber at the frequency range of 500–3000 Hz. The results show that the presence of the metamaterial
layer brings a noticeable reduction in the sound reflection coefficients in comparison with the reflection from the porous layer alone.
9:00
1aNS4. The influence of thermal and viscous effects on the effective properties of an array of slits. John D. Smith (Physical Sci.,
DSTL, Porton Down, Salisbury SP4 0JQ, United Kingdom, jdsmith@dstl.gov.uk), Roy Sambles, Gareth P. Ward, and Alastair R. Murray (Dept. of Phys. and Astronomy, Univ. of Exeter, Exeter, United Kingdom)
A system consisting of an array of thin plates separated by air gaps is examined using the method of asymptotic homogenization.
The effective properties are compared with a finite element model and experimental results for the resonant transmission of both a single
slit and an array of slits. These results show a dramatic reduction in the frequency of resonant transmission when the slit is narrowed to
below around one percent of the wavelength due to viscous and thermal effects reducing the effective sound velocity through the slits.
These effects are still significant for slit widths substantially greater than the thickness of the boundary layer.
9:20
1aNS5. Atypical dynamic behavior of periodic frame structures with local resonance. Stephane Hans, Claude Boutin (LGCB /
LTDS, ENTPE / Universite de Lyon, rue Maurice Audin, Vaulx-en-Velin 69120, France, stephane.hans@entpe.fr), and Celine Chesnais
(IFSTTAR GER, Universite Paris-Est, Paris, France)
This work investigates the dynamic behavior of periodic unbraced frame structures made up of interconnected beams or plates. Such
structures can represent an idealization of numerous reticulated systems, such as the microstructure of foams, plants, bones, or sandwich panels. As beams are much stiffer in tension-compression than in bending, the propagation of waves with wavelengths much greater than
the cell size and the bending modes of the elements can occur in the same frequency range. Thus, frame structures can behave as metamaterials. Since the condition of scale separation is respected, the homogenization method of periodic discrete media is used to derive
the macroscopic behavior. The main advantages of the method are the analytical formulation and the possibility to study the behavior of
the elements at the local scale. This provides a clear understanding of the mechanisms governing the dynamics of the material. In the presence of local resonance, the form of the equations is unchanged but some macroscopic parameters depend on frequency. In particular, this applies to the mass, leading to a generalization of Newtonian mechanics. As a result, frequency bandgaps appear. In
that case, the same macroscopic modal shape is also associated with several resonant frequencies.
9:40
1aNS6. Design of sound absorbing metamaterials by periodically embedding three-dimensional resonant or non-resonant inclusions in rigidly backed porous plate. Jean-Philippe Groby (LAUM, UMR6613 CNRS, LAUM, UMR 6613 CNRS, AV. Olivier Messiaen, Le Mans F-72085, France, Jean-Philippe.Groby@univ-lemans.fr), Benoit Nennig (LISMMA, Supmeca, Saint Ouen, France),
Clement Lagarrigue, Bruno Brouard, Olivier Dazel (LAUM, UMR6613 CNRS, Le Mans, France), Olga Umnova (Acoust. Res. Ctr.,
Univ. of Salford, Salford, United Kingdom), and Vincent Tournat (LAUM, UMR6613 CNRS, Le Mans, France)
Air saturated porous materials, namely, foams and wools, are often used as sound absorbing materials. Nevertheless, they suffer
from a lack of absorption efficiency at low frequencies, which is inherent to their absorption mechanisms (viscous and thermal losses),
even when used as optimized multilayer or graded porous materials. In recent decades, several solutions have been proposed to overcome this problem. Among them, metaporous materials rely on exciting modes that trap energy between the periodic rigid inclusions embedded in the porous plate and the rigid backing, or in the inclusions themselves. The absorption coefficient of different foams is
enhanced both in the viscous and inertial regimes by periodically embedding 3D inclusions, possibly resonant, i.e., air filled Helmholtz
resonators. This enhancement is due to different mode excitation: a Helmholtz resonance in the viscous regime and a trap mode in the inertial regime. In particular, a large absorption coefficient is reached for wavelengths in air 27 times larger than the sample thickness. The absorption amplitude and bandwidth are then enlarged by removing porous material in front of the neck, which lowers the radiation impedance, and by adjusting the resonance frequencies of the Helmholtz resonator.
10:00–10:20 Break
10:20
1aNS7. Seismic metamaterials: Shielding and focusing surface elastic waves in structured soils. Sebastien R. Guenneau, Stefan
Enoch (Phys., Institut Fresnel, Ave. Escadrille Normandie Niemen, Marseille 13013, France, sebastien.guenneau@fresnel.fr), and
Stephane Brule (Menard Co., Nozay, France)
Phononic crystals and metamaterials are man-made structures (with periodic heterogeneities typically a few micrometers to centimeters) that can control sound in ways not found in nature. Whereas the properties of phononic crystals derive from the periodicity of
their structure, those of metamaterials arise from the collective effect of a large array of small resonators. These effects can be used to
manipulate acoustic waves in unconventional ways, realizing functions such as invisibility cloaking, subwavelength focusing, and
unconventional refraction phenomena (such as negative refractive index and phase velocity). Recent work has started to explore another
intriguing domain of application: using similar concepts to control the propagation of seismic waves within the surface of the Earth. Our
research group at the Aix-Marseille University and French National Center for Scientific Research (CNRS) has teamed up with civil
engineers at an industrial company, Menard, in Nozay, also in France, and carried out the largest-scale tests to date of phononic crystals.
Encouragingly, arrays of boreholes in soil a few centimeters to a few meters in diameter, hereafter called seismic metamaterials, can be used to deflect incoming acoustic waves at frequencies relevant to earthquake protection, or bring them to a focus. These
preliminary successes could one day translate into a way of mitigating the destructive effects of earthquakes.
10:40
1aNS8. Tunable resonator arrays—Transmission, near-field interactions, and effective property extraction. Dmitry Smirnov and
Olga Umnova (Acoust. Res. Ctr., Univ. of Salford, The Crescent, Salford, Greater Manchester M5 4WT, United Kingdom, d.smirnov@
edu.salford.ac.uk)
Periodic arrays of slotted cylinders have been studied with a focus on analytical and semi-analytical techniques, observing near-field
interactions and their influence on reflection and transmission of acoustic waves by the array. Relative orientation of the cylinders within
a unit cell has been shown to strongly affect the array behavior, facilitating tunable transmission gaps. An improved homogenization
method is proposed and used to determine effective properties of the array, allowing accurate and computationally efficient prediction of
reflection and transmission characteristics of any number of rows at arbitrary incidence.
11:00
1aNS9. Tunable cylinders for sound control in water. Andrew Norris and Alexey Titovich (Mech. and Aerosp. Eng., Rutgers Univ.,
98 Brett Rd., Piscataway, NJ 08854, norris@rutgers.edu)
Long wavelength effective medium properties are achieved using arrays of closely spaced tunable cylinders. Thin metal shells provide the starting point: for a given shell thickness h and radius a, the effective bulk modulus and density are both proportional to h/a.
Since the metal has large impedance relative to water it follows that there is a unique value of h/a at which the shell is effectively impedance matched to water. The effective sound speed cannot be matched by the thin shell alone (except for impractical metals like silver).
However, simultaneous impedance and speed matching can be obtained by adding an internal mass, e.g., an acrylic core in aluminum cylindrical tubes. By varying the shell thickness and the internal mass, a range of effective properties is achievable. Practical considerations such as shell thickness, internal mass material, and fabrication will be discussed. Arrays made of a small number of different tuned
shells will be described using numerical simulations: example applications include focusing, lensing, and wave steering. [Work supported by ONR.]
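The h/a scaling described above can be sketched with a crude static thin-ring estimate — an assumption for illustration, not the authors' shell model; the aluminum and water constants are nominal.

```python
import math

# Back-of-envelope check of the h/a scaling (thin-ring estimate, assumed):
#   effective density  rho_eff = 2 * rho_shell * h / a   (smeared shell mass)
#   effective modulus  K_eff   = E * h / (2 * a * (1 - nu**2))  (hoop stiffness)
# Both scale as h/a, so the speed sqrt(K/rho) does not depend on h/a, while
# the impedance sqrt(K*rho) can be tuned to water by choosing h/a.
E, NU, RHO_S = 70e9, 0.33, 2700.0        # aluminum (nominal values)
RHO_W, C_W = 1000.0, 1480.0              # water
Z_W = RHO_W * C_W

def props(h_over_a):
    rho = 2 * RHO_S * h_over_a
    K = E * h_over_a / (2 * (1 - NU**2))
    return math.sqrt(K / rho), math.sqrt(K * rho)   # (speed, impedance)

# The unique h/a that matches the water impedance:
h_over_a = Z_W / math.sqrt(E * RHO_S / (1 - NU**2))
c_eff, z_eff = props(h_over_a)
print(h_over_a, c_eff, z_eff / Z_W)
```

Under these assumptions the matched shell's effective speed still exceeds that of water, consistent with the abstract's point that an internal mass is needed to match the sound speed as well.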
11:20
1aNS10. Sound waves over periodic and aperiodic arrays of cylinders on ground surfaces. Shahram Taherzadeh, Ho-Chul Shin,
and Keith Attenborough (Eng. & Innovation, The Open Univ., Walton Hall, Milton Keynes MK7 6AA, United Kingdom, shahram.taherzadeh@open.ac.uk)
Propagation of audio frequency sound waves over periodic arrays of cylinders placed on acoustically hard and soft surfaces has been
studied through laboratory measurements and predictions using a point source. It is found that perturbing the cylinder positions away from a regular array results in a higher insertion loss than either completely periodic or random arrangements.
11:40
1aNS11. Ground effect due to rough and resonant surfaces. Keith Attenborough (Eng. and Innovation, Open Univ., 18 Milebush,
Linslade, Leighton Buzzard, Bedfordshire LU7 2UB, United Kingdom, Keith.Attenborough@open.ac.uk), Ho-Chul Shin, and Shahram
Taherzadeh (Eng. and Innovation, Open Univ., Milton Keynes, United Kingdom)
Particularly if the ground surface between noise source and receiver would otherwise be smooth and acoustically hard, structured
low-rise ground roughness can be used as an alternative to conventional noise barriers. The techniques of periodic-spacing, absorptive
covering, and local resonance can be used, as when broadening metamaterial stop bands, to achieve a broadband ground effect. This has
been demonstrated both numerically and through laboratory experiments. Computations have employed multiple scattering theory, the
Finite Element Method and the Boundary Element Method. The experiments have involved measurements over cylindrical and rectangular roughness elements and over their resonant counterparts created by incorporating slit-like openings. Resonant elements with slit
openings have been found numerically and experimentally to introduce an additional destructive interference below the first roughness-induced one, thereby mitigating the adverse effects of the low-frequency surface waves generated by the presence of roughness
elements. A nested configuration of slotted hollow roughness elements is predicted to produce multiple resonances and this idea has
been validated through laboratory experiments.
MONDAY MORNING, 27 OCTOBER 2014
INDIANA C/D, 8:15 A.M. TO 11:45 A.M.
Session 1aPA
Physical Acoustics and Noise: Jet Noise Measurements and Analyses I
Richard L. McKinley, Cochair
Battlespace Acoustics, Air Force Research Laboratory, 2610 Seventh Street, Wright-Patterson AFB, OH 45433-7901
Kent L. Gee, Cochair
Brigham Young University, N243 ESC, Provo, UT 84602
Alan T. Wall, Cochair
Battlespace Acoustics Branch, Air Force Research Laboratory, Bldg. 441, Wright-Patterson AFB, OH 45433
Chair’s Introduction—8:15
Invited Papers
8:20
1aPA1. F-35A and F-35B aircraft ground run-up acoustic emissions. Michael M. James, Micah Downing, Alexandria R. Salton
(Blue Ridge Res. and Consulting, 29 N Market St., Ste. 700, Asheville, NC 28801, michael.james@blueridgeresearch.com), Kent L.
Gee, Tracianne B. Neilsen (Phys. and Astronomy, Brigham Young Univ., Provo, UT), Richard L. McKinley, Alan T. Wall, and Hilary
L. Gallagher (Air Force Res. Lab., Dayton, OH)
A multi-organizational effort led by the Air Force Research Laboratory conducted acoustic emissions measurements on the F-35A
and F-35B aircraft at Edwards Air Force Base, California, in September 2013. These measurements followed American National Standards Institute/Acoustical Society of America S12.75-2012 to collect noise data for community noise models and noise exposures to aircraft personnel. In total, over 200 unique locations were measured with over 300 high-fidelity microphones. Multiple microphone arrays
were deployed in three orientations: circular arcs, linear offsets from the jet-axis centerline, and linear offsets from the jet shear layer.
The microphone arrays ranged from 10 ft outside the shear layer to 4000 ft from the aircraft, with angular positions ranging from 0° (aircraft nose) to 160° (edge of the exhaust flow field). A description of the ground run-up acoustic measurements, the data processing, and the resultant data set is provided.
8:40
1aPA2. Measurement of acoustic emissions from F-35B vertical landing operations. Micah Downing, Michael James (Blue Ridge
Res. and Consulting, 29 N. Market St., Ste. 700, Asheville, NC 28801, micah.downing@blueridgeresearch.com), Kent Gee, Brent
Reichman (Brigham Young Univ., Provo, UT), Richard McKinley (Air Force Res. Lab., Wright-Patterson AFB, OH), and Allan Aubert
(Naval Air Warfare Ctr., Patuxent River, MD)
A multi-organizational effort led by the Air Force Research Laboratory conducted acoustic emissions measurements from vertical
landing operations of the F-35B aircraft at Marine Corps Air Station Yuma, Arizona, in September 2013. These measurements followed
American National Standards Institute/Acoustical Society of America S12.75-2012 to collect noise data from vertical landing operations for community noise models and noise exposures to aircraft personnel. Three circular arcs and two vertical microphone arrays were deployed for these measurements. The circular microphone arrays ranged from 250 ft to 1000 ft from the touchdown point. A description of the vertical landing acoustic measurements, data processing, preliminary data analysis, the resultant dataset, and a summary of results will be provided.
9:00
1aPA3. Acoustic emissions from flyover measurements of F-35A and F-35B aircraft. Richard L. McKinley, Alan T. Wall, Hilary L.
Gallagher (Battlespace Acoust. Branch, Air Force Res. Lab., 711 HPW/RHCB, 2610 Seventh St., Bldg 441, Wright-Patterson AFB, OH,
richard.mckinley.1@us.af.mil), Christopher M. Hobbs, Juliet A. Page, and Joseph J. Czech (Wyle Labs., Inc., Arlington, VA)
Acoustic emissions of F-35A and F-35B aircraft flyovers were measured in September 2013, in a multi-organizational effort led by
the Air Force Research Laboratory. These measurements followed American National Standards Institute/Acoustical Society of America
S12.75-2012 guidance on aircraft flyover noise measurements. Measurements were made from locations directly under the flight path to
12,000 ft away with microphones on the ground, 5 ft, and 30 ft high. Vertical microphone arrays suspended from cranes measured noise
from on the ground up to 300 ft above the ground. A linear ground-based microphone array measured noise directly along the flight
path. In total, data were collected at more than 100 unique locations. Measurements were repeated six times for each flyover condition.
Preliminary results are presented to demonstrate the repeatability of noise data over measurement repetitions, assess data quality, and
quantify community noise exposure models.
9:20
1aPA4. Three-stream jet noise measurements and predictions. Brenda S. Henderson (Acoust., NASA, MS 54-3, 21000 Brookpark
Rd., Cleveland, OH 44135, brenda.s.henderson@nasa.gov) and Stewart J. Leib (Ohio Aerosp. Inst., Cleveland, OH)
An experimental and numerical investigation of the noise produced by high-subsonic three-stream jets was conducted. The exhaust
system consisted of externally mixed-convergent nozzles and an external plug. Bypass- and tertiary-to-core area ratios between 1 and
1.75, and 0.4 and 1.0, respectively, were studied. Axisymmetric and offset tertiary nozzles were investigated for heated and unheated
conditions. For axisymmetric configurations, the addition of the third stream was found to reduce mid- and high-frequency acoustic levels in the peak-jet-noise direction, with greater reductions at the lower bypass-to-core area ratios. The addition of the third stream also
decreased peak acoustic levels in the peak-jet-noise direction for intermediate bypass-to-core area ratios. For the offset configurations,
an s-duct was found to increase acoustic levels relative to those of the equivalent axisymmetric three-stream jet, while half-duct configurations produced acoustic levels similar to those for the axisymmetric jet at azimuthal observation locations of interest. Comparisons of
noise predictions with acoustic data are presented for selected unheated configurations. The predictions are based on an acoustic analogy
approach with mean flow interaction effects accounted for using a Green’s function, computed in terms of its coupled azimuthal modes,
and a source model previously used for round and rectangular jets.
9:40
1aPA5. Acoustic interaction of turbofan exhaust with deflected control surface for blended wing body airplane. Dimitri Papamoschou (Mech. and Aerosp. Eng., Univ. of California, Irvine, 4200 Eng. Gateway, Irvine, CA 92697-3975, dpapamos@uci.edu) and Salvador Mayoral (Mech. and Aerosp. Eng., Univ. of California, Irvine, Irvine, CA)
Small-scale experiments simulated the elevon-induced jet scrubbing noise of the Blended-Wing-Body platform with a bypass ratio
ten turbofan nozzle installed above the wing. The elevon chord length at the interaction zone was similar to the exit fan diameter of the
nozzle. The study encompassed variable nozzle position, variable elevon deflection, removable inboard fins, and two types of nozzles—
plain and chevron. Far-field microphone surveys were conducted underneath the wing. The interaction between the jet and the elevon
produces excess noise that intensifies with increasing elevon deflection. When the elevon trailing edge is near the edge of the jet, excess
noise is manifested as a low-frequency bump on the sound pressure level spectrum. An empirical model for this excess noise is presented. The interaction noise becomes severe, and elevates the entire spectrum, when the elevon intrudes significantly into the jet flow.
The increase in effective perceived noise level (EPNL) falls on well-defined trends when correlated versus the penetration of the elevon
trailing edge into the flow field of the isolated jet. The cumulative takeoff EPNL can increase by as much as 19 dB, underscoring the
potentially detrimental effects of jet-elevon interaction on noise compliance.
10:00–10:20 Break
10:20
1aPA6. Comparison of upside-down microphone with flush mounted microphone configuration. Per Rasmussen (G.R.A.S. Sound
& Vib. A/S, Skovlytoften 33, Holte 2840, Denmark, pr@gras.dk)
Measurement of fly-over aircraft noise is often performed using microphones mounted in an upside-down configuration, with the microphone placed 7 mm above a hard reflecting surface. This method assumes that most of the sound arrives from behind the microphone within an angle of ±60 degrees. The same microphone configuration is proposed for installed and uninstalled jet-engine tests, in which case, however, the incidence angle at the microphone may be in the range of 60–85 degrees. The response of the upside-down microphone configuration is compared with flush-mounted microphones as a reference. The influence of microphone diameter (ranging from 1/8 in. to 1/2 in.) is compared in the different configurations, and the effect of windscreens is investigated.
10:40
1aPA7. Active control of noise from hot, supersonic turbulent jets. Tim Colonius, Aaron Towne (Mech. Eng., Caltech, 1200 E. California Blvd., Pasadena, CA 91125, colonius@caltech.edu), Robert H. Schlinker, Ramons A. Reba, and Dan Shannon (Thermal and Fluid
Sci. Dept., United Technologies Res. Ctr., East Hartford, CT)
We report on an experimental and reduced-order modeling study aimed at reducing mixing noise in hot supersonic jets relevant to
military aircraft. A spinning valve is used to modulate four injection nozzles near the main jet nozzle lip over a range of frequencies and
mass flow rates. Diagnostics include near-, mid-, and far-field microphone arrays aimed at measuring the effect of actuation on the near-field turbulent wavepacket structures and their correlation with mixing noise. The actuators provide more than 4 dB noise reduction at
peak frequencies in the aft arc, and up to 2 dB reduction in OASPL. Experiments are performed to contrast the performance of steady
and unsteady blowing with different amplitudes. The results to date suggest that the noise reduction is primarily associated with attenuated wave packet activity associated with the rapidly thickened shear layers that occur with both steady and unsteady blowing. Mean
flow surveys are also performed and serve as inputs to reduced-order models for the wave packets based on parabolized stability equations. These models are in turn used to corroborate the experimental evidence suggesting mechanisms of noise suppression in the actuated flow.
11:00
1aPA8. Efficient jet noise models using the one-way Euler equations.
Aaron Towne and Tim Colonius (Dept. of Mech. and Civil Eng., California
Inst. of Technol., 1200 E California Blvd., MC 107-81, Pasadena, CA
91125, atowne@caltech.edu)
Experimental and numerical investigations have correlated large-scale
coherent structures in turbulent jets with acoustic radiation to downstream
angles, where sound is most intense. These structures can be modeled as linear instability modes of the turbulent mean flow. The parabolized stability
equations have been successfully used to estimate the near-field evolution of
these modes, but are unable to properly capture the acoustic field. We have
recently developed an efficient method for calculating these linear modes
that properly captures the acoustic field. The linearized Euler equations are
modified such that all upstream propagating acoustic modes are removed
from the operator. The resulting equations, called one-way Euler equations,
can be stably and efficiently solved in the frequency domain as a spatial initial value problem in which initial perturbations are specified at the flow
inlet and propagated downstream by integrating the equations. We demonstrate the accuracy and efficiency of the method by using it to model sound
generation and propagation in jets. The results are compared to accurate
large-eddy-simulation data for both subsonic and supersonic jets.
11:15
1aPA9. A new method of estimating acoustic intensity applied to the
sound field near a military jet aircraft. Trevor A. Stout, Kent L. Gee, Tracianne B. Neilsen, Derek C. Thomas, Benjamin Y. Christensen (Phys. and
Astronomy, Brigham Young Univ., 688 north 500 East, Provo, UT 84606,
tstout@byu.edu), and Michael M. James (Blue Ridge Res. and Consulting
LLC, Asheville, NC)
Intensity probes are traditionally made up of closely spaced microphones, with the finite-difference method used to approximate acoustic
intensity. This approximation is not reliable approaching the Nyquist frequency limit determined by the microphone spacing. However, the new phase
and amplitude gradient estimation (PAGE) method allows for accurate intensity
approximation far above this limit. The PAGE method is applied to measurements from a three-dimensional intensity probe, which took data to the
sideline and aft of a tethered F-22A Raptor. It is shown that the PAGE
method produces physically meaningful intensity approximations for frequencies up to about 6 kHz, while the finite-difference method is only reliable up to about 2 kHz. [Work supported by ONR.]
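The spacing-induced limitation referred to above can be illustrated with the textbook plane-wave bias of a two-microphone finite-difference intensity estimate; the 25 mm spacing below is an assumed, illustrative value, not the probe used in the study.

```python
import math

# Standard plane-wave bias of the two-microphone finite-difference intensity
# estimate (a textbook result, not the PAGE formulation): the estimate is low
# by a factor sin(k*d)/(k*d), where d is the microphone spacing.
C = 343.0   # nominal speed of sound, m/s

def fd_bias(f, d):
    k = 2 * math.pi * f / C
    return math.sin(k * d) / (k * d)

# With an assumed 25 mm spacing the estimate is still accurate at 1 kHz but
# has collapsed well before the ~6.9 kHz spatial Nyquist frequency c/(2*d),
# which is the kind of gap a correction such as PAGE addresses.
d = 0.025
print(fd_bias(1000.0, d), fd_bias(5000.0, d))
```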
11:30
1aPA10. Three transformations of a crackling jet noise waveform and
their potential implications for quantifying the “crackle” percept. S.
Hales Swift (School of Aeronautics and Astronautics, Purdue Univ., 2286
Yeager Rd., West Lafayette, IN 47906, hales.swift@gmail.com), Kent L.
Gee, and Tracianne B. Neilsen (Dept. of Phys. and Astronomy, Brigham
Young Univ., Provo, UT)
In the 1975 paper by Ffowcs-Williams et al. on jet “crackle,” there are
several potentially competing descriptors—including a qualitative description of the sound quality or percept, a statistical measure, and commentary
on the relation of the presence of shocks to the sound’s quality. These
descriptors have led to disparate conclusions about what constitutes a
crackling jet, waveform, or sound quality. This presentation considers three
modifications of a jet noise waveform that exhibits a crackling sound quality
and initially satisfies all three definitions. These modifications alter the statistical distributions of primarily the pressure waveform or its first time difference in order to demonstrate how these modifications do or do not
correspond to changes in the sound quality of the waveform. The result,
although preliminary, demonstrates that the crackle percept is tied to the statistics of the pressure difference waveform instead of the pressure waveform
itself.
MONDAY MORNING, 27 OCTOBER 2014
MARRIOTT 5, 9:30 A.M. TO 12:00 NOON
Session 1aSC
Speech Communication: Speech Processing and Technology (Poster Session)
Michael Kiefte, Chair
Human Communication Disorders, Dalhousie University, 1256 Barrington St., Halifax, NS B3J 1Y6, Canada
All posters will be on display from 9:30 a.m. to 12:00 noon. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 9:30 a.m. to 10:45 a.m. and contributors of even-numbered papers will be at their
posters from 10:45 a.m. to 12:00 noon.
Contributed Papers
1aSC1. Locus equations estimated from a corpus of running speech.
Michael Kiefte (Human Commun. Disord., Dalhousie Univ., 1256 Barrington St., Halifax, NS B3J 1Y6, Canada, mkiefte@dal.ca) and Terrance M.
Nearey (Linguist, Univ. of AB, Edmonton, AB, Canada)
Locus equations, or the linear relationship between onset and vowel second-formant frequency F2 in terms of slope and y-intercept, have been presented as possible invariant correlates to consonant place of articulation
[e.g., Sussman et al. (1998). Behav. Brain Sci. 21, 241–299]. In the current
study, formant measurements were extracted from both stressed and
unstressed vowels taken from a database of spontaneous and read speech.
Locus equations were estimated for several places of articulation of the preceding consonant. In addition, optimal time frames for estimating locus
equations are determined with reference to automatic classification of consonant place of articulation as well as vowel identification. Formant frequencies are first measured at multiple time frames—both before and after
voicing onset in the case of voiceless plosives—to find the pair of time
frames that best estimates place of articulation via discriminant analysis and
other classification methods. In addition, locus-equation slopes are compared between stressed and unstressed vowels as well as between spontaneous and read speech samples. In addition, the role of total vowel duration
across these contexts is described. The evaluation of several strategies for
optimizing the automatic extraction of formant frequencies from running
speech are also reported. [Work supported by SSHRC.]
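A locus equation is simply a linear regression of onset F2 against vowel-target F2; a minimal sketch on synthetic tokens (the slope and intercept values are hypothetical, not corpus estimates):

```python
import numpy as np

# Minimal locus-equation illustration: regress F2 at voicing onset against F2
# at the vowel midpoint across tokens of one consonant place, then read off
# the slope and y-intercept.  Token values are synthetic, not corpus data.
rng = np.random.default_rng(0)
f2_mid = rng.uniform(900.0, 2300.0, size=50)         # vowel-target F2 (Hz)
true_slope, true_icept = 0.6, 700.0                   # assumed, illustrative
f2_onset = true_slope * f2_mid + true_icept + rng.normal(0, 30.0, size=50)

slope, icept = np.polyfit(f2_mid, f2_onset, 1)        # locus-equation fit
print(round(slope, 2), round(icept, 1))
```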
1aSC2. Formant trajectory analysis using dynamic time warping: Preliminary results. Kirsten T. Regier (Linguist, Indiana Univ., 3201 W
Woodbridge Dr., Muncie, IN 47304, krtodt@indiana.edu)
In English, there are at least two mechanisms that affect vowel duration—vowel identity and postvocalic consonant voicing. Previous studies
have shown that these two mechanisms have independent effects on vowel
duration (Port 1981, Todt 2010). This study presents preliminary results on
the use of dynamic time warping to distinguish between the effects of vowel
identity and postvocalic consonant voicing on the formant trajectories of
English front vowels. Using PraatR (Albin 2014), formant trajectories are
extracted from sound files in Praat and imported into R, where the dynamic
time warping analysis is conducted using the dtw package (Giorgino 2009).
Albin, A. L. (2014). PraatR: An architecture for controlling the phonetics
software “Praat” with the R programming language. JASA 135, 2198. Giorgino, T. (2009). “Computing and Visualizing Dynamic Time Warping Alignments in R: The dtw Package,” J. Stat. Software, 31(7), pp. 1–24. Port, R. F.
(1981). Linguistic timing factors in combination. JASA 69(1), 262–274. R
Core Team (2014). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Todt, K. R.
(2010). The production of English front vowels by Spanish speakers: A
study of vowel duration based on vowel tenseness and consonant voicing,
JASA 128, 2489.
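The alignment step above uses the R dtw package (Giorgino 2009); the core recursion it implements can be sketched as follows, with hypothetical formant values:

```python
import numpy as np

# Minimal dynamic-time-warping distance: the standard dynamic-programming
# recursion for aligning two trajectories of unequal duration (a sketch of
# the idea, not the R "dtw" package itself).
def dtw_distance(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A trajectory aligns closely with a time-stretched copy of itself (as when
# a vowel is lengthened) but poorly with a different trajectory.
t = np.linspace(0.0, 1.0, 20)
f2 = 2000.0 - 600.0 * t                    # falling F2 (hypothetical values)
f2_slow = np.interp(np.linspace(0.0, 1.0, 35), t, f2)
print(dtw_distance(f2, f2_slow), dtw_distance(f2, f2[::-1]))
```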
1aSC3. A “pivot” model for extracting formant measurements based on
vowel trajectory dynamics. Aaron L. Albin and Wil A. Rankinen (Dept. of
Linguist, Indiana Univ., Memorial Hall 322, 1021 E 3rd St., Bloomington,
IN 47405-7005, aaalbin@indiana.edu)
Formant measurements are commonly extracted at fixed fractions across
a vowel’s duration (e.g., the 1/2 point for a monophthong and the 1/3 and 2/3 points for a diphthong). This approach tacitly relies on the convenience
assumption that a speaker always maximally approximates the intended
acoustic target at roughly the same point across a vowel’s duration. The
present paper proposes an alternate method whereby every formant point
sampled within a vowel is considered as a possible "pivot" (i.e., turning
point), with monophthongs modeled as having one pivot and diphthongs
modeled as having two pivots. The optimal pivot for the vowel is then determined by fitting regression lines to the formant trajectory and comparing the
goodness-of-fit of these lines to the raw formant data. When applied to a
corpus of an American English dialect, the resulting measurements were
found to be significantly correlated with previous methods. This suggests
that the aforementioned convenience assumption is unnecessary and that the
proposed model, which is more faithful to our understanding of articulatory
dynamics, is a viable alternative. Moreover, rather than being assumed a priori, the location of the measurement can be treated as an empirical question
in its own right.
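The pivot search can be sketched as an exhaustive piecewise-linear fit; this follows the abstract only loosely (the exact goodness-of-fit criterion and the two-pivot diphthong case are omitted), and the track values are synthetic:

```python
import numpy as np

# Single-pivot sketch for a monophthong: try every interior sample as the
# turning point, fit one regression line to each side, and keep the pivot
# with the smallest total squared error.
def best_pivot(t, f):
    def sse(ts, fs):
        coef = np.polyfit(ts, fs, 1)
        return float(np.sum((np.polyval(coef, ts) - fs) ** 2))
    errors = [sse(t[:k + 1], f[:k + 1]) + sse(t[k:], f[k:])
              for k in range(1, len(t) - 1)]
    return 1 + int(np.argmin(errors))

# Synthetic F2 track that bends at sample 12 (hypothetical numbers):
t = np.arange(30, dtype=float)
f = np.where(t <= 12, 1500.0 + 40.0 * t, 1980.0 - 25.0 * (t - 12))
rng = np.random.default_rng(1)
f = f + rng.normal(0, 5.0, size=30)
print(best_pivot(t, f))
```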
1aSC4. Exploiting second-order statistics improves statistical learning
of vowels. Fernando Llanos (School of Lang. and Cultures, Purdue Univ.,
220 FERRY ST APT 6, Lafayette, IN 45901, fllanos@purdue.edu), Yue
Jiang, and Keith R. Kluender (Dept. of Speech, Lang. and Hearing Sci., Purdue Univ., West Lafayette, IN)
Unsupervised clustering algorithms were used to evaluate three models
of statistical learning of minimal contrasts between English vowel pairs.
The first two models employed only first-order statistics with assumptions
of uniform [M1] or Gaussian [M2] distributions of vowels in an F1-F2
space. The third model [M3] employed second-order statistics by encoding
covariance between F1 and F2. Acoustic measures of F1/F2 frequencies for
12 vowels spoken by 139 men, women, and children (Hillenbrand et al.
1995) were used as input to the models. Effectiveness of each model was
tested for each minimal-pair contrast across 100 simulations. Each
simulation consisted of two centroids that adjusted on a trial-by-trial basis
as 1000 F1/F2 pairs were input to the models. With addition of each pair,
centroids were reallocated by a k-means algorithm, an unsupervised clustering algorithm that provides an optimal partition of the space into uniformly sized convex cells. The first-order Gaussian model [M2] performed better than the uniform model [M1] for six of seven minimal pairs. The second-order model [M3] was significantly superior to both first-order models
for every pair. Results have implications for optimal perceptual learning of
phonetic differences in ways that respect lawful covariance across vocal
tract lengths that vary across talkers.
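The trial-by-trial centroid adjustment described above can be sketched as a one-pass online k-means update; the two Gaussian "vowel" clusters below are synthetic stand-ins, not the Hillenbrand et al. (1995) measurements used in the study.

```python
import numpy as np

# Online two-centroid k-means sketch: each new F1/F2 token moves its nearest
# centroid a small step toward itself (first-order statistics only; the
# study's second-order model additionally encodes F1-F2 covariance).
rng = np.random.default_rng(2)
mean_a, mean_b = np.array([500.0, 1800.0]), np.array([700.0, 1200.0])
tokens = np.vstack([rng.normal(mean_a, 60.0, size=(500, 2)),
                    rng.normal(mean_b, 60.0, size=(500, 2))])
rng.shuffle(tokens)

centroids = np.array([[400.0, 1600.0], [800.0, 1400.0]])
for x in tokens:                                   # one pass, online update
    i = int(np.argmin(np.sum((centroids - x) ** 2, axis=1)))
    centroids[i] += 0.05 * (x - centroids[i])      # learning rate 0.05
print(np.round(centroids))
```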
1aSC5. Analysis of acoustic to articulatory speech inversion for natural
speech. Ganesh Sivaraman (Elec. & Comput. Eng., Univ. of Maryland College Park, 7704 Adelphi Rd., Apt 11, Hyattsville, MD 20783, ganesa90@
umd.edu), Carol Espy-Wilson (Elec. & Comput. Eng., Univ. of Maryland
College Park, College Park, MD), Vikramjit Mitra (SRI Int., Menlo Park,
CA), Hosung Nam (Korea Univ., Seoul, South Korea), and Elliot Saltzman
(Physical Therapy & Athletic Training, Boston Univ., New Haven,
Connecticut)
Speech inversion is a technique to estimate vocal tract configurations
from speech acoustics. We constructed two such systems using feedforward
neural networks. One was trained using natural speech data from the XRMB
database and the second using synthetic data generated by the Haskins Laboratories TADA model that approximated the XRMB data. XRMB pellet
trajectories were first converted into vocal tract constriction variables (TVs),
providing a relative measure of constriction kinematics (location and
degree), and synthetic TV data were obtained directly using TADA. The natural and synthetic speech inversion systems were trained as TV estimators
using these respective sets of acoustic and TV data. TV-estimators were first
tested using previously collected acoustic data on the utterance “perfect
memory” spoken at slow, normal, and fast rates. The TV estimator trained
on XRMB data (but not on TADA data) was able to recover the tongue tip
gesture for /t/ in the fast utterance despite the gesture occurring partly during
the acoustic silence of the closure. Further, the XRMB system (but not the
TADA system) could distinguish between bunched and retroflexed /r/.
Finally, we compared the performance of the XRMB system with a set of
independently trained speaker-dependent systems (using the XRMB database) to understand the role of speaker-specific differences in the partitioning of variability across acoustic and articulatory spaces.
1aSC6. Testing AutoTrace: A machine-learning approach to automated
tongue contour data extraction. Gustave V. Hahn-Powell (Linguist, Univ.
of Arizona, 2850 N Alvernon Way, Apt 17, Tucson, AZ 85712, hahnpowell@email.arizona.edu) and Diana Archangeli (Linguist, Univ. of Hong
Kong, Tucson, Arizona)
While ultrasound provides a remarkable tool for tracking the tongue’s
movements during speech, it has yet to emerge as the powerful research tool
it could be. A major roadblock is that the means of appropriately labeling
images is a laborious, time-intensive undertaking. In earlier work, Fasel and
Berry (2010) introduced a "translational" deep belief network (tDBN)
approach to automated labeling of ultrasound images of the tongue, and
tested it against a single-speaker set of 3209 images. This study tests the
same methodology against a much larger data set (about 40,000 images),
using data collected for different studies with multiple speakers and multiple
languages. Retraining a “generic” network with a small set of the most erroneously labeled images from language-specific development sets resulted in
an almost three-fold increase in precision in the three test cases examined.
1aSC7. Usability of SpeechMark® landmark analysis system for teaching speech acoustics. Marisha Speights and Suzanne E. Boyce (Dept. of Commun. Sci. and Disord., Univ. of Cincinnati, PO Box 670379, Cincinnati, OH 45267-0379, speighma@mail.uc.edu)
Learning about the intersection of articulation and acoustics, and particularly acoustic measurement techniques, is challenging for students in Linguistics, Psychology and Communication Sciences and Disorders curricula.
There is a steep learning curve before students can apply the material to an
interesting research question; for those in more applied programs such as
1aSC8. Surveying the nasal peak: A1 and P0 in nasal and nasalized
vowels. Will Styler and Rebecca Scarborough (Linguist, Univ. of Colorado,
295 UCB, Boulder, CO 80309, william.styler@colorado.edu)
Nasality can be measured in the acoustical signal using A1-P0, where A1
is the amplitude of the harmonic under F1, and P0 is the amplitude of a low-frequency nasal peak (~250 Hz) (Chen, 1997). In principle, as nasality
increases, P0 goes up and A1 is damped, yielding lower A1-P0. However,
the details of the relationship between A1 and P0 in natural speech have not
been well described. We examined 4778 vowels in French and English elicited words, measuring A1, P0, and the surrounding harmonic amplitudes,
and comparing oral and nasal tokens (phonemic nasal vowels in French, and
coarticulatorily nasalized vowels in English). Linear mixed-effects regressions confirmed that A1-P0 is predictive of nasality: 4.16 dB lower in English
nasal contexts relative to oral and 5.73 dB lower in French (both p<0.001).
In English, as expected, P0 increased 1.42 dB and A1 decreased 3.93 dB
(p<0.001). In French, however, both A1 and P0 lowered with nasality (5.73
and 0.93 dB, respectively, p<0.001). Even so, in both languages, P0 became
more prominent relative to adjacent harmonics in nasal vowels. These data
reveal cross-linguistic differences in the acoustic realization of nasal vowels
and suggest P0 prominence as a potential perceptual cue to be investigated.
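The A1-P0 measure described above can be sketched as follows (a simplified illustration under stated assumptions: f0 and F1 are taken as known, the spectrum is already in dB, and the function name and toy values are invented):

```python
import numpy as np

def a1_p0(spectrum_db, freqs, f0, f1_hz, p0_search=(150.0, 350.0)):
    """A1-P0 nasality measure (after Chen, 1997): A1 is the amplitude of
    the harmonic nearest F1, and P0 is the amplitude of the strongest
    harmonic in the low-frequency nasal-peak region (~250 Hz)."""
    harmonics = np.arange(f0, freqs[-1], f0)       # nominal harmonic freqs
    # amplitude of the spectrum bin closest to each harmonic
    amps = np.array([spectrum_db[np.argmin(np.abs(freqs - h))]
                     for h in harmonics])
    a1 = amps[np.argmin(np.abs(harmonics - f1_hz))]
    in_band = (harmonics >= p0_search[0]) & (harmonics <= p0_search[1])
    p0 = amps[in_band].max()
    return a1 - p0

# Toy spectrum: 200-Hz f0, harmonic under F1 (600 Hz) boosted,
# low-frequency nasal peak at the first harmonic (200 Hz)
freqs = np.linspace(0, 4000, 4001)
spec = np.full_like(freqs, -60.0)
for h, amp in [(200, -20.0), (400, -30.0), (600, -15.0), (800, -35.0)]:
    spec[int(h)] = amp
delta = a1_p0(spec, freqs, f0=200.0, f1_hz=600.0)
```

Lower values of the returned A1-P0 difference indicate greater nasality, matching the direction of the effects reported in the abstract.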
1aSC9. Impact of mismatch conditions between mobile phone recordings on forensic voice comparison. Balamurali B T Nair, Esam A. Alzqhoul, and Bernard J. Guillemin (Dept. of Elec. and Comput., The Univ. of
Auckland, Bldg. 303, Rm. 240, Level 2, Sci. Ctr., 38 Princes St., Auckland 1142, New Zealand, bbah005@aucklanduni.ac.nz)
Mismatched conditions between the recordings of suspect, offender and
relevant background population represent a typical scenario in real forensic
casework. In this paper, we investigate the impact of mismatch conditions
associated with mobile phone speech recordings on forensic voice comparison (FVC). The two major mobile phone technologies currently in use
are the Global System for Mobile Communications (GSM) and Code Division Multiple Access (CDMA). These are fundamentally different in the
way in which they handle the speech signal, which in turn will lead to significant mismatch between speech recordings. Our results suggest that the
resulting degradation in the accuracy of an FVC analysis can be very significant (as high as 150%). Surprisingly, though, our results also suggest that the reliability of an FVC analysis may actually improve. We propose a strategy for lessening this impact by passing the suspect speech data through the
GSM or CDMA codecs, depending on the network origin of the offender
data, prior to the FVC analysis. Though this goes a long way to mitigating
the impact (a reduction in loss of accuracy from 150% to 80%), it is still not
as good as analysis under matched conditions.
1aSC10. 99.8 percent accuracy achieved on Peterson and Barney (1952)
acoustic measurements. Michael A. Stokes (R & D, Waveform Commun.,
3929 Graceland Ave., Indianapolis, IN 46208, waveform.model@yahoo.
com)
In 2012, a paper was presented (Reetz, 2012) discussing the lack of
working phonemic models, acknowledging an earlier presentation (Ladefoged, 2004) that discussed 50+ years of phonetics and phonology. These presentations highlighted the successes in phonological research
over the last 60 and 50 years, respectively, but both concluded that there is
still no recognized working model of phoneme identification. This presentation will discuss the Waveform Model of Vowel Perception (Stokes, 2009), which achieves 99.8% accuracy on the Peterson and Barney (1952) dataset using
30 conditional statements across all ten vowels produced by the 33 males
(509/510 for the vowels identified by humans at 100%). These results replicate and improve on the 99.2% achieved across the vowels produced by the
males in the Hillenbrand (1995) dataset (Stokes, 2011). As a logical progression, ELBOW was developed in 2013 using the algorithm developed for
static data to identify streaming vowel productions achieving over 91%
before introducing improvements. Beyond ELBOW, it was essential to replicate earlier results on the most cited dataset in the literature. The Waveform Model has now replicated human performance across multiple datasets
and is being successfully introduced into automatic speech recognition
applications.
1aSC11. Lombard effect based speech analysis across noisy environments for voice communications with cochlear implant subjects. Jaewook Lee, Hussnain Ali, Ali Ziaei, and John H. Hansen (Elec. Eng., Univ.
of Texas at Dallas, 800 West Campbell Rd., EC33, Office ECSN 4.414,
Richardson, TX 75080, jaewook@utdallas.edu)
Changes in speech production including vocal effort based on auditory
feedback are an important research domain for improved human communication. For example, in the presence of environmental noise, a speaker exhibits the well-known Lombard effect. Lombard
effect has been studied for normal hearing listeners as well as for automatic
speech/speaker recognition systems, but not for cochlear implant (CI) recipients. The objective of this study is to analyze the speech production of CI
users with respect to environmental change. We observe and study this
effect using mobile personal audio recordings from continuous single-session audio streams collected over an individual’s daily life. Prior advancements in this domain include the “Prof-Life-Log” longitudinal study at
UTDallas. Four CI speakers participated by producing read and spontaneous
speech in six naturalistic noisy environments (e.g., office, car, outdoor, cafeteria, etc.). A number of speech production parameters (e.g., short-time log-energy, fundamental frequency, etc.) known to be sensitive to Lombard speech were measured for both communicative and non-communicative speech as a function of environment. Results indicate that the speech production parameters shifted upward as the background noise level increased. Overall, higher values of the acoustic variables were observed in interpersonal conversation than in non-conversational speech.
Communication Disorders or ESL, there is an additional challenge in envisioning how the knowledge can be applied to changing behavior. The availability of software tools such as Wavesurfer, Praat, Audacity, TF32, and the University College London software suite, among others, has made it possible for instructors to design laboratory experiences in visualization, manipulation, and measurement of speech acoustics. Many students, however, find these tools complex for a first exposure to taking scientific measurements. The SpeechMark® acoustic landmark analysis system has been
developed to automate the detection of specific acoustic events important
for speech, such as voicing offset and onset, stop bursts, fricative noise, and
vowel midpoints, and to provide automated formant frequency measurement
used for vowel space analysis. This paper describes a qualitative multiple
case study in which seven teachers of speech acoustics were interviewed to
explore whether such pre-analysis of the acoustic signal could be useful for
teaching.
MONDAY MORNING, 27 OCTOBER 2014
INDIANA G, 8:40 A.M. TO 11:15 A.M.
Session 1aSP
Signal Processing in Acoustics: Sampling Methods for Bayesian Signal Processing
Cameron J. Fackler, Cochair
Graduate Program in Architectural Acoustics, Rensselaer Polytechnic Institute, 110 8th St, Troy, NY 12180
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
Invited Papers
8:40
1aSP1. Statistical sampling and Bayesian illumination waveform design for multiple-hypothesis target classification in cognitive
signal processing. Grace A. Clark (Grace Clark Signal Sci., 532 Alden Ln., Livermore, CA 94550, clarkga1@comcast.net)
Statistical sampling algorithms are widely used in Bayesian signal processing for drawing real-valued independent, identically distributed (i.i.d.) samples from a desired distribution. This paper focuses on the more difficult problem of how to draw complex correlated
samples from a distribution specified by both an arbitrary desired probability density function and a desired power spectral density. This
problem arises in cognitive signal processing. A cognitive signal processing system (for example, in radar or sonar) is one that observes
and learns from the environment; then uses a dynamic closed-loop feedback mechanism to adapt the illumination waveform so as to provide system performance improvements over traditional systems. Current cognitive radar algorithms focus only on target impulse
responses that are Gaussian distributed to achieve mathematical tractability. This research generalizes the cognitive radar target classifier
to deal effectively with arbitrary non-Gaussian distributed target responses. The key contribution lies in the use of a kernel density estimator and an extension of a new algorithm by Nichols et al. for drawing complex correlated samples from target distributions specified
by both an arbitrary desired probability density function and a desired power spectral density. Simulations using non-Gaussian target
impulse response waveforms demonstrate very effective classification performance.
9:00
1aSP2. Bayesian inversion and sequential Monte Carlo sampling techniques applied to nearfield acoustic sensor arrays. Mingsian
R. Bai (Power Mech. Eng., Tsing Hua Univ., 101, Sec. 2, Kuang-Fu Rd., Hsinchu 30013, Taiwan, msbai63@gmail.com), Amal Agarwal
(Power Mech. Eng., Tsing Hua Univ., Mumbai, India), Ching-Cheng Chen, and Yen-Chih Wang (Power Mech. Eng., Tsing Hua Univ.,
Taipei, Taiwan)
This paper demonstrates that inverse source reconstruction can be performed using a methodology of particle filters that relies primarily on the Bayesian approach of parameter estimation. The proposed approach is applied in the context of nearfield acoustic holography based on the equivalent source method (ESM). A state-space model is formulated in light of the ESM. The parameters to estimate
are amplitudes and locations of the equivalent sources. The parameters constitute the state vector which follows a first-order Markov
process with the transition matrix being the identity for every frequency-domain data frame. The implementation of recursive Bayesian
filters involves a sequential Monte Carlo sampling procedure that treats the estimates as point masses with a discrete probability mass
function (PMF) which evolves with iteration. It is evident from the results that the inclusion of the appropriate prior distribution is crucial in the parameter estimation.
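The recursive predict/weight/resample cycle underlying such sequential Monte Carlo filters can be sketched generically (a toy scalar random-walk model, not the ESM state-space model of the abstract; all names and values are illustrative):

```python
import numpy as np

def bootstrap_pf(obs, n=2000, q=0.05, r=0.2, seed=1):
    """Minimal sequential Monte Carlo (bootstrap/SIR) filter for a scalar
    state x_k = x_{k-1} + N(0, q^2) observed as y_k = x_k + N(0, r^2).

    The point masses (particles) and their discrete probability mass
    function evolve with each iteration, as in the abstract."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n)        # samples from the prior
    estimates = []
    for y in obs:
        # predict: first-order Markov transition (identity + noise)
        particles = particles + rng.normal(0.0, q, n)
        # weight: Gaussian measurement likelihood (log-domain for safety)
        logw = -0.5 * ((y - particles) / r) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))  # posterior mean
        # resample: draw a new particle set from the discrete PMF
        particles = rng.choice(particles, size=n, p=w)
    return estimates

# Track a slowly drifting state from noisy observations:
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.05, 100)) + 1.0
obs = truth + rng.normal(0.0, 0.2, 100)
est = bootstrap_pf(obs)
```

The same predict/weight/resample skeleton applies when the state vector holds equivalent-source amplitudes and locations and the likelihood comes from the ESM forward model.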
9:20
1aSP3. Bayesian sampling for practical design of multilayer microperforated panel absorbers. Cameron J. Fackler and Ning Xiang
(Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., 110 8th St, Greene Bldg., Troy, NY 12180, facklc@rpi.edu)
Bayesian sampling is applied to produce practical designs for microperforated panel acoustic absorbers. Microperforated panels
have the capability to produce acoustic absorbers with very high absorption coefficients, without the use of porous materials. However,
the absorption produced by a single panel is limited to a narrow frequency range, particularly at high absorption coefficient values. To
provide broadband absorption, multiple microperforated panel layers may be combined into a multilayer absorber. To design such an
absorber, the necessary number of layers must be determined and four design parameters must be specified for each layer. Using Bayesian model selection and parameter estimation, this work presents a practical method for designing multilayer microperforated panel
absorbers. Particular attention is paid to aspects of the underlying sampling method that enable automatic handling of design constraints
such as limitations of the manufacturing process and availability of raw materials.
9:40
1aSP4. Particle filtering for robust modal identification and sediment sound speed estimation. Nattapol Aunsri and Zoi-Heleni
Michalopoulou (Mathematical Sci., New Jersey Inst. of Technol., 323 ML King Blvd., Newark, NJ 07102, michalop@njit.edu)
Bayesian methods provide a wealth of information on acoustic features of a propagation medium and the uncertainty surrounding
their estimation. In previous work, we showed how sequential Bayesian (particle) filtering can be used to extract dispersion characteristics of a waveguide. Here, we utilize these characteristics for the estimation of geoacoustic properties of sediments. As expected, the
method relies on accurate identification of modes. The effect of correct/erroneous mode identification on geoacoustic estimates is quantified and approaches are developed for robust modal recognition in conjunction with the particle filter. Additionally, the statistical behavior of the noise present in the data measurements is further investigated with more complex noise modeling leading to improved results.
The approaches are validated with both synthetic and real data collected during the Gulf of Mexico Experiment. [Work supported by
ONR.]
10:00–10:20 Break
10:20
1aSP5. Efficient trans-dimensional Bayesian inversion for geoacoustic profile estimation. Stan E. Dosso, Jan Dettmer, Gavin Steininger (School of Earth & Ocean Sci, Univ. of Victoria, PO Box 1700, Victoria, BC V8W 3P6, Canada, sdosso@uvic.ca), and Charles
W. Holland (Appl. Res. Lab., The Penn State Univ., State College, PA)
This paper considers sampling efficiency of trans-dimensional (trans-D) Bayesian inversion based on the reversible-jump Markov-chain Monte Carlo (rjMCMC) algorithm, with application to seabed acoustic reflectivity inversion. Trans-D inversion is applied to sample the posterior probability density over geoacoustic parameters for an unknown number of seabed layers, providing profile estimates
with uncertainties that include the uncertainty in the model parameterization. However, the approach is computationally intensive. The
efficiency of rjMCMC sampling is largely determined by the proposal schemes applied to perturb existing parameters and to assign values for parameters added to the model. Several proposal schemes are examined, some of which appear new for trans-D geoacoustic
inversion. Perturbations of existing parameters are considered in a principal-component space based on an eigen-decomposition of the
unit-lag parameter covariance matrix (computed from successive models along the Markov chain, a diminishing adaptation). The relative efficiency of proposing new parameters from the prior versus a Gaussian distribution focused near existing values is considered. Parallel tempering, which employs a sequence of interacting Markov chains with successively relaxed likelihoods, is also considered to
increase the acceptance rate of new layers. The relative efficiency of various proposal schemes is compared through repeated inversions
with a pragmatic convergence criterion.
10:40
1aSP6. Bayesian tsunami-waveform inversion with trans-dimensional tsunami-source models. Jan Dettmer (Res. School of Earth
Sci., Australian National Univ., 3800 Finnerty Rd., Victoria, Br. Columbia V8W 3P6, Canada, jand@uvic.ca), Jakir Hossen, Phil R.
Cummins (Res. School of Earth Sci., Australian National Univ., Canberra, ACT, Australia), and Stan E. Dosso (School of Earth and
Ocean Sci., Univ. of Victoria, Victoria, BC, Canada)
This paper develops a self-parametrized Bayesian inversion to infer the spatio-temporal evolution of tsunami sources (initial sea
state) due to megathrust earthquakes. To date, tsunami-source uncertainties are poorly understood, and the effect of choices such as discretization have not been studied. The approach developed here is based on a trans-dimensional self-parametrization of the sea surface,
avoids regularization constraints and provides rigorous uncertainty estimation that accounts for model-selection ambiguity associated
with the source discretization. The sea surface is parametrized using self-adapting irregular grids, which match the local resolving power
of the data and provide parsimonious solutions for complex source characteristics. Source causality is ensured by including rupture velocity and obtaining delay times from the eikonal equation. The data are recorded on ocean-bottom pressure and coastal wave gauges
and predictions are based on Green-function libraries computed from ocean-basin scale tsunami models for cases that include/exclude
dispersion effects. The inversion is applied to tsunami waveforms from the great 2011 Tohoku-Oki (Japan) earthquake. The tsunami
source is strongest near the Japan trench with posterior mean amplitudes of ~5 m. In addition, the data appear sensitive to rupture velocity, which is part of our kinematic source model.
Contributed Paper
11:00
1aSP7. Model selection using Bayesian samples: An introduction to the
deviance information criterion. Gavin Steininger, Stan E. Dosso, Jan
Dettmer (SEOS, U Vic, 201 1026 Johnson St., Victoria, BC V7V 3N7, Canada, gavin.amw.steininger@gmail.com), and Charles W. Holland (SEOS, U
Vic, State College, Pennsylvania)
This paper presents the deviance information criterion (DIC) as a metric
for model selection based on Bayesian sampling approaches, with examples
from seabed geoacoustic and/or scattering inversion. The DIC uses all samples of a distribution to approximate Bayesian evidence, unlike more common measures such as the Bayesian information criterion, which use only point estimates. Hence the DIC is more appropriate for non-linear
Bayesian inversions utilizing posterior sampling. Two examples are considered: determining the dominant seabed scattering mechanism (interface and/or
volume scattering), and choosing between seabed profile parameterizations
based on smooth gradients (polynomial splines) or discontinuous homogeneous layers. In both cases, the DIC is applied to trans-dimensional inversions of
simulated and measured data, utilizing reversible jump Markov chain Monte
Carlo sampling. For the first case, the DIC is found to correctly select the true
scattering mechanism for simulations, and its choice for the measured data
inversion is consistent with sediment cores extracted at the experimental site.
For the second case, the DIC selects the polynomial spline parameterization
for soft seabeds with smooth gradients. [Work supported by ONR.]
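The DIC computation from posterior samples can be sketched as follows (a generic illustration with an invented toy Gaussian likelihood, not the seabed inversion of the abstract; function and variable names are illustrative):

```python
import numpy as np

def dic(samples, loglik):
    """Deviance information criterion from posterior samples.

    DIC = Dbar + pD, where the deviance is D(theta) = -2 log L(theta),
    Dbar is its posterior mean, and pD = Dbar - D(theta_bar) is the
    effective number of parameters. Lower DIC means the preferred model.
    `loglik` maps one parameter vector to a log-likelihood value."""
    samples = np.atleast_2d(samples)
    dev = np.array([-2.0 * loglik(th) for th in samples])  # per-sample deviance
    dbar = dev.mean()
    d_at_mean = -2.0 * loglik(samples.mean(axis=0))
    p_d = dbar - d_at_mean
    return dbar + p_d

# Toy check: unit-variance Gaussian likelihood for data y with mean mu.
y = np.array([0.9, 1.1, 1.0, 0.8, 1.2])
def ll(mu):
    return float(-0.5 * np.sum((y - mu) ** 2))
post = np.random.default_rng(2).normal(1.0, 0.2, (4000, 1))  # mock posterior
score = dic(post, ll)
```

Because every posterior sample enters Dbar, the criterion uses the whole sampled distribution rather than a single point estimate, which is what makes it convenient for rjMCMC output.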
MONDAY MORNING, 27 OCTOBER 2014
INDIANA F, 8:45 A.M. TO 11:55 A.M.
Session 1aUW
Underwater Acoustics: Understanding the Target/Waveguide System–Measurement and Modeling I
Kevin L. Williams, Chair
Applied Physics Lab., University of Washington, 1013 NE 40th St., Seattle, WA 98105
Chair’s Introduction—8:45
Invited Papers
8:50
1aUW1. Very-high-speed 3-dimensional modeling of littoral target scattering. David Burnett (Naval Surface Warfare Ctr., Code
CD10, 110 Vernon Ave., Panama City, FL 32407, david.s.burnett@navy.mil)
NSWC PCD has developed a high-fidelity 3-D finite-element (FE) modeling system that computes acoustic color templates (target
strength vs. frequency and aspect angle) of single or multiple realistic objects (e.g., target + clutter) in littoral environments. High-fidelity means that 3-D physics is used in all solids and fluids, including even thin shells, so that solutions include not only all propagating
waves but also all evanescent waves, the latter critically affecting the former. Although novel modeling techniques have accelerated the
code by several orders of magnitude, it takes about one day to compute an acoustic color template. However, NSWC PCD wants to be
able to compute thousands of templates quickly, varying target/environment features by small amounts, in order to develop statistically
robust classification algorithms. To accomplish this, NSWC PCD is implementing a radically different FE technology that has already
been developed and verified. It preserves all the 3-D physics but promises to accelerate the code another two to three orders of magnitude. Porting the code to an HPC center will accelerate it another one to two orders of magnitude, bringing performance to seconds per
template. The talk will briefly review the existing system and then describe the new technology.
9:10
1aUW2. Modeling three-dimensional acoustic scattering from targets near an elastic bottom using an interior-transmission formulation. Saikat Dey, William G. Szymczak (Code 7131, NRL, 4555 Overlook Ave. SW, Washington, DC 20375, saikat.dey@nrl.
navy.mil), Angie Sarkissian (Code 7130, NRL, Washington, DC), Joseph Bucaro (Excet Inc., Springfield, VA), and Brian Houston
(Code 7130, NRL, Washington, DC)
For targets near the sediment–fluid interface, the scattering response is fundamentally influenced by the characterization of the sediment in the model. We show that if the model consists of a three-dimensional elastic sediment with acoustic fluid on top, then the use of
perfectly matched layer (PML) approximation for the truncation of the infinite exterior domain for scattering applications has fundamental problems and gives erroneous results. We present a novel formulation using an interior-transmission representation of the scattering problem in which the exterior truncation with PML does not induce errors in the result. Numerical examples will be presented to verify
the application of this formulation to scattering from elastic targets near a fluid–sediment interface.
9:30
1aUW3. The fluid–structure interaction technique specialized to axially symmetric targets. Ahmad T. Abawi (HLS Res., 3366
North Torrey Pines Court, Ste. 310, La Jolla, CA 92037, abawi@hlsresearch.com) and Petr Krysl (Structural Eng., Univ. of California,
San Diego, La Jolla, CA)
The fluid–structure interaction technique provides a paradigm for solving scattering from elastic targets embedded in a fluid by a
combination of finite and boundary element methods. In this technique, the finite element method is used to compute the target’s impedance matrix and the Helmholtz–Kirchhoff integral with the appropriate Green’s function is used to represent the field in the exterior medium. The two equations are coupled at the surface of the target by imposing the continuity of pressure and normal displacement. This
results in a Helmholtz–Kirchhoff boundary element equation that can be used to compute the scattered field anywhere in the surrounding
environment. This method reduces a finite element problem to a boundary element one with drastic reduction in the number of
unknowns, which translates to a significant reduction in numerical cost. This method was developed and tested for general 3D targets. In
this paper, the method is specialized to axially symmetric targets, which provides further reduction in numerical cost, and validated
using benchmark solutions.
9:50
1aUW4. A new T matrix for acoustic target scattering by elongated objects in free-field and in bounded environments. Raymond Lim (Code X11, NSWC Panama City Div., 110 Vernon Ave., Code X11, Panama City, FL 32407-7001, raymond.lim@navy.mil)
The transition (T) matrix of Waterman has been very useful for computing fast, accurate acoustic scattering predictions for axisymmetric elastic objects, but this technique is usually limited to fairly smooth objects that are not too aspherical unless complex basis functions or stabilization schemes are used. To remove this difficulty, a spherical-basis formulation adapted from approaches proposed recently by Waterman [J. Acoust. Soc. Am. 125, 42–51 (2009)] and Doicu et al. [Acoustic & Electromagnetic Scattering Analysis Using Discrete Sources, Academic Press, London, 2000] is suggested. The new method is implemented by simply transforming the high-order outgoing spherical basis functions within standard T-matrix formulations to low-order functions distributed along the object’s symmetry axis. A free-field T-matrix is produced in a nonstandard form, but computations with it become much more stable for aspherical shapes. Some advantages of this approach over Waterman’s and Doicu et al.’s approaches are noted and, despite its nonstandard form, the feasibility of extension to objects in a plane-stratified environment is demonstrated. Sample calculations for an elongated spheroid demonstrate the enhanced stability.
10:05–10:20 Break
10:20
1aUW5. Kirchhoff approximation for spheres and cylinders partially exposed at flat surfaces and application to the interpretation of backscattering. Aaron M. Gunderson, Anthony R. Smith, and Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, aaron.gunderson01@gmail.com)
For cylinders partially exposed at flat surfaces, the Kirchhoff approximation was previously evaluated analytically and compared with measured backscattering at a free surface as a function of exposure [K. Baik and P. L. Marston, IEEE J. Ocean. Eng. 33, 386–396 (2008)]. In the present research, this approach is extended to the cases of numerical integration for high-frequency backscattering by partially exposed spheres and cylinders. The cylinder case was limited to broadside illumination at grazing incidence, for which one-dimensional integration is sufficient and the limits of integration were previously discussed by Baik and Marston. In the corresponding sphere case, however, two-dimensional integration is required and the corresponding limits of integration become complicated functions of the amount of exposure and the grazing angle of the illumination. These approximations of the backscattering, while they omit Franz wave and elastic contributions, are useful for modeling the evolution of how the reflected scattering contributions depend on the target exposure. They are also useful for understanding the time evolution of specular scattering contributions. The sphere case was compared with the exact analysis of backscattering by a half-exposed rigid sphere at a free surface that also displays partially reflected Franz wave contributions. [Work supported by ONR.]
Invited Papers
10:35
1aUW6. Acoustic ray model for the scattering from an object on the sea floor. Steven G. Kargl, Aubrey L. Espana, and Kevin L.
Williams (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, kargl@uw.edu)
Target scattering within a waveguide is recast into a ray model, where time-of-flight wave packets are tracked. The waveguide is
replaced by an equivalent set of image sources and receivers, where rays are associated with these images and interactions with the
waveguide’s boundaries are taken into account. By transforming wave packets into the frequency domain, scattering becomes a multiplication of a wave packet’s spectrum at the target location and the target’s free-field scattering amplitude. Data- and model-model comparisons for an aluminum replica of a 100-mm unexploded ordnance will be discussed. For the data-model comparisons, synthetic aperture
sonar (SAS) data were collected during Pond Experiment 2010 from this replica, where it was placed on a water-sand sediment boundary.
The model-model comparisons use the results from a hybrid 2-D/3-D model. The hybrid model combines a 2D finite-element model to
predict the scattered pressure and its derivatives in the near-field of the target, and then a 3D Helmholtz integral to propagate the pressure to the far field. The data- and model-model comparisons demonstrate the viability of using the ray model to quickly generate realistic pings suitable for both SAS and acoustic color template processing. [Research supported by SERDP and ONR.]
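The frequency-domain multiplication described above can be sketched for a generic set of image-source rays (an illustrative reduction, not the authors' model; the pulse, delays, amplitudes, and flat form function are all invented):

```python
import numpy as np

def image_source_ping(incident, fs, delays, amplitudes, form_function):
    """Sketch of ray-model ping synthesis: each image-source ray carries a
    time-of-flight wave packet; in the frequency domain, scattering is the
    product of the packet's spectrum with the target's free-field scattering
    amplitude (here a generic `form_function(f)`), plus a path delay."""
    n = len(incident)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(incident)
    total = np.zeros(len(f), dtype=complex)
    for tau, a in zip(delays, amplitudes):
        # delay tau -> linear phase; boundary interactions -> amplitude a
        total += a * spec * form_function(f) * np.exp(-2j * np.pi * f * tau)
    return np.fft.irfft(total, n)

# Toy example: Gaussian-windowed tone burst, two image rays,
# frequency-flat unit form function standing in for the target response
fs = 100_000.0
t = np.arange(1024) / fs
pulse = np.sin(2 * np.pi * 20_000 * t) * np.exp(-((t - 1e-3) / 2e-4) ** 2)
ping = image_source_ping(pulse, fs, delays=[0.0, 2e-3],
                         amplitudes=[1.0, 0.5],
                         form_function=lambda f: np.ones_like(f))
```

In a full implementation the flat form function would be replaced by the target's free-field scattering amplitude and each ray's amplitude by the appropriate boundary-interaction factors.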
10:55
1aUW7. Orientation dependence for backscattering from a solid cylinder near an interface: Imaging and spectral properties.
Daniel Plotnick, Philip L. Marston (Washington State Univ., 1510 NW Turner Dr., Apt. 4, Pullman, WA 99163, dsplotnick@gmail.
com), Aubrey Espana, and Kevin L. Williams (Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
When a solid cylinder lies proud on a horizontal sand sediment, significant contributions to backscattering, both specular and elastic, involve
multipath reflections from the cylinder and the interface. The scattering structure and the resulting spectrum versus azimuthal angle, the
“acoustic template,” may be understood using a geometric model [K. L. Williams et al., J. Acoust. Soc. Am. 127, 3356–3371 (2010)]. If
the cylinder is tilted such that its axis is no longer parallel to the interface, the multipath structure is modified. Some changes in
the acoustic template can be approximately modeled using a combination of geometric and physical acoustics. For near-broadside scattering, the analysis gives a simple expression relating certain changes in the template to the orientation of the cylinder and the source geometry. These changes are useful for inferring the cylinder orientation from the scattering. Changes to the template at end-on and
intermediate angles are also examined. The resulting acoustic images show strong dependence on the cylinder orientation in agreement
with this model. A similar model applies to a metallic cylinder adjacent to a flat free surface and was confirmed in tank experiments.
The effect of vertical tilt on the acoustic image was also investigated. [Work supported by ONR.]
2087
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2087
1a MON. AM
Contributed Papers
11:15
1aUW8. Acoustic scattering enhancements for partially exposed cylinders in sand and at a free surface caused by Franz waves
and other processes. Anthony R. Smith, Aaron M. Gunderson, Daniel S. Plotnick, Philip L. Marston (Phys. and Astronomy Dept.,
Washington State Univ., Pullman, WA, spacetime82@gmail.com), and Grant C. Eastland (NW Fisheries Sci. Ctr., Frank Orth & Assoc.
(NOAA Affiliate), Seattle, WA)
Creeping waves on solid cylinders having slightly subsonic phase velocities and large radiation damping are described as Franz
waves because of association with complex poles investigated by Franz. For free-field high frequency broadside backscattering in water,
the associated echoes are weak due to radiation damping. It was recently demonstrated, however, that for partially exposed solid metal
cylinders at a free surface viewed at grazing incidence, the Franz wave echo can be large relative to the specular echo when the grazing
angle is sufficiently small [G. C. Eastland and P. L. Marston, J. Acoust. Soc. Am. 135, 2489–2492 (2014)]. The Fresnel zone associated
with the specular echo is occluded, making it weak, while the Franz wave is partially reflected at the interface behind the cylinder. This
hypothesis is also supported by calculating the exact backscattering by half-exposed infinitely long rigid cylinders viewed over a range
of grazing angles. Additional experiments concern the high frequency backscattering by cylinders partially buried in sand viewed at
small grazing angles. From the time evolution of the associated backscattering by short tone bursts, situations have been identified for
which partially reflected Franz wave contributions become significant. Franz waves may contribute to sonar clutter from rocks. [Work
supported by ONR.]
11:35
1aUW9. Pressure gradient coupling to an asymmetric cylinder at an interface. Christopher Dudley (NSWC PCD, 110 Vernon Ave.,
Panama City, FL 32407, mhhd@hotmail.com)
Invited Abstract. Special session: “Investigation of target response near interfaces, where coupling between target and environmental
properties is important.” Acoustic scattering results from solid and hollow notched aluminum cylinders are presented as a function of
the incident angle. The flat machined into the circular cylinder resembles the topography (geometry) of a finned unexploded ordnance
(UXO). Prior experiments have shown selective coupling to modes of a flat-ended cylinder and the effect of pressure nodes on coupling
to a similar notched cylinder [Espana et al., J. Acoust. Soc. Am. 126, 2187 (2009) and Marston & Marston, J. Acoust. Soc. Am. 127,
1750 (2010)]. The wavefront crossing the flat face of the notch in the paddle has a pressure gradient when it is not co-linear with the normal
to the flat face of the notch. This pressure gradient applies a torque to the cylinder. Torsional modes can be set up in multiple scaled versions of the pseudo-UXOs. Analysis of scattering experiments in the Gulf of Mexico and in laboratory-scale water tanks indicates robust
returns from these fin-like targets.
MONDAY AFTERNOON, 27 OCTOBER 2014
MARRIOTT 7/8, 1:00 P.M. TO 5:15 P.M.
Session 1pAA
Architectural Acoustics: Computer Auralization as an Aid to Acoustically Proper Owner/Architect Design
Decisions
Robert C. Coffeen, Cochair
Architecture, University of Kansas, 4721 Balmoral Drive, Lawrence, KS 66047
Kevin Butler, Cochair
Henderson Engineers, Inc., 8345 Lenexa Dr., #300, Lenexa, KS 66214
Chair’s Introduction—1:00
Invited Papers
1:05
1pAA1. The impact of auralization on design decisions for the House of Commons of the Canadian Parliament. Ronald Eligator
(Acoustic Distinctions, 145 Huguenot St., New Rochelle, NY 10801, religator@ad-ny.com)
The House of Commons of the Canadian Parliament will be temporarily relocated to a 27,000 m3 glass-enclosed atrium with stone
and glass walls while their home Chamber is being renovated and restored. Acoustic goals include excellent speech intelligibility for
Members and guests in the room, and production of high-quality audio recordings of all proceedings for live and recorded streaming and
broadcast. Room modeling and auralization using CATT Acoustic has been used to evaluate the acoustic environment of the temporary
location during design. Modeling and testing of the current House Chamber has also been performed to validate the results and conclusions drawn from the model of the new space. The use of auralizations has helped the Owner and Architect understand the impact of
design choices on the achievement of the acoustic performance goals, and smoothed the path for the integration of design features that
might otherwise have been difficult for them to accept. Measured and calculated data as well as audio examples will be presented.
1:25
1pAA2. Cost effective auralizations to help architects and owners make informed decisions for sound isolating assemblies. David
Manley and Ben Bridgewater (D.L. Adams Assoc., Inc., 1536 Ogden St., Denver, CO 80218, dmanley@dlaa.com)
As an acoustical consultant, subjective descriptions of noise environments only get you so far. For example, it can be difficult for an
Architect to gauge the difference between STC 35 and STC 40 windows for a given office space next to a highway. Often, justifying
the increased cost for the increased sound isolation performance is at the forefront of the decision making process for the Owner and
Architect. To help them understand the relative difference in performance, DLAA uses a simplified auralization process to create audio
demonstrations of the difference between sound isolating assemblies. This presentation will discuss the process of creating the auralizations and review case studies where the auralizations helped the client make a more informed decision.
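The simplified workflow described above can be sketched roughly as follows. This is a hedged illustration only: the octave-band transmission-loss values are invented stand-ins, not measured STC-35/40 assembly data, and the function names are hypothetical.

```python
# Toy "simplified auralization": filter a noise signal through two
# illustrative transmission-loss curves and compare the resulting levels.
import numpy as np

RATE = 16000
BANDS = [125, 250, 500, 1000, 2000, 4000]   # octave-band centers, Hz
TL_A = [18, 24, 30, 34, 37, 40]             # dB; stand-in lower-rated assembly
TL_B = [22, 28, 35, 39, 42, 45]             # dB; stand-in higher-rated assembly

def attenuate(signal, tl_db):
    """Apply a frequency-dependent transmission loss via FFT gain shaping."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / RATE)
    tl = np.interp(freqs, BANDS, tl_db)     # interpolate TL across frequency
    return np.fft.irfft(spec * 10.0 ** (-tl / 20.0), n=len(signal))

gen = np.random.default_rng(0)
noise = gen.standard_normal(RATE)           # 1 s of broadband "highway" noise
quiet_a = attenuate(noise, TL_A)
quiet_b = attenuate(noise, TL_B)
# The higher-TL assembly leaves audibly less energy:
level_drop = 20 * np.log10(np.std(quiet_a) / np.std(quiet_b))
```

Playing `quiet_a` and `quiet_b` back to back gives the client a direct A/B comparison of the two assemblies rather than a pair of single-number ratings.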
1:45
1pAA3. Using auralization to aid in decision making to meet customer requirements for room response and speech intelligibility.
Thomas Tyson (Professional Systems Div., Bose, 5160 South Deborah Ct., Springfield, MO 65810, Tom_Tyson@bose.com)
To meet specific design goals such as a high degree of speech intelligibility along with a targeted reverberation time, the presenter
will show how the use of auralization can help determine the effectiveness of acoustic treatments and loudspeaker directivity types,
beyond just the use of predicted numerical data.
2:05
1pAA4. Bridging the gap between eyes and ears with auralization. Robin S. Glosemeyer Petrone, Scott D. Pfeiffer (Threshold
Acoust., 53 W Jackson Blvd., Ste. 815, Chicago, IL 60604, robin@thresholdacoustics.com), and Marcus Mayell (Judson Univ.,
Elgin, IL)
Ray trace animation, level plots, and impulse responses, while all useful tools in providing a visual representation of sound, do not
always bridge the gap between the eye and ear. Threshold utilized auralization to inform decisions for an upcoming theater renovation
with the goal of improving the room’s acoustic support of orchestral performance. To achieve the desired acoustic response, the renovation will require major modifications to the shaping of a hall with a very distinctive architectural vernacular; a distinctive vernacular that
will need to be preserved in some form to maintain the facility’s identity. Along with other modeling tools, auralization provided useful
support, reassuring both the client and the design team of the validity of the concepts.
2:25
1pAA5. Extended tools for simulated impulse responses. Wolfgang Ahnert and Stefan Feistel (Ahnert Feistel Media Group, Arkonastr. 45-49, Berlin D-13189, Germany, wahnert@ada-amc.eu)
Impulse responses have been calculated by simulation for more than 25 years. Early routines allowed only simple calculations without scattered sound components; current routines always include them. Today, sophisticated routines calculate frequency-dependent full impulse responses comparable with measured ones. In parallel with this development, auralization routines were developed, first for monaural and binaural reproduction; nowadays, ambisonic signals are created in B-Format of first and second order. During reproduction in an ambisonic playback configuration, these signals make audible the distribution of wall and ceiling reflections in computer models in EASE. Besides
the acoustic detection of desired or unwanted reflections, which always requires correct reproduction of the ambisonic signals,
visualization of the reflection distribution is desired. In EASE, a new tool has been implemented to correlate the reflections in an
impulse response with their positions in a 3D presentation. This new hedgehog presentation of full impulse responses is correlated, angle-dependently, with the view position of the model, so any wanted or unwanted reflections may be identified quickly. A comparison with
ambisonic signals via auralization is possible.
2:45
1pAA6. Auralization as an aid in decision-making: Examples from professional practice. Benjamin Markham, Robert Connick, and
Jonah Sacks (Acentech Inc., 33 Moulton St., Cambridge, MA 02138, bmarkham@acentech.com)
The authors and our colleagues have presented dozens of auralizations in the service of our architectural acoustics consulting work,
on projects ranging from large atriums to classrooms to sound isolation between nightclubs and surrounding facilities (and many others).
The aim of most of these presentations is to communicate the relative efficacy of design alternatives or acoustical treatment options. In
some cases, the effects are profound; in others, the acoustical impact may be rather subtle. Although the correlation is not perfect, we have noted a
general trend: when the observable change in acoustical attributes presented in the auralization is substantial, so too is the interest on the
part of the owner to invest in significant or even aggressive acoustical design alternatives; by contrast, subtler changes in perceived
acoustical character often leave owners and architects less inclined to dedicate design resources to pursue alternatives that differ from
the architect or owner’s original vision. Examples of auralizations following (and contradicting) this trend will be presented, along with
descriptions of the design direction taken following meetings and discussions that accompanied the auralizations.
3:05–3:20 Break
3:20
1pAA7. Auralization and the real world. Shane J. Kanter (Threshold Acoust., 53 W. Jackson Blvd., Ste. 815, Chicago, IL 60604, skanter@thresholdacoustics.com), Ben Bridgewater (D.L. Adams, Denver, CO), and Robert C. Coffeen (School of Architecture, Design &
Planning, The Univ. of Kansas, Lawrence, KS)
Architects value their senses and strive to design spaces that engage all five of them. However, architects typically make design
decisions based primarily on how spaces appear and feel, as opposed to acousticians who normally justify design intent with the use of
numbers, graphs, and charts. Although the data are clear to acousticians, auralizations are a useful tool to engage architects, building
owners, and other clients and their sense of hearing to help them make informed decisions. If auralizations are used to demonstrate the
effect of design decisions based on acoustics, there must be confidence in the accuracy and realism of these audio simulations. In order
to better understand the accuracy and realism of auralizations, a study was conducted comparing auralizations created from models of an
existing facility to listening within the facility. Listeners were asked to compare the “real world” sound to the auralizations of this sound
by completing a survey with questions focusing on such comparisons. By presenting the actual sound and the auralizations in the same
space, a direct comparison can be made and the accuracy and realism of the auralizations can be determined. Results and observations
from the study will be presented.
3:40
1pAA8. Directing room acoustic decisions for a college auditorium renovation by using auralization. Robert C. Coffeen (Architecture, Univ. of Kansas, 4721 Balmoral Dr., Lawrence, KS 66047, rcoffeen@ku.edu)
From an acoustical viewpoint, the renovation of a multipurpose college auditorium was predicted by music and theater faculty to be
a compromise not suitable for either music or theater. It was obvious that either variable sound absorption or active acoustics would be
required to satisfy the multipurpose uses of the auditorium. Active acoustics was rejected by the college due to cost and a prior experience
of one faculty member, and the faculty committee was not familiar with variable sound absorption. Using a computer model of the auditorium, it was determined that the volume of the venue could be established to produce the desired maximum reverberation time for
music and that vertically rising drapery could produce the desired reverberation time for drama. Auralization was used to demonstrate to
the faculty committee that with variable sound absorption the auditorium could properly accommodate music of various types and theatrical performances including drama.
Contributed Papers
4:00
1pAA9. “Illuminating” reflection orders in architectural acoustics using SketchUp and light rendering. J. Parkman Carter (Architectural Acoust., Rensselaer Polytechnic Inst., 32204 Waters View Circle, Cohoes, NY 12047, cartej8@rpi.edu)
The conventional architecture workflow tends to, quite literally, “overlook” matters of sound, given that the modeling tools of architectural design are almost exclusively visual in nature. The modeling tools used by architectural acousticians, however, produce visual representations which are, frankly, less than inspirational for the design process. This project develops a simple scheme to visualize acoustic reflection orders using light rendering in the freely available and widely used Trimble SketchUp 3D modeling software. In addition to allowing architectural designers to visualize acoustic reflections in a familiar modeling environment, this scheme also works easily with complex geometry. The technique and examples will be presented.
4:15
1pAA10. Using auralization to evaluate the decay characteristics that impact intelligibility in a school auditorium. Bruce C. Olson (Ahnert Feistel Media Group, 8717 Humboldt Ave. North, Brooklyn Park, MN 55444, bcolson@afmg.eu) and Bruce C. Olson (Olson Sound Design, Brooklyn Park, MN)
Auralization was used to evaluate the effectiveness of the loudspeaker design in a high school auditorium to provide good speech intelligibility when used for lectures. The goals of this project were to offer an aural impression that enhances the visual printouts of the simulation results from the 3D model of the space in EASE using the Analysis Utility for Room Acoustics. The process used will be described and some of the results will be presented.
4:30
1pAA11. Vibrolization: Simulating whole-body structural vibration for clients and colleagues with the Motion Platform. Clemeth Abercrombie (Acoust., Arup, New York, NY), Tom Wilcock (Adv. Tech. and Res., Arup, New York, NY), and Andrew Morgan (Acoust., Arup, 77 Water St., New York, NY 10005, andrew.morgan@arup.com)
Arup has recently introduced an experiential design tool for demonstrating whole-body vibration. The Motion Platform, a bespoke simulator, moves vertically and can reproduce structural vibration in buildings, transport, and any other situations that involve shaking. Beyond humans, the platform can also shake objects, opening the door for developing new vibration criteria for devices such as video cameras and projectors. We will share our experience in developing the platform and how it has helped us communicate design ideas to clients and design team members.
4:45
1pAA12. The role of auralization utilizing the end user source signal in determining final material finishes for the Chapel at St. Dominic’s. David S. Woolworth (Oxford Acoust., 356 CR 102, Oxford, MS 38655, dave@oxfordacoustics.com)
The Chapel at St. Dominic’s Hospital in Jackson, Mississippi, was created for religious services, prayer time, and to serve other spiritual needs of the hospital’s patients, employees, medical staff, hospital visitors, and the greater community. It is an intimate space seating up to 100 people and is used daily by the Dominican Sisters, who first started the Jackson Infirmary in 1946. This paper outlines the process used to record the voices of the sisters and then use them to generate auralizations, which helped drive decisions regarding acoustic finishes.
5:00
1pAA13. The construction and implementation of a multichannel loudspeaker array for accurate spatial reproduction of sound fields. Matthew T. Neal, Colton D. Snell, and Michelle C. Vigeant (Graduate Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, mtn5048@psu.edu)
The spatial distribution of sound has a strong impact upon a listener’s overall impression of a room and must be reproduced accurately for auralization. In concert hall acoustics, directionally independent metrics such as reverberation time and clarity index simply do not predict this impression. Late lateral energy level, lateral energy fraction, and the interaural correlation coefficient are measures of spatial impression, but more work is needed before we fully understand how the directional distribution of sound should influence architectural design decisions. A three-dimensional array of 28 loudspeakers and two subwoofers has been constructed in a hemi-anechoic chamber at PSU, allowing for accurate reproduction of sound fields. For the array, closed-box loudspeakers were built and digitally equalized to ensure a flat frequency response. With this facility, subjective studies investigating spatial sound in concert halls can be conducted using measured sound fields and perceptually motivated auralizations not tied to a physical room. Such a facility is instrumental in understanding and communicating subtle differences in sound fields to listeners, whether they be musicians, architects, or clients. The flexibility and versatility of this system will facilitate room acoustics research at Penn State for years to come. [Work supported by NSF Award 1302741.]
MONDAY AFTERNOON, 27 OCTOBER 2014
LINCOLN, 1:00 P.M. TO 5:00 P.M.
Session 1pAB
Animal Bioacoustics and Signal Processing in Acoustics: Array Localization of Vocalizing Animals
Michelle Fournet, Cochair
College of Earth, Ocean, and Atmospheric Sciences, Oregon State University, 425 SE Bridgeway Ave., Corvallis, OR 97333
David K. Mellinger, Cochair
Coop. Inst. for Marine Resources Studies, Oregon State University, 2030 SE Marine Science Dr., Newport, OR 97365
Chair’s Introduction—1:00
Invited Papers
1:05
1pAB1. Exploiting the sound-speed minimum to extend tracking ranges of vertical arrays in deep water environments. Aaron
Thode, Delphine Mathias (SIO, UCSD, 9500 Gilman Dr., MC 0238, La Jolla, CA 92093-0238, athode@ucsd.edu), Janice Straley (Univ.
of Alaska, Southeast, Sitka, AK), Russel D. Andrews (Alaska SeaLife Ctr., Seward, AK), Chris Lunsford, John Moran (Auke Bay
Labs., NOAA, Juneau, AK), Jit Sarkar, Chris Verlinden, William Hodgkiss, and William Kuperman (SIO, UCSD, La Jolla, CA)
Underwater acoustic vertical arrays can localize sounds by measuring the vertical elevation angles of various multipath arrivals generated by reflections from the ocean surface and bottom. This information, along with measurements of the relative arrival times of the multipath, can be sufficient for obtaining the range and depth of an acoustic source. At ranges beyond a few kilometers ray refraction effects
add additional multipath possibilities; in particular, the existence of a sound-speed minimum in deeper waters permits purely refracted
ray arrivals to be detected and distinguished on an array, greatly extending the tracking range for short-aperture systems. Here, two experimental vertical array deployments are presented. The first is a simple two-element system, deployed using longline fishing gear off Sitka,
AK. By tracking a tagged sperm whale, this system demonstrated an ability to localize this species out to 35 km range, and to provide estimates of the detection range of these animals as a function of sea state. The second deployment, a field trial of a 128-element mid-frequency vertical array system off Southern California, illustrates how multi-element array gain can further extend the detection and
tracking ranges of sperm and humpback whales in deep-water environments. [Work supported by NPRB, NOAA, and ONR.]
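As a rough illustration of how multipath timing plus the direct-path elevation angle can constrain source position, the sketch below searches range along the line fixed by the measured angle. It is a toy under stated assumptions (straight rays, isovelocity water, a flat surface), and all names are hypothetical; it is not the deployed system’s processing.

```python
# Illustrative range/depth estimation from one receiver's surface-multipath
# delay and direct-path elevation angle. Straight rays, isovelocity water.
import numpy as np

C = 1500.0  # assumed sound speed, m/s

def surface_multipath_delay(rng, src_z, rcv_z):
    """Surface-reflected minus direct travel time for one receiver."""
    direct = np.hypot(rng, src_z - rcv_z)
    surface = np.hypot(rng, src_z + rcv_z)  # image source above the surface
    return (surface - direct) / C

def locate(delay, elev_angle, rcv_z, ranges):
    """Search over range; the elevation angle fixes depth at each range."""
    depths = rcv_z + ranges * np.tan(elev_angle)
    err = np.abs(surface_multipath_delay(ranges, depths, rcv_z) - delay)
    i = np.argmin(err)
    return ranges[i], depths[i]

# Synthetic whale at 3 km range, 800 m depth; receiver at 500 m depth.
truth_r, truth_z, rcv_z = 3000.0, 800.0, 500.0
angle = np.arctan2(truth_z - rcv_z, truth_r)
delay = surface_multipath_delay(truth_r, truth_z, rcv_z)
r_hat, z_hat = locate(delay, angle, rcv_z, np.arange(500.0, 5000.0, 25.0))
# r_hat, z_hat recover (3000.0, 800.0) on this noise-free example
```

With refracted arrivals, as in the abstract, the straight-ray forward model would be replaced by ray tracing through the measured sound-speed profile.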
1:25
1pAB2. Arrayvolution—An overview of array systems to study bats and toothed whales. Jens C. Koblitz (German Oceanographic
Museum, Katharinenberg 14-20, Stralsund 18439, Germany, Jens.Koblitz@meeresmuseum.de), Magnus Wahlberg (Dept. of Biology,
RMIT Univ., Odense, Denmark), Peter Stilz (Freelance Biologist, Hechingen, Germany), Jamie MacAulay (Sea Mammal Res. Unit,
Univ. of St Andrews, St. Andrews, United Kingdom), Simone Götze, Anna-Maria Seibert (Animal Physiol., Inst. for Neurobiology,
Univ. of Tübingen, Tübingen, Germany), Kristin Laidre (Polar Sci. Ctr., Appl. Phys. Lab, Univ. of Washington, Seattle, WA), Hans-Ulrich Schnitzler (Animal Physiol., Inst. for Neurobiology, Univ. of Tübingen, Tübingen, Germany), and Harald Benke (German Oceanographic Museum, Stralsund, Germany)
Some echolocation signal parameters can be studied using a single receiver. However, studying parameters such as source level,
directionality, and direction of signal emission require the use of multi-receiver arrays. Acoustic localization allows for determination of
the position of echolocators at the time of signal emission, and when multiple animals are present, calls can be assigned to individuals
based on their location. This combination makes large multi-receiver arrays a powerful tool. Here we present an overview of different
array configurations used to study both toothed whales and bats, using a suite of systems ranging from semi-3D minimum receiver number arrays (3D-MINNAs) and linear 2D over-determined arrays (2D-ODAs) to 3D over-determined arrays (3D-ODAs). We discuss approaches to process and summarize the usually large amounts of data. In some studies, the absolute position of an echolocator,
and not only its position relative to the array, is crucial. Combining acoustic localizations from a source with geo-referenced receivers allows for
determining geo-referenced movements of an echolocator. Combining these animal tracks with other geo-referenced data such as hydrographic parameters will allow new insights into habitat use.
1:45
1pAB3. Tracking Cuvier’s beaked whales using small aperture arrays. Martin Gassmann, Sean M. Wiggins, and John Hildebrand
(Scripps Inst. of Oceanogr., Univ. of California San Diego, 9152 Regents Rd., Apt. L, La Jolla, CA 92037, mgassmann@ucsd.edu)
Cuvier’s beaked whales are deep-diving animals that produce strongly directional sounds at high frequencies (>30 kHz), at which
attenuation due to absorption and scattering is high (>8 dB/km). This makes it difficult to track beaked whales in three dimensions with
standard large-aperture hydrophone arrays. By embedding two volumetric small-aperture (~1 m element spacing) arrays into a large-aperture (~1 km element spacing) array of five nodes, individuals and even groups of Cuvier’s beaked whales were tracked in three dimensions continuously for up to one hour within an area of 10 km2 in the Southern California Bight. This passive acoustic tracking technique
provides a tool to study the characteristics of beaked whale echolocation and their behavior during deep dives.
2:05
1pAB4. Using ocean bottom seismometer networks to better understand fin whale distributions at different spatial scales.
Michelle Weirathmueller, William SD Wilcock, and Dax C. Soule (Univ. of Washington, 1503 NE Boat St., Seattle, WA 98105,
michw@uw.edu)
Ocean bottom seismometers (OBSs) are designed to monitor ground motion caused by earthquakes, but they also record low frequency vocalizations of fin and blue whales. Seismic networks used for opportunistic whale datasets are rarely optimized for acoustic
localization of marine mammals. We demonstrate the use of OBSs for studying fin whales using two different networks. The first example is a small, closely spaced network of 8 OBSs deployed on the Juan de Fuca Ridge from 2003 to 2006. An automated method for
identifying arrival times and locating fin whale calls using a grid search was applied to obtain 154 individual fin whale tracks over one
year, revealing information on swimming patterns and spatial distribution in the vicinity of a mid ocean ridge. The second example is a
network with widely spaced OBSs, such that a given call can only be detected on one instrument. The Cascadia Initiative Experiment is
a sparse array of 70 OBSs covering the Juan de Fuca Plate from 2011 to 2015. Localization methods based on differential arrival times
are not possible but techniques to locate the range and bearing to fin whales with a single OBS can be applied to constrain larger scale
spatial distributions by comparing call densities in different regions.
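A minimal version of the kind of grid-search localization mentioned above might look like the following sketch. The sensor layout, grid, and names are invented for illustration; this is not the authors’ pipeline, and it uses arrival-time differences rather than their specific detection front end.

```python
# Toy 2-D TDOA grid search: find the grid point whose predicted
# time-differences of arrival best match the measured ones.
import numpy as np

C = 1500.0  # assumed sound speed, m/s

def tdoas(src, sensors):
    """Arrival-time differences relative to the first sensor."""
    t = np.hypot(*(np.asarray(sensors) - src).T) / C
    return t[1:] - t[0]

def grid_locate(measured, sensors, xs, ys):
    """Exhaustive search over candidate (x, y) source positions."""
    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            err = np.sum((tdoas(np.array([x, y]), sensors) - measured) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

sensors = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0)]
obs = tdoas(np.array([300.0, 700.0]), sensors)  # noise-free synthetic call
xy = grid_locate(obs, sensors, np.arange(0, 1001, 100.0), np.arange(0, 1001, 100.0))
# xy recovers (300.0, 700.0) on this noise-free grid
```

In practice the residual surface would be evaluated with measured, noisy arrival times, and the grid spacing traded off against computational cost.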
2:25
1pAB5. Baleen whale localization using hydrophone streamers during seismic reflection surveys. Shima H. Abadi (Lamont–Doherty Earth Observatory, Columbia Univ., 122 Marine Sci. Bldg., University of Washington 1501 NE Boat St., Seattle, Washington
98195, shimah@ldeo.columbia.edu), Maya Tolstoy (Lamont–Doherty Earth Observatory, Columbia Univ., Palisades, NY), William S.
D. Wilcock (School of Oceanogr., Univ. of Washington, Seattle, WA), Timothy J. Crone, and Suzanne M. Carbotte (Lamont–Doherty
Earth Observatory, Columbia Univ., Palisades, NY)
Seismic reflection surveys use acoustic energy to image the structure beneath the seafloor, but concern has been raised about their
potential impact on marine animals. Most of the energy from seismic surveys is low frequency, so the concern about their impact is
focused on baleen whales, which communicate in the same frequency range. To mitigate this impact, safety radii are established based on the criteria defined by the National Marine Fisheries Service. Marine mammal observers use visual and acoustic techniques to monitor safety radii during each experiment. However, additional acoustic monitoring, in particular, locating marine mammals,
could demonstrate the effectiveness of the observations, and help us understand animal responses to seismic experiments. A novel sound
source localization technique using a seismic streamer has been developed. Data from seismic reflection surveys conducted with the R/V
Langseth are being analyzed with this method to locate baleen whales and verify the accuracy of visual detections during experiments.
The streamer is 8 km long with 636 hydrophones sampled at 500 Hz. The work focuses on time intervals when only a mitigation gun is
firing because of marine mammal sightings. [Sponsored by NSF.]
2:45
1pAB6. Faster than real-time automated acoustic localization and call association for humpback whales on the Navy’s Pacific
Missile Range Facility. Tyler A. Helble (SSC-PAC, 2622 Lincoln Ave., San Diego, CA 92104, tyler.helble@gmail.com), Glenn Ierley,
Gerald D’Spain (Scripps Inst. of Oceanogr., San Diego, CA), and Stephen Martin (SSC-PAC, San Diego, CA)
Optimal time difference of arrival (TDOA) methods for acoustically localizing multiple marine mammals have been applied to the
data from the Navy’s Pacific Missile Range Facility in order to localize and track humpback whales. Modifications to established methods were necessary in order to simultaneously track multiple animals on the range without the need for post-processing and in a fully
automated way, while minimizing the number of incorrect localizations. The resulting algorithms were run with no human intervention
at computational speeds faster than the data recording speed on over 40 days of acoustic recordings from the range, spanning several
years and multiple seasons. Spatial localizations based on correlating sequences of units originating from within the range produce estimates having a standard deviation typically 10 m or less (due primarily to TDOA measurement errors), and a bias of 20 m or less (due
to sound speed mismatch). Acoustic modeling and Monte Carlo simulations play a crucial role in minimizing both the variance and bias
of TDOA localization methods. These modeling and simulation techniques will be discussed for optimizing array design, and for maximizing the quality of localizations from existing data sets.
3:05
1pAB7. Applications of an adaptive back-propagation method for passive acoustic localizations of marine mammal sounds. Ying-Tsong Lin,
Arthur E. Newhall, and James F. Lynch (Appl. Ocean Phys. and Eng.,
Woods Hole Oceanographic Inst., Bigelow 213, MS#11, WHOI, Woods
Hole, MA 02543, ytlin@whoi.edu)
An adaptive back-propagation localization method utilizing the dispersion relation of the acoustic modes of low-frequency sound signals is
reviewed in this talk. This method employs an adaptive array processing
technique (the maximum a posteriori mode filter) to extract the acoustic
modes of sound signals, and it is capable of separating signals from noisy
data. The concept of the localization algorithm is to back-propagate modes
to a location where the modes align with each other. Gauss-Markov inverse
theory is applied to make the normal mode back-propagator adaptive to the
signal-to-noise ratio (SNR). When the SNR is high, the localization procedure will push the algorithm to achieve high resolution. On the other hand,
when the SNR is low, the procedure will try to retain its robustness and
reduce the noise effects. Examples will be shown in the talk to demonstrate
the localization performance with comparisons to other methods. Applications to baleen whale sounds collected in Cape Cod Bay, Massachusetts,
will also be presented. Lastly, population density estimation using this passive acoustic localization method will be discussed.
3:20–3:45 Break
3:45
1pAB8. Tracking porpoise underwater movements in tidal rapids using
drifting hydrophone arrays. Jamie D. Macaulay, Doug Gillespie, Simon
Northridge, and Jonathan Gordon (SMRU, Univ. of St Andrews, 15 Crichton St., Anstruther, Fife KY103DE, United Kingdom, jdjm@st-andrews.ac.
uk)
The growing interest in generating electrical power from tidal currents
using tidal turbine generators raises a number of environmental concerns,
including the risk that cetaceans might be injured or killed through collision
with rotating turbine blades. To understand this risk we need better information on how cetaceans use tidal rapid habitats and in particular their underwater movements and dive behavior. Focusing on harbor porpoises, a
European protected species, we have developed an approach which uses
time of arrival differences of narrow band high frequency (NBHF) clicks
detected on large aperture hydrophone arrays drifting in tidal rapids, to
determine dive tracks of porpoises underwater. Probabilistic localization
algorithms have been developed to filter echoes and provide accurate 2D or
geo-referenced 3D locations. Calibration trials have been carried out that
show that the system can provide depth and location data with submeter
errors. Data collected over three seasons in tidal races around Scotland has
provided new insights into how harbor porpoises are using these unique habitats, information vital for assessing the risk tidal turbines may pose.
4:00
1pAB9. Using a coherent hydrophone array for observing sperm whale
range, classification, and shallow-water dive profiles. Duong D. Tran,
Wei Huang, Alexander C. Bohn, Delin Wang (Elec. and Comput. Eng.,
Northeastern Univ., 006 Hayden Hall, 370 Huntington Ave., Boston, MA
02115, wang.del@husky.neu.edu), Zheng Gong, Nicholas C. Makris (Mech.
Eng., Massachusetts Inst. of Technol., Cambridge, MA), and Purnima Ratilal (Elec. and Comput. Eng., Northeastern Univ., Boston, MA)
Sperm whales in the New England continental shelf and slope were passively localized, in both range and bearing, and classified using a single
low-frequency (<2500 Hz), densely sampled, towed horizontal coherent
hydrophone array system. Whale bearings were estimated using time-
domain beamforming that provided high coherent array gain in sperm whale
click signal-to-noise ratio. Whale ranges from the receiver array center were
estimated using the moving array triangulation technique from a sequence
of whale bearing measurements. Multiple concurrently vocalizing sperm
whales, in the far-field of the horizontal receiver array, were distinguished
and classified based on their horizontal spatial locations and the inter-pulse
intervals of their vocalized click signals. The dive profile was estimated for
a sperm whale in the shallow waters of the Gulf of Maine with 160 m water-column depth located close to the array’s near-field where depth estimation
was feasible by employing time difference of arrival of the direct and multiply reflected click signals received on the horizontal array. By accounting
for transmission loss modeled using an ocean waveguide-acoustic propagation model, the sperm whale detection range was found to exceed 60 km in
low to moderate sea state conditions after coherent array processing.
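Moving array triangulation, crossing a sequence of bearings measured from successive array positions, reduces to a least-squares line intersection. The track and source position below are invented for illustration:

```python
import numpy as np

def triangulate(positions, bearings_deg):
    """Least-squares intersection of bearing lines from a moving array.

    positions: (N, 2) array-center coordinates [m]; bearings_deg: (N,)
    bearings to the source, measured counterclockwise from the +x axis.
    Each bearing defines a line; solve for the point minimizing the
    summed squared perpendicular distance to all lines.
    """
    th = np.radians(bearings_deg)
    normals = np.stack([-np.sin(th), np.cos(th)], axis=1)  # unit normals
    rhs = np.sum(normals * positions, axis=1)              # n_i . p_i
    est, *_ = np.linalg.lstsq(normals, rhs, rcond=None)
    return est

source = np.array([4000.0, 2000.0])                  # hypothetical whale position
track = np.array([[0.0, 0.0], [500.0, 50.0], [1000.0, 100.0]])  # array track
d = source - track
bearings = np.degrees(np.arctan2(d[:, 1], d[:, 0]))  # noise-free bearings
est = triangulate(track, bearings)
```

With noisy bearings the same least-squares solve returns the best-fit crossing point, and its residuals give a range-uncertainty estimate.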
4:15
1pAB10. Testing the beam focusing hypothesis in a false killer whale
using hydrophone arrays. Laura N. Kloepper (Dept. of Neurosci., Brown
Univ., 185 Meeting St. Box GL-N, Providence, RI 02912, laura_kloepper@
brown.edu), Paul E. Nachtigall, Adam B. Smith (Zoology, Univ. of Hawaii,
Honolulu, HI), John R. Buck (Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth, Dartmouth, MA), and Jason E. Gaudette (Neurosci., Brown
Univ., Providence, RI)
The odontocete sound production system is complex and composed of
tissues, air sacs, and a fatty melon. Previous studies suggested that the emitted sonar beam might be actively focused, narrowing depending on target
distance. In this study, we further tested this beam focusing hypothesis in a
false killer whale (Pseudorca crassidens) in a laboratory setting. Using three
linear arrays, we recorded the same emitted click at 2, 4, and 7 m distance
while the animal performed a target detection task with the target distance
varying between 2, 4, and 7 m. For each click, we calculated the beamwidth,
intensity, center frequency, and bandwidth as recorded on each array. As the
distance from the whale to the array increased, the received click intensity
was higher than predicted by spreading loss. Moreover, the beamwidth varied with range as predicted by the focusing model and contrary to a piston
model or spherical spreading. These results support the hypothesis that the
false killer whale adaptively focuses its sonar beam according to target
range. [Work supported by ONR and NSF.]
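For context, the unfocused piston model rejected above ties beamwidth to aperture and frequency alone, independent of target range. A numpy-only sketch of the far-field -3 dB beamwidth of a circular piston, with an illustrative frequency and radius rather than the study's parameters:

```python
import numpy as np

def bessel_j1(x):
    """Bessel function J1 via its integral representation (numpy only)."""
    tau = np.linspace(0.0, np.pi, 2001)
    vals = np.cos(tau[None, :] - np.outer(x, np.sin(tau)))
    # trapezoidal rule along tau
    return ((vals[:, :-1] + vals[:, 1:]).sum(axis=1) * (tau[1] - tau[0]) / 2) / np.pi

def piston_beamwidth_deg(freq_hz, radius_m, c=1500.0):
    """Full -3 dB beamwidth (degrees) of an unfocused circular piston, far field."""
    k = 2.0 * np.pi * freq_hz / c
    theta = np.radians(np.linspace(1e-4, 30.0, 4000))
    x = k * radius_m * np.sin(theta)
    directivity = np.abs(2.0 * bessel_j1(x) / x)     # 2*J1(ka*sin)/(ka*sin)
    half_angle = theta[np.argmax(directivity < 10 ** (-3 / 20))]
    return 2.0 * np.degrees(half_angle)

bw = piston_beamwidth_deg(100e3, 0.05)   # 100 kHz, 5-cm radius: roughly 9 degrees
```

Under this model the beamwidth cannot vary with target distance, which is why range-dependent beamwidths argue for active focusing.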
4:30
1pAB11. Sei whale localization and tracking using a moored, combined
horizontal and vertical line array near the New Jersey continental shelf.
Arthur E. Newhall, Ying-Tsong Lin, James F. Lynch (Appl. Ocean Phys.
and Eng., Woods Hole Oceanographic Inst., 210 Bigelow Lab. MS11,
Woods Hole, MA 02543, anewhall@whoi.edu), and Mark F. Baumgartner
(Biology, Woods Hole Oceanographic Inst., Woods Hole, MA)
In 2006, a multidisciplinary experiment was conducted in the Mid-Atlantic continental shelf off the New Jersey coast. During a 2 day period in
mid-September 2006, more than 200 unconfirmed but identifiable sei
whale (Balaenoptera borealis) calls were collected on a moored, combined
horizontal and vertical line hydrophone array. Sei whale movements were
tracked over long distances (up to tens of kilometers) using a normal mode
back propagation method. This approach uses low-frequency, broadband
passive sei whale call receptions from a single-station, two-dimensional
hydrophone array to perform long distance localization and tracking by
exploiting the dispersive nature of propagating acoustic modes in a shallow
water environment. Source depth information and the source signal can also
be determined from the localization application. This passive whale tracking, combined with the intensive oceanography measurements performed
during the experiment, was also used to examine sei whale movements in
relation to oceanographic features observed in this region.
Contributed Papers
4:45
1pAB12. Obtaining underwater acoustic impulse responses via blind
channel estimation. Brendan P. Rideout, Eva-Marie Nosal (Dept. of Ocean
and Resources Eng., Univ. of Hawaii at Manoa, 2540 Dole St., Holmes Hall
402, Honolulu, HI 96822, bprideou@hawaii.edu), and Anders Høst-Madsen
(Dept. of Elec. Eng., Univ. of Hawaii at Manoa, Honolulu, HI)
Blind channel estimation is the process of obtaining the impulse
responses between a source and multiple (arbitrarily placed) receivers without prior knowledge about the source characteristics or the environment.
This approach could simplify localization of non-impulsive submerged
sound sources (e.g., pinnipeds or cetaceans); the process of picking arrivals
(direct and reflected) could be carried out on the estimated impulse
responses rather than on the recorded waveforms, thus facilitating the use of
time of arrival-based localization approaches. Blind channel estimation
could also be useful in estimating the original source signal of a vocalizing
animal through deconvolution of the estimated channel impulse responses
and the recorded waveforms. In this paper, simulation and controlled pool
studies will be used to explore requirements on source and environment
characteristics and to quantify blind channel estimation performance for
underwater passive acoustic applications.
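One standard formulation of this problem is the cross-relation method, shown here for two receivers as an illustrative stand-in for the authors' approach: since x1 = s*h1 and x2 = s*h2, we have x1*h2 = x2*h1, and the null vector of the stacked convolution matrices recovers both impulse responses up to a common scale:

```python
import numpy as np

def conv_matrix(x, L):
    """Toeplitz matrix so that conv_matrix(x, L) @ h == np.convolve(x, h)."""
    N = len(x)
    M = np.zeros((N + L - 1, L))
    for j in range(L):
        M[j:j + N, j] = x
    return M

def blind_two_channel(x1, x2, L):
    """Cross-relation estimate of two length-L impulse responses.

    Uses x1*h2 - x2*h1 = 0; returns (h1, h2) up to a common scale.
    """
    A = np.hstack([conv_matrix(x2, L), -conv_matrix(x1, L)])
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    h = Vt[-1]                      # right singular vector of smallest sigma
    return h[:L], h[L:]

rng = np.random.default_rng(1)
s = rng.normal(size=400)                     # unknown source (e.g., a whale call)
h1 = np.array([1.0, 0.0, 0.5, 0.0, 0.2])     # direct path plus echoes
h2 = np.array([0.8, 0.3, 0.0, 0.4, 0.0])
x1, x2 = np.convolve(s, h1), np.convolve(s, h2)
e1, e2 = blind_two_channel(x1, x2, len(h1))
scale = (h1 @ e1) / (e1 @ e1)                # resolve the scale/sign ambiguity
aligned = scale * np.concatenate([e1, e2])
```

The arrival-picking step described above would then operate on e1 and e2 instead of the raw waveforms.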
MONDAY AFTERNOON, 27 OCTOBER 2014
INDIANA A/B, 1:15 P.M. TO 5:30 P.M.
Session 1pBA
Biomedical Acoustics: Medical Ultrasound
Robert McGough, Chair
Department of Electrical and Computer Engineering, Michigan State University, 2120 Engineering Building,
East Lansing, MI 48824
Contributed Papers
1:15
1pBA1. Investigation of fabricated 1 MHz lithium niobate transfer
standard ultrasonic transducer. Patchariya Petchpong (Acoust. and Vib.
Dept., National Inst. Metrology of Thailand, 75/7 Rama VI Rd., Thungphayathai, Rajthevi, Bangkok 10400, Thailand, patchariya@nimt.or.th) and
Yong Tae Kim (Div. of Convergence Technol., Korea Res. Inst. of Standards and Sci., Daejeon, South Korea)
This paper focuses on the fabrication of a single-element transducer made from lithium niobate (LiNbO3) operating at 1 MHz. The air-backed LiNbO3 transducer is developed for use as a transfer standard ultrasonic transducer to calibrate ultrasound power meters, which measure the total acoustic power radiated by medical equipment. To verify the accuracy of the acoustic power, the primary standard calibration measurement (radiation force balance, RFB) based on IEC 61161 is used to investigate the fabricated transducer. The geometry of the piezoelectric active element was first designed using the Krimholtz, Leedom, and Matthaei (KLM) simulation technique. The electrical impedance of the LiNbO3 element was measured before and after assembly into the transducer, and the results were compared. The electrical impedance results show that the operating frequency extends from 1 MHz to 10 MHz through the harmonics. The total emitted power and radiation conductance of the fabricated transducer were also evaluated. The measured acoustic power responds up to 2.1 W, which can be assessed within 6% expanded uncertainty (k = 2).
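For context, the radiation force balance named above converts a measured force (in practice, a balance mass change) into acoustic power; for a totally absorbing target at normal incidence, IEC 61161 uses P = F·c = g·Δm·c. A minimal sketch, with the water sound speed assumed to be 1486 m/s (a typical near-room-temperature value, not stated in the abstract):

```python
def rfb_power_watts(delta_mass_kg, c=1486.0, g=9.80665):
    """Acoustic power from a radiation force balance reading.

    For a totally absorbing target at normal incidence, the radiation
    force F equals P/c, so P = g * delta_mass * c.
    """
    return g * delta_mass_kg * c

# A 2.1 W beam (the maximum power reported above) corresponds to a
# balance deflection of roughly 144 mg:
p = rfb_power_watts(144.1e-6)
```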
1:30
1pBA2. Sustained acoustic medicine for stimulation of wound healing:
A translational research report. Matthew D. Langer and George K. Lewis
(ZetrOZ, 56 Quarry Rd., Trumbull, CT 06611, mlanger@zetroz.com)
The healing of both acute and chronic wounds is a challenging clinical
issue affecting more than 6.5 million Americans. The regeneration phase of
wound healing is critical to restoration of function, but is often prolonged
because of the adverse environment for cell growth. Therapeutic ultrasound
increases nutrient absorption by cells, accelerates cellular metabolism, and
stimulates production of ECM proteins, which all increase the rate of wound
healing. To test the effect of long duration ultrasound exposure, an initial
study of wound healing was conducted in a rat model, with wounds sutured
to prevent closure via contraction. In this study, a 6 mm wound healed in
9 ± 2 days when exposed to 6 hours of ultrasound therapy, and in 15 ± 1 days
with a placebo device (p<0.01). Following IRB approval of a similar protocol for use in humans, a case study was performed on the wound closure of
a chronic wound. Four weeks of daily LITUS therapy reduced the wound
size by 90% from its size after 21 days of treatment with standard of care.
These results demonstrate the efficacy of long duration LITUS for healing
wounds in an animal model and an initial case of healing in a human
subject.
1:45
1pBA3. Long duration ultrasound facilitates delivery of a therapeutic
agent. Kelly Stratton, Rebecca Taggart, and George K. Lewis (ZetrOZ, 56
Quarry Rd., Trumbull, CT 06611, george@zetroz.com)
The ability for ultrasound to enhance drug delivery through the skin has
been established in an animal model. This research tested the delivery of a
therapeutic agent into human skin using sustained ultrasonic application
over multiple hours. An IRB-approved pilot study was conducted using hyaluronan, a polymer found in the skin and associated with hydration. To
assess the effectiveness of the delivery, a standard protocol was applied to
measure moisture of the volar forearm with a corneometer. Fifteen subjects
applied the hyaluronan to their forearms daily. One location was then treated
with a multi-hour ultrasonic treatment, and the other was not. Baseline skin
hydration measurements were taken for one week, followed by daily treatments with moisturizer and corneometer measurements twice per week for
three weeks. Subjects experienced double the increase in sustained moisture
when ultrasound was used in conjunction with a moisturizer when compared
to moisturizer alone (p<0.001) over the four weeks. This study successfully
demonstrated ultrasound treatment enhanced delivery of a therapeutic agent
into the skin.
2:00

1pBA4. Characterizing the pressure field in a modified flow cytometer quartz flow cell: A combined measurement and model approach to validate the internal pressure. Camilo Perez (BioEng. and Ctr. for Industrial and Medical Ultrasound - Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105-6698, camipiri@uw.edu), Chenghui Wang (Inst. of Acoust., College of Phys. & Information Technol., Shaanxi Normal Univ., Xi’an, Shaanxi, China), Brian MacConaghy (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Juan Tu (Key Lab. of Modern Acoust., Nanjing Univ., Nanjing, Jiangsu, China), Jarred Swalwell (Oceanogr., Univ. of Washington, Seattle, WA), and Thomas J. Matula (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA)

We incorporated an ultrasound transducer into a flow cytometer to "activate" microbubbles passing the laser interrogation zone [J. Acoust. Soc. Am. 126, 2954–2962 (2009)]. This system allows high throughput recording of the volume oscillations of microbubbles, and has led to a new bubble dynamics model that incorporates shear thinning [Phys. Med. Biol. 58, 985–998 (2013)]. Important parameters in the model include the ambient microbubble size, R0, the driving pressure, PA, and the shell parameters χ and κ, the shell elasticity and viscosity, respectively. R0 is obtained by calibrating the cytometer. Pressure calibration is difficult because the flow channel width (<200 µm) is too small to insert a hydrophone. The objective of this study was to develop a calibration method for a 20-cycle, 1 MHz transient pressure field. The pressure field propagating through the channel and into water was compared to a 3-D FEM model. After validation, the model was used to simulate the driving pressure as input for the bubble dynamics model, leaving only the variables χ and κ. This approach was used to determine the mechanical properties for different bubbles (albumin, lipid, and lysozyme shells). Excellent fits were obtained in many cases, but not all, suggesting heterogeneity in microbubble shell parameters.

2:15

1pBA5. Entropy based detection of molecularly targeted nanoparticle ultrasound contrast agents in tumors. Michael Hughes (Int. Med./Cardiology, Washington Univ. School of Medicine, 1632 Ridge Bend Dr., St. Louis, MO 63108, mshatctrain@gmail.com), John McCarthy (Dept. of Mathematics, Washington Univ., St. Louis, MO), Jon Marsh, and Samuel Wickline (Int. Med./Cardiology, Washington Univ. School of Medicine, Saint Louis, MO)

In this study, we demonstrate that the "joint entropy" of two random variables (X, Y) can be applied to markedly improve tumor conspicuity (where X = f(t) = backscattered waveform and Y = g(t) = a reference waveform; both differentiable functions). Previous studies have shown that a good initial choice of reference is a reflection of the original insonifying pulse taken from a stainless-steel reflector. Using this choice, joint entropy analysis is more sensitive to accumulation of targeted contrast agents than conventional gray-scale or signal energy analysis by roughly a factor of 2 [Hughes, M. S., et al., J. Acoust. Soc. Am., 133(1), p. 283, 2013]. We now derive an improved reference that is applied to three groups of athymic nude mice implanted with flank tumors (MDA-435, breast tumor) to identify tumor vasculature after binding perfluorocarbon nanoparticles (~250 nm) to neovascular αvβ3 integrins. Five mice received i.v. αvβ3-targeted nanoparticles, five received nontargeted nanoparticles, and five received saline at a dose of 1 ml/kg, which was allowed to circulate for up to two hours prior to imaging. Three analogous groups of nonimplanted mice were imaged in the same region following the same imaging protocol. Our results indicate an improvement in contrast by a factor of 2.5 over previously published results. Thus, judicious selection of the reference waveform is critical to improving contrast-to-noise in tumor environments when attempting to detect targeted nanostructures for molecular imaging of sparse features.

2:30

1pBA6. Effects of fluid medium flow and spatial temperature variation on acoustophoretic motion of microparticles in microfluidic channels. Zhongzheng Liu and Yong-Joe Kim (Texas A&M Univ., 3123 TAMU, College Station, TX 77843, liuzz008@tamu.edu)

Current, state-of-the-art models of the acoustophoretic forces applied to microparticles suspended in fluid media inside microfluidic channels, and of the acoustic streaming velocities inside the microfluidic channels, have mainly been derived under the assumption of "static" fluid media with uniform temperature distributions. Therefore, it has been challenging to understand the effects of "moving" fluid media and fluid medium temperature variation on acoustophoretic microparticle motion in the microfluidic channels. Here, a numerical modeling method to accurately predict the acoustophoretic motion of compressible microparticles in the microfluidic channels is presented to address the aforementioned challenge. In the proposed method, the mass, momentum, and energy conservation equations and the equation of state are decomposed by using a perturbation method into the zeroth- to second-order equations. Here, the fluid medium flow and temperature variation are considered in the zeroth-order equations, and the solutions of the zeroth-order equations (i.e., the zeroth-order fluid medium velocities and temperature distribution) are propagated into the higher-order equations, ultimately affecting the second-order acoustophoretic forces and acoustic streaming velocities. The effects of the viscous fluid medium flow and the medium temperature variation on the acoustophoretic forces and the acoustic streaming velocities were then studied by using the proposed numerical modeling method.
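The joint entropy of a backscattered waveform f(t) and a reference g(t), the quantity used in 1pBA5 above, can be estimated from a 2-D histogram. The waveforms below are synthetic stand-ins (not the study's data), used only to show that extra decorrelated structure in the echo raises the joint entropy with a fixed reference:

```python
import numpy as np

def joint_entropy_bits(f, g, bins=64):
    """Joint Shannon entropy (bits) of two sampled waveforms via a 2-D histogram."""
    counts, _, _ = np.histogram2d(f, g, bins=bins)
    prob = counts / counts.sum()
    prob = prob[prob > 0]
    return float(-np.sum(prob * np.log2(prob)))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 4096)
ref = np.sin(2 * np.pi * 5 * t)                    # stand-in reference g(t)
echo_plain = ref + 0.05 * rng.normal(size=t.size)  # mostly specular echo
echo_agent = ref + 0.50 * rng.normal(size=t.size)  # extra decorrelated scatter

h_plain = joint_entropy_bits(echo_plain, ref)
h_agent = joint_entropy_bits(echo_agent, ref)
```

With a 64 × 64 histogram, the joint entropy is bounded by 12 bits; the noisier echo spreads the joint distribution and scores higher.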
2:45
1pBA7. Thrombolytic efficacy and cavitation activity of rt-PA echogenic
liposomes versus Definity exposed to 120-kHz ultrasound. Kenneth B.
Bader, Guillaume Bouchoux, Christy K. Holland (Internal Medicine, Univ.
of Cincinnati, 231 Albert Sabin Way, CVC 3933, Cincinnati, OH 45267-0586, Kenneth.Bader@uc.edu), Tao Peng, Melvin E. Klegerman, and David
D. McPherson (Internal Medicine, Univ. of Texas Health Sci. Ctr., Houston,
TX)
Echogenic liposomes can be used as a vector for co-encapsulation of the
thrombolytic drug rt-PA and microbubbles. These agents can be acoustically
activated for localized cavitation-enhanced drug delivery. The objective of
our study was to characterize thrombolytic efficacy and sustained cavitation
nucleation and activity from rt-PA-loaded echogenic liposomes (t-ELIP). A
spectrophotometric method was used to determine the enzymatic activity of
rt-PA released from t-ELIP and compared to unencapsulated rt-PA. The
thrombolytic efficacy of t-ELIP, rt-PA alone, or rt-PA and the commercial
contrast agent Definity® exposed to sub-megahertz ultrasound was determined in an in vitro flow model. Ultraharmonic (UH) emissions from stable cavitation were recorded during insonation. Both UH emissions and
thrombolytic efficacy were significantly greater for rt-PA and Definity®
over either rt-PA alone or t-ELIP with equivalent rt-PA loading. Furthermore, the enzymatic activity of t-ELIP was significantly lower than that of free rt-PA. When the dosage of t-ELIP was adjusted to compensate for the lack of
enzymatic activity, similar thrombolytic efficacy was found for t-ELIP and
Definity® and rt-PA. However, sustained ultraharmonic emissions were not
observed for t-ELIP in the flow phantom.
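Ultraharmonic emissions, the stable-cavitation signature monitored in this study, sit at odd half-multiples of the drive frequency (e.g., 1.5 f0). A hedged sketch of how such a band might be scored from a recorded spectrum, using synthetic signals with invented amplitudes rather than the study's recordings:

```python
import numpy as np

def band_level(x, fs, f_center, half_bw):
    """Mean squared spectral magnitude within a band around f_center [Hz]."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    sel = np.abs(freqs - f_center) <= half_bw
    return float(np.mean(spec[sel] ** 2))

fs, f0 = 10e6, 120e3                     # 120-kHz insonation, as in the abstract
t = np.arange(int(0.005 * fs)) / fs      # 5-ms record
rng = np.random.default_rng(3)
noise = 0.01 * rng.normal(size=t.size)
quiet = np.sin(2 * np.pi * f0 * t) + noise
cavitating = quiet + 0.1 * np.sin(2 * np.pi * 1.5 * f0 * t)  # UH at 1.5*f0

uh_quiet = band_level(quiet, fs, 1.5 * f0, 5e3)
uh_cav = band_level(cavitating, fs, 1.5 * f0, 5e3)
```

A large rise of the 1.5 f0 band over its baseline flags stable cavitation; in practice one would track this level through the insonation.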
3:00
1pBA8. Temporal stability evaluation of fluorescein-nanoparticles
loaded on albumin-coated microbubbles. Marianne Gauthier (Dept. of
Elec. and Comput. Eng., BioAcoust. Res. Lab., Univ. of Illinois at UrbanaChampaign, 4223 Beckman Inst.,405 N. Mathews, Urbana, IL 61801,
frenchmg@illinois.edu), Jamie R. Kelly (Dept. of BioEng., BioAcoust. Res.
Lab., Univ. of Illinois at Urbana-Champaign, Urbana, IL), and William D.
O’Brien (Dept. of Elec. and Comput. Eng., BioAcoust. Res. Lab., Univ. of
Illinois at Urbana-Champaign, Urbana, IL)
Purpose: This study aims to evaluate the temporal stability of newly
designed FITC-nanoparticles (NPs) loaded on albumin-coated microbubbles
(MBs) to be used for future drug delivery purposes. Materials and Methods:
MBs (3.6 × 10^8 MB/mL) were obtained by sonicating 5% bovine serum
albumin and 15% dextrose solution. NPs (5 mg/mL) were produced from
fluorescein (FITC)-PLA polymers and functionalized using EDC/NHS. NPloaded MBs resulted from the covalent linking between functionalized NPs
and MBs via carbodiimide technique. Three parameters were quantitatively
monitored over a 4-week duration at 8 time points: MB diameter was determined using a circle detection routine based on the Hough transform, MB
number density was evaluated using a hemocytometer, and NP-loading yield
was assessed based on the loaded-MB fluorescence uptake. Based on the
hypotheses, analyses of variance or Kruskal-Wallis tests were run to evaluate
the stability of these physical parameters over the time of the experiment.
Results: Statistical analysis exhibited no significant differences in NPloaded MB mean sizes, number densities, and loading yields over time (p >
0.05). Conclusion: Newly designed NP-loaded MBs are stable over at least
a 4-week duration and can be used without extra precaution concerning their
temporal stability. [This work was supported by NIH R37EB002641.]
3:15–3:30 Break
3:30
1pBA9. Chronotropic effect in the rat heart caused by pulsed ultrasound.
Olivia C. Coiado and William D. O’Brien Jr. (Dept. of Elec. and Comput.
Eng., Univ. of Illinois at Urbana-Champaign, 405 N Mathews, 4223 Beckman Inst., Urbana, IL 61801, oliviacoiado@hotmail.com)
This study investigated the chronotropic effect of an increasing/decreasing sequence of pulse repetition frequencies (PRFs) via the application of 3.5-MHz pulsed ultrasound (US) on the rat heart. The
experiments were divided into three 3-month-old female rat groups (n = 4
ea): control, PRF increase and PRF decrease. Rats were exposed to transthoracic ultrasonic pulses at ~0.50% of duty factor at 2.0-MPa peak rarefactional pressure amplitude. For the PRF increase group, the PRF started
lower than that of the rat’s heart rate and was increased sequentially in 1-Hz
steps every 5 s (i.e., 4, 5, and 6 Hz) for a total duration of 15 s. For the PRF
decrease group, the PRF started greater than that of the rat’s heart rate and
was decreased sequentially in 1-Hz steps every 5 s (i.e., 6, 5, and 4 Hz). For
the PRF decrease and control groups, the ultrasound application resulted in
a significant negative chronotropic effect (~11%) after ultrasound exposure.
However, for the PRF increase group, a significant but smaller decrease in
heart rate (~3%) was observed after ultrasound exposure. The ultrasound
application thus caused a negative chronotropic effect after US exposure in both
the PRF increase and PRF decrease groups. [Support: NIH Grant R37EB002641.]
3:45
1pBA10. Ultrasonic welding in orthopedic implants. Kristi R. Korkowski
and Timothy Bigelow (Mech. Eng., Iowa State Univ., 2201 Coover Hall,
Ames, IA 50011, korkowsk@iastate.edu)
A critical event in hip replacement is the occurrence of osteolysis.
Cemented hip replacements most commonly use polymethylmethacrylate
(PMMA), not as an adhesive but rather a filler to limit micromotion and provide stability. PMMA, however, contributes to osteolysis through both a
thermal response during curing and implant wear debris. In order to mitigate
the occurrence of osteolysis, we are exploring ultrasonic welding as a means
of attachment. Weld strength was assessed using ex vivo bovine rib and femur bones. A flat end mill provided 20 site locations for insertion of an acrylonitrile butadiene styrene (ABS) pin. Each location was characterized by
topography, porosity, discoloration, and any other notable features. Each
site was welded using a Branson Ultrasonic Welder 2000iw; 20 kHz, 1100
W. Machine parameters include weld force, weld time, and hold time. The
bond strength was determined using a tensile tester. Tensile testing showed
a negative correlation between porosity and bond strength. Further evaluation and characterization of bone properties to bond strength will enable
appropriate selection of welding properties to ensure a superior bond.
4:00
1pBA11. Estimation of subsurface temperature profiles from infrared
measurements during ultrasound ablation. Tyler R. Fosnight, Fong Ming
Hooi, Sadie B. Colbert, Ryan D. Keil, and T. Douglas Mast (Biomedical
Eng., Univ. of Cincinnati, 3938 Cardiovascular Res. Ctr., 231 Albert Sabin
Way, Cincinnati, OH 45267-0586, doug.mast@uc.edu)
Measurement of in situ spatiotemporal temperature profiles would be
useful for developing and validating thermal ablation methods and therapy
monitoring approaches. Here, finite difference and analytic solutions to
Pennes’ bio-heat transfer equation were used to determine spatial correlations between temperature profiles on parallel planes. Time delays and scale
factors for correlated profiles were applied to infrared surface-temperature
measurements to estimate subsurface temperatures. To test this method,
ex vivo bovine liver tissue was sonicated by linear image-ablate arrays with
1–6 pulses of 5.0 MHz unfocused (7.5 s, 64.4–92.0 W/cm2 in situ ISPTP) or
focused (1 s, 562.7–799.6 W/cm2 in situ ISPTP, focus depth 10 mm) ultrasound. Temperature was measured on the liver surface by an infrared camera at 1 fps and extrapolated to the imaging/ablation plane, 3 mm below the
surface. Echo decorrelation maps were computed from pulse-echo signals
captured at 118 fps during 5.0 s rest periods beginning 1.1 s after each sonication pulse. Tissue samples were frozen at −80 °C, sectioned, vitally
stained, imaged, and segmented for analysis. Estimated thermal dose profiles showed correspondence with segmented tissue histology, while thresholded temperature profiles corresponded with measured echo decorrelation.
These results suggest utility of this method for thermal ablation research.
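A minimal sketch of the finite-difference side of this approach: an explicit 1-D solver for Pennes' bio-heat transfer equation. The thermal and perfusion constants below are generic soft-tissue values (assumptions, not the paper's), and the correlation/scaling step that maps surface to subsurface temperatures is not reproduced here:

```python
import numpy as np

def pennes_1d(T0, dz, dt, steps, k=0.5, rho=1050.0, c=3600.0,
              w=0.5e-3, rho_b=1050.0, c_b=3800.0, T_a=37.0):
    """Explicit finite-difference solution of the 1-D Pennes bio-heat equation:

        rho*c*dT/dt = k*d2T/dz2 - w*rho_b*c_b*(T - T_a)

    with zero-flux boundaries; w is the blood perfusion rate [1/s].
    """
    T = T0.copy()
    alpha = k / (rho * c)
    assert alpha * dt / dz**2 < 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        lap = np.empty_like(T)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        lap[0], lap[-1] = lap[1], lap[-2]           # zero-flux boundaries
        perf = -w * rho_b * c_b * (T - T_a) / (rho * c)
        T = T + dt * (alpha * lap + perf)
    return T

z = np.linspace(0.0, 0.02, 101)                         # 2 cm of tissue
T0 = 37.0 + 20.0 * np.exp(-((z - 0.01) / 0.002) ** 2)   # hot spot at 1 cm depth
T1 = pennes_1d(T0, dz=z[1] - z[0], dt=0.01, steps=500)  # 5 s of cooling
```

The hot spot diffuses and decays toward the arterial temperature, which is the behavior exploited when correlating temperature profiles on parallel planes.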
4:15
1pBA12. Temperature dependence of harmonics generated by nonlinear ultrasound beam propagation in water. Borna Maraghechi, Michael
C. Kolios, and Jahan Tavakkoli (Phys., Ryerson Univ., 350 Victoria St., Toronto, ON M5B 2K3, Canada, borna.maraghechi@ryerson.ca)
Ultrasound thermal therapy is used for noninvasive treatment of cancer.
For accurate ultrasound based temperature monitoring in thermal therapy,
the temperature dependence of acoustic parameters is required. In this study,
the temperature dependence of acoustic harmonics was investigated in
water. The pressure amplitudes of the transmitted fundamental frequency
(p1), and its harmonics (second (p2), third (p3), fourth (p4), and fifth (p5))
generated by nonlinear ultrasound propagation were measured by a calibrated hydrophone in water. The hydrophone was placed at the focal point
of a focused 5-MHz transducer (f-number 4.5) to measure the acoustic pressure. Higher harmonics were generated by transmitting a 5-MHz 15-cycle
pulse that resulted in a focal positive peak pressure of approximately 0.26
MPa in water. The water temperature was increased from 26 °C to 52 °C in
increments of 2 °C. Due to this temperature elevation, the value of p1
decreased by 9% ± 1.5% (compared to its value at 26 °C) and the values of p2,
p3, p4, and p5 increased by 5% ± 2%, 22% ± 8%, 44% ± 7%, and 55% ± 5%,
respectively. The results indicate that the nonlinear harmonics are highly
temperature dependent and that their temperature sensitivity increases with the
harmonic number. It is concluded that the nonlinear harmonics could potentially be used for ultrasound-based thermometry.
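Extracting p1 through p5 from a hydrophone trace amounts to reading the spectrum at integer multiples of the fundamental. A sketch with a synthetic 15-cycle, 5-MHz pulse; the harmonic amplitudes below are invented for illustration, not the measured values:

```python
import numpy as np

def harmonic_amplitudes(p, fs, f0, n_max=5):
    """Amplitudes of f0 and its harmonics from a sampled pressure waveform."""
    spec = np.abs(np.fft.rfft(p)) * 2.0 / len(p)      # single-sided amplitude
    freqs = np.fft.rfftfreq(len(p), 1.0 / fs)
    return np.array([spec[np.argmin(np.abs(freqs - n * f0))]
                     for n in range(1, n_max + 1)])

fs, f0 = 100e6, 5e6                       # 5-MHz fundamental, as in the abstract
n = int(round(15 * fs / f0))              # 15-cycle window puts harmonics on-bin
t = np.arange(n) / fs
amps_true = np.array([0.26, 0.05, 0.02, 0.01, 0.005])   # illustrative [MPa]
p = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t) for k, a in enumerate(amps_true))
amps = harmonic_amplitudes(p, fs, f0)
```

Choosing the record length as an integer number of cycles keeps each harmonic on an FFT bin, so no window correction is needed.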
4:30
1pBA13. Implementation of a perfectly matched layer in nonlinear continuous wave ultrasound simulations. Xiaofeng Zhao and Robert
McGough (Dept. of Elec. and Comput. Eng., Michigan State Univ., East
Lansing, MI, zhaoxia6@msu.edu)
FOCUS, the "Fast Object-Oriented C++ Ultrasound Simulator" (http://
www.egr.msu.edu/~fultras-web), simulates nonlinear ultrasound propagation by numerically evaluating the Khokhlov–Zabolotskaya–Kuznetsov
(KZK) equation. For continuous-wave excitations, KZK simulations in
FOCUS previously required that the simulations extend over large radial
distances relative to the aperture radius, which reduced the effect of reflections from the boundary on the main beam. To reduce the size of the grid
required for these calculations, a perfectly matched layer (PML) was
recently added to the KZK simulation routines in FOCUS. Simulations of
the linear pressure fields generated by a spherically focused transducer with
an aperture radius of 1.5 cm and a radius of curvature of 6 cm are evaluated
for a peak surface pressure of 0.5 MPa and a 1 MHz fundamental frequency.
4:45
1pBA14. An improved time-base transformation scheme for computing
waveform deformation during nonlinear propagation of ultrasound.
Boris de Graaff, Shreyas B. Raghunathan, and Martin D. Verweij (Acoust.
Wavefield Imaging, Delft Univ. of Technol., Lorentzweg 1, Delft 2628CJ,
Netherlands, m.d.verweij@tudelft.nl)
Nonlinear propagation plays an important role in various applications of
medical ultrasound, like higher harmonic imaging and high intensity focused
ultrasound (HIFU) treatment. Simulation of nonlinear ultrasound fields can
greatly assist in explaining experimental observations and in predicting the
performance of novel procedures and devices. Many numerical simulations
are based on the generic split-step approach, which takes the ultrasound field
at the transducer plane and propagates this forward over successive parallel
planes. Usually, the spatial steps between the planes are small and the diffraction, attenuation, and nonlinear deformation may be treated as separate
substeps. For the majority of methods, e.g., for all KZK-type methods, the
nonlinear substep relies on the implicit solution of the one-dimensional Burgers equation, which is implemented using a time-base transformation. This
generally works fine, but when the shock wave regime is approached,
reduced spatial steps are required to prevent time points from "crossing over," and
the method can become notoriously slow. This paper analyses the fundamental difficulty with the common time-base transformation and provides an alternative that does not suffer from the mentioned slowdown. Numerical
results will be shown to demonstrate that this alternative will allow much
larger spatial steps without compromising the numerical accuracy.
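The time-base transformation at issue can be sketched in a few lines: each sample keeps its amplitude but shifts in retarded time in proportion to it, and the shock regime announces itself when the deformed base stops being monotonic. The parameters below are generic water values and a made-up particle-velocity amplitude, not the paper's:

```python
import numpy as np

def nonlinear_substep(t, u, dx, beta=3.5, c0=1500.0):
    """Nonlinear substep of a split-step scheme via time-base transformation.

    Implicit (Poisson) solution of the lossless Burgers equation: every
    sample keeps its amplitude u but its retarded time shifts by
    -beta*dx*u/c0**2. Returns the deformed time base and a flag that is
    True once samples have crossed over (multivalued: shock regime).
    """
    t_def = t - beta * dx * u / c0**2
    crossed = bool(np.any(np.diff(t_def) <= 0.0))
    return t_def, crossed

fs, f0, u0 = 50e6, 1e6, 0.5             # 1-MHz tone, 0.5 m/s particle velocity
t = np.arange(int(2 * fs / f0)) / fs    # two acoustic periods
u = u0 * np.sin(2 * np.pi * f0 * t)

t_small, c_small = nonlinear_substep(t, u, dx=0.01)  # far below shock distance
t_big, c_big = nonlinear_substep(t, u, dx=0.5)       # beyond shock formation

# While the base is still monotonic, resample back onto the uniform grid
# for the next diffraction/attenuation substep:
u_next = np.interp(t, t_small, u)
```

For these values the shock formation distance is about 0.2 m, so the 0.5 m step crosses over; a KZK-type code would have to shrink dx near that point, which is the slowdown the paper addresses.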
5:00
1pBA15. An error reduction algorithm for numeric calculation of the
spatial impulse response. Nils Sponheim (Inst. of Industrial Dev., Faculty
of Technol., Art and Design, Oslo and Akershus Univ. College of Appl.
Sci., Pilestredet 35, P.O. Box 4, St. Olavs plass, Oslo NO-0130, Norway,
nils.sponheim@hioa.no)
The most frequently used method for calculation of the pulsed pressure
field of ultrasonic transducers is the spatial impulse response (SIR)
method. This paper presents a new numeric approach that reduces the
numeric error by weighting the contribution of each source element into
the SIR time array, considering the exact time of arrival of each contribution. The resolution of the time array, Δt, must be finite, which results in an
error in travel time of ±Δt/2. However, we know the exact travel time and
based on this, we can share the contribution from each source element
between the two closest time elements so that the average time corresponds to the exact travel time and thereby reduce the numeric error. This
study compares the old and the new numeric algorithm with the analytic
solution for a planar circular disk because it has a simple analytic solution.
The paper presents calculations of the SIR for selected points in space and
calculations of the RMS-error between the numeric algorithms and the
analytic solution. The proposed new numeric algorithm decreases the
numeric noise or error by a factor of 5 compared to the old numeric
algorithm.
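The two-bin sharing idea above can be sketched as follows; this is a hedged illustration of the weighting scheme, not the author's code, and the function and variable names are hypothetical.

```python
import numpy as np

def deposit(sir, arrival_times, amplitudes, dt):
    """Accumulate source-element contributions into a discrete SIR array.

    Rather than rounding each exact arrival time to the nearest bin (an
    error of up to half a bin), each contribution is split between the two
    bracketing bins so that the amplitude-weighted mean time equals the
    exact arrival time.  Assumes every arrival lands inside the array.
    """
    idx = np.floor(arrival_times / dt).astype(int)
    frac = arrival_times / dt - idx          # fractional position in the bin
    np.add.at(sir, idx, amplitudes * (1.0 - frac))
    np.add.at(sir, idx + 1, amplitudes * frac)
    return sir
```

A contribution arriving at 2.25 Δt deposits 0.75 into bin 2 and 0.25 into bin 3, so the weighted mean arrival time is exact even though each bin is not.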
5:15
1pBA16. Teaching auscultation visually with a low-cost system: Is it
feasible? Sergio L. Aguirre (Universidade Federal de Santa Maria, Rua
feasible? Sergio L. Aguirre (Universidade Federal de Santa Maria, Rua
Professor Heitor da Graça Fernandes, Avenida Roraima 1000 Centro de
Tecnologia, Santa Maria, Rio Grande do Sul 97105-170, Brazil, sergio.
aguirre@eac.ufsm.br), Ricardo Brum, Stephan Paul, Bernardo H. Murta,
and Paula P. Jardin (Universidade Federal de Santa Maria, Santa Maria,
RS, Brazil)
Cardiac auscultation can provide important information for the diagnosis of disease. The sounds produced by the cardiac system fall
within the frequency range of human hearing, but in a region of low sensitivity.
This project aims to build a low cost didactic software/hardware set for
teaching cardiac auscultation technique in Brazilian universities. The frequencies of interest to describe the human cardiac cycle were found in the
range of 20 Hz to 1 kHz which includes low frequencies where available
low-cost transducers usually have large errors. To create the system, an
optimization of the geometry of the chestpiece is being programmed with
finite element simulations; meanwhile, digital filters for specific frequencies of interest and an interface based on MATLAB are being developed.
Filters were needed for the gallops (20 to 70 Hz), heart beats (20 to
100 Hz), ejection murmurs (100 to 500 Hz), mitral stenosis (30 to 80 Hz),
and regurgitations (200 to 900 Hz). The FEM simulation of a chestpiece
demonstrates high signal levels over the desired frequency range, which
can be used with the filters to obtain specific information. Furthermore, the
ideal signal-recording equipment will be defined, implemented, and
tested.
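As an illustration of the band definitions quoted above, the following sketch designs one Butterworth band-pass filter per diagnostic band with SciPy. The sampling rate and filter order are assumptions, and this is not the MATLAB implementation described in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Band definitions quoted in the abstract (Hz).
BANDS = {
    "gallops": (20.0, 70.0),
    "heart beats": (20.0, 100.0),
    "ejection murmurs": (100.0, 500.0),
    "mitral stenosis": (30.0, 80.0),
    "regurgitations": (200.0, 900.0),
}

def make_filters(fs=4000.0, order=4):
    """One Butterworth band-pass filter (second-order sections) per band."""
    return {name: butter(order, band, btype="bandpass", fs=fs, output="sos")
            for name, band in BANDS.items()}

def split_bands(x, fs=4000.0):
    """Zero-phase filter a recorded heart-sound signal into the bands."""
    return {name: sosfiltfilt(sos, x) for name, sos in make_filters(fs).items()}
```

Second-order sections are used because the low band edges (20 Hz at a few kHz sampling rate) make transfer-function forms numerically fragile.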
168th Meeting: Acoustical Society of America
2097
1p MON. PM
Results of linear KZK simulations with and without the PML are compared
to an analytical solution of the linear KZK equation on-axis, and the results
show that simulations without the PML require a radial boundary that is at
least seven times the aperture radius, whereas the PML enables accurate
simulations for a radial boundary that is only two times the aperture radius.
[This work was supported in part by NIH Grant R01 EB012079.]
MONDAY AFTERNOON, 27 OCTOBER 2014
MARRIOTT 3/4, 12:55 P.M. TO 3:20 P.M.
Session 1pNS
Noise and Physical Acoustics: Metamaterials for Noise Control II
Olga Umnova, Cochair
University of Salford, The Crescent, Salford M5 4WT, United Kingdom
Keith Attenborough, Cochair
DDEM, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom
Chair’s Introduction—12:55
Invited Paper
1:00
1pNS1. Sound propagation in the presence of a resonant surface. Logan Schwan (Univ. of Salford, The Crescent, Salford M5 4WT,
United Kingdom, logan.schwan@gmail.com) and Olga Umnova (Acoust. Res. Ctr., Univ. of Salford, Salford, United Kingdom)
The interactions between acoustic waves and an array of resonators are studied. The resonators are arranged periodically on an impedance surface so that the scale separation between sound wavelength and the array period is achieved. An asymptotic multi-scale model
which accounts for viscous and thermal losses in the resonators is developed and is used to derive an effective surface admittance. It is
shown that the boundary conditions at the surface are substantially modified around the resonance frequency. The pressure field on the surface is nearly canceled, leading to a phase shift between the reflected and the incident waves. The array can also behave as an absorbing
layer. The predictions of the homogenized model are compared with multiple scattering theory (MST) applied to a finite size array and the
limitations of the former are identified. The influence of the surface roughness and local scattering on the reflected wave is discussed.
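The qualitative behavior described above, near-cancellation of surface pressure and a phase shift of the reflected wave around resonance, can be illustrated with the textbook plane-wave reflection coefficient for a locally reacting surface. The single-resonance admittance below is a hypothetical stand-in, not the homogenized model of the talk.

```python
import numpy as np

def reflection_coefficient(beta, theta=0.0):
    """Plane-wave reflection from a locally reacting surface.

    beta  : normalized specific surface admittance (rho*c times admittance)
    theta : angle of incidence in radians
    """
    ct = np.cos(theta)
    return (ct - beta) / (ct + beta)

def resonant_admittance(f, f0=500.0, beta_max=50.0, Q=10.0):
    """Hypothetical single-resonance admittance peaking at f0 (illustrative)."""
    return beta_max / (1.0 + 1j * Q * (f / f0 - f0 / f))
```

Near resonance the admittance is large, so R approaches -1: the reflected wave is shifted by pi and the total surface pressure (proportional to 1 + R) is nearly canceled, as in the abstract.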
Contributed Papers
1:20
1pNS2. Flexural wave induced coherent scattering in arrays of cylindrical shells in water. Alexey S. Titovich and Andrew N. Norris (Mech. and
Aerosp. Eng., Rutgers Univ., 98 Brett Rd., Piscataway, NJ 08854,
alexey17@eden.rutgers.edu)
A periodic array of elastic shells in water is a sonic crystal with local
resonances in the form of flexural vibrations. This acoustic metamaterial has
seen application in wave steering by grading the index in the array, as well as
acoustic filters manifested by Bragg scattering. The primary reason for using
shells is that they can be tuned quasi-statically to have water-like effective
acoustic properties. The issue is that the modally dense flexural resonances
can form pseudogaps in the frequency response resulting in total reflection
from the array. Furthermore, if a flexural resonance falls in the Bragg band
gap, total transmission is possible at that frequency. Although the scattered
wave due to low order flexural vibration of a thin shell is evanescent, when
several shells are closely spaced, the effect on the far-field response is dramatic. In this paper, the interaction of neighboring shells is investigated theoretically using the Love-Timoshenko shell theory and multiple scattering. A
simple model is offered to describe the interaction of modes based on the analytical work. The directionality of the lowest flexural modes is also discussed
as it can lead to phasing between neighboring shells.
1:35
1pNS3. A thin-panel underwater acoustic absorber. Ashley J. Hicks, Michael R. Haberman, and Preston S. Wilson (Mech. Eng. and Appl. Res.
Labs, Univ. of Texas at Austin, 3607 Greystone Dr., Apartment 1410, Austin, TX 78731, a.jean.hicks@utexas.edu)
We present experimental results on the acoustic behavior of thin-panel
underwater sound absorbers composed of a sub-wavelength layered
structure. The panels are formed using an inner layer of Delrin or PLA plastic with circular air-filled holes sandwiched between two rubber outer
layers. The panel structure mimics a planar encapsulated bubble screen
exactly one bubble thick, but displays performance that is significantly more
broadband than a comparable bubble screen, which is only useful near the
resonance frequency of the bubble. Initial results indicate 10 dB of insertion
loss in the frequency range 1 kHz to 5 kHz for a panel that is about 1/250th
of a wavelength in thickness at the lowest frequency. The effect of air volume fraction and the use of a 3-D printed (porous) inner layer on insertion
loss will be presented and discussed. [Work supported by ONR.]
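A quick arithmetic check of the quoted thickness, assuming a nominal sound speed of 1500 m/s in water (the abstract does not state it):

```python
# Wavelength in water at the lowest reported frequency, and the implied
# panel thickness at "about 1/250th of a wavelength".
c_water = 1500.0   # m/s, assumed nominal sound speed in water
f_low = 1000.0     # Hz, lower end of the reported 1-5 kHz band
wavelength = c_water / f_low            # 1.5 m
panel_thickness = wavelength / 250.0    # about 6 mm
```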
1:50
1pNS4. Micromechanical effective medium modeling of metamaterials
of the Willis form. Michael B. Muhlestein, Michael R. Haberman, and
Preston S. Wilson (Appl. Res. Labs. and Dept. of Mech. Eng., Univ. of
Texas at Austin, 3201 Duval Rd. #928, Austin, TX 78759, mimuhle@gmail.
com)
The unique behavior of acoustic metamaterials (AMM) results from
deeply sub-wavelength structures with hidden degrees of freedom rather
than the inherent material properties of their constituents. This distinguishes
AMM from classical composite or cellular materials and also complicates
attempts to model their overall response. This is especially true when subwavelength structures yield anisotropic effective material response, a key
feature of AMM devices designed using transformation acoustics. Further,
previous work has shown that the dynamic response of heterogeneous materials must include coupling between the overall strain and momentum fields
[Milton and Willis, Proc. R. Soc. A 463, 855–880, (2007)]. A micromechanical homogenization model of the overall Willis constitutive equations is
presented to address these difficulties. The model yields a low-volume-fraction estimate of anisotropic and frequency-dependent effective properties in the long-wavelength limit. The model employs volume averages of the
dyadic Green’s function calculating the particle displacement resulting from
a unit force source. This Green’s function is shown to be analogous to one
that determines the particle velocity in a fluid resulting from a unit dipole
moment. The predicted effective properties for isotropic materials with
spherical inclusions fall within the Hashin-Shtrikman bounds and agree with
self-consistent estimates. [Work supported by ONR.]
2:05
1pNS5. Acoustic metamaterial homogenization based on equivalent
fluid media with coupled field response. Caleb F. Sieck (Appl. Res. Labs.
and Dept. of Elec. & Comput. Eng., The Univ. of Texas at Austin, 4021
Steck Ave #115, Austin, TX 78759, cfsieck@utexas.edu), Michael R. Haberman (Appl. Res. Labs. and Dept. of Mech. Eng., The Univ. of Texas at
Austin, Austin, TX), and Andrea Alù (Dept. of Elec. & Comput. Eng., The
Univ. of Texas at Austin, Austin, TX)
Homogenization schemes for wave propagation in heterogeneous electromagnetic (EM) and elastic materials indicate that EM bianisotropy and
elastic momentum-strain and stress-velocity field coupling are required to
correctly describe the effective behavior of the medium [Alù, Phys. Rev. B,
84, 075153 (2011); Milton and Willis, Proc. R. Soc. A, 463, 855–880,
(2007)]. Further, the determination of material coupling terms in EM
resolves apparent violations of causality and passivity that are present in
earlier models [A. Alù, Phys. Rev. B, 83, 081102(R) (2011)]. These details
have not received much attention in fluid acoustics, but they are important
for a proper description of acoustic metamaterial behavior. We derive
expressions for effective properties of a heterogeneous fluid medium from
expressions for the conservation of mass, the conservation of momentum,
and the equation of state and find a physically meaningful effective material
response from first-principles. The results show inherent coupling between
the ensemble averaged volume strain-momentum and pressure-velocity
field. The approach is valid for an infinite periodic lattice of heterogeneities
and employs zero-, first-, and second-order tensorial Green’s functions to
relate point-discontinuities in compressibility and density to far field pressure and particle velocity fields. [This work was supported by the Office of
Naval Research.]
2:20
1pNS6. Nonlinear behavior of a coupled multiscale material containing
snapping acoustic metamaterial inclusions. Stephanie G. Konarski, Michael R. Haberman, and Mark F. Hamilton (Appl. Res. Labs., The Univ. of
Texas at Austin, P.O. Box 8029, Austin, TX 78713-8029, skonarski@
utexas.edu)
Snapping acoustic metamaterial (SAMM) inclusions are engineered subwavelength structures that exhibit regimes of both positive and negative
stiffness. Snapping is defined as large, rapid deformations resulting from the
application of an infinitesimal change in externally applied pressure. This
snapping leads to a large hysteretic response at the inclusion scale and is
thus of interest for enhancing absorption of energy in acoustic waves. The
research presented here models the forced dynamics of a multiscale material
consisting of SAMM inclusions embedded in a nearly incompressible viscoelastic matrix material to explore the influence of small-scale snapping on
enhanced macroscopic absorption. The microscale is characterized by a single SAMM inclusion, while the macroscale is sufficiently large to encompass a low volume fraction of non-interacting SAMM inclusions within the
nearly incompressible matrix. A model of the forced dynamical response of
this heterogeneous material is achieved by coupling the two scales in time
and space using a generalized Rayleigh-Plesset analysis, which has been
adapted from the field of bubble dynamics. A loss factor for the heterogeneous medium is examined to characterize energy dissipation due to the forced
behavior of these metamaterial inclusions. [Work supported by the ARL:UT
McKinney Fellowship in Acoustics and Office of Naval Research.]
2:35
1pNS7. Cloaking of an acoustic sensor using scattering cancelation. Matthew D. Guild (Dept. of Electronics Eng., Universitat Politecnica de Valencia, Camino de vera s/n (Edificio 7F), Valencia 46022, Spain, mdguild@
utexas.edu), Andrea Alù (Dept. of Elec. and Comput. Eng., Univ. of Texas
at Austin, Austin, TX), and Michael R. Haberman (Appl. Res. Labs. and
Dept. of Mech. Eng., Univ. of Texas at Austin, Austin, TX)
Acoustic scattering cancelation (SC) is an approach enabling the elimination of the scattered field from an object, thereby cloaking it, without
restricting the incident wave from interacting with the object. This aspect of
an SC cloak lends itself well to applications in which one wishes to extract
energy from the incident field with minimal scattering, such as for sensing
and noise control. In this work, an acoustic cloak designed based on the
scattering cancelation method, and made of two effective fluid layers, is
applied to the case of an acoustic sensor consisting of a hollow piezoelectric
shell with mechanical absorption, providing a 20–50 dB reduction in the
scattering strength. The cloak is shown to increase the range of frequencies
over which there is nearly perfect phase fidelity between the acoustic signal
and the voltage generated by the sensor, while remaining within the physical
bounds of a passive absorber. The feasibility of achieving the necessary
fluid layer properties is demonstrated using sonic crystals with the use of
readily available acoustic materials. [Work supported by the US ONR and
Spanish MINECO.]
2:50
1pNS8. Cloaking non-spherical objects and collections of objects using
the scattering cancelation method. Ashley J. Hicks (Appl. Res. Labs. and
Dept. of Mech. Eng., The Univ. of Texas at Austin, Appl. Res. Labs., 10000
Burnet Rd., Austin, TX 78758, ahicks@arlut.utexas.edu), Matthew D. Guild
(Wave Phenomena Group, Dept. of Electronics Eng., Universitat Politècnica
de València, Valencia, Spain), Michael R. Haberman (Appl. Res. Labs. and
Dept. of Mech. Eng., The Univ. of Texas at Austin, Austin, TX), Andrea
Alù (Dept. of Elec. and Comput. Eng., The Univ. of Texas at Austin, Austin,
TX), and Preston S. Wilson (Appl. Res. Labs. and Dept. of Mech. Eng., The
Univ. of Texas at Austin, Austin, TX)
Acoustic cloaks can be designed using transformation acoustics (TA) to
guide acoustic disturbances around an object. TA cloaks, however, require
the use of exotic materials such as pentamode materials [Proc. R. Soc. A.
464, pp. 2411–2434, (2008)]. Alternatively, the scattering cancelation (SC)
method allows the cloaked object to interact with the acoustic wave and can
be realized with isotropic materials [Phys. Rev. B., 86, 104302 (2012)].
Unfortunately, SC cloaking performance may be degraded if the shape of
the cloaked object diverges from the one for which the cloak was originally
designed. This study investigates the design of two-layer SC cloaks for
imperfect spherical objects. The cloaking material properties are determined
by minimizing the scattered field from a model of the imperfect object
approximated as a series of concentric shells. Predictions from this approximate analytical model are compared with three-dimensional finite element
(FE) models of the cloaked and uncloaked non-spherical shapes. Analytical
and FE results are in good agreement for ka ≤ 5, indicating that the SC
method is robust to object imperfections. Finally, FE models are used to
explore SC cloak robustness to multiple-scattering by investigating linear
arrays of cloaked objects for different incident angles. [Work supported by
ONR.]
3:05
1pNS9. Parity-time symmetric metamaterials and metasurfaces for
loss-immune and broadband acoustic wave manipulation. Romain
Fleury, Dimitrios Sounas, and Andrea Alù (ECE Dept., The Univ. of Texas
at Austin, 1 University Station C0803, Austin, TX 78712, romain.fleury@
utexas.edu)
We explore the largely uncharted scattering properties of acoustic systems that are engineered to be invariant under a special kind of space-time
symmetry, which consists of taking their mirror image and running time backwards. Known as parity-time (PT) symmetry, this special condition is shown
here to lead to acoustic metamaterials that possess a balanced distribution of
gain (amplifying) and loss (absorbing) media, which forms the basis of ideal loss compensation and, under certain conditions, unidirectional invisibility. We have
designed and built the first acoustic metamaterial with parity-time symmetric properties, obtained by pairing the acoustic equivalent of a lasing system
with a coherent perfect acoustic absorber, implemented using electro-
acoustic resonators loaded with non-Foster electrical circuits. The active
system can be engineered to be fully stable and, in principle, broadband. We
discuss the underlying physics and present the realization of a unidirectional
invisible acoustic sensor with unique sensing properties. We also discuss the
potential of PT acoustic metamaterials and metasurfaces for a variety of
metamaterial-related applications, which we obtain in a loss-immune and
broadband fashion, including perfect cloaking of sensors, planar focusing,
and unidirectional cloaking of large objects.
MONDAY AFTERNOON, 27 OCTOBER 2014
INDIANA C/D, 1:15 P.M. TO 4:45 P.M.
Session 1pPA
Physical Acoustics and Noise: Jet Noise Measurements and Analyses II
Richard L. McKinley, Cochair
Battlespace Acoustics, Air Force Research Laboratory, 2610 Seventh Street, Wright-Patterson AFB, OH 45433-7901
Kent L. Gee, Cochair
Brigham Young University, N243 ESC, Provo, UT 84602
Alan T. Wall, Cochair
Battlespace Acoustics Branch, Air Force Research Laboratory, Bldg. 441, Wright-Patterson AFB, OH 45433
Chair’s Introduction—1:15
Invited Papers
1:20
1pPA1. Considerations for array design and inverse methods for source modeling of full-scale jets. Alan T. Wall (Battlespace
Acoust. Branch, Air Force Res. Lab., Bldg. 441, Wright-Patterson AFB, OH 45433, alantwall@gmail.com), Blaine M. Harker, Trevor
A. Stout, Kent L. Gee, Tracianne B. Neilsen (Dept. of Phys. and Astronomy, Brigham Young Univ., Provo, UT), Michael M. James
(Blue Ridge Res. and Consulting, Asheville, NC) and Richard L. McKinley (Air Force Res. Lab., Boston, OH)
Microphone array-based measurements of full-scale jet noise sources necessitate the adaptation and incorporation of advanced array
processing methodologies. Arrays for full-scale jet measurements can require large apertures, high spatial sampling densities, and strategies to account for partially coherent fields. Many approaches have been taken to sufficiently capture radiated noise in past jet noise
investigations, including patch-and-scan measurements with a small dense array, one-dimensional measurements along the extent of the
jet in conjunction with an axisymmetric assumption, and full two-dimensional source coverage with a large microphone set. Various
measurement types are discussed in the context of physical jet noise field properties, such as spatial coherence, source stationarity, and frequency content.
1:40
1pPA2. Toward the development of a noise and performance tool for supersonic jet nozzles: Experimental and computational
results. Christopher J. Ruscher (Spectral Energies, LLC, 2654 Solitaire Ln. Apt. #3, Beavercreek, OH 45431, cjrusche@gmail.com),
Barry V. Kiel (RQTE, Air Force Res. Lab., Dayton, OH), Sivaram Gogineni (Spectral Energies, LLC, Dayton, OH), Andrew S. Magstadt, Matthew G. Berry, and Mark N. Glauser (Dept. of Mech. and Aerosp. Eng., Syracuse Univ., Syracuse, NY)
Modal decomposition of experimental and computational data for a range of two- and three-stream supersonic jet nozzles will be
conducted to study the links between the near-field flow features and the far-field acoustics. This is accomplished by decomposing nearfield velocity and pressure data using proper orthogonal decomposition (POD). The resultant POD modes are then used with the far-field
sound to determine a relationship between the near-field modes and portions of the far-field spectra. A model will then be constructed
for each of the fundamental modes, which can then be used to predict the entire far-field spectrum for any supersonic jet. The resultant
jet noise model will then be combined with an existing engine performance code to allow parametric studies to optimize thrust, fuel consumption, and noise reduction.
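The POD step described above can be sketched via the singular value decomposition of a snapshot matrix; this is a generic snapshot-POD illustration under assumed array shapes, not the authors' code.

```python
import numpy as np

def pod(snapshots):
    """Snapshot POD via the singular value decomposition.

    snapshots : (n_points, n_snapshots) array, one flow field per column.
    Returns orthonormal spatial modes (columns), the fraction of fluctuation
    energy captured by each mode, and the temporal coefficients.
    """
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    modes, sv, vt = np.linalg.svd(fluct, full_matrices=False)
    energy = sv**2 / np.sum(sv**2)
    coeffs = np.diag(sv) @ vt
    return modes, energy, coeffs
```

The energy fractions (squared singular values) give the usual ordering of modes, and `modes @ coeffs` reconstructs the fluctuating field exactly.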
2:00
1pPA3. Finely resolved spatial variation in F-22 spectra. Tracianne B. Neilsen, Kent L. Gee, Hsin-Ping C. Pope, Blaine Harker
(Brigham Young Univ., N311 ESC, Provo, UT 84602, tbn@byu.edu), and Michael M. James (Blue Ridge Res. and Consulting, LLC,
Asheville, NC)
Examination of the spatial variation in the spectrum from ground-based microphones near an F-22 Raptor has revealed spectral features at high engine power that are not seen at intermediate power or in laboratory-scale jet noise. At military and afterburner powers, a
double-peaked spectrum is detected around the direction of maximum radiation. In this region, there is not a continuous variation in
peak frequency with downstream distance, as seen in lab-scale studies, but a transition between the relative levels for two discrete one-third octave bands. Previous attempts to match similarity spectra for turbulent mixing noise to a few of these measurements split the difference between the two peak frequencies [Neilsen et al., J. Acoust. Soc. Am. 133, 2116–2125 (2013)]. The denser spatial resolution
afforded by examining the spectral variation on all 50 ground-based microphones, located 11.6 m to the sideline and spanning 30 m, provides the opportunity to further investigate this phenomenon and propose a more complete formulation of expected spectral shapes. Special care must be given to account for the relative amount of waveform steepening, which varies with level, distance, and angular
position. [Work supported by ONR.]
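The two discrete bands discussed above are standard one-third-octave bands; for reference, their base-2 center frequencies and edges can be generated as follows (a textbook formula, not part of the study):

```python
import numpy as np

def third_octave_bands(n_low=-10, n_high=10):
    """Base-2 one-third-octave band center frequencies and edges about 1 kHz."""
    n = np.arange(n_low, n_high + 1)
    fc = 1000.0 * 2.0 ** (n / 3.0)
    return fc, fc * 2.0 ** (-1.0 / 6.0), fc * 2.0 ** (1.0 / 6.0)
```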
2:20
1pPA4. Experimental and computational studies of noise reduction for tactical fighter aircraft. Philip Morris, Dennis K. McLaughlin, Russell Powers, Nidhi Sikarwar, and Matthew Kapusta (Aerosp. Eng., Penn State Univ., 233C Hammond Bldg., University Park, PA
16802, pjm@psu.edu)
The noise levels generated by tactical fighter aircraft can result in Noise Induced Hearing Loss for Navy personnel, particularly those
involved in carrier deck operations. Reductions in noise source levels are clearly necessary, but these must be achieved without a loss in
aircraft performance. This paper describes an innovative noise reduction technique that has been shown in laboratory scale measurements to provide significant reductions in both mixing as well as broadband shock-associated noise. The device uses the injection of relatively low pressure and low mass flow rate air into the diverging section of the military-style nozzle. This injection generates “fluidic
inserts” that change the effective nozzle area ratio and generate streamwise vorticity that breaks up the large scale turbulent structures in
the jet exhaust that are responsible for the dominant mixing noise. The paper describes noise measurements with and without forward
flight that demonstrate the noise reduction effectiveness of the inserts. The experiments are supported by computations that help to
understand the flow field generated by the inserts as well as help to optimize the distribution and strength of the flow injection.
2:40
1pPA5. Detection and analysis of shock-like waves emitted by heated supersonic jets using shadowgraph flow visualization. Nathan
E. Murray (National Ctr. for Physical Acoust., The Univ. of MS, 1 Coliseum Dr., University, MS 38677, nmurray@olemiss.edu)
Shock-like waves in the acoustic field adjacent to the shear layer formed by a supersonic, heated jet are observed using the method
of retro-reflective shadowgraphy. The two-inch-diameter jet issued from a converging–diverging nozzle at a pressure ratio of 3.92 with a
temperature ratio of 3.3. Image sets were obtained near the jet exit and in the post-potential core region. In both locations, shock-like
waves can be observed immediately adjacent to the jet shear layer. Each image is subdivided into a set of overlapping tiles. A Radon
transform is applied to the auto-correlation of each tile providing a quantitative measure of the dominant propagation direction of waves
in each sub-region. The statistical distribution of propagation angles over the image space provides a measure of the distribution of
source convection speeds and source locations in the jet shear layer. Results show general agreement with a convection speed on the
order of 70 percent of the jet velocity.
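The tile-processing chain described above (autocorrelation, then a Radon transform, then a dominant-angle estimate) can be sketched as follows. This is a simplified nearest-neighbour illustration with hypothetical function names, not the author's implementation.

```python
import numpy as np

def autocorrelation(img):
    """2-D (circular) autocorrelation via the Wiener-Khinchin theorem."""
    spec = np.abs(np.fft.fft2(img - img.mean())) ** 2
    return np.fft.fftshift(np.fft.ifft2(spec).real)

def radon_projection(img, theta):
    """Row sums after a nearest-neighbour rotation by theta (radians)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    x, y = xs - c, ys - c
    ct, st = np.cos(theta), np.sin(theta)
    xr = np.clip(np.round(ct * x - st * y + c).astype(int), 0, n - 1)
    yr = np.clip(np.round(st * x + ct * y + c).astype(int), 0, n - 1)
    return img[yr, xr].sum(axis=1)

def dominant_angle(img, angles):
    """Candidate angle whose projection of the autocorrelation varies most."""
    acf = autocorrelation(img)
    variances = [np.var(radon_projection(acf, a)) for a in angles]
    return angles[int(np.argmax(variances))]
```

When the projection direction aligns with the wave fronts in a tile, the projection oscillates most strongly, so the variance over projection bins peaks at the dominant propagation direction.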
3:00–3:20 Break
3:20
1pPA6. Where are the nonlinearities in jet noise? Charles E. Tinney (Ctr. for AeroMech. Res., The Univ. of Texas at Austin, ASE/
EM, 210 East 24th St., Austin, TX 78712, cetinney@utexas.edu) and Woutijn J. Baars (Mech. Eng., The Univ. of Melbourne, Parkville,
VIC, Australia)
For some time now it has been theorized that spatially evolving instability waves in the irrotational near-field of jet flows couple
both linearly and nonlinearly to generate far-field sound [Sandham and Salgado, Philos. Trans. R. Soc. A 366 (2008); Suponitsky, J.
Fluid Mech. 658 (2010)]. An exhaustive effort at The University of Texas at Austin was initiated in 2008 to better understand this phenomenon, which included the development of a unique analysis technique for quantifying their coherence [Baars et al., AIAA Paper
2010–1292 (2010); Baars and Tinney, Phys. Fluids 26, 055112 (2014)]. Simulated data have shown this technique to be effective,
although insurmountable failures arise when it is exercised on real laboratory measurements. The question that we seek to address is: how might jet
flows manifest nonlinearities? Both subsonic and supersonic jet flows are considered with simulated and measured data sets encompassing near-field and far-field pressure signals. The focus then turns to considering nonlinearities in the form of cumulative distortions, and
the conditions required for them to be realized in a laboratory scale facility [Baars, et al., J. Fluid Mech. 749 (2014)].
3:40
1pPA7. Characterization of supersonic jet noise and its control. Ephraim Gutmark, Dan Cuppoletti, Pablo Mora, Nicholas Heeb, and
Bhupatindra Malla (Aerosp. Eng. and Eng. Mech., Univ. of Cincinnati, 799 Rhodes Hall, Cincinnati, OH 45221, gutmarej@ucmail.uc.edu)
As supersonic aircraft and their turbojet engines become more powerful, they emit more noise. The principal physical difference
between supersonic and subsonic jets is the presence of shocks in the supersonic case. This paper
summarizes a study of noise reduction technologies applied to supersonic jets. The measurements are performed with a simulated
exhaust of a supersonic nozzle representative of supersonic aircraft. The nozzle has a design Mach number of 1.56 and is examined at
design and off-design conditions. Several components of noise are present including mixing noise, screech, broadband shock associated
noise, and crackle. Chevrons and fluidic injection by microjets and a combination of them are shown to reduce the noise generated by
the main jet. These techniques provide significant reduction in jet noise. PIV provides detailed information of the flow and brings out the
physics of the noise production and reduction process.
Contributed Papers
4:00
1pPA8. Influence of windscreen on impulsive noise measurement. Per
Rasmussen (G.R.A.S. Sound & Vib. A/S, Skovlytoften 33, Holte 2840, Denmark, pr@gras.dk)
The nearfield noise from jet engines may contain impulsive sound signals with high crest factors. Most jet engine noise measurements are performed outside in potentially windy conditions, and it may, therefore, be
necessary to use windscreens on microphones to reduce the influence of
wind induced noise on the microphone. The windscreen will, however,
influence the frequency response of the microphone especially at high frequencies. This will change both the magnitude and the phase response and,
therefore, change the measured impulse. The effect of different sizes of
windscreen is investigated, and the effect on impulsive-type signals is evaluated both in the time domain and the frequency domain.
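The effect described above can be illustrated by passing an impulse through a hypothetical first-order low-pass standing in for the windscreen and comparing crest factors; the cutoff frequency and filter model are assumptions, not a measured windscreen response.

```python
import numpy as np

def first_order_lowpass(x, fc, fs):
    """Hypothetical windscreen model: a first-order low-pass with cutoff fc."""
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.empty_like(x)
    acc = 0.0
    for n, xn in enumerate(x):
        acc = a * acc + (1.0 - a) * xn   # unity-DC-gain recursion
        y[n] = acc
    return y

def crest_factor(x):
    """Peak level divided by RMS level."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x**2))
```

High-frequency attenuation spreads the impulse in time, so the peak drops faster than the RMS and the measured crest factor falls, which is exactly why windscreen choice matters for impulsive metrics.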
4:15
1pPA9. Comparison of nonlinear, geometric, and absorptive effects in
high-amplitude jet noise propagation. Brent O. Reichman, Kent L. Gee,
Tracianne B. Neilsen, Joseph J. Thaden (Brigham Young Univ., 453 E 1980
N, #B, Provo, UT 84604, brent.reichman@byu.edu), and Michael M. James
(Blue Ridge Research and Consulting, LLC, Asheville, NC)
In recent years, understanding of nonlinearity in noise from high-performance jet aircraft has increased, with successful modeling of nonlinear
propagation in the far field. However, the importance and characteristics of
nonlinearity in the near field are still debated. An ensemble-averaged, frequency-domain version of the Burgers equation can be inspected to directly
compare the effects of nonlinearity on the sound pressure level with the
effects of atmospheric absorption and geometric spreading on a decibel
scale. This nonlinear effect is calculated using the quadspectrum of the pressure and the squared pressure waveforms. Results from applying this analysis to F-22A data at various positions in the near field reveal that in the near
field the nonlinear effects are of the same order of magnitude as geometric
spreading and that both of these effects are significantly greater than absorption in the area of maximum radiation. [Work supported by ONR and an
ORISE fellowship through AFRL.]
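The quadspectrum between the pressure and the squared pressure can be estimated as the imaginary part of their averaged cross-spectrum; the segmentation and normalization below are illustrative assumptions, not the analysis used in the paper.

```python
import numpy as np

def quadspectrum(p, fs, nfft):
    """Quadspectrum (imaginary cross-spectrum) between p and p**2.

    Averaged over non-overlapping rectangular segments of length nfft;
    the normalization here is illustrative, not a calibrated density.
    """
    nseg = len(p) // nfft
    acc = np.zeros(nfft // 2 + 1, dtype=complex)
    for k in range(nseg):
        seg = p[k * nfft:(k + 1) * nfft]
        acc += np.conj(np.fft.rfft(seg)) * np.fft.rfft(seg**2)
    cross = acc / (nseg * nfft * fs)
    return np.fft.rfftfreq(nfft, 1.0 / fs), cross.imag
```

For a pure tone the quadspectrum vanishes, while phase-coupled harmonics (the signature of cumulative nonlinear steepening) produce a nonzero quadspectrum at the coupled frequencies.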
4:30
1pPA10. Correlation lengths in deconvolved cross-beamforming measurements of military jet noise. Blaine M. Harker, Kent L. Gee, Tracianne
B. Neilsen (Dept. of Phys. and Astronomy, Brigham Young Univ., N283
ESC, Provo, UT 84602, blaineharker@byu.net), Alan T. Wall (Battlespace
Acoust. Branch, Air Force Res. Lab., Wright-Patterson Air Force Base,
OH), and Michael M. James (Blue Ridge Research and Consulting, LLC,
Asheville, NC)
Beamforming algorithms have been applied in multiple contexts in aeroacoustic applications, but difficulty arises when applying these to the partially correlated and distributed sources found in jet noise. To measure and
more accurately distinguish correlated sources, cross-beamforming methods
are employed to incorporate correlation information. Deconvolution methods such as DAMAS-C, an extension of the deconvolution approach for the
mapping of acoustic sources (DAMAS), remove array effects from cross-beamforming applications and further resolve beamforming results. While
DAMAS-C results provide insight into the correlation between sources, the extent
to which these results relate to source correlation remains to be analyzed.
Numerical simulations of sources with varying degrees of correlation are
provided to benchmark the DAMAS-C results. Finally, correlation lengths
are established for DAMAS-C results from measurements for full-scale
military jet noise sources. [Work supported by ONR.]
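Conventional frequency-domain beamforming from a cross-spectral matrix, the starting point that DAMAS-C then deconvolves, can be sketched as follows; the geometry, frequency, and function names are illustrative assumptions.

```python
import numpy as np

def steering_vectors(grid, mics, k):
    """Unit-norm free-space monopole steering vectors, one row per grid point."""
    d = np.linalg.norm(grid[:, None, :] - mics[None, :, :], axis=2)
    v = np.exp(-1j * k * d) / d
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def beamform_map(csm, steer):
    """Conventional beamformer output g^H C g at each grid point."""
    return np.real(np.einsum("gm,mn,gn->g", steer.conj(), csm, steer))
```

For a single point source the map peaks at the true source location; partially correlated, distributed jet-noise sources smear this map, which is what motivates the cross-beamforming and deconvolution steps in the abstract.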
MONDAY AFTERNOON, 27 OCTOBER 2014
MARRIOTT 1/2, 1:00 P.M. TO 5:00 P.M.
Session 1pSCa
Speech Communication and Biomedical Acoustics: Findings and Methods in Ultrasound Speech
Articulation Tracking
Keith Johnson, Cochair
Linguistics, University of California, Berkeley, 1203 Dwinelle Hall, Berkeley, CA 94720
Susan Lin, Cochair
UC Berkeley, 1203 Dwinelle Hall, UC Berkeley, Berkeley, CA 94720
Chair’s Introduction—1:00
Invited Papers
1:05
1pSCa1. Examining suprasegmental and morphological effects on constriction degree with ultrasound imaging. Lisa Davidson
(Linguist, New York Univ., 10 Washington Pl., New York, NY 10003, lisa.davidson@nyu.edu)
Two case studies of ultrasound imaging use tongue shape differences to investigate whether suprasegmental influences affect the articulatory implementation of otherwise equivalent phonemic sequences. First, we examine whether word-medial and word-final stop codas
have the same degree of constriction (e.g., "blacktop" vs. "black top"). Previous research on syllable position effects on articulatory implementation has conflated syllable position with word position, and this study investigates whether each prosodic factor has an independent
contribution. Results indicate that where consistent differences are found, they are due not to the prosodic position but to speaker-specific
implementation. Second, we examine whether morphological status influences the darkness of American English /l/ in comparing words
like "tallest" and "flawless." While the intervocalic /l/s in "tall-est" and "flaw-less" are putatively assigned the same syllabic status, the /l/ in
"tallest" corresponds to the coda /l/ of the stem "tall" whereas that of "flawless" is the onset of the affix "-less." Results indicate that /l/ is
darker—the tongue is lower and more retracted—when corresponding to the coda of the stem word. Data in both studies were analyzed
with smoothing spline ANOVA, an effective statistical technique for examining differences between whole tongue curves.
1:25
1pSCa2. Imaging dynamic lingual movements that we could previously only imagine. Amanda L. Miller (Linguist, The Ohio State
Univ., 222 Oxley Hall, 1712 Neil Ave., Columbus, OH 43210-1298, amiller@ling.osu.edu)
Pioneering lingual ultrasound studies of speech demonstrated that almost the entire tongue could be imaged (McKay 1957). Early
studies contributed to our knowledge of tongue shape and tongue bracing in vowels (Morrish et al. 1984; Stone et al. 1987). However,
until recently, lingual ultrasound studies have been limited to standard video frame rates of 30 fps, which are sufficient only for imaging
stable speech sounds such as vowels and liquids. High frame rate lingual ultrasound (>100 fps) allows us to view the production of
dynamic speech sounds, such as stop consonants, and even click consonants. The high sampling rate, which yields an image of the
tongue every 8–9 ms, improves image quality by decreasing temporal smear, allowing even tongue tip movements to be visualized to a
greater extent than was previously possible. Results from several high frame rate ultrasound studies (114 fps) of consonants that were
collected and analyzed using the CHAUSA method (Miller and Finch 2011) are presented. The studies elucidate (a) tongue dorsum and
root gestures in velar and uvular pulmonic consonants; (b) tongue coronal, dorsal, and root gestures in four contrastive click consonants;
and (c) lingual gestures in pulmonic fricatives.
1:45
1pSCa3. Ultrasound evidence for place of articulation of the mora nasal /N/ in Japanese. Ai Mizoguchi (The Graduate Ctr., City
Univ. of New York, 365 Fifth Ave., Rm. 7304, New York, NY 10016, amizoguchi@gc.cuny.edu) and Douglas H. Whalen (Haskins
Labs., New Haven, CT)
The Japanese mora nasal /N/, which occurs in syllable-final position, takes its place of articulation from the following segment if
there is one. However, the mora nasal in utterance-final position is often transcribed as velar, uvular, or even placeless. The present study
examines the tongue shapes in Japanese using ultrasound imaging to investigate whether Japanese mora nasal /N/ is placeless and to
assess whether assimilation to following segments is gradient or categorical. Preliminary results from ultrasound imaging from one
native speaker of Tokyo dialect showed three shapes for final /N/, even though the researchers could not distinguish them perceptually.
Results from assimilation contexts showed that the velar gesture for /N/ was not deleted. All gestures remained and assimilation was not
categorical, even though perceptually, it was. The velar gesture for /N/ might be expected to be deleted before an alveolar /n/ because
they are both lingual, but a blending of the two tongue gestures occurred instead. Variability in place of articulation in final position
occurred even within one speaker. Categorical assimilation was not observed in any phonological environments studied. The mora nasal
may vary across speakers, so further research is needed to determine whether it behaves similarly for more speakers.
2:05
1pSCa4. A multi-modal imaging system for simultaneous measurement of speech articulator kinematics for bedside applications
in clinical settings. David F. Conant (Neurological Surgery, UCSF, 675 Nelson Rising Ln., Rm. 635, San Francisco, CA 94143, dfconant@gmail.com), Kristofer E. Bouchard (LBNL, San Francisco, CA), Anumanchipalli K. Gopala, Ben Dichter, and Edward F. Chang
(Neurological Surgery, UCSF, San Francisco, CA)
A critical step toward a neurological understanding of speech generation is to relate neural activity to the movement of articulators.
Here, we describe a noninvasive system for simultaneously tracking the movement of the lips, jaw, tongue, and larynx for human neuroscience research carried out at the bedside. We combined three methods previously used separately: videography to track the lips and
jaw, electroglottography to monitor the larynx, and ultrasonography to track the tongue. To characterize this system, we recorded articulator positions and acoustics from six speakers during production of nine American English vowels. We describe processing methods for
the extraction of kinematic parameters from the raw signals and methods to account for artifacts across recording conditions. To understand the relationship between kinematics and acoustics, we used regularized linear regression between the vocal tract kinematics and
speech acoustics to identify which, and how many, kinematic features are required to explain both across vowel and within vowel acoustics. Furthermore, we used unsupervised matrix factorization to derive "prototypical" articulator shapes, and use them as a basis for articulator analysis. These results demonstrate a multi-modal system to non-invasively monitor speech articulators for clinical human
neuroscience applications and introduce novel analytic methods for understanding articulator kinematics.
2:25
1pSCa5. A study of tongue trajectories for English /æ/ using articulatory signals automatically extracted from lingual ultrasound
video. Jeff Mielke, Christopher Carignan, and Robin Dodsworth (English, North Carolina State Univ., 221 Tompkins Hall, Campus Box
8105, Raleigh, NC 27695-8105, ccarign@ncsu.edu)
While ultrasound imaging has made articulatory phonetics more accessible, quantitative analysis of ultrasound data often reduces
speech sounds to tongue contours traced from single video frames, disregarding the temporal aspect of speech. We propose a tracing-free method for directly converting entire ultrasound videos to phonetically interpretable articulatory signals using Principal Component
Analysis of image data (Hueber et al. 2007). Once a batch of ultrasound images (e.g., 36,000 frames from 10 min at 60 fps) has been
reduced to 20 principal components, numerous techniques are available for deriving temporally changing articulatory signals that are
both phonetically meaningful and comparable across speakers. Here we apply a regression model to find the linear combination of PCs
that is the lingual articulatory analog of the front diagonal of the acoustic vowel space (Z2-Z1). We demonstrate this technique with a
study of /æ/ tensing in 20 speakers of North American English varieties with different tensing environments (Labov 2005). Our results
show that /m n/ condition a tongue raising gesture that is aligned to the vowel nucleus, while /g/ conditions anticipatory raising toward
the velar target. /ŋ/ patterns consistently with the other velar rather than with the other nasals.
2:45–3:05 Break
3:05
1pSCa6. Combined analysis of real-time three-dimensional tongue ultrasound and digitized three-dimensional palate impressions: Methods and findings. Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., 4789 N White River Dr., Bloomington, IN
47404, slulich@indiana.edu)
Vocal tract and articulatory imaging has a long and rich history using a wide variety of techniques and equipment. This presentation
focuses on combining real-time 3D ultrasound with high-resolution 3D digital scans of palate impressions. Methods for acquiring and
analyzing these data will be presented, including efforts to accomplish 3D registration of the tongue and hard palate. Findings from an
experiment investigating inter-speaker variability in palate shape and vowel articulation will also be presented.
3:25
1pSCa7. AutoTrace: An automatic system for tracing tongue contours. Gustave V. Hahn-Powell (Linguist, Univ. of Arizona, 2850
N Alvernon Way, Apt. 17, Tucson, AZ 85712, hahnpowell@email.arizona.edu) and Diana Archangeli (Linguist, Univ. of Hong Kong,
Tucson, Arizona)
Ultrasound imaging of the tongue is used for analyzing the articulatory features of speech sounds. In order to be able to study the
movements of the tongue, the tongue surface contour has to be traced for each recorded image. In order to capture the details of the
tongue’s movement during speech, the ultrasound video is generally recorded at the highest frame rate available. Detail comes at a price.
The number of frames produced from even a single non-trivial experiment is often far too large to trace manually. The Arizona Phonological Imaging Lab (APIL) at the University of Arizona has developed a suite of tools to simplify the labeling and analysis of tongue
contours. AutoTrace is a state-of-the-art automatic method for tracing tongue contours that is robust across speakers and languages and
operates independently of frame order. The workshop will outline the software installation procedure, introduce the included tools for
selecting and preparing training data, provide instructions for automated tracing, and overview a method for measuring the network’s accuracy using the Mean Sum of Distances (MSD) metric described by Li et al. (2005).
Contributed Papers
3:45
1pSCa8. UATracker: A tool for ultrasound data management. Mohsen
Mahdavi Mazdeh and Diana B. Archangeli (Linguist, Univ. of Arizona, 3150
E Bellevue St., #16, Tucson, AZ 85716, mahdavi@email.arizona.edu)
This presentation introduces TraceTracker, a tool for efficiently managing language ultrasound data. Ultrasound imaging of the tongue is used for
analyzing the articulatory features of speech sounds. Most analyses involve
finding data points from individual images. The number of image frames
and the volume of secondary data associated with them tend to grow quickly
in speech analysis studies of this type, making it very hard to handle them
manually. TraceTracker is a data management tool for organizing, modifying, and performing advanced searches over ultrasound tongue images and
the data associated with those images. The setup operation of the program
automatically iterates through file systems and generates a comprehensive
database containing the image files and information such as the speaker, the
video each frame is extracted from, an index, how they have been traced,
etc. The program also automatically reads Praat format TextGrid files and
associates specific image frames with the corresponding words and speech
segments based on the annotations in the grids. Once the database is populated, TraceTracker can be used to tag images, generate copies, and perform
advanced search operations over the images based on the aforementioned criteria, including the specific sequence of segments in which a frame lies.
4:00
1pSCa9. Optical flow analysis for measuring tongue motion. Adriano V.
Barbosa (Electron. Eng., Federal Univ. of Minas Gerais, Belo Horizonte,
Brazil) and Eric Vatikiotis-Bateson (Linguist, Univ. Br. Columbia, 2613
West Mall, Vancouver, BC V6N2W4, Canada, evb@mail.ubc.ca)
Most attempts to measure motion of the tongue have focused on locating
the upper surface of the tongue or specific points on that surface. Recently,
we have used our software implementation of optical flow analysis, Flow Analyzer, to extract measures of tongue motion. The software allows identification of multiple regions of interest, consisting of rectangles whose
dimensions and location are user-definable. For example, a large region
encompassing the visible tongue body provides general information about
the amount and direction (2D) of motion through time; while narrow vertical rectangles can measure the time-varying changes of tongue height at various locations. We will demonstrate the utility of the software, which is
freely available upon request to the authors.
4:15
1pSCa10. An acoustic profile of the Spanish trill /r/. Ahmed Rivera-Campos
and Suzanne E. Boyce (Commun. Sci. and Disord., Univ. of Cincinnati,
3202 Eden Ave., Cincinnati, OH 45267, riveraam@mail.uc.edu)
Unlike the English rhotic, there are limited data on the acoustic profile of the Spanish trill /r/. It is well known that one key aspect of the English rhotic /ɹ/ is a lowering of the third formant (F3), but little information is available on whether the Spanish trill shares this characteristic. Although it has been reported that F3 lowering does not characterize /r/ production and that F3 values fall within ranges delimited by vowel context,
F3 values have not been analyzed for a large sample of native speakers of Spanish. The present study analyzed the F3 values of /r/ productions from 20 native speakers of Spanish from different regions of Latin America and the Caribbean. Analysis of the F3 values of /r/ provides information about the articulatory requirements for adequate /r/ production. This information will benefit professionals who serve individuals with articulatory difficulties or who are learning Spanish as a second language.
4:30
1pSCa11. Investigation of the role of the tongue root in Kazakh vowel
production using ultrasound. Jonathan N. Washington (Linguist, Indiana
Univ., Bloomington, IN 47403-2608, jonwashi@indiana.edu)
It has been argued that Kazakh primarily distinguishes its anterior
("front") vowels from its posterior ("back") vowels through retraction of the
tongue root. This analysis is at odds with the traditional assumption that the
anteriority of Kazakh vowels is contrasted by tongue body position. The
present study uses ultrasound imaging to investigate the extent to which the
position of the tongue root and the tongue body are involved in the anteriority contrast in Kazakh. Native speakers of Kazakh were recorded reading
words (in carrier sentences) containing target vowels, which were controlled
for adjacent consonants and metrical position. An audio recording was also
made of these sessions. Frames containing productions of the target vowels
were extracted from the ultrasound video and the imaged surface of the
tongue was manually traced. Tongue root and tongue body position were analyzed for each vowel, and the results will be presented together with formant
measurements from the audio recordings.
4:45
1pSCa12. Vowel production in sighted children and congenitally blind
children. Lucie Menard and Christine Turgeon (Linguist, Université du Québec
à Montréal, CP 8888, succ. Centre-Ville, Montreal, QC H3C 3P8, Canada,
menard.lucie@uqam.ca)
It is well known that vision plays an important role in speech perception.
At the production level, we have recently shown that speakers with congenital visual deprivation produce smaller displacements of the lips (a visible articulator) compared to their sighted peers [L. Menard, C. Toupin, S. Baum,
S. Drouin, J. Aubin, and M. Tiede, J. Acoust. Soc. Am. 134, 2975-2987
(2013)]. To further investigate the impact of visual experience on the articulatory gestures used to produce intelligible speech, a speech production
study was conducted with blind and sighted school-aged children. Eight
congenitally blind children (mean age: 7 years old, from 5 to 11 years)
and eight sighted children (mean age: 7 years old, from 5 to 11 years)
were recorded using a synchronous ultrasound and Optotrak imaging system
to record tongue and lip positions. Repetitions of the French vowels /i/, /a/,
and /u/ were elicited in a /bVb/ sequence in two prosodic conditions:
neutral and under contrastive focus. Tongue contours, lip positions, and
formant values were extracted. Acoustic data show that focused syllables
are less differentiated from their unfocused counterparts in blind children
than in sighted children. Trade-offs between lip and tongue positions are
examined.
MONDAY AFTERNOON, 27 OCTOBER 2014
MARRIOTT 5, 1:00 P.M. TO 5:00 P.M.
Session 1pSCb
Speech Communication: Issues in Cross Language and Dialect Perception (Poster Session)
Tessa Bent, Chair
Dept. of Speech and Hearing Sciences, Indiana Univ., Bloomington, IN 47405
All posters will be on display from 1:00 p.m. to 5:00 p.m. To allow contributors an opportunity to see other posters, contributors of odd-numbered papers will be at their posters from 1:00 p.m. to 3:00 p.m. and contributors of even-numbered papers will be at their posters
from 3:00 p.m. to 5:00 p.m.
Contributed Papers
1pSCb1. Cross-language identification of non-native lexical tone. Jennifer Alexander and Yue Wang (Dept. of Linguist, Simon Fraser Univ., 9201
Robert C Brown Hall Bldg., 8888 University Dr., Burnaby, BC V5A 1S6,
Canada, jennifer_alexander@sfu.ca)
We extend to lexical-tone systems a model of second-language perception, the Perceptual Assimilation Model (PAM) (Best & Tyler, 2007), to
examine whether native-language lexical-tone experience influences identification of novel tone. Native listeners of Cantonese, Thai, Mandarin, and
Yoruba hear six CV syllables, each produced with the three phonemic
Yoruba tones (High-level/H, Mid-level/M, Low-level/L), presented randomly three times. In a 3-AFC task, participants indicate a syllable’s tone
by selecting from a set of arrows the one that illustrates its pitch trajectory.
Accuracy scores (proportion correct) were submitted to a two-way
rANOVA with L1-Group (x4) as the between-subjects factor and Tone (x3)
as the within-subjects factor. There was no main effect of Tone or Group.
The Tone-by-Group interaction was significant (p = 0.031) but driven by
one group: Thai listeners identified H and M more accurately than L (both p
< 0.05), though L accuracy was above chance (59%; chance = 33.33%).
Tone-error patterns indicate that Thai listeners primarily confused L with M
(two-way L1-Group x Response-pattern rANOVA p < 0.05). Overall, despite their different tonal-L1 backgrounds, listeners performed comparably.
As predicted by the PAM, listeners attended to gradient phonetic detail and
acoustic cues relevant to L1 phoneme distinctions (F0 height/direction) in
order to classify non-native contrasts. [NSF grant #0965227.]
1pSCb2. Spectral and duration cues of English vowel identification for
Chinese-native listeners. Sha Tao, Lin Mi, Wenjing Wang, Qi Dong (Cognit. Neurosci. and Learning, Beijing Normal Univ., State Key Lab for Cognit. Neurosci. and Learning, Beijing Normal University, Beijing 100875,
China, taosha@bnu.edu.cn), and Chang Liu (Commun. Sci. and Disord.,
The Univ. of Texas at Austin, Austin, TX)
This study investigated how Chinese-native listeners use spectral and duration cues in English vowel identification. The first experiment examined whether Chinese-native listeners’ English vowel perception was related to their sensitivity to changes in vowel formant frequency, a critical spectral cue to vowel identification. Identification of 12 isolated American English vowels was measured for 52 Chinese college students in Beijing, and thresholds of vowel formant discrimination were also examined for these students. Results showed a significant, moderate correlation between the students’ English vowel identification and their formant discrimination thresholds: the lower a listener’s formant discrimination threshold, the better the vowel identification. The moderate size of this correlation, however, suggests that other factors also account for individual variability in English vowel identification among Chinese-native listeners. In
Experiment 2, vowel identification was measured with and without duration
cues, showing that vowel identification declined by 5.1% when the duration cue was removed. Further analysis suggested that for listeners who depended less on the duration cue, better formant discrimination thresholds were associated with higher vowel identification scores; no such correlation was found for listeners who relied heavily on duration cues.
1pSCb3. The influence of lexical status in the perception of English allophones by Korean learners. Kyung-Ho Kim and Jeong-Im Han (English,
Konkuk Univ., 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701, South
Korea, gabrieltotti88@gmail.com)
This study investigated whether allophonic contrasts in a second language (L2) require contact with the lexicon to influence perception.
Given that English medial voiceless stops occur with aspiration in stressed,
but without aspiration in unstressed syllables, Korean learners of English
were tested for aspirated and unaspirated allophones of /p/ for perceptual preference in appropriate and inappropriate stress contexts in the second syllable
of disyllabic words. The stimuli included four types of non-words and eight
pairs of real words (four pairs each for high-frequency and low-frequency
words), and participants were asked to judge the perceptual preference of
each token on a 7-point scale (1 = a bad example, 7 = a good example). The results
demonstrated that in tests with non-words, there was no significant difference
in the ratings as a function of context appropriateness (e.g., [ıp2] vs. [ıph2]),
with higher rankings for initially-stressed words. By contrast, in real words,
participants preferred the correct allophones (e.g., [kep2] vs. [keph2]
“caper”). The frequency of real words further showed a significant effect.
This finding suggests that allophony in L2 is driven by lexicality (Whalen et
al., 1997). Exemplar theory (Pierrehumbert 2001, 2002) provides a more
effective means of modeling this finding than do traditional approaches.
1pSCb4. The perception of English coda obstruents by Mandarin and
Korean second language learners. Yen-Chen Hao (Modern Foreign Lang.
and Literatures, Univ. of Tennessee, 510 14th St. #508, Knoxville, TN
37916, yenchenhao@gmail.com) and Kenneth de Jong (Linguist, Indiana
Univ., Bloomington, IN)
This study investigates the perception of English obstruents by learners
whose native language is either Mandarin, which does not permit coda obstruents, or Korean, which neutralizes laryngeal and manner contrasts into voiceless stop codas. The stimuli are native productions of eight English obstruents
/p b t d f v θ ð/ combined with the vowel /ɑ/ in different prosodic contexts.
Forty-one Mandarin and 40 Korean speakers identified the consonant from
the auditorily presented stimuli. The results show that the two groups do not
differ in their accuracy in the onset position, indicating that they are comparable in their proficiency. However, the Mandarin speakers are more accurate in
the coda position than the Koreans. When the fricatives and stops are analyzed separately, the two groups do not differ for fricatives, yet
1pSCb5. Effect of phonetic training on the perception of English consonants by Greek speakers in quiet and noise conditions. Angelos Lengeris
and Katerina Nicolaidis (Theor. and Appl. Linguist, Aristotle Univ. of Thessaloniki, School of English, Aristotle University, Thessaloniki 541 24,
Greece, lengeris@enl.auth.gr)
The present study employed high-variability phonetic training (multiple
words spoken by multiple talkers) to improve the identification of English
consonants by native speakers of Greek. The trainees completed five sessions of identification training with feedback for seven English consonants
(contrasting voiced vs. voiceless stops and alveolar vs. postalveolar fricatives), each consisting of 198 trials with a different English speaker in each
session. Another group of Greek speakers served as controls, i.e., completed
the pre/post test but received no training. Pre/post tests included English
consonant identification in quiet and noise. In the noise condition, participants identified consonants in the presence of a competing English speaker at a signal-to-noise ratio of −2 dB. The results showed that training significantly improved English consonant perception, in both quiet and noise, for the trained group but not for the control group. The results
add to the existing evidence that supports the effectiveness of the high-variability approach to second-language segmental training.
1pSCb6. Perceptual warping of phonetic space applies beyond known
phonetic categories: Evidence from the perceptual magnet effect. Bozena
Pajak (Brain & Cognit. Sci., Univ. of Rochester, 1735 N Paulina St. Apt. 509,
Chicago, Illinois 60622, bpajak@bcs.rochester.edu), Page Piccinini, and
Roger Levy (Linguist, Univ. of California, San Diego, San Diego, CA)
What is the mental representation of phonetic space? Perceptual reorganization in infancy yields a reconfigured space “warped” around native-language (L1) categories. Is this reconfiguration entirely specific to L1 category inventory? Or does it apply to a broader range of category distinctions
that are non-native, yet discriminable due to being defined by phonetic
dimensions informative in the listener’s L1 (Bohn & Best, 2012; Pajak,
2012)? Here we address this question by studying perceptual magnets,
which involve attrition of within-category distinctions and enhancement of
distinctions across category boundaries (Kuhl, 1991). We focus on segmental length, known to yield L1-specific perceptual magnets: e.g., L1-Finnish
listeners have one for [t]/[tt], but L1-Dutch listeners, who lack (exclusively)
length-based contrasts, do not (Herren & Schouten, 2008). We tested 31 L1-Korean listeners in an AX discrimination task for [n]-[nn] and [f]-[ff] continua. Korean listeners have been shown to discriminate both (Pajak, 2012),
despite only having the former set in the inventory. We found perceptual
magnets for both continua, demonstrating that perceptual warping goes
beyond the specific L1 categories: when a phonetic dimension is informative
for contrasting some L1 categories, perceptual warping applies not only to
the tokens from those categories, but also to that dimension more generally.
1pSCb7. Language mode effects on second language categorical perception. Beatriz Lopez Prego and Allard Jongman (Linguist, Univ. of Kansas,
1145 Pennsylvania St., Lawrence, KS 66044, lopezb@ku.edu)
This study investigates the perception of the /b/-/p/ voicing contrast in
English and Spanish by native English listeners, native Spanish listeners,
and highly proficient Spanish-speaking second-language (L2) learners of
English with a late onset of acquisition (mean = 10.8 years) and at least three years of residence in an English-speaking environment. Participants completed a
forced-choice identification task where they identified target syllables in a
Voice Onset Time (VOT) continuum as "pi" or "bi." They listened to 10
blocks of 19 equidistant steps ranging from +88 ms VOT to −89 ms VOT.
Between blocks, subjects read and wrote responses to language background
questions, thus actively processing the target language. Monolinguals completed the task in their native language (L1). L2 learners completed the task
once in their L1 and once in their L2, thus providing a manipulation of "language mode" (Grosjean, 2001). The results showed that L2 learners’ category boundary in English did not differ from that of monolingual English
listeners, but their category boundary in Spanish differed from that of monolingual Spanish listeners and from their own category boundary in English.
These results suggest that the language mode manipulation was successful
and that L2 learners can develop new phonetic categories, but this may have
an impact on their L1 categories.
1pSCb8. Processing of English-accented Spanish voice onset time by
Spanish speakers with low English experience. Fernando Llanos (School
of Lang. and Cultures, Purdue Univ., Stanley Coulter Hall, 640 Oval Dr.,
West Lafayette, IN 47907, fllanos@purdue.edu) and Alexander L. Francis
(Speech, Lang. & Hearing Sci., Purdue Univ., West Lafayette, IN)
Previous research (Llanos & Francis, 2014) shows that the processing of
foreign accented speech sounds can be affected by listeners’ familiarity with
the language that causes the accent. Highly familiar listeners treat foreign-accented sounds as foreign sounds, while less familiar listeners treat them as native sounds. The present study tests the hypothesis that less familiar listeners may nevertheless be able to apply foreign categorization patterns to accented words by recalibrating phonetic expectations according to acoustic information provided by immediate phonetic context. Two groups of Spanish native speakers with little English experience will identify tokens drawn from a digitally edited VOT continuum ranging from baso "glass" (-60 ms VOT) to paso "step" (60 ms VOT). Tokens are embedded in a series of Spanish words beginning with /b/ and /p/ to provide phonetic context. In the English-accented condition, context words are digitally modified to exhibit English-like VOT values for /b/ (10 ms) and /p/ (60 ms). In the Spanish condition, these tokens are edited to exhibit prototypical Spanish /b/ (-90 ms) and /p/ (10 ms) VOT values. If listeners can accommodate foreign-accented sounds according to expectations provided by immediate phonetic context, then listeners' VOT boundary in the English-accented condition should be significantly higher than in the Spanish condition.

the Mandarin speakers are more accurate than the Koreans with stops. These findings suggest that having stop codas in their L1 does not necessarily facilitate Koreans' acquisition of the L2 sounds. Despite their L1 differences, the two groups display very similar perceptual biases in their error patterns. However, not all of them can be explained by L1 transfer or universal markedness, suggesting other language-independent factors in L2 perception.

1pSCb9. Amount of exposure and its effect on perception of second language front vowels in English. Andrew Jeske (Linguist, Univ. of Pittsburgh, 3211 Brereton St., Pittsburgh, PA 15219, arjeske@gmail.com)

Experience with a second language (L2) has been shown to positively affect learners' perception of L2 sounds. However, few studies have focused on how the amount of L2 exposure in foreign language classrooms impacts perception of L2 sounds during the incipient stages of language learning in school-age children. To determine what effect, if any, the amount of L2 exposure has on perception, 64 students from a Spanish-English bilingual elementary school and 60 students from two non-bilingual elementary schools participated in an AX Categorical Discrimination task, which contained tokens of five English front vowels: /i ɪ e ɛ æ/. Results show that students from the bilingual school earned perception scores significantly higher than those earned by the students from the non-bilingual schools (p = 0.002). However, an ANOVA found no significant simple main effect of grade and no significant correlation between grade level and school type. The bilingual school students perceived all within-category word pairings (e.g., bat-bat) significantly more accurately than the non-bilingual school students, suggesting that increased, early exposure to an L2 may heighten one's ability to disregard irrelevant, interpersonal phonetic differences and lead to a within-category perceptual advantage over those with less L2 exposure early on.

1pSCb10. Does second language experience modulate perception of tones in a third language? Zhen Qin and Allard Jongman (Linguist, Univ. of Kansas, 1541 Lilac Ln., Blake Hall, Rm. 427, Lawrence, KS 66045, qinzhenquentin2@ku.edu)

Previous studies have shown that English speakers pay attention to pitch height rather than direction, whereas Mandarin speakers are more sensitive to pitch direction than height in the perception of lexical tones. The present study addresses whether a second language (L2, i.e., Mandarin) overrides the influence of the native language (L1, i.e., English) in modulating listeners' use of pitch cues in the perception of tones in a third language (L3, i.e., Cantonese). English-speaking L2 learners (L2ers) of Mandarin constituted the
target group. Mandarin speakers and English speakers without knowledge
of Mandarin were included as control groups. In Experiment 1, all groups,
naïve to Cantonese tones, discriminated Cantonese tones by distinguishing
either a contour tone from a level tone (pitch direction pair) or a level tone
from another level tone (pitch height pair). The results showed that L2ers
patterned differently from both control groups with regard to pitch cues
under the influence of L2 experience. The acoustics of the tones also
affected all listeners’ discrimination. In Experiment 2, L2ers were instructed
to identify Mandarin tones to measure their sensitivity to L2 tones. The
results showed that L2ers’ sensitivity to L2 tones is not necessarily correlated with their perception of L3 tones.
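The VOT-boundary comparison in the baso–paso study above is conventionally made by fitting a logistic identification function to listeners' responses and locating its 50% crossover. The following is a minimal sketch, with invented response proportions standing in for real identification data:

```python
# Illustrative sketch (not the study's analysis code): estimating a listener's
# VOT category boundary as the 50% crossover of a fitted logistic function.
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Probability of a /p/ ("paso") response as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Invented identification proportions along a -60 to +60 ms VOT continuum.
vot_steps = np.linspace(-60, 60, 9)
p_paso = np.array([0.02, 0.03, 0.05, 0.10, 0.40, 0.85, 0.95, 0.98, 0.99])

(boundary, slope), _ = curve_fit(logistic, vot_steps, p_paso, p0=[0.0, 0.1])
print(f"50% crossover (category boundary): {boundary:.1f} ms VOT")
```

A significantly higher fitted boundary in the English-accented condition than in the Spanish condition would support the recalibration hypothesis.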
1pSCb11. Does early foreign language learning in school affect phonemic discrimination in adulthood? Tetsuo Harada (School of Education, Waseda Univ., 1-6-1 Nishi Waseda, Shinjuku, Tokyo 169-8050, Japan, tharada@waseda.jp)

Long-term effects of early foreign language learning with a few hours' classroom contact per week on speech perception are controversial: some studies show age effects of minimal English input in childhood on phonemic perception in adulthood, but others do not (e.g., Lin et al., 2004). This study investigated effects of a younger starting age in a situation of minimal exposure on the perception of English consonants under noise conditions. The listeners were two groups of Japanese university students: early learners (n = 21) who started studying English in kindergarten or elementary school, and late learners (n = 24) who began to study in junior high school. The selected target phonemes were word-medial approximants (/l, r/). Each nonword (i.e., ala, ara), produced by six native talkers, was combined with speech babble at signal-to-noise ratios (SNRs) of 8 dB (medium noise) and 0 dB (quite high noise for L2 listeners). A discrimination test was given in the ABX format. Results showed that the late learners discriminated /l/ and /r/ better than the early learners regardless of the noise conditions and talker differences (p < 0.05). A multiple regression analysis revealed that length of learning and English use could contribute to their discrimination ability.

1pSCb12. The identification of American English vowels by native speakers of Japanese before three nasal consonants. Takeshi Nozawa (Lang. Education Ctr., Ritsumeikan Univ., 1-1-1 Nojihigashi, Kusatsu 525-8577, Japan, t-nozawa@ec.ritsumei.ac.jp)

Native speakers of Japanese identified American English vowels uttered before three nasal consonants /m, n, ŋ/ and three oral stop consonants /b, d, g/. Of the seven vowels /i, ɪ, eɪ, ɛ, æ, ɑ, ʌ/, /æ/ was generally less accurately identified before nasal consonants than before oral stop consonants, and this tendency was stronger when /ŋ/ followed. This tendency is probably attributable to the extended raising of /æ/ before /ŋ/ and the Japanese listeners' limited sensitivity to differentiate the three nasal phonemes in coda position. /ɪ/, on the other hand, was identified more correctly before /ŋ/ than before the other two nasal consonants, also probably because the vowel is raised before /ŋ/. This vowel was more often misidentified as /ɛ/ before /m/ and /n/. /ɑ/ and /ʌ/ were less accurately identified before stop consonants, but after nasal consonants, /ʌ/ was more often misidentified as /ɑ/. /ɑ/ and /ʌ/ may sound alike to Japanese listeners in every context, but before nasal contexts, both of these vowels may sound closer to the Japanese vowel /o/. The results generally revealed that identification accuracy cannot be solely accounted for in terms of the place of articulation of the following consonant.

1pSCb13. Effects of beliefs about first language orthography on second language vowel perception. Mara Haslam (Dept. of Lang. Education, Stockholm Univ., S:t Ansgars väg 4, Solna 16951, Sweden, mara.haslam@gmail.com)

Recent research has identified that L1 orthography can affect perception of vowels in a second language (e.g., Escudero and Wanrooij, 2010). The present study investigates the effect that participants' beliefs about orthography have on their ability to perceive vowels in a second language. English- and Polish-speaking learners of Swedish have to encounter some new vowel sounds and also the characters that are used to represent them, e.g., å, ä, and ö. New survey data from native speakers of English, Polish, and Swedish confirm that L1 English speakers see characters like these as familiar letters with diacritics, while L1 Swedish and L1 Polish speakers tend to see these types of characters as different characters of the alphabet. These differing beliefs about orthography may cause English speakers to confuse the vowels represented in Swedish by the characters å, ä, and ö with the vowels represented by the characters a, a, and o, respectively, while Polish speakers would not be similarly affected. Results of a Swedish vowel perception study conducted with native speakers of English and Polish after exposure to Swedish words containing these characters will be presented. These results contribute to increasing knowledge about the relationship between L1 orthography and L2 phonology.

1pSCb14. A preliminary investigation of the effect of dialect on the perception of Korean sibilant fricatives. Jeffrey J. Holliday (Second Lang. Studies, Indiana Univ., 1021 E. Third St., Memorial Hall M03, Bloomington, IN 47405, jjhollid@indiana.edu) and Hyunjung Lee (English, Hankyong National Univ., Anseong, Gyeonggi-do, South Korea)

Korean has two sibilant fricatives, /sʰ/ and /s*/, that are phonologically contrastive in the Seoul dialect but are widely believed to be phonetically neutralized in the Gyeongsang dialects spoken in southeastern South Korea, with both fricatives being acoustically realized as [sʰ]. The current study investigated the degree to which the perception of these fricatives by Seoul listeners is affected by knowledge of the speaker's dialect. In the first task, the stimuli were two fricative-initial minimal pairs (i.e., four words) produced by 20 speakers each from Seoul and Gyeongsang. Half of the 18 listeners were told that the speakers were from Seoul, and the other half were told they were from Gyeongsang. Listeners identified the 160 word-initial fricatives and provided a goodness rating for each. It was found that neither the speaker's actual dialect nor the primed dialect had a significant effect on either identification accuracy or listeners' goodness ratings. In a second task, listeners identified tokens from a seven-step continuum from [sada] to [s*ada]. It was found that listeners who were primed for the Gyeongsang dialect were more likely to perceive tokens as /s*/ than listeners primed for Seoul, which may reflect a dialect-based hypercorrective perceptual bias.

1pSCb15. Language is not destiny: Task-specific factors, and not just native language perceptual biases, influence foreign sound categorization strategies. Jessamyn L. Schertz and Andrew Lotto (Univ. of Arizona, Douglass 200, Tucson, AZ 85721, jschertz@email.arizona.edu)

Listeners were trained to distinguish two novel classes of speech sounds differing in both Voice Onset Time (VOT) and fundamental frequency at vowel onset (f0). One group was shown Korean orthography during the training period ("symbols" group) and the other English orthography ("letters" group). During a subsequent test phase, listeners classified sounds with mismatched VOT and f0. The two groups relied on different cues to categorize the contrast: those exposed to symbols used f0, while those exposed to letters used VOT. A second experiment employed the same paradigm, but the two dimensions defining the contrast were closure duration (instead of f0) and VOT. In this more difficult experiment, successful listeners in the "letters" group again classified the hybrid stimuli based on VOT, while the single listener in the "symbols" group who passed the learning criterion used closure duration. In both experiments, subjects showed different categorization patterns based on the orthography used in the presentation, even though orthography was irrelevant for the experimental task. Listeners relied on VOT when the stimuli were presented with English, but not foreign, orthography, showing that task-related information (as opposed to native language biases alone) can direct attention to different acoustic cues in foreign contrast classification.
1pSCb16. Generational difference in the perception of high-toned [il] in Seoul Korean. Sunghye Cho (Univ. of Pennsylvania, 3514 Lancaster Ave., Apt. 106, Philadelphia, PA 19104, csunghye@sas.upenn.edu)

A word-initial [il] is most frequently H-toned in Seoul Korean (SK) when it means 'one', out of its three homophones 'one', 'day', and 'work' (Jun & Cha, 2011). However, Cho (2014) finds that 25% of teenagers always produce [il] with an H tone, regardless of its meaning. This paper examines how young SK speakers perceive the phenomenon. Thirty-seven SK speakers (aged 14–29) participated in two identification tasks, hearing only [il] in the first task and four [il]-initial minimal pairs in the second task. All target words were manipulated into five pitch levels with 30 Hz intervals. In the first task, the 20s group identified [il] as 'one' 70% of the time at higher pitch levels, while the teenagers identified [il] as 'one' about 50% of the time at all pitch levels. In the second task, the 20s group showed categorical perception, identifying [il]-initial words as 'one' only at higher pitch levels, while the teenagers did not. The results suggest that the teenagers are aware that some peers always produce [il] with an H tone, which explains why the 20s group could identify the meanings of [il] depending on the pitch while the teenagers could not.

1pSCb17. The effect of perceived talker race on phonetic imitation of pin-pen words. Qingyang Yan (Linguist, The Ohio State Univ., 591 Harley Dr., Apt. 10, Columbus, OH 43212, yan@ling.ohio-state.edu)

The current study investigated the phonetic imitation of the PIN-PEN merger by nonmerged participants. An auditory shadowing task was used to examine how participants changed their /ɪ/ and /ɛ/ productions after auditory exposure to merged and nonmerged voices. Black and white talker photos were used as visual cues to talker race. The pairing of voices (merged and nonmerged) with the talker photos (black and white) was counterbalanced across participants. A third group of participants completed the task without talker photos. Participants' explicit talker attitudes were assessed by a questionnaire, and their implicit racial attitudes were measured by an Implicit Association Task. Nonmerged participants imitated the PIN-PEN merger, and the degree of imitation varied depending on the experimental condition. The merged voice elicited more imitation when it was presented without a talker photo or with the black talker photo than with the white talker photo. No effect of explicit talker attitudes or implicit racial attitudes on the degree of imitation was observed. These results suggest that phonetic imitation of the PIN-PEN merger is more complex than an automatic response to the merged voice and that it is mediated by perceived talker race.

1pSCb18. Foreign-accent discrimination with words and sentences. Eriko Atagi (Volen National Ctr. for Complex Systems, MS 013, Brandeis Univ., 415 South St., Waltham, MA 02454-9110, eatagi@brandeis.edu) and Tessa Bent (Dept. of Speech & Hearing Sci., Indiana Univ., Bloomington, IN)

Native listeners can detect a foreign accent in very short stimuli; however, foreign-accent detection is more accurate with longer stimuli (Park, 2008; Flege, 1984). The current study investigated native listeners' sensitivity to the characteristics that differentiate between accents—both foreign versus native accents and one foreign accent versus another—in words and sentences. Listeners heard pairs of talkers reading the same word or sentence and indicated whether the talkers had the same or different native language backgrounds. Talkers included two native talkers (Midland dialect) and six nonnative talkers from three native language backgrounds (German, Mandarin, and Korean). Sensitivity varied significantly depending on the specific accent pairings and stimulus type. Listeners were most sensitive when the talker pair included a native talker, but could detect the difference between two nonnative accents. Furthermore, listeners were generally more sensitive with sentences than with words. However, for one nonnative pairing, listeners exhibited higher sensitivity with words; for another, listeners' sensitivity did not differ significantly across stimulus types. These results suggest that accent discrimination is not simply influenced by stimulus length. Sentences may provide listeners with opportunities to perceive similarities between nonnative talkers, which are not salient in single words. [Work supported by NIDCD T32 DC00012.]

1pSCb19. Stimulus length and scale label effects on the acoustic correlates of foreign accent ratings. Elizabeth A. McCullough (Linguist, Ohio State Univ., 222 Oxley Hall, 1712 Neil Ave., Columbus, OH 43210, eam@ling.ohio-state.edu)

Previous studies have investigated acoustic correlates of accentedness ratings, but methodological differences make it difficult to compare their results directly. The present experiment investigated how choices about stimulus length and rating scale labels influence the acoustic correlates of listeners' rating responses. Four conditions crossed two stimulus lengths (CV syllable vs. disyllabic word) with two sets of rating labels ("no foreign accent"/"strong foreign accent" vs. "native"/"not native"). Monolingual American English listeners heard samples of English from native speakers of American English, Hindi, Korean, Mandarin, and Spanish and indicated their responses on a continuous rating line. Regression models evaluated the correlations between listeners' ratings and a variety of acoustic properties. Patterns for accentedness and non-nativeness ratings were identical. VOT, F1, and F2 correlated with ratings on all stimuli, but vowel duration correlated with ratings on disyllabic word stimuli only. If vowel duration is interpreted as a reflection of global temporal properties, this result suggests that listeners may perceive such properties in utterances as short as two syllables. Thus, stimulus design is vital in identifying components of foreign accent perception that are related to differences between a talker's first and second languages as opposed to components that are related to general fluency.

1pSCb20. Language proficiency, context influence foreign-accent adaptation. Cynthia P. Blanco (Linguist, Univ. of Texas at Austin, 305 E. 23rd St., Austin, TX 78712, cindyblanco@utexas.edu), Hoyoung Yi (Commun. Sci. & Disord., Univ. of Texas at Austin, Austin, TX), Elisa Ferracane, and Rajka Smiljanic (Linguist, Univ. of Texas at Austin, Austin, TX)

Listeners adapt quickly to changes in accent (Bradlow & Bent, 2003; Clarke & Garrett, 2004; inter alia), though an initial processing delay accompanies an accent change. The cause of this brief delay may be the cost of processing accented speech, or it may reflect a surprise effect associated with task expectations (Floccia et al., 2009). The present study examines a link between accent familiarity and processing delays with listeners who have varying degrees of familiarity with the target languages: monolingual Texans with little or no formal exposure to Spanish, early Spanish-English bilinguals, and Korean learners of English. Participants heard four blocks of English sentences (Blocks 1 and 4 were produced by two native speakers of American English, and Blocks 2 and 3 by native speakers of Spanish or Korean) and responded to written probe words. All listener groups responded more slowly after an accent change; however, the degree of delay varied with language proficiency. L1 Korean listeners were less delayed by Korean-accented speech than the other listeners, while changes to Spanish-accented speech were processed most slowly by Spanish-English bilinguals. The results suggest that adaptation to foreign-accented speech depends on language familiarity and task expectations. The processing delays are analyzed in light of intelligibility and accentedness measures.

1pSCb21. When two become one—Orthography helps link two free variants to one lexical entry. Chung-Lin Yang (Linguist, Indiana Univ.-Bloomington, Memorial Hall 322, 1021 E 3rd St., Bloomington, IN 47408, cy1@indiana.edu) and Isabelle Darcy (Second Lang. Studies, Indiana Univ.-Bloomington, Bloomington, IN)

L2 learners can become better at distinguishing an unfamiliar contrast by knowing the corresponding orthographic forms (e.g., Escudero et al., 2008). We ask whether learners could associate two free variants with the same lexical entry when the orthographic form was provided during learning. American learners learned an artificial language in which [p]-[b] were in free variation (both were spelled as <p>) (test condition) while [t]-[d] were contrastive (control condition), or vice versa ([t]-[d] in test, counterbalanced across subjects). Using a word-learning paradigm modified from Hayes-Harb et al. (2010), in the learning phase, participants heard novel words paired with pictures. One subgroup of learners saw the spellings as well ("Orth+"), while another did not (i.e., auditory only, "Orth−"). Then, in a picture-auditory word matching task, the new form of the word was paired with the original picture. Orth+ learners were expected to be more accurate at accepting the variant as the correct label for the original test item than Orth− learners. The results showed that Orth+ learners detected and learned the [p]-[b] free variation significantly better than Orth− learners (p < 0.05), but not the [t]-[d] free variation. Thus, the benefit of orthography in speech learning could vary depending on the specific contrasts at hand.
MONDAY AFTERNOON, 27 OCTOBER 2014
INDIANA F, 1:25 P.M. TO 5:15 P.M.
Session 1pUW
Underwater Acoustics: Understanding the Target/Waveguide System–Measurement and Modeling II
Aubrey L. Espana, Chair
Acoustics Dept., Applied Physics Lab, Univ. of Washington, 1013 NE 40th St., Box 355640, Seattle, WA 98105
Chair’s Introduction—1:25
Invited Paper
1:30
1pUW1. Mapping bistatic scattering from spherical and cylindrical targets using an autonomous underwater vehicle in
BAYEX’14 experiment. Erin M. Fischell, Stephanie Petillo, Thomas Howe, and Henrik Schmidt (Mech. Eng., MIT, 77 Massachusetts
Ave., 5-204, Cambridge, MA 02139, emf43@mit.edu)
In May 2014, the MIT Laboratory for Autonomous Marine Sensing Systems (LAMSS) participated in the BAYEX’14 experiment
with the goal of collecting full bistatic data sets around proud spherical and cylindrical targets for use in real-time autonomous target
localization and classification. The BAYEX source was set to insonify both targets, and was triggered to ping at the start of each second
using GPS PPS. The MIT Bluefin 21 in. AUV Unicorn, fitted with a 16-element nose array, was deployed in broadside sampling behaviors to collect the bistatic scattered data set. The AUV's Chip Scale Atomic Clock was synchronized to GPS on the surface, and the data were logged using a PPS-triggered analog-to-digital conversion system to ensure synchronization with the source. The MIT LAMSS operational paradigm allowed the vehicle to be unpacked, tested, and deployed over the brief three-day interval available for operations.
MOOS-IvP and acoustic communication enabled the group to command AUV mission changes in situ based on data collection needs.
During data collection, the vehicle demonstrated real-time signal processing and target localization, and the bistatic datasets were used
to demonstrate real-time target classification in simulation. [Work supported by ONR Code 322OA.]
Contributed Papers
1:50

1pUW2. Elastic features visible on canonical targets with high frequency imaging during the 2014 St. Andrews Bay experiments. Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, marston@wsu.edu), Timothy M. Marston, Steven G. Kargl (Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Daniel S. Plotnick (Phys. and Astronomy, Washington State Univ., Pullman, WA), Aubrey Espana, and Kevin L. Williams (Appl. Phys. Lab., Univ. of Washington, Seattle, WA)

During the 2014 St. Andrews Bay experiments some canonical metallic targets (a hollow sphere and some circular cylinders) were viewed with a synthetic aperture sonar (SAS) capable of acquiring data using a 110–190 kHz chirped source. The targets rested on mud-covered sand and were typically at a range of 20 m. Fast reversible SAS processing using an extension of line-scan quasi-holography [K. Baik, C. Dudley, and P. L. Marston, J. Acoust. Soc. Am. 130, 3838–3851 (2011)] was used to extract relevant signal content from images. The significance of target elastic responses in extracted signals was evident from the frequency response and/or the time-domain response. For example, the negative-group-velocity guided wave enhancement of the backscattering by the sphere was clearly visible near 180 kHz. [For a ray model of this type of enhancement see: G. Kaduchak, D. H. Hughes, and P. L. Marston, J. Acoust. Soc. Am. 96, 3704–3714 (1994).] In another example, the timing of a sequence of near-broadside echoes from a solid aluminum cylinder was consistent with reflection and internal reverberation of elastic waves. These observations support the value of combining reversible imaging with models interpreted using rays. [Work supported by ONR and SERDP.]

2:05

1pUW3. Boundary enhanced coupling processes for rotated horizontal solid aluminum cylinders: Helical rays, synthetic aperture sonar images, and coupling conditions. Jon R. La Follett (Shell International Exploration and Production Inc., Houston, TX) and Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, marston@wsu.edu)

Experiments with solid aluminum cylinders placed near a flat free surface provide insight into scattering processes relevant to other flat reflecting boundaries [J. R. La Follett, K. L. Williams, and P. L. Marston, J. Acoust. Soc. Am. 130, 669–672 (2011); J. R. La Follett, Ph.D. thesis, WSU (2010)]. This presentation concerns the coupling to surface-guided leaky Rayleigh waves that have been shown to contribute significantly to backscattering by solid metallic cylinders [K. Gipson and P. L. Marston, J. Acoust. Soc. Am. 106, 1673–1689 (1999)]. The emphasis here is on horizontal cylinders rotated about a vertical axis away from broadside and viewed at grazing incidence. The range of rotation angles for which helical rays can contribute is limited in the free field by the cylinder's length [F. J. Blonigen and P. L. Marston, J. Acoust. Soc. Am. 112, 528–536 (2002)]. Some examples of surface-enhanced backscattering may be summarized as follows. In agreement with geometrical considerations, the angular range for coupling to helical rays may be significantly extended when a short cylinder is adjacent to a flat surface. In addition, the presence of a flat surface splits synthetic aperture sonar (SAS) image features from various guided wave mechanisms on rotated cylinders. [Work supported by ONR.]
Invited Papers
2:20
1pUW4. Denoising structural echoes of elastic targets using spatial time–frequency distributions. Karim G. Sabra (Mech. Eng.,
Georgia Inst. of Technol., 771 Ferst Dr., NW, Atlanta, GA 30332-0405, karim.sabra@me.gatech.edu)
Structural echoes of underwater elastic targets, used for detection and classification purposes, can be highly localized in the time–frequency domain and can be aspect-dependent. Hence, such structural echoes recorded along a distributed (synthetic) aperture, e.g., using
a moving receiver platform, would not meet the stationarity and multiple snapshots requirements of common subspace array processing
methods used for denoising array data based on their estimated covariance matrix. To handle these scenarios, a generalized space–time–
frequency covariance matrix can be computed from the single-snapshot data using Cohen’s class time-frequency distributions between
all sensor data pairs. This space–time–frequency covariance matrix automatically accounts for the inherent coherence across the time–frequency plane of the received nonstationary echoes emanating from the same target. Hence, identifying the signal’s subspace from the
eigenstructure of this space–time–frequency covariance matrix provides a means for denoising these non-stationary structural echoes by
spreading the clutter and noise power in the time–frequency domain. The performance of the proposed methodology will be demonstrated using numerical simulations and at-sea data.
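The eigenstructure-based denoising described above can be caricatured in a few lines. The following is an illustrative sketch, not the author's implementation: an STFT stands in for a Cohen's-class distribution, and the array size, signals, and noise levels are invented for the example.

```python
# Illustrative sketch only: denoising a single-snapshot multi-sensor record via
# the eigenstructure of a space-time-frequency covariance matrix.
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
fs, n_sensors, n_samples = 1000.0, 4, 2048
t = np.arange(n_samples) / fs

# A chirp-like "structural echo" common to all sensors, plus independent noise.
echo = np.sin(2 * np.pi * (50.0 + 100.0 * t) * t)
data = np.stack([echo + 0.5 * rng.standard_normal(n_samples)
                 for _ in range(n_sensors)])

# Per-sensor time-frequency representation, flattened to one vector per sensor.
tf = np.stack([stft(x, fs=fs, nperseg=128)[2].ravel() for x in data])

# Single-snapshot space-time-frequency covariance across all sensor pairs.
R = tf @ tf.conj().T / tf.shape[1]

# Signal subspace = dominant eigenvector; rank-1 projection denoises the array.
w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
u = V[:, -1]                      # dominant (signal-subspace) eigenvector
denoised = (np.outer(u, u.conj()) @ data).real

print(w[-1] / w[:-1].sum())       # dominant-to-residual eigenvalue ratio
```

Projecting the array data onto the dominant eigenvector concentrates the coherent echo while relegating incoherent clutter and noise to the discarded subspace.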
2:40
1pUW5. Measurements and modeling of acoustic scattering from targets in littoral environments. Harry J. Simpson (Physical
Acoust. Branch, Naval Res. Lab., 4555 Overlook Ave. SW, Washington, DC 20375, harry.simpson@nrl.navy.mil), Zackary J. Waters,
Timothy J. Yoder, Brian H. Houston (Physical Acoust. Branch, Naval Res. Lab., Washington, DC), Kyrie K. Jig, Roger R. Volk (Sotera
Defense Solution, Crofton, MD), and Joseph A. Bucaro (Excet, Inc., Springfield, VA)
Broadband laboratory and at-sea measurement systems have been built by NRL to quantify the acoustic target strength of objects sitting on or in the bottom of littoral environments. Over the past decade, these measurements and the subsequent modeling of the target strength have helped to develop an understanding of how the environment, especially near the bottom interface, impacts the structural acoustic response of a variety of objects. In this talk, we will present a set of laboratory, at-sea rail-based, and AUV-based backscatter, forward-scatter, and propagation measurements with subsequent analysis to understand the impact of the littoral environment. Simple targets such as spheres, along with UXO targets, will be discussed. The analysis will be focused on quantifying the changes to target strength as a result of being near the bottom interface. In addition to the traditional backscatter or monostatic target strength, we focus upon efforts
to investigate the multi-static scattering from targets. [Work supported by ONR.]
3:00–3:15 Break
Contributed Papers
3:15

1pUW6. TREX13 target experiments and case study: Comparison of aluminum cylinder data to combined finite element/physical acoustics modeling. Kevin Williams, Steven G. Kargl, and Aubrey L. Espana (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, williams@apl.washington.edu)

The apparatus and experimental procedure used during the target portion of TREX13 are described. A primary goal of the TREX13 target experiments was to test the high-speed modeling methods developed and previously tested as part of efforts in more controlled environments where the sediment/water interface was flat. At issue is to what extent the simplified physics used in our models can predict the changes seen in acoustic templates (target strength versus angle and frequency) as a function of grazing angle, i.e., the Target-In-the-Environment-Response (TIER), for a target proud on an unprepared "natural" sand sediment interface. Data/model comparisons for a 3 ft. long, 1 ft. diameter cylinder are used as a case study. These comparisons indicate that much of the general TIER dependence is indeed captured, allowing one to understand/predict geometries where the broadest band of TIER information can be obtained. This case study indicates the predictive utility of dissecting the target physics at the expense of making the model results "inexact" from a purely finite element, constitutive equation standpoint. [Work supported by ONR and SERDP.]

3:30

1pUW7. Predicting the acoustic response of complicated targets in complicated environments using a hybrid finite element/propagation model. Aubrey L. Espana, Kevin L. Williams, Steven G. Kargl (Acoust. Dept., Appl. Phys. Lab. - Univ. of Washington, 1013 NE 40th St., Box 355640, Seattle, WA 98105, aespana@apl.washington.edu), Marten J. Nijhof (Acoust. and Sonar, TNO, Den Haag, Netherlands), Daniel S. Plotnick, and Philip L. Marston (Phys. and Astronomy, Washington State Univ., Pullman, WA)

Previous work has shown that hybrid finite element (FE)/propagation models are a viable tool for estimating the Target-In-The-Environment-Response, or TIER, for simple shapes such as cylinders and pipes on a flat, undisturbed sand/water interface [K. L. Williams et al., J. Acoust. Soc. Am. 127, 3356–3371 (2010)]. Here we examine their use for more complicated targets located in complicated ocean environments. The targets examined include various munitions and ordnance-like targets, with intricate internal structure and filled with either air or water. A hybrid FE/propagation model is used to predict their TIER on flat, undisturbed sand. Data acquired during the target portion of TREX13 are used to validate the model results. Next, the target response is investigated in a more complicated environment, with the targets partially buried and their axes tilted with respect to the flat sand interface. Again, model results are validated using TREX13 data, as well as data acquired in a controlled tank experiment. These comparisons highlight the feasibility of using hybrid models for complex target/environment configurations, as well as possible limitations due to the effects of multiple scattering.
168th Meeting: Acoustical Society of America
Invited Papers
3:45
1pUW8. A correlation analysis of the Naval Surface Warfare Center Panama City Division’s (NSWC PCD) database of simulated and collected target scattering responses focused on automated target recognition. Raymond Lim, David E. Malphurs, James
L. Prater, Kwang H. Lee, and Gary S. Sammelmann (Code X11, NSWC Panama City Div., 110 Vernon Ave, Code X11, Panama City,
FL 32407-7001, raymond.lim@navy.mil)
Recently, NSWC PCD participated in a number of computational and experimental efforts aimed at assembling a database of sonar
scattering responses encompassing a variety of objects including UXO, cylindrical shapes, and other clutter-type objects. The range of
data available on these objects consists of a simulated component generated with 3D finite element calculations coupled to a fast Helmholtz-equation-based propagation scheme, a well-controlled experimental component collected in NSWC PCD’s pond facilities, and a
component of measurements in realistic underwater environments off Panama City, FL (TREX13 and BayEX14). The goal is to use the
database to test schemes for automating reliable separation of these objects into desired classes. Here, we report on an initial correlation
analysis of the database projected onto the target aspect vs. frequency plane to assess the fidelity of the simulated component against the measured ones, to investigate some basic questions regarding environmental and range effects on class separation, and to try to identify phenomena in this plane useful for classification. [Work supported by ONR and SERDP.]
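The projection-and-correlation step described above can be sketched as a normalized two-dimensional correlation between a simulated and a measured target template (target strength on the aspect-versus-frequency plane). This is only an illustrative sketch, not the NSWC PCD processing chain; the grid dimensions and the choice of a zero-mean correlation coefficient are assumptions.

```python
import numpy as np

def template_correlation(sim, meas):
    """Zero-mean, normalized correlation between two acoustic-color
    templates (target strength vs. aspect angle and frequency), both
    sampled on the same aspect/frequency grid. Returns a value in [-1, 1]."""
    s = np.asarray(sim, dtype=float) - np.mean(sim)
    m = np.asarray(meas, dtype=float) - np.mean(meas)
    denom = np.linalg.norm(s) * np.linalg.norm(m)
    return float(np.sum(s * m) / denom) if denom else 0.0

# A template correlates perfectly with itself; an unrelated one does not.
rng = np.random.default_rng(0)
template = rng.standard_normal((180, 64))  # 180 aspect bins x 64 frequency bins
print(template_correlation(template, template))  # -> 1.0 (to rounding)
```

A matrix of such coefficients between every simulated/measured template pair is one simple way to quantify how well classes separate across environments and ranges.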
4:05
1pUW9. Identifying buried unexploded ordnance with structural acoustics based numerically trained classifiers: Laboratory
demonstrations. Zachary J. Waters, Harry J. Simpson, Brian H. Houston (Physical Acoust. - Code 7130, Naval Res. Lab., 4555 Overlook Ave. SW, Bldg 2. Rm. 186, Washington, DC 20375, zachary.waters@nrl.navy.mil), Kyrie Jig, Roger Volk, Timothy J. Yoder
(Sotera Defense Solutions Inc., Crofton, MD), and Joseph A. Bucaro (Excet Inc., Springfield, VA)
Strategies for the automated detection and classification of underwater unexploded ordnance (UXO), based upon structural-acoustics-derived features, are currently being transitioned to autonomous underwater vehicle-based sonar systems. The foundation for this transition arose, in part, from extensive laboratory investigations conducted at the Naval Research Laboratory. We discuss the evolution of structural-acoustics-based methodologies, including research into understanding the free-field scattering response of UXO and the coupling of these objects, under varying stages of burial, to water-saturated sediments. In addition to providing a physics-based understanding of the mechanisms contributing to the scattering response of objects positioned near the sediment–water interface, this research
supports the validation of three-dimensional finite-element-based models for large-scale structural–acoustics problems. These efforts
have recently culminated with the successful classification of a variety of buried UXO targets using a numerically trained relevance vector machine (RVM) classifier and the discrimination of these targets, under various burial orientations, from several objects representing
both natural and manmade clutter. We conclude that this demonstration supports the transition of structural acoustic processing methodologies to maritime sonar systems for the classification of challenging UXO targets. [Work supported by ONR and SERDP.]
4:25
1pUW10. Detection and classification of marine targets buried in the sediment using structural acoustic features. Joseph Bucaro
(Excet, Inc. @ Naval Res. Lab., 4555 Overlook Ave SW, Naval Res. Lab., Washington, DC 20375, joseph.bucaro.ctr@nrl.navy.mil),
Brian Houston, Angie Sarkissian, Harry Simpson, Zack Waters (Naval Res. Lab., Washington, DC), Timothy Yoder (Sotera Inc. @ Naval Res. Lab., Washington, DC), and Dan Amon (Naval Res. Lab., Washington, DC)
We present research on detection and classification of underwater targets buried in a saturated sediment using structural acoustic features. These efforts involve simulations using NRL's STARS3D structural acoustics code and measurements in the NRL free-field and sediment pool facilities, off the coast of Duck, NC, and off the coast of Panama City, FL. The measurements in the sediment pool demonstrated RVM classifiers trained using numerical data on two features—target strength correlation and elastic highlight image symmetry. Measurements off the coast of Duck were inconclusive owing to tropical storms that damaged the projector. Extensive
measurements were then carried out in 60 ft. of water in the Gulf using BOSS, an autonomous underwater vehicle with 40 receivers on
its wings. The target field consisted of nine simulant-filled UXO and two false targets buried in the sediment and twenty proud targets.
The AUV collected scattering data during north/south, east/west, and diagonal flights. We discuss the data analyzed so far from which
we have extracted 3-D images and acoustic color constructs for 18 of the targets and demonstrated UXO/false target separation using a
high dimensional acoustic color feature. Finally, we present related work involving targets buried in non-saturated elastic sediments.
[This work is supported by ONR and SERDP.]
Contributed Papers
4:45
1pUW11. Performance metrics for depth-based signal separation using
deep vertical line arrays. John K. Boyle, Gabriel P. Kniffin, and Lisa M.
Zurk (Northwest Electromagnetics and Acoust. Res. Lab. (NEAR-Lab),
Dept. of Elec. & Comput. Eng., Portland State Univ., 1900 SW 4th Ave.,
Ste. 160, Portland, OR 97201, jboyle@pdx.edu)
A publication [McCargar & Zurk, 2013] presented a method for passive
depth-separation of signals received on vertical line arrays (VLAs) deployed
below the critical depth in the deep ocean. This method, based on a modified
Fourier transform of the received signals from submerged targets, makes
use of the depth-dependent modulation inherent in the signals due to interference between the direct and surface-reflected acoustic arrivals. Examination of the transform is necessary to determine the performance of the algorithm in terms of the minimum target depth and range, array aperture, and temporal sampling. However, traditional expressions for signal sampling requirements (the Nyquist sampling theorem) do not directly apply to the measured signal along a target trace, owing to uneven sampling in vertical angle imposed by the spatiotemporal evolution of the target track as observed on the VLA. In this paper, the effects of this uneven sampling on the ambiguity in the estimated depth (i.e., aliasing) are discussed, and expressions for the maximum snapshot length are presented and validated using simulated data produced with a normal-mode propagation model. Initial results are presented to show the requirements for snapshot lengths and target trajectories for successful depth separation of slow-moving targets at low frequencies.
5:00
1pUW12. Wideband imaging with the decomposition of the time reversal operator. Chunxiao Li, Mingfei Guo, and Huancai Lu (Zhejiang Univ. of Technol., 18# ChaoWang Rd., Hangzhou, Zhejiang 310014, China, chunxiaoli@zju.edu.cn)
It has been shown that the decomposition of the time reversal operator (DORT) is effective for detecting and selectively focusing on pointlike scatterers. Moreover, the multiplicity of the invariants of the time reversal operator for a single extended (non-pointlike) scatterer has also been revealed. In this paper, we investigate the characterization and imaging of the scatterers when an extended scatterer and a pointlike scatterer are simultaneously present. The relationship between the quality of focusing and frequency is investigated by backpropagation of singular vectors using a model of the waveguide in each frequency bin. When the extended scatterer is present, it is shown that the second singular vector can also focus on the target. However, the task of focusing can only be achieved in frequency bins with relatively large singular values. When both scatterers are simultaneously present, the singular vectors are a linear combination of the transfer vectors from each scatterer. The first singular vector can achieve focusing on the extended scatterer in frequency bins with relatively large singular values. The second singular vector can approximately focus on the pointlike scatterer in frequency bins where its scattering coefficients are relatively high and those of the extended scatterer are relatively low.
MONDAY AFTERNOON, 27 OCTOBER 2014
HILBERT CIRCLE THEATER, 7:00 P.M. TO 9:00 P.M.
Session 1eID
Interdisciplinary: Tutorial Lecture on Musical Acoustics: Science and Performance
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, Terre Haute, IN 47803-2374
Note: Payment of separate fee required to attend
Invited Paper
7:00
1eID1. The physics of musical instruments with performance illustrations and a concert. Uwe J. Hansen (Dept. of Chemistry and Phys., Indiana State Univ., Terre Haute, IN 47809, uwe.hansen@indstate.edu) and Susan Kitterman (New World Youth Orchestras, Indianapolis, IN)
Musical instruments generally rely on the following elements for tone production: a power supply, an oscillator, a resonator, an amplifier, and a pitch-control mechanism. The physical basis of these elements will be discussed for each instrument family, with performance illustrations by the orchestra. Wave shapes and spectra will be shown for representative instruments. A pamphlet illustrating important elements for each instrument group will be distributed to the audience. The science presentation with orchestral performance illustrations will be followed by a concert of the New World Youth Symphony Orchestra. This orchestra is one of three performing groups of the New World Youth Orchestras, an organization founded by Susan Kitterman in 1982. Members of the Symphony are chosen from the greater Indianapolis and Central Indiana area by audition.
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 7/8, 7:55 A.M. TO 12:00 NOON
Session 2aAA
Architectural Acoustics and Engineering Acoustics: Architectural Acoustics and Audio I
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Chair’s Introduction—7:55
Invited Papers
8:00
2aAA1. Excessive reverberance in an outdoor amphitheater. K. Anthony Hoover (McKay Conant Hoover, 5655 Lindero Canyon
Rd., Ste. 325, Westlake Village, CA 91362, thoover@mchinc.com)
The historic Ford Theatre in Hollywood, CA, is undergoing an overall renovation and expansion. Its centerpiece is the inexplicably
asymmetrical 1200-seat outdoor amphitheater, built of concrete in 1931 after the original 1920 wood structure was destroyed by a brush
fire in 1929, and well before the adjacent Hollywood Freeway was nearly as noisy as it is now. Renovation includes reorienting seating for
better symmetry while maintaining the historic concrete, and improving audio, lighting, and support spaces. Sited within an arroyo overlooking a busy highway, and in view of the Hollywood Bowl, the new design features an expanded “sound wall” that will help to mitigate highway noise while providing optimal lighting and control positions. New sound-absorptive treatments will address the Ford’s
excessive reverberation, currently more than might be anticipated for an entirely outdoor space. The remarkably uniform distribution of
ambient noise and apparent contributions by the arroyo to the reverberation will be discussed, along with assorted design challenges.
8:20
2aAA2. Room acoustics analysis, recordings of real and simulated performances, and integration of an acoustic shell mock up
with performers for evaluation of a choir shell design. David S. Woolworth (Oxford Acoust., 356 CR 102, Oxford, MS 38655,
dave@oxfordacoustics.com)
The current renovation of the 1883 Galloway Memorial Methodist Church required the repair and replacement of a number of room
finishes, as well as resolution of acoustic problems related to their choir loft. This paper will present the various approaches used to
determine the best course of action using primarily an in-situ analysis that includes construction mockups, simulated sources, and critical
listening.
8:40
2aAA3. A decade later: What we’ve learned from The Pritzker Pavilion at Millennium Park. Jonathan Laney, Greg Miller, Scott
Pfeiffer, and Carl Giegold (Threshold Acoust., 53 W Jackson Blvd., Ste. 815, Chicago, IL 60604, jlaney@thresholdacoustics.com)
Each design and construction process yields a building and systems that respond to a particular client at a particular time. We launch
these projects into the wild and all too frequently know little of their daily lives and annual cycles after that. Occasionally, though, we
have the opportunity to stay close enough to watch a project wear in, weather (sometimes literally), and respond to changing client and
patron dynamics over time. Such is the case with the reinforcement and enhancement systems at the Pritzker Pavilion in Chicago’s Millennium Park. On a fine-grained scale, each outdoor loudspeaker is individually inspected for its condition at the end of each season. Signal-processing and amplification equipment is evaluated as well, so the overall system is maintained at a high degree of readiness and
reliability. Strengths and weaknesses of these components thereby reveal themselves over time. We will discuss these technical aspects
as well as changing audience behaviors, modifications made for special events, and the ways all of these factors inform the future of
audio (and video) in the Park.
9:00
2aAA4. An electro-acoustic conundrum—Improving the listening experience at the Park Avenue Armory. Steve Barbar (E-coustic
Systems, 30 Dunbarton Rd., Belmont, MA 02478, steve@lares-lexicon.com) and Paul Scarbrough (Akustiks, South Norwalk, CT)
Larger than a hangar for a commercial airliner, the Park Avenue Armory occupies an entire city block in midtown Manhattan. Its
massive internal volume generates reverberation time in excess of three seconds. However, it functions as a true multi-purpose venue
with programming that includes dramatic performances produced by the Manchester International Festival, and musical performances
sponsored by Lincoln Center. We will discuss the unique nature of the venue as well as the tools and techniques employed in staging different productions.
9:20
2aAA5. Sound reinforcement in an acoustically challenging multipurpose space. Deb Britton (K2 Audio, 4900 Pearl East Circle,
Ste. 201E, Boulder, CO 80301, deb@k2audio.com)
Oftentimes, sound system designers are dealt less-than-ideal cards: design a sound reinforcement system that will provide great speech intelligibility, in a highly reverberant space, without modifying any of the architectural finishes. While this can certainly be a challenge, add to those prerequisites the additional complication of the sound system serving a multi-purpose use, where different types of presentations must take place in different locations in the space and with varying audience sizes. This paper presents a case study of
such a scenario, and describes the approach taken in order to achieve the client’s goals.
9:40
2aAA6. Comparison of source stimulus input method on measured speech transmission index values of sound reinforcement
systems. Neil T. Shade (Acoust. Design Collaborative, Ltd., 7509 Lhirondelle Club Rd., Ruxton, MD 21204, nts@akustx.com)
One purpose of a sound reinforcement system is to increase the talker’s speech intelligibility. A common metric for speech intelligibility evaluation is the Speech Transmission Index (STI) defined by IEC-60268-16 Revision 4. The STI of a sound reinforcement system
can be measured by inputting a stimulus signal into the sound system, which is modified by the system electronics, and radiated by the
sound system loudspeakers to the audience seats. The stimulus signal can be input via a line level connection to the sound system or by
playing the stimulus signal through a small loudspeaker that is picked-up by a sound system microphone. This latter approach factors
the entire sound system signal chain from microphone input to loudspeaker output. STI measurements were performed on two sound
systems, one in a reverberant room and the other in a relatively non-reverberant room. Measurement results compare both signal input
techniques using omnidirectional and hypercardioid sound system microphones and three loudspeakers claimed to be designed to have
directivity characteristics similar to the human voice.
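At its core, the STI metric named above maps measured modulation transfer values to an index between 0 and 1. The sketch below is a deliberately simplified version of that mapping and omits the band weighting, redundancy, masking, and level-dependent corrections of the full IEC 60268-16 procedure, so it should not be read as a compliant implementation.

```python
import numpy as np

def sti_simplified(mtf):
    """Simplified Speech Transmission Index from a 7 x 14 matrix of
    modulation transfer values m (7 octave bands x 14 modulation
    frequencies). Omits the weighting/redundancy/masking corrections
    of the full IEC 60268-16 method."""
    m = np.clip(np.asarray(mtf, dtype=float), 1e-6, 1.0 - 1e-6)
    snr = 10.0 * np.log10(m / (1.0 - m))   # apparent SNR for each m value
    snr = np.clip(snr, -15.0, 15.0)        # limit to +/-15 dB
    ti = (snr + 15.0) / 30.0               # transmission indices in [0, 1]
    return float(ti.mean())

print(sti_simplified(np.full((7, 14), 0.9)))  # high m -> STI near 0.82
print(sti_simplified(np.full((7, 14), 0.3)))  # low m  -> STI near 0.38
```

Whether the stimulus enters at line level or acoustically through a system microphone changes the measured m values, not this mapping, which is why the two input methods can be compared on a common scale.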
10:00–10:20 Break
10:20
2aAA7. Enhancements in technology for improving access to active acoustic solutions in multipurpose venues. Ronald Freiheit
(Wenger Corp., 555 Park Dr., Owatonna, MN 55060, ron.freiheit@wengercorp.com)
With advancements in digital signal processing technology and higher integration of functionality, access to active acoustics systems
for multipurpose venues has been enhanced. One of the challenges with active acoustics systems in multipurpose venues is having
enough control over the various acoustic environments within the same room (e.g., under balcony versus over balcony). Each area may
require its own signal processing and control to be effective. Increasing the signal processing capacity to address these different environments will provide a more effective integration of the system in the room. A new signal processing platform with the flexibility to meet
these needs is discussed. The new platform addresses multiple areas with concurrent processing and is integrated with a digital audio
bus and a network-based control system. The system is flexible in its ability to expand easily to meet the needs of a variety of environments. Enhancing integration and flexibility of scale accelerates the potential for active systems with an attractive financial point of
entry.
10:40
2aAA8. Sound levels and the risk of hearing damage at a large music college. Thomas J. Plsek (Brass, Berklee College of Music,
MS 1140 Brass, 1140 Boylston St., Boston, MA 02215, tplsek@berklee.edu)
For a recent sabbatical from Berklee College of Music, my project was to study hearing loss, especially among student and faculty musicians, and to measure sound levels in various performance situations ranging from rehearsals to classes/labs to actual public performances. The National Institute for Occupational Safety and Health (NIOSH) recommendations (85 dBA criterion with a 3-dB exchange rate) were used to determine the daily noise dose obtained in each of the situations. In about half of the situations, 100% or more of the daily noise dose was reached. More measurement of the actual levels reached is needed, as are noise dosimetry measurements over an active 12–16 hour day.
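The NIOSH daily dose referred to above follows directly from the 85 dBA / 8-hour criterion and the 3-dB exchange rate: each 3-dB increase in level halves the allowable exposure time. A minimal calculation, assuming a steady level within each activity:

```python
def niosh_dose_percent(exposures):
    """Daily noise dose under the NIOSH recommended exposure limit:
    85 dBA criterion for an 8-hour day with a 3-dB exchange rate.
    `exposures` is a list of (level_dBA, hours) pairs; 100.0 means the
    full allowable daily dose has been reached."""
    dose = 0.0
    for level, hours in exposures:
        allowed = 8.0 / 2.0 ** ((level - 85.0) / 3.0)  # allowed hours at this level
        dose += hours / allowed
    return 100.0 * dose

# At 94 dBA the allowed time is 8 / 2^3 = 1 hour, so a 2-hour rehearsal
# at that level alone is twice the daily dose:
print(niosh_dose_percent([(94.0, 2.0)]))  # -> 200.0
```

The 94-dBA rehearsal figure is illustrative only; the abstract reports dose outcomes, not the underlying level/duration pairs.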
11:00
2aAA9. Development of a tunable absorber/diffuser using micro-perforated panels. Matthew S. Hildebrand (Wenger Corp., 555
Park Dr., Owatonna, MN 55060, matt.hildebrand@wengercorp.com)
Shared rehearsal spaces are an all-too-common compromise in music education, pitting vocal and instrumental ensembles against each other for desirable room acoustics. More than ever, adjustable acoustics are needed in music spaces. An innovative new acoustic panel system was developed with this need for flexibility in mind. Providing variable sound absorption with a truly static aesthetic,
control of reverberation time in the mid-frequency bands is ultimately handed over to the end user. New product development test methods and critical design decisions are discussed, such as curving the micro-perforated panel to improve scattering properties. In situ reverberation measurements are also offered against a 3D CAD model prediction using lab-tested material properties.
11:20
2aAA10. Real case measurements of inflatable membranes absorption technique. Niels W. Adelman-Larsen (Flex Acoust., Diplomvej 377, Kgs. Lyngby 2800, Denmark, nwl@flexac.com)
After some years of development of the patented technology of inflated plastic membranes for sound absorption, an actual product became available in 2012 and was immediately implemented in a Danish music school. When active, it absorbs sound fairly linearly from 63 Hz to 1 kHz, which is advantageous for amplified music; the absorption coefficient is close to 0.0 when deactivated. 75,000 ft² of the mobile version of the innovation was employed at the Eurovision Song Contest, the second largest annual television event worldwide. This contributed to lowering the T30 in the 63, 125, and 250 Hz octave bands from up to 13 s to below 4 s in the former-shipyard venue. The permanently installed version has been incorporated in a new theater in Korea. More detailed acoustic measurements from these cases will be presented. The technology will also be used in the new, multi-functional Dubai Opera scheduled for 2015.
11:40
2aAA11. Virtual sound images and virtual sound absorbers misinterpreted as supernatural objects. Steven J. Waller (Rock Art
Acoust., 5415 Lake Murray Blvd. #8, La Mesa, CA 91942, wallersj@yahoo.com)
Complex sound behaviors such as echoes, reverberation, and interference patterns can be mathematically modeled using the modern
concepts of virtual sound sources or virtual sound absorbers. Yet prior to the scientific wave theory of sound, these same acoustical phenomena were considered baffling, and hence led to the illusion that they were due to mysterious invisible sources. Vivid descriptions of
the physical forms of echo spirits, hoofed thunder gods, and pipers’ stones, as engendered from the sounds they either produced or
blocked, are found in ancient myths and legends from around the world. Additional pieces of evidence attesting to these beliefs are
found in archaeological remains consisting of canyon petroglyphs, cave paintings, and megalithic stone circles. Blindfolded participants
in acoustic experimental set-ups demonstrated that they attributed various virtual sound effects to real sound sources and/or attenuators.
Ways in which these types of sonic phenomena can be manipulated to give rise to ultra-realistic auditory illusions of actual objects even
today will be discussed relative to enhancing experiences of multimedia entertainment and virtual reality. Conversely, understanding
how the mind can construct psychoacoustic models inconsistent with scientific reality could serve as a lesson helping prevent the supernatural misperceptions to which our ancestors were susceptible.
TUESDAY MORNING, 28 OCTOBER 2014
LINCOLN, 8:25 A.M. TO 12:00 NOON
Session 2aAB
Animal Bioacoustics, Acoustical Oceanography, and Signal Processing in Acoustics: Mobile Autonomous
Platforms for Bioacoustic Sensing
Holger Klinck, Cochair
Cooperative Institute for Marine Resources Studies, Oregon State University, Hatfield Marine Science Center, 2030 SE
Marine Science Drive, Newport, OR 97365
David K. Mellinger, Cochair
Coop. Inst. for Marine Resources Studies, Oregon State University, 2030 SE Marine Science Dr., Newport, OR 97365
Chair’s Introduction—8:25
Invited Papers
8:30
2aAB1. Real-time passive acoustic monitoring of baleen whales from autonomous platforms. Mark F. Baumgartner (Biology Dept.,
Woods Hole Oceanographic Inst., 266 Woods Hole Rd., MS #33, Woods Hole, MA 02543, mbaumgartner@whoi.edu)
An automated low-frequency detection and classification system (LFDCS) was developed for use with the digital acoustic monitoring (DMON) instrument to detect, classify, and report in near real time the calls of several baleen whale species, including fin, humpback, sei, bowhead, and North Atlantic right whales. The DMON/LFDCS has been integrated into the Slocum glider and APEX
profiling float, and integration projects are currently underway for the Liquid Robotics wave glider and a moored buoy. In a recent
evaluation study, two gliders reported over 25,000 acoustic detections attributed to fin, humpback, sei, and North Atlantic right whales
over a 3-week period during late fall in the Gulf of Maine. The overall false detection rate for individual calls was 14%, and for right,
humpback, and fin whales, false predictions of occurrence during 15-minute reporting periods were 5% or less. Agreement between
acoustic detections and visual sightings from concurrent aerial and shipboard surveys was excellent (9 of 10 visual detections were
accompanied by real-time acoustic detections of the same species by a nearby glider). We envision that this autonomous acoustic monitoring system will be a useful tool for both marine mammal research and mitigation applications.
8:50
2aAB2. Detection, bearing estimation, and telemetry of North Atlantic right whale vocalizations using a wave glider autonomous
vehicle. Harold A. Cheyne (Lab of Ornithology, Cornell Univ., 95 Brown Rd., Rm. 201, Ithaca, NY 14850, haroldcheyne@gmail.com),
Charles R. Key, and Michael J. Satter (Leidos, Long Beach, MS)
Assessing and mitigating the effects of anthropogenic noise on marine mammals is limited by the typically employed technologies
of archival underwater acoustic recorders and towed hydrophone arrays. Data from archival recorders are analyzed months after the activity of interest, so assessment occurs long after the events and mitigation of those activities is impossible. Towed hydrophone arrays
suffer from nearby ship and seismic air gun noise, and they require substantial on-board human and computing resources. This work has
developed an acoustic data acquisition, processing, and transmission system for use on a Wave Glider, to overcome these limitations by
providing near real-time marine mammal acoustic data from a portable and persistent autonomous platform. Sea tests have demonstrated
the proof-of-concept with the system recording four channels of acoustic data and transmitting portions of those data via satellite. The
system integrates a detection-classification algorithm on-board, and a beam-forming algorithm in the shore-side user interface, to provide a user with aural and visual review tools for the detected sounds. Results from a two-week deployment in Cape Cod Bay will be
presented and future development directions will be discussed.
9:10
2aAB3. Shelf-scale mapping of fish sound production with ocean gliders. David Mann (Loggerhead Instruments Inc., 6576 Palmer
Park Circle, Sarasota, FL 34238, dmann@loggerhead.com), Carrie Wall (Univ. of Colorado at Boulder, Boulder, CO), Chad Lembke,
Michael Lindemuth (College of Marine Sci., Univ. of South Florida, St. Petersburg, FL), Ruoying He (Dept. of Marine, Earth, and Atmospheric Sci., NC State Univ., Raleigh, NC), Chris Taylor, and Todd Kellison (Beaufort Lab., NOAA Fisheries, Beaufort, NC)
Ocean gliders are a powerful platform for collecting large-scale data on the distribution of sound-producing animals while also collecting environmental data that may influence their distribution. Since 2009, we have performed extensive mapping on the West Florida
Shelf with ocean gliders equipped with passive acoustic recorders. These missions have revealed the distribution of red grouper as well
as identified several unknown sounds likely produced by fishes. In March 2014, we ran a mission along the shelf edge from Cape Canaveral, FL to North Carolina to map fish sound production. The Gulf Stream and its strong currents necessitated a team effort with ocean
modeling to guide the glider successfully to two marine protected areas. This mission also revealed large distributions of unknown
sounds, especially on the shallower portions of the shelf. Gliders provide valuable spatial coverage, but because they are moving and
most fish have strong diurnal sound production patterns, data analysis on presence and absence must be made carefully. In many of these
cases, it is best to use a combination of platforms, including fixed recorders and ocean profilers to measure temporal patterns of sound
production.
9:30
2aAB4. The use of passively drifting acoustic recorders for bioacoustic sensing. Jay Barlow, Emily Griffiths, and Shannon Rankin
(Marine Mammal and Turtle Div., NOAA-SWFSC, 8901 La Jolla Shores Dr., La Jolla, CA 92037, jay.barlow@noaa.gov)
Passively drifting recording systems offer several advantages over autonomous underwater or surface vessels for mobile bioacoustic
sensing in the sea. Because they lack any propulsion, self-noise is minimized. Also, vertical hydrophone arrays are easy to implement,
which is useful in estimating the distance to specific sound sources. We have developed an inexpensive (<$5000) Drifting Acoustic
Spar Buoy Recorder (DASBR) that features up to 1 TB of stereo recording capacity and a bandwidth of 10 Hz–96 kHz. Given their low
cost, many more recorders can be deployed to achieve greater coverage. The audio and GPS recording system floats at the surface, and
the two hydrophones (at 100 m) are decoupled from wave action by a damper disk and an elastic cord. During a test deployment in the Catalina Basin (Nov. 2013), we collected approximately 1200 hours of recordings using 5 DASBRs recording at a 192 kHz sampling rate. Each recorder was recovered (using GPS and VHF locators) and re-deployed 3–4 times. Dolphin whistles and echolocation clicks were detectable approximately half of the total recording time. Cuvier's beaked whales were also detected on three occasions. Cetacean density estimation and ocean noise measurements are just two of many potential uses for free-drifting recorders.
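As a rough check on the quoted 1-TB capacity, uncompressed recording endurance follows directly from channel count, sampling rate, and sample word size. The 16-bit word size below is an assumption, since the abstract does not state it.

```python
def recording_days(capacity_bytes, channels, sample_rate_hz, bytes_per_sample):
    """Continuous endurance, in days, of an uncompressed audio recorder."""
    bytes_per_second = channels * sample_rate_hz * bytes_per_sample
    return capacity_bytes / bytes_per_second / 86400.0

# 1 TB, stereo, 192 kHz sampling, 16-bit (2-byte) samples -- an assumed word size:
print(round(recording_days(1e12, 2, 192_000, 2), 1))  # roughly 15 days
```

At those settings a full terabyte supports on the order of two weeks of continuous stereo recording, consistent with multi-deployment use between recoveries.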
9:50
2aAB5. Small cetacean monitoring from surface and underwater autonomous vehicles. Douglas M. Gillespie, Mark Johnson (Sea
Mammal Res. Unit, Univ. of St. Andrews, Gatty Marine Lab., St Andrews, Fife KY16 8LB, United Kingdom, dg50@st-andrews.ac.uk),
Danielle Harris (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, St. Andrews, Fife, United Kingdom), and
Kalliopi Gkikopoulou (Sea Mammal Res. Unit, Univ. of St. Andrews, St. Andrews, United Kingdom)
We present results of passive acoustic surveys conducted from three types of autonomous marine vehicles: two submarine gliders and a surface wave-powered vehicle. Submarine vehicles have the advantage of operating at depth, which has the potential to increase the detection rate for some species. However, surface vehicles equipped with solar panels have the capacity to carry a greater payload and currently allow for more on-board processing, which is of particular importance for high-frequency odontocete species. Surface vehicles are also more suited to operation in shallow or coastal waters. We describe the hardware and software packages developed for each vehicle type and give examples of the types of data retrieved, both through real-time telemetry and recovered post-deployment. High-frequency echolocation clicks and whistles have been successfully detected from all vehicles. Noise levels varied considerably between vehicle types, though all were subject to a degree of mechanical noise from the vehicle itself.
10:10–10:35 Break
10:35
2aAB6. A commercially available sound acquisition and processing board for autonomous passive acoustic monitoring platforms.
Haru Matsumoto, Holger Klinck, David K. Mellinger (CIMRS, Oregon State Univ., 2115 SE OSU Dr., Newport, OR 97365, haru.matsumoto@oregonstate.edu), and Chris Jones (Embedded Ocean Systems, Seattle, WA)
The U.S. Navy is required to monitor marine mammal populations in U.S. waters to comply with regulations issued by federal agencies. Oregon State University and Embedded Ocean Systems (EOS) co-developed a passive acoustic data acquisition and processing
board called Wideband Intelligent Signal Processor and Recorder (WISPR). This low-power, small-footprint system is suitable for autonomous platforms with limited battery and space capacity, including underwater gliders and profiler floats. It includes a high-performance digital signal processor (DSP) running the uClinux operating system, providing extensive flexibility for users to configure or reprogram the system’s operation. With multiple WISPR-equipped mobile platforms strategically deployed in an area of interest, operators
on land or at sea can now receive information in near-real time about the presence of protected species in the survey area. In April 2014,
WISPR became commercially available via EOS. We are implementing WISPR in the Seaglider and will conduct a first evaluation test
off the coast of Oregon in September. System performance, including system noise interference, flow noise, power consumption, and
file compression rates in the data-logging system, will be discussed. [Funding from the US Navy’s Living Marine Resources Program.]
10:55
2aAB7. Glider-based passive acoustic marine mammal detection. John Hildebrand, Gerald L. D’Spain, and Sean M. Wiggins
(Scripps Inst. of Oceanogr., UCSD, Mail Code 0205, La Jolla, CA 92093, jhildebrand@ucsd.edu)
Passive acoustic detection of delphinid sounds using the Wave Glider (WG) autonomous near-surface vehicle was compared with a
fixed bottom-mounted autonomous broadband system, the High-frequency Acoustic Recording Package (HARP). A group of whistling
and clicking delphinids was tracked using an array of bottom-mounted HARPs, providing ground-truth for detections from the WG.
Whistles in the 5–20 kHz band were readily detected by the bottom HARPs as the delphinids approached, but the WG revealed only a
brief period with intense detections as the animals approached within ~500 m. Refraction due to acoustic propagation in the thermocline
provides an explanation for why the WG may only detect whistling delphinids at close range relative to the long-range detection capabilities of the bottom-mounted HARPs. This work demonstrated that sound speed structure plays an important role in determining detection
range for high-frequency-calling marine mammals by autonomous gliders and bottom-mounted sensors.
Contributed Papers
11:15
2aAB8. Acoustic seagliders for monitoring marine mammal populations. Lora J. Van Uffelen (Ocean and Resources Eng., Univ. of Hawaii at Manoa, 1000 Pope Rd., MSB 205, Honolulu, HI 96815, loravu@hawaii.edu), Erin Oleson (Cetacean Res. Program, NOAA Pacific Islands Fisheries Sci. Ctr., Honolulu, HI), Bruce Howe, and Ethan Roth (Ocean and Resources Eng., Univ. of Hawaii at Manoa, Honolulu, HI)
A DMON digital acoustic monitoring device has been integrated into a Seaglider with the goal of passive, persistent acoustic monitoring of cetacean populations. The system makes acoustic recordings as it travels in a sawtooth pattern between the surface and depths of up to 1000 m. It includes three hydrophones, located in the center of the instrument and on each wing. An onboard real-time detector has been implemented to record continuously after ambient noise rises above a signal-to-noise ratio (SNR) threshold, and the glider transmits power spectra of recorded data back to a shore-station computer via Iridium satellite after each dive. The glider pilot has the opportunity to set parameters that govern the amount of data recorded, thus managing data storage and therefore the length of a mission. This system was deployed in the vicinity of the Hawaiian Islands to detect marine mammals as an alternative or complement to conventional ship-based survey methods. System design and implementation will be described and preliminary results will be presented.
11:30
2aAB9. Prototype of a linear array on an autonomous surface vehicle for the register of dolphin displacement patterns within a shallow bay. Eduardo Romero-Vivas, Fernando D. Von Borstel-Luna (CIBNOR, Instituto Politecnico Nacional 195, Playa Palo de Santa Rita Sur, La Paz, BCS 23090, Mexico, evivas@cibnor.mx), Omar A. Bustamante, Sergio Beristain (Acoust. Lab, ESIME, IPN, IMA, Mexico City, Mexico), Miguel A. Porta-Gandara, Francisco Villa Medina, and Joaquín Gutiérrez-Jagüey (CIBNOR, La Paz, BCS, Mexico)
A semi-resident population of Tursiops has been reported in the south of La Paz Bay in Baja California Sur, Mexico, where specific zones for social, feeding, and resting behaviors have been detected. Nevertheless, increasing human activities and new constructions are believed to have shifted the areas of their main activity. It therefore becomes important to study displacement patterns of dolphins within the bay and their spatial relationship to maritime traffic and other sources of anthropogenic noise. A prototype of an Autonomous Surface Vehicle (ASV) designed for shallow-water bathymetry has been adapted to carry a linear array of hydrophones previously reported for the localization of dolphins from their whistles. Conventional beam-forming algorithms and electrical steering are used to find the Direction of Arrival (DOA) of the sound sources. The left-right ambiguity typical of a linear array, and the front-back lobes for sound sources located at end-fire, can be resolved by the trajectory of the ASV. Geo-referenced positions and bearing of the array, provided by the Inertial Measurement Unit of the ASV, along with DOA estimates for various positions, allow triangulating and mapping the sound sources. Results from both controlled experiments using geo-referenced known sources and field trials within the bay are presented.
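The conventional beamforming used in 2aAB9 can be illustrated with a minimal delay-and-sum sketch (all values here are synthetic; this is not the authors' implementation): steer a linear array over candidate angles and keep the angle whose aligned-and-summed output has maximum power. The left-right ambiguity the abstract resolves via the vehicle's trajectory is inherent to this geometry.

```python
import numpy as np

def doa_bearing(x, positions, c=1500.0, fs=96000.0):
    """Delay-and-sum bearing estimate for a linear hydrophone array:
    steer over candidate angles (measured from broadside) and return
    the angle maximizing the power of the aligned-and-summed output."""
    angles = np.deg2rad(np.arange(-90, 91))
    _, n_samp = x.shape
    powers = []
    for th in angles:
        delays = positions * np.sin(th) / c          # per-sensor delay (s)
        shifts = np.round(delays * fs).astype(int)   # in whole samples
        y = np.zeros(n_samp)
        for chan, k in zip(x, shifts):
            y += np.roll(chan, -k)                   # undo the assumed delay
        powers.append(np.mean(y ** 2))
    return float(np.rad2deg(angles[int(np.argmax(powers))]))

# synthetic check: 4 elements at 0.5 m spacing, plane wave from 30 deg
fs, c = 96000.0, 1500.0
pos = np.arange(4) * 0.5
t = np.arange(2048) / fs
src = np.sin(2 * np.pi * 2000.0 * t)
x = np.array([np.roll(src, int(round(p * np.sin(np.deg2rad(30.0)) / c * fs)))
              for p in pos])
print(doa_bearing(x, pos))   # estimated bearing in degrees (30 for this wave)
```

Note that the steered response is symmetric about the array axis, so a mirror-image bearing produces the same power; only platform motion (as in the abstract) breaks the tie.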
11:45
2aAB10. High-frequency observations from mobile autonomous platforms. Holger Klinck, Haru Matsumoto, Selene Fregosi, and David K. Mellinger (Cooperative Inst. for Marine Resources Studies, Oregon State Univ., Hatfield Marine Sci. Ctr., 2030 SE Marine Sci. Dr., Newport, OR 97365, Holger.Klinck@oregonstate.edu)
With increased human use of US coastal waters—including use by renewable energy activities such as the deployment and operation of wind, wave, and tidal energy converters—the issue of potential negative impacts on coastal ecosystems arises. Monitoring these areas efficiently for marine mammals is challenging. Recreational and commercial activities (e.g., fishing) can hinder long-term operation of fixed moored instruments. Additionally, these shallow waters are often utilized by high-frequency cetaceans (e.g., harbor porpoises), which can only be acoustically detected over short distances of a few hundred meters. Mobile acoustic platforms are a useful tool to survey these areas of concern with increased temporal and spatial resolution compared to fixed systems and towed arrays. A commercially available acoustic recorder (type Song Meter SM2+, Wildlife Acoustics, Inc.) featuring sampling rates up to 384 kHz was modified and implemented on an autonomous underwater vehicle (AUV) as well as an unmanned surface vehicle (USV) and tested in the field. Preliminary results indicate that these systems are effective at detecting the presence of high-frequency cetaceans such as harbor porpoises. Potential applications, limitations, and future directions of this technology will be discussed. [Project partly supported by ONR and NOAA.]
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA G, 8:25 A.M. TO 12:00 NOON
Session 2aAO
Acoustical Oceanography, Underwater Acoustics, and Signal Processing in Acoustics: Parameter Estimation in Environments That Include Out-of-Plane Propagation Effects
Megan S. Ballard, Cochair
Applied Research Laboratories, The University of Texas at Austin, P.O. Box 8029, Austin, TX 78758
Timothy F. Duda, Cochair
Woods Hole Oceanographic Institution, WHOI AOPE Dept. MS 11, Woods Hole, MA 02543
Chair's Introduction—8:25
Invited Papers
8:30
2aAO1. Estimating waveguide parameters using horizontal and vertical arrays in the vicinity of horizontal Lloyd's mirror in shallow water. Mohsen Badiey (College of Earth, Ocean, and Environment, Univ. of Delaware, 261 S. College Ave., Robinson Hall, Newark, DE 19716, badiey@udel.edu)
When shallow-water internal waves approach a source-receiver track, the interference between the direct and horizontally refracted acoustic paths from a broadband acoustic source has previously been shown to form a horizontal Lloyd's mirror (Badiey et al., J. Acoust. Soc. Am. 128(4), EL141–EL147, 2011). While the modal interference structure in the vertical plane may reveal the arrival time of the out-of-plane refracted acoustic wave front, analysis of the moving interference pattern along the horizontal array allows measurement of the angle of horizontal refraction and the speed of the nonlinear internal wave (NIW) in the horizontal plane. In this paper, we present a full account of the movement of an NIW towards a source-receiver track and show how the received acoustic signal on an L-shaped array can be used to estimate basic parameters of the waveguide and to obtain related temporal and spatial coherence functions, particularly in the vicinity of the formation of the horizontal Lloyd's mirror. Numerical results using vertical modes and horizontal rays, as well as 3D PE calculations, are carried out to explain the experimental observations. [Work supported by ONR 322OA.]
8:50
2aAO2. Slope inversion in a single-receiver context for three-dimensional wedge-like environments. Frederic Sturm (LMFA (UMR 5509 ECL-UCBL1-INSA de Lyon), Ecole Centrale de Lyon, Ctr. Acoustique, 36, Ave. Guy de Collongue, Ecully 69134, France, frederic.sturm@ec-lyon.fr) and Julien Bonnel (Lab-STICC (UMR CNRS 6285), ENSTA Bretagne, Brest Cedex 09, France)
In a single-receiver context, time-frequency (TF) analysis can be used to analyze the modal dispersion of low-frequency broadband sound pulses in shallow-water oceanic environments. In previous work, TF analysis was used to study the propagation of low-frequency broadband pulses in three-dimensional (3-D) shallow-water wedge waveguides. Of particular interest is that TF analysis turns out to be a suitable tool to better understand, illustrate, and visualize 3-D propagation effects for such wedge-like environments. In the present work, it is shown that TF analysis can also be used at the core of an inversion scheme to estimate the slope of the seabed in the same single-hydrophone receiving configuration and for similar 3-D wedge-shaped waveguides. The proposed inversion algorithm, based on a masking process, focuses on specific parts of the TF domain where modal energy is concentrated. The criterion used to quantify the match between the received signal and replicas computed by a fully 3-D parabolic equation code is defined as the amount of measured time-frequency energy integrated inside the masks. Its maximization is obtained using an exhaustive search. The method is first benchmarked on numerical simulations and then successfully applied to experimental small-scale data.
9:10
2aAO3. Effects of environmental uncertainty on source range estimates from horizontal multipath. Megan S. Ballard (Appl. Res.
Labs., The Univ. of Texas at Austin, P.O. Box 8029, Austin, TX 78758, meganb@arlut.utexas.edu)
A method has been developed to estimate source range in continental shelf environments that exhibit three-dimensional propagation
effects [M. S. Ballard, J. Acoust. Soc. Am. 134, EL340–EL343, 2013]. The technique exploits measurements recorded on a horizontal
line array of a direct path arrival, which results from sound propagating across the shelf to the receiver array, and a refracted path arrival,
which results from sound propagating obliquely upslope and refracting back downslope to the receiver array. A hybrid modeling
approach using vertical modes and horizontal rays provides the ranging estimate. According to this approach, rays are traced in the horizontal plane with refraction determined by the modal phase speed. Invoking reciprocity, the rays originate from the center of the array
and have launch angles equal to the estimated bearing angles of the direct and refracted paths. The location of the source in the horizontal plane is estimated from the point where the rays intersect. In this talk, the effects of unknown environmental parameters, including
the sediment properties and the water-column sound-speed profile, on the source range estimate are discussed. Error resulting from
uncertainty in the measured bathymetry and location of the receiver array will also be addressed. [Work supported by ONR.]
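The vertical-modes/horizontal-rays ranging idea in 2aAO3 can be sketched numerically. The toy below (illustrative phase-speed gradient and bearing angles, not values from the paper) traces a ray launched obliquely upslope through a transversely varying modal phase speed, lets refraction bend it back downslope, and intersects it with the straight direct-path bearing line to locate the source, mirroring the reciprocity argument in the abstract.

```python
import numpy as np

C0, DCDY = 1500.0, 0.2    # flat-region modal phase speed (m/s) and upslope
                          # gradient (m/s per m); illustrative values only

def trace_ray(phi0_deg, ds=5.0, n=8000):
    """Euler-integrate the horizontal ray equation
    dphi/ds = -(1/c)(dc/dy) cos(phi) through a phase-speed field that
    increases linearly upslope (y > 0) and is uniform off-slope (y <= 0).
    Returns the ray polyline starting from the array at the origin."""
    phi, x, y = np.deg2rad(phi0_deg), 0.0, 0.0
    pts = [(x, y)]
    for _ in range(n):
        c = C0 + DCDY * max(y, 0.0)
        dcdy = DCDY if y > 0 else 0.0
        phi -= ds * (dcdy / c) * np.cos(phi)   # bends away from high speed
        x += ds * np.cos(phi)
        y += ds * np.sin(phi)
        pts.append((x, y))
    return np.array(pts)

def intersect(direct_deg, refracted):
    """First crossing of the refracted-ray polyline with the straight
    direct-path bearing line (both launched from the origin)."""
    u = np.deg2rad(direct_deg)
    cross = np.cos(u) * refracted[:, 1] - np.sin(u) * refracted[:, 0]
    for i in range(10, len(cross)):
        if cross[i - 1] > 0 >= cross[i]:       # sign change: crossed the line
            w = cross[i - 1] / (cross[i - 1] - cross[i])
            return (1 - w) * refracted[i - 1] + w * refracted[i]
    return None

ray = trace_ray(40.0)        # launched obliquely upslope; refracts back down
src = intersect(-20.0, ray)  # direct-path bearing into the flat region
print(src)                   # estimated source position (m), horizontal plane
```

The real method traces both arrivals through a measured environment; the straight direct path here stands in for propagation across the (locally uniform) shelf.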
Contributed Papers
9:30
2aAO4. Acoustical observation of the estuarine salt wedge at low-to-mid-frequencies. D. Benjamin Reeder (Oceanogr., Naval Postgrad. School, 73 Hanapepe Loop, Honolulu, HI 96825, dbreeder@nps.edu)
The estuarine environment often hosts a salt wedge, the stratification of which is a function of the tide's range and speed of advance, river discharge volumetric flow rate, and river mouth morphology. Competing effects of temperature and salinity on sound speed control the degree of acoustic refraction occurring along an acoustic path. A field experiment was carried out in the Columbia River to test the hypothesis that the estuarine salt wedge is acoustically observable in terms of low-to-mid-frequency acoustic propagation. Linear frequency-modulated (LFM) acoustic signals in the 500–2000 Hz band were collected during the advance and retreat of the salt wedge during May 27–28, 2013. Results demonstrate that the three-dimensional salt wedge front is the dominant physical feature controlling acoustic propagation in this environment: received signal energy is relatively stable under single-medium conditions before and after the passage of the salt wedge front, but suffers a 10–15 dB loss as well as increased variance during salt wedge front passage due to 3D refraction and scattering. Physical parameters (i.e., temperature, salinity, current, and turbulence) and acoustic propagation modeling corroborate and inform the acoustic observations.
9:45
2aAO5. A hybrid approach for estimating range-dependent properties of shallow water environments. Michael Taroudakis and Costas Smaragdakis (Mathematics and Appl. Mathematics & IACM, Univ. of Crete and FORTH, Knossou Ave., Heraklion 71409, Greece, taroud@math.uoc.gr)
A hybrid approach based on statistical signal characterization and a linear inversion scheme for the estimation of range-dependent sound speed profiles of compact support in shallow water is presented. The approach is appropriate for ocean acoustic tomography when a single receiver is available, as the first stage of the method is based on the statistical characterization of a single reception using the wavelet transform to associate the signal with a set of parameters describing the statistical features of its wavelet sub-band coefficients. A non-linear optimization algorithm is then applied to associate these features with a range-dependent sound speed profile in the water column. This inversion method is restricted to cases where the range dependency is of compact support. At the second stage, a linear inversion scheme based on modal arrival identification and a first-order perturbation formula associating sound speed differences with modal travel time perturbations is applied to fine-tune the results obtained by the optimization scheme. A second restriction of this stage is that mode identification is necessary. If this assumption is fulfilled, the whole scheme may be applied in ocean acoustic tomography for the retrieval of three-dimensional features, combining inversion results at various slices.
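The first-stage statistical characterization in 2aAO5 can be caricatured with a Haar wavelet decomposition whose sub-band coefficient moments serve as features; the specific wavelet and statistics below are stand-ins, not the authors' scheme.

```python
import numpy as np

def haar_subband_stats(sig, levels=4):
    """Crude statistical signal characterization: split the signal with a
    Haar wavelet `levels` times and summarize each detail sub-band by the
    standard deviation and mean absolute value of its coefficients."""
    feats = []
    a = np.asarray(sig, dtype=float)
    for _ in range(levels):
        a = a[: (len(a) // 2) * 2]              # even length for pairing
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)  # approximation (next level)
        feats += [float(d.std()), float(np.mean(np.abs(d)))]
    return np.array(feats)

t = np.linspace(0.0, 1.0, 1024)
print(haar_subband_stats(np.sin(2 * np.pi * 50.0 * t)).shape)   # -> (8,)
```

A received pulse is thereby reduced to a small feature vector, which is what the paper's non-linear optimization stage matches against modeled receptions.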
10:00–10:15 Break
Invited Papers
10:15
2aAO6. Three-dimensional acoustics in basin scale propagation. Kevin D. Heaney (OASIS Inc., 11006 Clara Barton Dr., Fairfax Station, VA 22039, oceansound04@yahoo.com) and Richard L. Campbell (OASIS Inc., Seattle, WA)
Long-range, basin-scale acoustic propagation has long been considered a deep-water problem well represented by the two-dimensional numerical solution (range/depth) of the wave equation. Ocean acoustic tomography has even recently been demonstrated to be insensitive to the three-dimensional effects of refraction and diffraction (Dushaw, JASA 2014). For frequencies below 50 Hz, where volume attenuation is negligible, the approximation that all propagation of significance is in the plane begins to break down. When examining very long-range propagation in situations where the source/receiver are not specifically selected for open water paths, 3D effects can dominate. Seamounts and bathymetric rises cause both refraction away from the shallowing seafloor and diffraction behind sharp edges. In
this paper, a set of recent observations, many from the International Monitoring System (IMS) of the United Nations Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), will be presented, demonstrating observations that are not well explained by Nx2D acoustic propagation. The Peregrine PE model, a recent recoding of RAM in C, has been extended to include 3D split-step Pade propagation and will be used to demonstrate how 3D acoustic propagation effects help explain some of the observations.
10:35
2aAO7. Sensitivity analysis of three-dimensional sound pressure fields in complex underwater environments. Ying-Tsong Lin
(Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Bigelow 213, MS#11, WHOI, Woods Hole, MA 02543, ytlin@whoi.
edu)
A sensitivity kernel for sound pressure variability due to variations of the index of refraction is derived from a higher-order three-dimensional (3D) split-step parabolic-equation (PE) solution of the Helmholtz equation. In this study, the kernel is used to compute the acoustic sensitivity field between a source and a receiver in a 3D underwater environment, and to quantify how much medium change can cause significant consequences in received acoustic signals. Using the chain rule, the dynamics of sensitivity fields can be connected
to the dynamics of ocean processes. This talk will present numerical examples of sound propagation in submarine canyons and continental slopes, where the ocean dynamics cause strong spatial and temporal variability in sound pressure. Using the sensitivity kernel technique, we can analyze the spatial distribution and the temporal evolution of the acoustic sensitivity fields in these geologically and
topographically complex environments. The paper will also discuss other applications of this sound pressure sensitivity kernel, including
uncertainty quantification of transmission loss prediction and adjoint models for 3D acoustic inversions. [Work supported by the ONR.]
10:55
2aAO8. Sensitivity analysis of the image source method to out-of-plane effects. Samuel Pinson (Laboratório de Vibrações e Acústica, Universidade Federal de Santa Catarina, LVA Dept. de Engenharia Mecânica, UFSC, Bairro Trindade, Florianópolis, SC 88040-900, Brazil, samuelpinson@yahoo.fr) and Charles W. Holland (Penn State Univ., State College, PA)
In the context of seafloor characterization, the image source method is a technique to estimate the sediment sound-speed profile from
broadband seafloor reflection data. Recently the method has been extended to treat non-parallel layering of the sediment stack. In using
the method with measured data, the estimated sound-speed profiles are observed to exhibit fluctuations. These fluctuations may be partially due to violation of several assumptions: (1) the layer interfaces are smooth with respect to the wavelength and (2) out-of-plane
effects are negligible. In order to better understand the impact of these effects, the sensitivity of the image source method to roughness and out-of-plane effects is examined.
Contributed Papers
11:15
2aAO9. Results of matched-field inversion in a three-dimensional oceanic environment ignoring horizontal refraction. Frederic Sturm (LMFA (UMR 5509 ECL-UCBL1-INSA de Lyon), Ecole Centrale de Lyon, Ctr. Acoustique, Ecole Centrale de Lyon, 36, Ave. Guy de Collongue, Ecully 69134, France, frederic.sturm@ec-lyon.fr) and Alexios Korakas (Lab-STICC (UMR 6285), ENSTA Bretagne, Brest Cedex 09, France)
For practical reasons, inverse problems in ocean acoustics are often based on 2-D modeling of sound propagation, hence ignoring 3-D propagation effects. However, acoustic propagation in shallow-water environments, such as the continental shelf, may be strongly affected by 3-D effects, thus requiring 3-D modeling to be accounted for. In the present talk, the feasibility and the limits of an inversion in fully 3-D oceanic environments assuming 2-D propagation are investigated. A simple matched-field inversion procedure implemented in a Bayesian framework and based on an exhaustive search of the parameter space is used. The study is first carried out on a well-established wedge-like synthetic test case, which exhibits well-known 3-D effects. Both synthetic data and replicas are generated using a parabolic-equation-based code. This approach highlights the relevance of using 2-D propagation models when inversions are performed at relatively short ranges from the source. On the other hand, important mismatch occurs when inverting at farther ranges, demonstrating that the use of fully 3-D forward models is required. Results of inversion on experimental small-scale data, based on a subspace approach as suggested by the preliminary study made on the synthetic test case, are presented.
11:30
2aAO10. Measurements of sea surface effects on the low-frequency acoustic propagation in shallow water. Altan Turgut, Marshall H. Orr (Acoust. Div., Naval Res. Lab, Code 7161, Washington, DC 20375, altan.turgut@nrl.navy.mil), and Jennifer L. Wylie (Fellowships Office, National Res. Council, Washington, DC)
In shallow water, spatial and temporal variability of the water column often restricts accurate estimation of bottom properties from low-frequency acoustic data, especially under highly active oceanographic conditions during the summer. These effects are reduced under winter conditions with a more uniform sound-speed profile. However, during the RAGS03 winter experiment, significant low-frequency (200–500 Hz) acoustic signal degradations were observed on the New Jersey Shelf, especially in the presence of frequently occurring winter storms. Both in-plane and out-of-plane propagation effects were observed on three moored VLAs and one bottom-moored HLA. These effects were further analyzed using 3-D PE simulations with inputs from a 3-D time-evolving surface gravity wave model. It is shown that higher-order acoustic modes are highly scattered at high sea states and out-of-plane propagation effects become important when surface-wave fronts are parallel to the acoustic propagation track. In addition, 3-D propagation effects on source localization and geoacoustic inversions are investigated using the VLA data with/without the presence of winter storms. [Work supported by ONR.]
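The exhaustive-search matched-field inversion in 2aAO9 rests on scoring replica fields against the measured field; a common choice for that score is the normalized Bartlett processor. A toy sketch follows (two-mode replicas with made-up wavenumbers and array geometry, standing in for the paper's parabolic-equation replicas):

```python
import numpy as np

def bartlett(data, replicas):
    """Normalized Bartlett matched-field power of one measured array
    snapshot against each replica vector (rows of `replicas`)."""
    d = data / np.linalg.norm(data)
    r = replicas / np.linalg.norm(replicas, axis=1)[:, None]
    return np.abs(r @ np.conj(d)) ** 2

# toy replicas: two interfering modes on a 16-element vertical array in
# 80 m of water; wavenumbers are illustrative, not from a real model
depths = np.arange(16) * 5.0
km = np.array([0.2094, 0.1994])                       # modal wavenumbers (rad/m)
modes = np.sin(np.outer(depths, np.array([1, 2]) * np.pi / 80.0))
ranges = np.arange(500.0, 1101.0, 20.0)               # candidate source ranges
grid = (modes @ np.exp(1j * km[:, None] * ranges[None, :])).T

true = grid[ranges == 800.0][0]                       # "measured" field
data = true + 0.05 * np.random.default_rng(0).normal(size=depths.size)
est = ranges[np.argmax(bartlett(data, grid))]
print(est)   # -> 800.0
```

Exhaustive search simply evaluates this score over the whole parameter grid; in a Bayesian framework the same surface feeds the posterior rather than a single argmax.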
11:45
2aAO11. Effects of sea surface roughness on the mid-frequency acoustic pulse decay in shallow water. Jennifer Wylie (National Res. Council, 6141 Edsall Rd., Apt. H, Alexandria, VA 22304, jennie.wylie@gmail.com) and Altan Turgut (Acoust. Div., Naval Res. Lab., Washington, DC)
Recent and ongoing efforts to characterize seabed parameters from measured acoustic pulse decay have neglected the effects of sea surface roughness. In this paper, these effects are investigated using a rough-surface version of RAMPE, RAMSURF, and random rough surface realizations calculated from a 2D JONSWAP sea surface spectrum with directional spreading. Azimuthal dependence is investigated for sandy bottoms, and it is found that the rate of pulse decay increases when the surface wave fronts are perpendicular to the path of acoustic propagation, and that higher significant wave height results in higher decay rates. Additionally, the effects of sea surface roughness are found to vary with different waveguide parameters including, but not limited to, sound speed profile, water depth, and seabed properties. Of particular interest are the combined effects of seabed properties and rough sea surfaces. It is shown that when clay-like sediments are present, higher-order modes are strongly attenuated and effects due to interaction with the rough sea surface are less pronounced. Finally, possible influences of sea state and 3D out-of-plane propagation effects on seabed characterization efforts will be discussed. [Work supported by ONR.]
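Rough-surface realizations of the kind 2aAO11 feeds to RAMSURF are typically generated by assigning random phases to spectral components of a wave spectrum. A minimal 1-D sketch (the paper uses a 2-D JONSWAP spectrum with directional spreading; parameter values here are illustrative):

```python
import numpy as np

def jonswap(omega, hs=2.0, tp=8.0, gamma=3.3, g=9.81):
    """One-sided JONSWAP elevation spectrum S(omega), rescaled so its
    zeroth moment matches the target significant wave height hs = 4*sqrt(m0)."""
    wp = 2.0 * np.pi / tp                                  # peak frequency
    sigma = np.where(omega <= wp, 0.07, 0.09)
    peak = gamma ** np.exp(-((omega - wp) ** 2) / (2 * sigma**2 * wp**2))
    s = (g**2 / omega**5) * np.exp(-1.25 * (wp / omega) ** 4) * peak
    m0 = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(omega))   # trapezoid rule
    return s * (hs / 4.0) ** 2 / m0

# random-phase realization via inverse FFT
rng = np.random.default_rng(0)
n, dt = 4096, 0.25
omega = 2 * np.pi * np.fft.rfftfreq(n, dt)[1:]     # drop the DC bin
s = jonswap(omega)
amp = np.sqrt(2 * s * (omega[1] - omega[0]))       # component amplitudes
phase = rng.uniform(0, 2 * np.pi, omega.size)
spec = np.concatenate(([0], amp * np.exp(1j * phase))) * n / 2
eta = np.fft.irfft(spec, n)                        # surface elevation (m)
print(round(4 * np.std(eta), 2))                   # realized Hs, close to hs
```

Each draw of the phases gives an independent surface realization with the prescribed spectrum, which is the Monte Carlo ingredient behind the pulse-decay statistics in the abstract.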
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA A/B, 7:55 A.M. TO 12:10 P.M.
Session 2aBA
Biomedical Acoustics: Quantitative Ultrasound I
Michael Oelze, Cochair
UIUC, 405 N. Mathews, Urbana, IL 61801
Jonathan Mamou, Cochair
F. L. Lizzi Center for Biomedical Engineering, Riverside Research, 156 William St., 9th Floor, New York, NY 10038
Chair’s Introduction—7:55
Invited Papers
8:00
2aBA1. Myocardial tissue characterization: Myofiber-induced ultrasonic anisotropy. James G. Miller (Phys., Washington U Saint
Louis, Box 1105, 1 Brookings Dr., Saint Louis, MO 63130, james.g.miller@wustl.edu) and Mark R. Holland (Radiology and Imaging
Sci., Indiana Univ. School of Medicine, Indianapolis, IN)
One goal of this invited presentation is to illustrate the capabilities of quantitative ultrasonic imaging (tissue characterization) to determine local myofiber orientation using techniques applicable to clinical echocardiographic imaging. Investigations carried out in our laboratory in the late 1970s were perhaps the first reported studies of the impact of the angle between the incoming ultrasonic beam and the local myofiber orientation on the ultrasonic attenuation. In subsequent studies, we were able to show that the ultrasonic backscatter
exhibits a maximum and the ultrasonic attenuation exhibits a minimum when the sound beam is perpendicular to myofibers, whereas the
attenuation is maximum and the backscatter is minimum for parallel insonification. Results from our laboratory demonstrate three broad
areas of potential contribution derived from quantitative ultrasonic imaging and tissue characterization: (1) improved diagnosis and
patient management, such as monitoring alterations in regional myofiber alignment (for example, potentially in diseases such as hypertrophic cardiomyopathy), (2) improved echocardiographic imaging, such as reduced lateral wall dropout in short axis echocardiographic
images, and (3) improved understanding of myocardial physiology, such as contributing to a better understanding of myocardial twist
resulting from the layer-dependent helical configuration of cardiac myofibers. [NIH R21 HL106417.]
8:20
2aBA2. Quantitative ultrasound for diagnosing breast masses considering both diffuse and non-diffuse scatterers. James Zagzebski, Ivan Rosado-Mendez, Haidy Gerges-Naisef, and Timothy Hall (Medical Phys., Univ. of Wisconsin, 1111 Highland Ave., Rm. L1
1005, Madison, WI 53705, jazagzeb@wisc.edu)
Quantitative ultrasound augments conventional ultrasound information by providing parameters derived from scattering and attenuation properties of tissue. This presentation describes our work estimating attenuation (ATT) and backscatter coefficients (BSC), and
computing effective scatterer sizes (ESD) to differentiate benign from malignant breast masses. Radio-frequency echo data are obtained
from patients scheduled for biopsy of suspicious masses following an institutional IRB approved protocol. A Siemens S2000 equipped
with a linear array and, more recently, a volume scanner transducer is employed. Echo signal power spectra are computed from the tissue and
from the same depth in a reference phantom having accurately measured acoustic properties. Ratios of the tissue-to-reference power
spectra enable tissue ATT and BSCs to be estimated. ESDs are then computed by fitting BSC-vs.-frequency results to a size-dependent scattering model. A heterogeneity index (HDI) expresses variability of the ESD over the tumor area. In preliminary data from 35 patients, a Bayesian classifier incorporating ATT, ESD, and HDI successfully differentiated malignant masses from fibroadenomas. Future work focuses on analysis methods for cases when diffuse scattering and stationary signal conditions, implicitly assumed in the power spectra calculations, are not present. This approach tests for signal coherence and generates new parameters that characterize these scattering conditions.
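At a single frequency, the reference-phantom step described above reduces to fitting the depth slope of the log spectral ratio. A simplified sketch (synthetic, noiseless log-ratios; real processing spans many frequencies and handles windowing and diffraction effects):

```python
import numpy as np

NP_PER_DB = 1 / 8.685889638   # conversion factor: dB -> nepers

def attenuation_from_ratio(depths_cm, log_ratio, freq_mhz, alpha_ref_db):
    """Reference-phantom attenuation estimate at one frequency: the
    natural-log tissue/reference spectral ratio falls linearly with depth
    with slope -4*(alpha_t - alpha_r)*f (alpha in Np/cm/MHz), so fitting
    that slope recovers the tissue attenuation in dB/cm/MHz."""
    slope = np.polyfit(depths_cm, log_ratio, 1)[0]
    d_alpha_np = -slope / (4.0 * freq_mhz)
    return alpha_ref_db + d_alpha_np / NP_PER_DB

# synthetic check: tissue 0.7, reference 0.5 dB/cm/MHz, analyzed at 5 MHz
alpha_t, alpha_r, f = 0.7, 0.5, 5.0
z = np.linspace(1.0, 4.0, 16)                               # depths (cm)
log_ratio = -4.0 * (alpha_t - alpha_r) * NP_PER_DB * f * z  # ideal data
print(round(attenuation_from_ratio(z, log_ratio, f, alpha_r), 3))   # -> 0.7
```

Once the attenuation is known, the depth-compensated spectral ratio yields the BSC, from which effective scatterer sizes are fit as the abstract describes.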
8:40
2aBA3. Quantitative ultrasound translates to human conditions. William O’Brien (Elec. and Comput. Eng., Univ. of Illinois, 405 N.
Mathews, Urbana, IL 61801, wdo@uiuc.edu)
Two QUS studies will be discussed that demonstrate significant potential for translation to human conditions. One of the studies deals with the early detection of spontaneous preterm birth (SPTB). In a cohort of 68 adult African American women, each agreed to undergo up to five transvaginal ultrasound examinations for cervical ultrasonic attenuation (at 5 MHz) and cervical length between 20 and 36 weeks gestation (GA). At 21 weeks GA, the women who delivered preterm had a lower mean attenuation (1.02 ± 0.16 dB/cm MHz) than the women delivering at term (1.39 ± 0.095 dB/cm MHz), p = 0.041. Cervical length at 21 weeks was not significantly different between groups. Attenuation risk of SPTB (1.2 dB/cm MHz threshold at 21 weeks): specificity = 83.3%, sensitivity = 65.4%. The other QUS study deals with the early detection of nonalcoholic fatty liver disease (NAFLD). Liver attenuation (ATN) and backscatter coefficients (BSC) were assessed at 3 MHz and compared to the liver MR-derived fat fraction (FF) in a cohort of 106 adult subjects. At a 5% FF cutoff (for NAFLD, FF ≥ 5%), an ATN threshold of 0.78 dB/cm MHz provided a sensitivity of 89% and specificity of 84%, whereas a BSC threshold of 0.0028/cm-sr provided a sensitivity of 92% and specificity of 96%.
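The single-threshold operating points quoted above follow from simple counts. A sketch (tiny made-up data, not the study's cohort; for SPTB the positive condition is flagged by attenuation below the threshold):

```python
import numpy as np

def sens_spec(values, is_positive, threshold, positive_below=True):
    """Sensitivity and specificity of a single-threshold test.  With
    positive_below=True the positive condition (e.g., preterm delivery)
    is flagged when the measurement falls below the threshold."""
    pred = values < threshold if positive_below else values >= threshold
    sens = np.sum(pred & is_positive) / np.sum(is_positive)
    spec = np.sum(~pred & ~is_positive) / np.sum(~is_positive)
    return float(sens), float(spec)

# six hypothetical subjects: attenuation in dB/cm MHz, preterm status
att = np.array([0.9, 1.0, 1.1, 1.3, 1.4, 1.5])
preterm = np.array([True, True, False, False, False, False])
print(sens_spec(att, preterm, 1.2))   # -> (1.0, 0.75)
```

Sweeping the threshold over the observed values traces out the ROC curve from which operating points like those in the abstract are chosen.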
9:00
2aBA4. Quantitative-ultrasound detection of cancer in human lymph nodes based on support vector machines. Jonathan Mamou,
Daniel Rohrbach (F. L. Lizzi Ctr. for Biomedical Eng., Riverside Res., 156 William St., 9th Fl., New York, NY 10038, jmamou@rriusa.org), Alain Coron (Laboratoire d’Imagerie Biomedicale, Sorbonne Universites and UPMC Univ Paris 06 and CNRS and INSERM,
Paris, France), Emi Saegusa-Beecroft (Dept. of Surgery, Univ. of Hawaii and Kuakini Medical Ctr., Honolulu, HI), Thanh Minh Bui
(Laboratoire d’Imagerie Biomedicale, Sorbonne Universites and UPMC Univ Paris 06 and CNRS and INSERM, Paris, France), Michael
L. Oelze (BioAcoust. Res. Lab., Univ. of Illinois, Urbana-Champaign, IL), Eugene Yanagihara (Dept. of Surgery, Univ. of Hawaii and
Kuakini Medical Ctr., Honolulu, HI), Lori Bridal (Laboratoire d’Imagerie Biomedicale, Sorbonne Universites and UPMC Univ Paris 06
and CNRS and INSERM, Paris, France), Tadashi Yamaguchi (Ctr. for Frontier Medical Eng., Chiba Univ., Chiba, Japan), Junji Machi
(Dept. of Surgery, Univ. of Hawaii and Kuakini Medical Ctr., Honolulu, HI), and Ernest J. Feleppa (F. L. Lizzi Ctr. for Biomedical
Eng., Riverside Res., New York, NY)
Histological assessment of lymph nodes excised from cancer patients suffers from an unsatisfactory rate of false-negative determinations. We are evaluating high-frequency quantitative ultrasound (QUS) to detect metastatic regions in lymph nodes freshly excised from cancer patients. Three-dimensional (3D) RF data were acquired from 289 lymph nodes of 82 colorectal-, 15 gastric-, and 70 breast-cancer patients with a custom scanner using a 26-MHz, single-element transducer. Following data acquisition, individual nodes underwent step-sectioning at 50 μm to assure that no clinically significant cancer foci were missed. RF datasets were analyzed using 3D regions-of-interest that were processed to yield 13 QUS estimates, including spectral-based and envelope-statistics-based parameters. QUS estimates are associated with tissue microstructure and are hypothesized to provide contrast between non-cancerous and cancerous regions. Leave-one-out classifications, ROC curves, and areas under the ROC curve (AUC) were used to compare the performance of support vector machines (SVMs) and step-wise linear discriminant analyses (LDA). Results showed that SVM performance (AUC = 0.87) was superior to LDA performance (AUC = 0.78). These results suggest that QUS methods may provide an effective tool to guide pathologists towards suspicious regions and also indicate that classification accuracy can be improved using sophisticated and robust classification tools. [Supported in part by NIH grant CA100183.]
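The leave-one-out SVM-versus-LDA comparison described above can be sketched as follows. This is an illustrative reconstruction using synthetic feature vectors; the scikit-learn calls are real, but the data, kernel choice, and class sizes are assumptions, not the study's:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-node QUS feature vectors (13 estimates per node)
X = np.vstack([rng.normal(0.0, 1.0, (40, 13)),   # "non-cancerous" nodes
               rng.normal(1.0, 1.0, (40, 13))])  # "cancerous" nodes
y = np.array([0] * 40 + [1] * 40)

loo = LeaveOneOut()
# Leave-one-out decision scores for each classifier, then ROC AUC
svm_scores = cross_val_predict(SVC(kernel="rbf"), X, y,
                               cv=loo, method="decision_function")
lda_scores = cross_val_predict(LinearDiscriminantAnalysis(), X, y,
                               cv=loo, method="decision_function")
auc_svm = roc_auc_score(y, svm_scores)
auc_lda = roc_auc_score(y, lda_scores)
```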
9:20
2aBA5. Quantitative ultrasound assessment of tumor responses to chemotherapy using a time-integrated multi-parameter
approach. Hadi Tadayyon, Ali Sadeghi-Naini, Lakshmanan Sannachi, and Gregory Czarnota (Dept. of Medical Biophys., Univ. of Toronto, 2075 Bayview Ave., Toronto, ON M4N 3M5, Canada, gregory.czarnota@sunnybrook.ca)
Radiofrequency ultrasound data were collected from 60 breast cancer patients prior to treatment and at several times during their several-month treatment, using a clinical ultrasound scanner operating a ~7-MHz linear array probe. ACE, SAS, spectral, and BSC parameters were computed from 2 × 2 mm RF segments within the tumor region of interest (ROI) and averaged over all segments to obtain a mean value for the ROI. The results were separated into two groups—responders and non-responders—based on the ultimate clinical/pathologic response, determined from residual tumor size and tumor cellularity. Using a single-parameter approach, the best prediction of
response was achieved using the ACE parameter (76% accuracy at week 1). In general, more favorable classifications were achieved
using spectral parameter combinations (82% accuracy at week 8), compared to BSC parameter combinations (73% accuracy). Using the
multi-parameter approach, the best prediction was achieved using the set [MBF, SS, SAS, ACE] and by combining week 1 QUS data
with week 4 QUS data to predict the response at week 4, providing accuracy as high as 91%. The proposed QUS method may potentially
provide early response information and guide cancer therapies on an individual patient basis.
2123
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2123
9:40
2aBA6. Quantitative ultrasound methods for uterine-cervical assessment. Timothy J. Hall (Medical Phys., Univ. of Wisconsin,
1005 WIMR, 1111 Highland Ave., Madison, WI 53705, tjhall@wisc.edu), Helen Feltovich (Medical Phys., Univ. of Wisconsin, Park
City, Utah), Lindsey C. Carlson, Quinton Guerrero, Ivan M. Rosado-Mendez, and Bin Huang (Medical Phys., Univ. of Wisconsin, Madison, WI)
The cervix is a remarkable organ. One of its tasks is to remain firm and “closed” (5 mm diameter cervical canal) prior to pregnancy.
Shortly after conception the cervix begins to soften through collagen remodeling and increased hydration. As the fetus reaches full-term
there is a profound breakdown in the collagen structure. At the end of this process, the cervix is as soft as warm butter and the cervical
canal has dilated to about 100 mm diameter. Errors in the timing of this process are a cause of preterm birth, which has a cascade of life-threatening consequences. Quantitative ultrasound is well suited to monitoring these changes. We have demonstrated the ability to accurately assess the elastic properties and acoustic scattering properties (anisotropy in backscatter and attenuation) of the cervix in nonpregnant hysterectomy specimens and in third-trimester pregnancy. We have shown that acoustic and mechanical properties vary along the length of the cervix. When anisotropy and spatial variability are accounted for, there are clear differences in parameter values even with subtle differences in softening. We are corroborating acoustic observations with nonlinear optical microscopy imaging for a reality
check on underlying tissue structure. This presentation will provide an overview of this effort.
10:00–10:10 Break
10:10
2aBA7. Characterization of anisotropic media with shear waves. Matthew W. Urban, Sara Aristizabal, Bo Qiang, Carolina Amador
(Dept. of Physiol. and Biomedical Eng., Mayo Clinic College of Medicine, 200 First St. SW, Rochester, MN 55905, urban.matthew@
mayo.edu), John C. Brigham (Dept. of Civil and Environ. Eng., Dept. of BioEng., Univ. of Pittsburgh, Pittsburgh, PA), Randall R. Kinnick, Xiaoming Zhang, and James F. Greenleaf (Dept. of Physiol. and Biomedical Eng., Mayo Clinic College of Medicine, Rochester,
MN)
In conventional shear wave elastography, materials are assumed to be linear, elastic, homogeneous, and isotropic. These assumptions are not always appropriate, however, and their violations must be accounted for in certain tissues. Many tissues, such as skeletal muscle, the kidney,
and the myocardium are anisotropic. Shear waves can be used to investigate the directionally dependent mechanical properties of anisotropic media. To study these tissues in a systematic way and to account for the effects of the anisotropic architecture, laboratory-based
phantoms are desirable. We will report on several phantom-based approaches for studying shear wave anisotropy, assuming that these
materials are transversely isotropic. Phantoms with embedded fibers were used to mimic anisotropic tissues. Homogeneous phantoms
were compressed to induce transverse isotropy according to the acoustoelastic phenomenon, which is related to nonlinear behavior of
the materials. The fractional anisotropy of these phantoms was quantified to compare with measurements made in soft tissues. In addition, soft tissues are also viscoelastic, and we have developed a method to model viscoelastic transversely isotropic materials with the finite element method (FEM). The viscoelastic property estimation from phantom experiments and FEM simulations will also be
discussed.
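The angular dependence underlying such measurements is often summarized, for an SH-polarized shear wave in a transversely isotropic solid, by an effective shear modulus that interpolates between the moduli along and across the fibers. A minimal sketch with illustrative muscle-like values; this is the standard TI relation, not necessarily the authors' model:

```python
import math

def sh_speed(theta_rad, mu_L, mu_T, rho):
    """Phase speed of an SH-polarized shear wave propagating at angle theta
    (measured from the fiber/symmetry axis) in a transversely isotropic
    elastic solid. mu_L, mu_T: shear moduli along/across the fibers."""
    mu_eff = mu_L * math.cos(theta_rad) ** 2 + mu_T * math.sin(theta_rad) ** 2
    return math.sqrt(mu_eff / rho)

# Illustrative values: mu_L = 16 kPa, mu_T = 4 kPa, rho = 1000 kg/m^3
c_along = sh_speed(0.0, 16e3, 4e3, 1000.0)            # along the fibers
c_across = sh_speed(math.pi / 2, 16e3, 4e3, 1000.0)   # across the fibers
```

With these numbers the wave travels twice as fast along the fibers (4 m/s) as across them (2 m/s), the kind of contrast a fractional-anisotropy measure quantifies.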
10:30
2aBA8. Applications of acoustic radiation force for quantitative elasticity evaluation of bladder, thyroid, and breast. Mostafa
Fatemi (Physiol. and Biomedical Eng., Mayo Clinic College of Medicine, 200 1st St. SW, Rochester, MN 55905, fatemi@mayo.edu)
Acoustic radiation force (ARF) provides a simple yet non-invasive mechanism for inducing a localized stress inside the human body.
The response to this excitation is used to estimate the mechanical properties of the targeted tissue in vivo. This talk covers an overview
of three studies that use ARF for estimation of elastic properties of thyroid, breast, and the bladder in patients. The studies on thyroid
and breast were aimed at differentiating between malignant and benign nodules. The study on the bladder was aimed at indirect evaluation of bladder compliance; hence, only a global measurement was needed. The study on breast showed that 16 out of 18 benign masses
and 21 out of 25 malignant masses were correctly identified. The study on 9 thyroid patients with 7 benign and 2 malignant nodules
showed all malignant nodules were correctly classified and only 2 of the 7 benign nodules were misclassified. The bladder compliance
study revealed a high correlation between our method and independent clinical measurement of compliance (R-squared of 0.8–0.9). Further investigations on larger groups of patients are needed to fully evaluate the performance of the methods.
10:50
2aBA9. Multiband center-frequency estimation for robust speckle tracking applications. Emad S. Ebbini and Dalong Liu (Elec.
and Comput. Eng., Univ. of Minnesota, 200 Union St. SE, Minneapolis, MN 55455, ebbin001@umn.edu)
Speckle tracking is widely used for the detection and estimation of minute tissue motion and deformation with applications in elastography, shear-wave imaging, thermography, etc. The center frequency of the echo data within the tracking window is an important parameter in the estimation of the tissue displacement. Local variations in this quantity due to echo mixtures (specular and speckle
components) may produce a bias in the estimation of tissue displacement using correlation-based speckle tracking methods. We present
a new algorithm for estimation and tracking of the center frequency variation in pulse-echo ultrasound as a quantitative tissue property
and for robust speckle tracking applications. The algorithm employs multiband analysis in the determination of echo mixtures as a preprocessing step before the estimation of the center frequency map. This estimate, in turn, is used to improve the robustness of the displacement map produced by the correlation-based speckle tracking. The performance of the algorithm is demonstrated in two speckle
tracking applications of interest in medical ultrasound: (1) ultrasound thermography and (2) vascular wall imaging.
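One simple way to see how a center-frequency map can be formed is the power-weighted spectral centroid of each tracking window; a sketch using a generic estimator, not the multiband algorithm of the abstract:

```python
import numpy as np

def center_frequency(echo, fs):
    """Estimate the center frequency of an echo segment as the
    power-weighted spectral centroid."""
    spectrum = np.abs(np.fft.rfft(echo)) ** 2
    freqs = np.fft.rfftfreq(len(echo), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Synthetic Gaussian-windowed 5-MHz pulse sampled at 40 MHz
fs = 40e6
t = np.arange(256) / fs
pulse = np.sin(2 * np.pi * 5e6 * t) * \
        np.exp(-((t - 3.2e-6) ** 2) / (2 * (0.5e-6) ** 2))
fc = center_frequency(pulse, fs)
```

Mapping this estimate window-by-window over an image gives a local center-frequency map of the kind the abstract describes, which echo mixtures can bias.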
11:10
2aBA10. Echo decorrelation imaging for quantification of tissue structural changes during ultrasound ablation. T. Douglas Mast,
Tyler R. Fosnight, Fong Ming Hooi, Ryan D. Keil, Swetha Subramanian, Anna S. Nagle (Biomedical Eng., Univ. of Cincinnati, 3938
Cardiovascular Res. Ctr., 231 Albert Sabin Way, Cincinnati, OH 45267-0586, doug.mast@uc.edu), Marepalli B. Rao (Environ. Health,
Univ. of Cincinnati, Cincinnati, OH), Yang Wang, Xiaoping Ren (Internal Medicine, Univ. of Cincinnati, Cincinnati, OH), Syed A.
Ahmad (Surgery, Univ. of Cincinnati, Cincinnati, OH), and Peter G. Barthe (Guided Therapy Systems/Ardent Sound, Mesa, AZ)
Echo decorrelation imaging is a pulse-echo method that maps millisecond-scale changes in backscattered ultrasound signals, potentially providing real-time feedback during thermal ablation treatments. Decorrelation between echo signals from sequential image
frames is spatially mapped and temporally averaged, resulting in images of cumulative, heat-induced tissue changes. Theoretical analysis indicates that the mapped echo decorrelation parameter is equivalent to a spatial decoherence spectrum of the tissue reflectivity, and
also provides a method to compensate for decorrelation artifacts caused by tissue motion and electronic noise. Results are presented from
experiments employing 64-element linear arrays that perform bulk thermal ablation, focal ablation, and pulse-echo imaging using the
same piezoelectric elements, ensuring co-registration of ablation and image planes. Decorrelation maps are shown to correlate with
ablated tissue histology, including vital staining to map heat-induced cell death, for both ex vivo ablation of bovine liver tissue and in
vivo ablation of rabbit liver with VX2 carcinoma. Receiver operating characteristic curve analysis shows that echo decorrelation predicts
local ablation with greater success than integrated backscatter imaging. Using artifact-compensated echo decorrelation maps, heating-induced decoherence of tissue scattering media is assessed for ex vivo and in vivo ultrasound ablation by unfocused and focused beams.
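The mapped quantity can be illustrated with a toy computation; a simplified, window-free sketch of frame-to-frame echo decorrelation (normalized so identical frames give 0), not the authors' artifact-compensated estimator:

```python
import numpy as np

def echo_decorrelation(frame_a, frame_b):
    """Decorrelation between two echo frames: 0 for identical frames,
    approaching 1 for uncorrelated ones."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    rho = np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
    return 1.0 - rho ** 2

# Two synthetic "frames": identical speckle vs. independent speckle
rng = np.random.default_rng(1)
f1 = rng.standard_normal(2048)
f2 = rng.standard_normal(2048)
d_same = echo_decorrelation(f1, f1)   # unchanged tissue -> ~0
d_diff = echo_decorrelation(f1, f2)   # fully decorrelated -> ~1
```

In practice the decorrelation is computed in local windows and accumulated over sequential frames, as the abstract describes.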
11:30
2aBA11. Quantitative ultrasound imaging to monitor in vivo high-intensity ultrasound treatment. Goutam Ghoshal (Res. and Development, Acoust. MedSystems Inc., 208 Burwash Ave., Savoy, IL 61874, ghoshal2@gmail.com), Jeremy P. Kemmerer, Chandra Karunakaran, Rami Abuhabshah, Rita J. Miller, and Michael L. Oelze (Elec. and Comput. Eng., Univ. of Illinois at Urbana-Champaign,
Urbana, IL)
The success of any minimally invasive treatment procedure can be enhanced significantly if combined with a robust noninvasive
quantitative imaging modality. Quantitative ultrasound (QUS) imaging has been widely investigated for monitoring various treatment
responses such as chemotherapy and thermal therapy. Previously we have shown the feasibility of using spectral based quantitative ultrasound parameters to monitor high-intensity focused ultrasound (HIFU) treatment of in situ tumors [Ultrasonic Imaging, 2014]. In the
present study, we examined the use of various QUS parameters to monitor HIFU treatment of an in vivo mouse mammary adenocarcinoma model. Spectral parameters in terms of the backscatter coefficient, integrated backscattered energy, attenuation coefficient, and effective scatterer size and concentration were estimated from radiofrequency signals during the treatment. The characteristics of each parameter were compared to the temperature profile recorded by a needle thermocouple inserted into the tumor a few millimeters away from
the focal zone of the intersecting HIFU and the imaging transducer beams. The changes in the QUS parameters during the HIFU treatment followed similar trends observed in the temperature readings recorded from the thermocouple. These results suggest that QUS
techniques have the potential to be used for non-invasive monitoring of HIFU exposure.
11:50
2aBA12. Rapid simulations of diagnostic ultrasound with multiple-zone receive beamforming. Pedro Nariyoshi and Robert
McGough (Dept. of Elec. and Comput. Eng., Michigan State Univ., 2120 Eng. Bldg., East Lansing, MI 48824, mcgough@egr.msu.edu)
Routines are under development in FOCUS, the “Fast Object-oriented C++ Ultrasound Simulator” (http://www.egr.msu.edu/~fultras-web), to accelerate B-mode image simulations by combining the fast nearfield method with time-space decomposition. The most
recent addition to the FOCUS simulation model implements receive beamforming in multiple zones. To demonstrate the rapid convergence of these simulations in the nearfield region, simulations of a 192 element linear array with an electronically translated 64 element
sub-aperture are evaluated for a transient excitation pulse with a center frequency of 3 MHz. The transducers in this simulated array are
5 mm high and 0.5133 mm wide with a 0.1 mm center to center spacing. The simulation is evaluated for a computer phantom with
100,000 scatterers. The same configuration is simulated in Field II (http://field-ii.dk), and the impulse response approach with a temporal
sampling rate of 1 GHz is used as reference. Simulations are evaluated for the entire B-mode image simulated with each approach. The
results show that, with sampling frequencies of 15 MHz and higher, FOCUS eliminates all of the numerical artifacts that appear in the
nearfield region of the B-mode image, whereas Field II requires much higher temporal sampling frequencies to obtain similar results.
[Supported in part by NIH Grant R01 EB012079.]
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 6, 9:00 A.M. TO 11:00 A.M.
Session 2aED
Education in Acoustics: Undergraduate Research Exposition (Poster Session)
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
All posters will be on display from 9:00 a.m. to 11:00 a.m. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 9:00 a.m. to 10:00 a.m. and contributors of even-numbered papers will be at their
posters from 10:00 a.m. to 11:00 a.m.
Contributed Papers
2aED1. Prediction of pressure distribution between the vocal folds using
Bernoulli’s equation. Alexandra Maddox, Liran Oren, Sid Khosla, and
Ephraim Gutmark (Univ. of Cincinnati, 3317 Bishop St., Apt. 312, Cincinnati, OH 45219, maddoxat@mail.uc.edu)
Determining the mechanisms of self-sustained oscillation of the vocal
folds requires characterization of intraglottal aerodynamics. Since most of
the intraglottal aerodynamics forces cannot be measured experimentally,
most of the current understanding of vocal fold vibration mechanism is
derived from analytical and computational models. Several such studies have used Bernoulli's equation to calculate the pressure distribution between the vibrating folds. In the current study, intraglottal pressure measurements are taken in a hemilarynx model and are compared with pressure values computed from Bernoulli's equation. The hemilarynx model was made by removing one fold and having the remaining fold vibrate against a metal plate. The plate was equipped with two pressure ports located near the superior and inferior aspects of the fold. The results show that the pressure calculated using Bernoulli's equation matched the measured pressure waveform well during the glottal opening phase and diverged from it during the closing phase.
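The intraglottal pressure distribution such models compute typically follows from the steady Bernoulli relation between the subglottal entry and a point of local glottal area; a minimal sketch, assuming steady, incompressible, inviscid flow (the symbols below are illustrative, not the authors' notation):

```latex
% p(x): intraglottal pressure, p_s: subglottal pressure,
% rho: air density, U: volume flow rate, A(x): local glottal area
p(x) + \tfrac{1}{2}\,\rho\,\frac{U^2}{A(x)^2} = p_s
\quad\Longrightarrow\quad
p(x) = p_s - \frac{\rho\,U^2}{2\,A(x)^2}
```

The pressure thus drops wherever the glottal area narrows, which is why the relation tends to fit the opening phase better than the separated, unsteady flow of closing.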
2aED2. Effects of room acoustics on subjective workload assessment
while performing dual tasks. Brenna N. Boyd, Zhao Peng, and Lily Wang
(Eng., Univ. of Nebraska at Lincoln, 11708 s 28th St., Bellevue, NE 68123,
bnboyd@unomaha.edu)
This investigation examines the subjective workload assessments of
individuals using the NASA Task Load Index (TLX), as they performed
speech comprehension tests under assorted room acoustic conditions. This
study was motivated by the increasing diversity in US classrooms. Both
native and non-native English listeners participated, using speech comprehension test materials produced by native English speakers in the first phase
and by native Mandarin Chinese speakers in the second phase. The speech
materials were presented in an immersive listening environment to each
listener under 15 acoustic conditions, formed from combinations of background noise level (three levels: RC-30, RC-40, and RC-50) and reverberation time
(five levels from 0.4 to 1.2 seconds). During each condition, participants
completed assorted speech comprehension tasks while also tracing a moving
dot for an adaptive rotor pursuit task. At the end of each acoustic condition,
listeners were asked to assess the perceived workload by completing the six-item NASA TLX survey, covering, e.g., mental demand, perceived performance, effort, and frustration. Results indicate that (1) listeners' workload assessments worsened as the acoustic conditions became more adverse, and (2) the decrement in subjective assessment was greater for non-native listeners.
2aED3. Analysis and virtual modification of the acoustics in the
Nebraska Wesleyan University campus theatre auditorium. Laura C.
Brill (Dept. of Phys., Nebraska Wesleyan Univ., 5000 St. Paul Ave, Lincoln,
NE 68504, lbrill@nebrwesleyan.edu), Matthew G. Blevins, and Lily M.
Wang (Durham School of Architectural Eng. and Construction, Univ. of
Nebraska-Lincoln, Omaha, NE)
NWU’s McDonald Theatre Auditorium is used for both musical and
non-musical performances. The acoustics of the space were analyzed in
order to determine whether the space could be modified to better fit its uses.
The acoustic characteristics of the room were obtained from impulse
responses using the methods established in ISO 3382-1 for measuring the
acoustic parameters of a performance space. A total of 22 source/receiver
pairs were used. The results indicate a need for increased reverberation in
the mid to high frequency ranges of 500–8000 Hz. The experimental results
were used to calibrate a virtual model of the space in ODEON acoustics
software. Materials in the model were then successfully modified to increase reverberation time and eliminate unwanted flutter echoes, optimizing the acoustics to better suit the intended uses of the space.
2aED4. The diffraction pattern associated with the transverse cusp
caustic. Carl Frederickson and Nicholas L. Frederickson (Phys. and Astronomy, Univ. of Central Arkansas, LSC 171, 201 Donaghey Ave., Conway,
AR 72035, nicholaslfrederickson@gmail.com)
New software has been developed to evaluate the Pearcey function P±(w1,w2) = ∫_{−∞}^{∞} exp[±i(s⁴/4 + w2 s²/2 + w1 s)] ds. This describes the diffraction pattern of a transverse cusp caustic. Run-time comparisons between different coding environments will be presented. The caustic surface produced by the reflection of a spherical wavefront from the surface given by h(x,y) = h21 x² + h22 xy + h23 y² will also be displayed.
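The Pearcey integral can be evaluated numerically; one standard approach rotates the integration contour so the quartic phase becomes a decaying exponential. A sketch of that generic method, not necessarily the software compared in the talk:

```python
import numpy as np

def pearcey(w1, w2, half_width=8.0, n=4001):
    """Evaluate P_+(w1, w2) = int_{-inf}^{inf} exp[i(s^4/4 + w2*s^2/2
    + w1*s)] ds by rotating the contour s = u*exp(i*pi/8), which turns
    the quartic phase into a decaying factor exp(-u^4/4)."""
    rot = np.exp(1j * np.pi / 8)
    u = np.linspace(-half_width, half_width, n)
    s = u * rot
    f = np.exp(1j * (s**4 / 4 + w2 * s**2 / 2 + w1 * s))
    du = u[1] - u[0]
    # Trapezoidal rule; the integrand is negligible at the endpoints.
    return rot * du * (f.sum() - 0.5 * (f[0] + f[-1]))
```

At the cusp point the value is known in closed form, P_+(0,0) = Γ(1/4) e^{iπ/8}/√2 ≈ 2.5637 e^{iπ/8}, which gives a quick check on any implementation.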
2aED5. Architectural acoustical oddities. Zev C. Woodstock and Caroline
P. Lubert (Mathematics & Statistics, James Madison Univ., 301 Dixie Ave.,
Harrisonburg, VA 22801, lubertcp@jmu.edu)
The quad at James Madison University (Virginia, USA) exhibits an
uncommon, but not unique, acoustical oddity called Repetition Pitch. When
someone stands at certain places on the quad and makes a punctuated white noise (a clap, for example), a most unusual squeak is returned. This phenomenon occurs only at these specific places. A similar effect has been observed
in other locations, most notably Ursinus College (Pennsylvania, USA) and
the pyramid at Chichen Itza (Mexico). This talk will discuss Repetition
Pitch, as well as other interesting architectural acoustic phenomena, including the noisy animals in the caves at Arcy-sur-Cure (France), the early warning system at Golkonda Fort (Southern India), and the singing angels at
Wells Cathedral in the United Kingdom.
and size of the oral cavity in the vicinity of the sibilant constriction. Real-time three-dimensional ultrasound, palate impressions, acoustic recordings, and electroglottography are brought to bear on these issues.
An impedance tube has been used to make measurements of the acoustic impedance of porous samples. Porous samples with designed porosities and tortuosities have been produced using 3D printing. Measured impedances are compared to calculated values.
2aED11. Teaching acoustical interaction: An exploration of how teaching architectural acoustics to students spawns project-based learning.
Daniel Butko, Haven Hardage, and Michelle Oliphant (Architecture, The
Univ. of Oklahoma, 830 Van Vleet Oval, Norman, OK 73019, Haven.B.
Hardage-1@ou.edu)
2aED7. Stick bombs: A study of the speed at which a woven stick construction self-destructs. Scotty McKay and William Slaton (Phys. & Astronomy, The Univ. of Central Arkansas, 201 Donaghey Ave., Conway, AR
72034, SMCKAY2@uca.edu)
The language and methods of architecture typically evaluated through
small-scale models and drawings can be complemented by full-scale interactive constructs, augmenting learning through participatory, experiential,
and sometimes experimental means. Congruent with Constantin Brancusi’s
proclamation, “architecture is inhabitable sculpture,” opportunities to build
full-scale constructs introduce students to a series of challenges predicated on structure, connections, safety, and a spirit of inquiry to learn from
human interaction. To educate and entertain through sensory design, undergraduate students designed and built an interactive intervention allowing
visual translation of acoustical impulses. The installation was developed and
calibrated upon the lively acoustics and outward campus display of the college’s gallery, employing excessive reverberation and resonance as a
method of visually demonstrating sound waves. People physically inhabiting the space became both participants and critics through their real-time reactions to personal interaction. The learning process complemented studio-based instruction
through hands-on interaction with physical materials and elevated architectural education to a series of interactions with people. This paper documents
and celebrates the Interactive Synchronicity project as a teaching tool outside common studio project representation while enticing classmates, faculty, and complete strangers to interact with inhabitable space.
A stick bomb is created by weaving sticks together in a particular pattern. By changing the way the sticks are woven together, different types of
stick bombs are created. After the stick bomb is woven to the desired length,
one side of the stick bomb can be released causing it to rapidly begin tearing
itself apart in the form of a pulse that propagates down the weave. This
occurs due to a large amount of potential energy stored within the multitude
of bent sticks; however, to the authors' knowledge, the physics of this phenomenon has not been studied. The linear mass density of the stick bomb can be
changed by varying the tightness of the weave. Data on these stick bombs,
including video analysis to determine the pulse speed, will be presented.
2aED8. Three-dimensional printed acoustic mufflers and aeroacoustic
resonators. John Ferrier and William Slaton (Phys. & Astronomy, The
Univ. of Central Arkansas, 201 Donaghey Ave., Conway, AR 72034, jpferrierjr@gmail.com)
We explore and present the use of 3D printing technology to design,
construct, and test acoustic elements that could be used as a low-frequency
Helmholtz-resonator style muffler in a ventilation duct system. Acoustic elements such as these could be quickly prototyped, printed, and tested for any
noisy duct environment. These acoustic elements are tested with and without mean flow to characterize their sound absorption (and sound generation)
properties. It is found that at particular ranges of air flow speeds the simply
designed acoustic muffler acts as a site for aeroacoustic sound generation.
Measurement data and 3D model files with Python scripting will be presented for several muffler designs. [This work is supported by the Arkansas Space Grant Consortium in collaboration with NASA's Acoustics Office at the Johnson Space Center.]
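For orientation, the design frequency of a Helmholtz-resonator muffler follows the textbook relation f = (c/2π)·√(A/(V·L_eff)); a sketch with illustrative dimensions and a common flanged-neck end correction (both are assumptions, not the printed designs from the talk):

```python
import math

def helmholtz_frequency(c, neck_radius, neck_length, cavity_volume):
    """Resonance frequency f = (c / 2*pi) * sqrt(A / (V * L_eff)) of a
    Helmholtz resonator, with an end correction of ~1.7a added to the
    neck length (a common flanged-neck approximation)."""
    area = math.pi * neck_radius ** 2
    l_eff = neck_length + 1.7 * neck_radius
    return (c / (2 * math.pi)) * math.sqrt(area / (cavity_volume * l_eff))

# 1-liter cavity with a 2-cm-long, 1-cm-radius neck, in air (c = 343 m/s)
f0 = helmholtz_frequency(343.0, 0.01, 0.02, 1e-3)  # roughly 160 Hz
```

A duct-mounted side-branch resonator of this kind attenuates most strongly near f0, which is why the low-frequency geometry matters for the printed designs.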
2aED9. Determining elastic moduli of concrete using resonance. Gerard
Munyazikwiye and William Slaton (Phys. & Astronomy, The Univ. of Central
Arkansas, 201 Donaghey Ave., Conway, AR 72034, GMUNYAZIKWIYE1@
uca.edu)
The elastic moduli of rods of material can be determined by resonance
techniques. The torsional, longitudinal, and transverse resonance modes for
a rod of known mass and length can be measured experimentally. These resonance frequencies are related to the elastic properties of the material,
hence, by measuring these quantities the strength of the material can be
determined. Preliminary tests as proof of principle are conducted with metallic rods. Data and experimental techniques for determining the elastic
moduli for concrete using this procedure will be presented.
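For thin rods, the resonance-to-modulus relation referred to above is f_n = n·c/(2L) with c = √(M/ρ) for the longitudinal (Young's) and torsional (shear) modes; a sketch of the back-calculation, where the aluminum-like numbers are purely illustrative:

```python
def modulus_from_resonance(f_n, n, length, density):
    """Young's (or shear) modulus from the n-th longitudinal (or torsional)
    resonance of a free-free rod: f_n = n*c/(2L), c = sqrt(M/rho), so
    M = rho * (2*L*f_n/n)**2. Thin-rod theory; flexural modes follow a
    different relation."""
    c = 2.0 * length * f_n / n
    return density * c * c

# Round-trip check with aluminum-like values:
# E = 69 GPa, rho = 2700 kg/m^3, L = 0.5 m gives f_1 of about 5055 Hz
E = modulus_from_resonance(5055.25, 1, 0.5, 2700.0)
```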
2aED10. Articulation of sibilant fricatives in Colombian Spanish. Alexandra Abell and Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., 404
West Kirkwood Ave., Bloomington, IN 47404, alabell@indiana.edu)
Colombians constitute the largest South American population in the
United States at 909,000 (or 24% of the total South American population in
the U.S.), and Bogota, Colombia is the most populated area within the
Andean Highland region, yet relatively little is known about Colombian
Spanish speech production. The majority of previous studies of Colombian
phonetics have relied on perception and acoustic analysis. The present study
contributes to Colombian Spanish phonetics by investigating the articulation
of sibilant fricatives. In particular, the shape of the palate and tongue during
the production of sibilants is investigated in an attempt to quantify the shape
2aED12. Palate shape and the central tongue groove. Coretta M. Talbert
(Speech and Hearing Sci., Univ. of Southern MS, 211 Glen Court, Jackson,
MS 39212, coretta.talbert@eagles.usm.edu) and Steven M. Lulich (Speech
and Hearing Sci., Indiana Univ., Bloomington, IN)
It is well known that the center of the tongue can be grooved so that it is
lower in the mouth than the lateral parts of the tongue, or it can bulge higher
than the lateral parts of the tongue. It has never been shown whether or how
this groove or bulge is related to the shape of the palate. In this study, we
investigated the shape and size of the palate for several speakers using digitized 3D laser-scans of palate impressions and measurements on the impression plasters themselves. The groove or bulge in the center of the tongue
was measured using real-time three-dimensional ultrasound. Pertinent findings will be presented concerning the relationship of the central groove/
bulge shape and size to the shape and size of the palate.
2aED13. Signal processing for velocity and range measurement using a
micromachined ultrasound transducer. Dominic Guri and Robert D.
White (Mech. Eng., Tufts Univ., 200 College Ave., Anderson 204, Medford,
MA 02155, dominic.guri@tufts.edu)
Signal processing techniques are under investigation for determination
of range and velocity information from MEMS based ultrasound transducers. The ideal technique will be real-time, result in high resolution and
accurate measurements, and operate successfully in noise. Doppler velocity
measurements were previously demonstrated using a MEMS cMUT array
(Shin et al., ASA Fall Meeting 2011, JASA 2013, Sens. Actuators A 2014).
The MEMS array has 168 nickel-on-glass capacitive ultrasound transducers
on a 1 cm die, and operates at 180 kHz in air. Post processing of the
received ultrasound demonstrated the ability to sense velocity using continuous-wave (CW) Doppler at a range of up to 1.5 m. The first attempt at real-time processing using a frequency-modulated continuous-wave (FM/CW)
scheme was noise limited by the analog demodulation circuit. Further noise
analysis is ongoing to determine whether this scheme may be viable. Other
schemes under consideration include cross correlation chirp and single and
multi-frequency burst waveforms. Preliminary results from a single frequency burst showed that cross-correlation-based signal processing may
achieve acceptable range. The system is targeted at short range small robot
navigation tasks. Determination of surface roughness from scattering of the
reflected waves may also be possible.
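The cross-correlation burst scheme mentioned above can be sketched as follows; the burst parameters and target delay here are invented for illustration:

```python
import numpy as np

def estimate_range(tx, rx, fs, c=343.0):
    """Estimate target range from the lag of the peak cross-correlation
    between the transmitted and received bursts (two-way travel in air)."""
    corr = np.correlate(rx, tx, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(tx) - 1)
    return c * lag / (2.0 * fs)

fs = 1e6                               # 1-MHz sampling (illustrative)
t = np.arange(200) / fs
burst = np.sin(2 * np.pi * 180e3 * t)  # 180-kHz burst, as in the array
rx = np.zeros(4096)
delay_samples = 2915                   # simulated echo delay (~0.5 m range)
rx[delay_samples:delay_samples + len(burst)] = burst
r = estimate_range(burst, rx, fs)
```

Real echoes would add noise and surface-dependent distortion, which is where the multi-frequency and chirp waveforms under consideration come in.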
2aED6. Impedance tube measurements of printed porous materials.
Carl Frederickson and Forrest McDougal (Phys. and Astronomy, Univ. of
Central Arkansas, LSC 171, 201 Donaghey Ave., Conway, AR 72035,
FMCDOUGAL1@CUB.UCA.EDU)
2aED14. Investigation of a tongue-internal coordinate system for two-dimensional ultrasound. Rebecca Pedro, Elizabeth Mazzocco (Speech and Hearing Sci., Indiana Univ., 200 South Jordan Ave., Bloomington, IN 47405, rebpedro@indiana.edu), Tamas G. Csapo (Dept. of Telecommunications and Media Informatics, Budapest Univ. of Technol. and Economics, Budapest, Hungary), and Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., Bloomington, IN)
In order to compare ultrasound recordings of tongue motion across utterances or across speakers, it is necessary to register the ultrasound images
with respect to a common frame of reference. Methods for doing this typically rely either (1) on fixing the position of the ultrasound transducer relative to the skull by means of a helmet or a similar device, or (2) re-aligning
the images by various means, such as optical tracking of head and transducer motion. These methods require sophisticated laboratory setups, and
are less conducive to fieldwork or other studies in which such methods are
impractical. In this study, we investigated the possibility of defining a rough
coordinate system for image registration based on anatomical properties of
the tongue itself. This coordinate system is anchored to the lower jaw rather
than the skull, but may potentially be transformed into an approximately
skull-relative coordinate system by integrating video recordings of jaw
motion.
2aED15. The effect of finite impedance ground reflections on horizontal
full-scale rocket motor firings. Samuel Hord, Tracianne B. Neilsen, and
Kent L. Gee (Dept. of Phys. and Astronomy, Brigham Young Univ., 737 N
600 E #103, Provo, UT 84606, samuel.hord@gmail.com)
Ground reflections have a significant impact on the propagation of sound
from a horizontal rocket firing. The impedance of the ground depends strongly on the effective flow resistivity of the surface and determines the frequencies at
which interference nulls occur. For a given location, a softer ground, with
lower effective flow resistivity, shifts the location of interference nulls to
lower frequencies than expected for a harder ground. The difference in the
spectral shapes from two horizontal firings of GEM-60 rocket motors, over
snowy ground, clearly shows this effect and has been modeled. Because of
the extended nature of high energy launch vehicles, the exhaust plume is
modeled as a partially correlated line source, with distribution parameters
chosen to match the recorded data sets as well as possible. Different flow resistivity values yield reasonable comparisons to the results of horizontal
GEM-60 test firings.
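The dependence of interference-null frequencies on the ground reflection can be sketched with a simple two-ray (image-source) model; the geometry and reflection phase below are illustrative values, not the GEM-60 test parameters.

```python
import numpy as np

# Illustrative geometry: source and microphone 2 m and 1.5 m above
# the ground, 60 m apart; c = 343 m/s.
c = 343.0
hs, hr, d = 2.0, 1.5, 60.0
r_direct = np.hypot(d, hs - hr)
r_ground = np.hypot(d, hs + hr)          # image-source reflected path
delta = r_ground - r_direct              # path-length difference

# Over a perfectly rigid ground (reflection coefficient +1), nulls fall
# where the two paths differ by an odd number of half wavelengths.
n = np.arange(1, 4)
f_null_rigid = (2 * n - 1) * c / (2 * delta)

# A finite-impedance (softer) ground adds a reflection phase shift phi,
# moving each null down: f_null = ((2n-1) - phi/pi) * c / (2*delta).
phi = 0.3 * np.pi                        # illustrative phase, not measured
f_null_soft = ((2 * n - 1) - phi / np.pi) * c / (2 * delta)
print(np.round(f_null_rigid, 1), np.round(f_null_soft, 1))
```

The shift of each null to a lower frequency for the softer ground is the effect the abstract describes for the snowy-ground firings.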
2aED16. Palate-related constraints on sibilant production in three
dimensions. Sarah Janssen and Steven M. Lulich (Speech and Hearing Sci.,
Indiana Univ., 200 South Jordan Ave., Bloomington, IN 47405,
sejansse14@gmail.com)
Most studies of speech articulation are limited to a single plane, typically the midsagittal plane, although coronal planes are also used. Single-plane data have been undeniably useful in improving our understanding of
speech production, but for many acoustic and aerodynamic processes, a
knowledge of 3D vocal tract shapes is essential. In this study, we used palate
impressions to investigate variations in the 3D structure of the palates of
several individuals, and we used real-time 3D ultrasound to image the
tongue surface during sibilant production by the same individuals. Our analysis focused on the degree to which tongue shapes during sibilant productions are substantially similar or different between individuals with different
palate shapes and sizes.
2aED17. The evaluation of impulse response testing in low signal-to-noise ratio environments. Hannah D. Knorr (Audio Arts and Acoust.,
Columbia College Chicago, 134 San Carlos Rd, Minooka, IL
60447, hknorr13@gmail.com), Jay Bleifnick (Audio Arts and Acoust., Columbia College Chicago, Schiller Park, IL), Andrew M. Hulva, and Dominique J. Cheenne (Audio Arts and Acoust., Columbia College Chicago,
Chicago, IL)
Impulse testing is used by industry professionals to test many parameters
of room acoustics, including the energy decay, frequency response, time
response, etc. Current testing software makes this process as streamlined as
possible, but generally must be used in quiet environments to yield high signal-to-noise ratios and more precise results. However, many real-world situations cannot conform to the standards needed for reliable data. This study tests various impulse response methods in background-noise environments in an attempt to find the most reliable procedure for spaces with high ambient noise levels. Additionally, extreme situations will be
evaluated and a method will be derived to correct for the systematic error
attributed to high background noise levels.
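One standard tactic for measurements in noisy rooms, synchronous averaging of repeated measurements, can be sketched as follows (a toy simulation, not the authors' procedure): averaging N time-aligned repeats improves the signal-to-noise ratio by roughly 10 log10(N) dB.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "room impulse response" buried in noise, measured repeatedly.
ir = np.exp(-np.arange(2000) / 300.0) * rng.standard_normal(2000)

def measure(noise_std):
    # One noisy measurement of the fixed impulse response.
    return ir + noise_std * rng.standard_normal(ir.size)

def snr_db(estimate):
    noise = estimate - ir
    return 10 * np.log10(np.sum(ir**2) / np.sum(noise**2))

single = measure(1.0)
# Synchronous averaging of 16 repeats cuts the noise power by 16,
# i.e., about a 12 dB SNR improvement.
averaged = np.mean([measure(1.0) for _ in range(16)], axis=0)
print(snr_db(single), snr_db(averaged))
```

This is only one of the candidate procedures such a study might compare; it assumes the excitation can be repeated and the background noise is uncorrelated between repeats.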
2aED18. Comparison of palate impressions and palate casts from threedimensional laser-scanned digital models. Michelle Tebout and Steven M.
Lulich (Speech and Hearing Sci., Indiana Univ., 200 South Jordan Ave.,
Bloomington, IN 47405, mtebout@imail.iu.edu)
Palate impressions and their casts in plaster are negatives of each other.
While plaster casts are the standard for palate measurements and data preservation, making such casts can be time-consuming and messy. We
hypothesized that measurements from 3D laser-scanned palate impressions are negligibly different from equivalent measurements from 3D laser-scanned palate casts. If true, this would allow the step of setting impressions
in plaster to be skipped in future research. This poster presents the results of
our study.
2aED19. The analysis of sound wave scattering using a firefighter’s Personal Alert Safety System signal propagating through a localized region
of fire. Andrew L. Broda (Phys. Dept., U.S. Naval Acad., 572 C Holloway
Rd., Chauvenet Hall Rm. 295, Annapolis, MD 21402), Chase J. Rudisill
(Phys. Dept., U.S. Naval Acad., Harwood, MD), Nathan D. Smith (Phys.
Dept., U.S. Naval Acad., Davidsonville, MD), Matthew K. Schrader, and
Murray S. Korman (Phys. Dept., U.S. Naval Acad., Annapolis, MD, korman@usna.edu)
Firefighting is quite clearly a dangerous and risk-filled job. To combat
these dangers and risks, firefighters wear a Personal Alert Safety System (PASS; National Fire Protection Association, NFPA 1982 standard, 2007 edition) that will sound a loud alarm if it detects (for example) the
lack of movement of a firefighter. However, firefighters have experienced
difficulty locating the source of these alarm chirps (95 dBA around 3 kHz)
in a burning building. The project goal is to determine the effect of pockets
of varying temperatures of air in a burning building on the sound waves produced by a PASS device. Sound scattering experiments performed with a
vertical heated air circular jet plume (anechoic chamber) and with a wood
fire plume from burning cylindrical containers (Anne Arundel Fire Department’s Training Facility) suggest that, consistent with Snell’s law, sound rays refract around such pockets of warmer air surrounded by cooler ambient air because the sound speed varies with temperature through the medium.
Real-time and spectral measurements of 2.7 kHz CW sound scattering
(using a microphone) exhibit some attenuation and considerable amplitude
and frequency modulation. This research may suggest future experiments
and effective modifications of the current PASS system.
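The refraction mechanism invoked here follows from the temperature dependence of the sound speed in air. A minimal sketch, with illustrative temperatures rather than measurements from the fire tests:

```python
import math

def sound_speed(t_celsius):
    """Ideal-gas sound speed in air: c = 331.3 * sqrt(1 + T/273.15) m/s."""
    return 331.3 * math.sqrt(1.0 + t_celsius / 273.15)

# Ray entering a hot pocket: Snell's law, sin(t2)/sin(t1) = c2/c1.
c_ambient = sound_speed(20.0)    # roughly 343 m/s
c_hot = sound_speed(300.0)       # roughly 480 m/s inside a hot pocket
theta1 = math.radians(40.0)      # angle of incidence at the interface

s = math.sin(theta1) * c_hot / c_ambient
if s < 1.0:
    print("refracted angle (deg):", math.degrees(math.asin(s)))
else:
    # Beyond the critical angle the ray is totally reflected, one way
    # sound can bend around a pocket of hot air.
    print("total reflection; critical angle (deg):",
          math.degrees(math.asin(c_ambient / c_hot)))
```

Because the hot pocket is the faster medium, rays bend away from it, and at grazing incidence they are turned back entirely, consistent with the observed amplitude and frequency modulation of the scattered tone.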
2aED20. New phased array models for fast nearfield pressure simulations. Kenneth Stewart and Robert McGough (Dept. of Elec. and Comput.
Eng., Michigan State Univ., East Lansing, MI, stewa584@msu.edu)
FOCUS, the “Fast Object-oriented C++ Ultrasound Simulator,” is free MATLAB-based software that rapidly and accurately models therapeutic and diagnostic ultrasound with the fast nearfield method, time-space decomposition, and the angular-spectrum approach. FOCUS presently supports arrays
of circular, rectangular, and spherically focused transducers arranged in flat
planar, spherically focused, and cylindrically focused geometries. Excellent
results are obtained with all of these array geometries in FOCUS for simulations of continuous-wave and transient excitations, and new array geometries are needed for B-mode simulations that are presently under
development. These new array geometries also require new data structures
that describe the electrical connectivity of the arrays. Efforts to develop
these new features in FOCUS are underway, and results obtained with these
new array geometries will be presented. Other new features for FOCUS will
also be demonstrated. [Supported in part by NIH Grant R01 EB012079.]
2aED21. Nonlinear scattering of crossed focused ultrasonic beams in
the presence of turbulence generated behind a model deep vein thrombosis using an orifice plate set in a thin tube. Daniel Fisher and Murray S.
Korman (Phys. Dept., U.S. Naval Acad., 572 C Holloway Rd., Chauvenet
Hall Rm. 295, Annapolis, MD 21402, korman@usna.edu)
An orifice plate (modeling a “blockage” as in a deep vein thrombosis, DVT) creates turbulent flow in a downstream region of a submerged polyethylene tube (1.6 mm wall thickness, 4 cm diameter, and 40 cm overall length). In
the absence of the orifice plate, the water flow is laminar. The orifice plate is
mechanically secured between two 20 cm tube sections connected by a
union. The union allows a plate with an orifice to be slid into the union providing a concentric orifice plate that can obstruct the flow causing vorticity
and turbulent flow downstream. A set of orifice plates (3 mm thick) are used
(one at a time) to conveniently obstruct the tube flow with a different radius
compared to the inner wall tube radius. The nonlinear scattering at the sum
frequency (f+ = 3.8 MHz), from mutually perpendicular spherical focused
beams (f1 = 1.8 MHz and f2 = 2.0 MHz) is used to measure the Doppler shift, spectral content, and intensity as a function of the orifice plate size in an effort to correlate the blockage with the amount of nonlinear scattering. In the absence of turbulence in the overlap region, there is virtually no scattering.
Therefore, a slight blockage is detectable.
2aED23. Effects of sustainable and traditional building systems on
indoor environmental quality and occupant perceptions. Joshua J. Roberts and Lauren M. Ronsse (Audio Arts and Acoust., Columbia College Chicago, 4363 N. Kenmore Ave., Apt. #205, Chicago, IL 60613, joshua.roberts@loop.colum.edu)
This study examines the effects of both sustainable and traditional building systems on the indoor environmental quality (IEQ) and occupant perceptions in an open-plan office floor of a high-rise building located in Chicago,
IL. The office evaluated has sustainable daylighting features as well as a
more traditional variable air volume mechanical system. Different measurement locations and techniques are investigated to quantify the indoor environmental conditions (i.e., acoustics, lighting, and thermal conditions)
experienced by the building occupants. The occupant perceptions of the
indoor environmental conditions are assessed via survey questionnaires
administered to the building occupants. The relationships between the IEQ
measured in the office and the occupant perceptions are assessed.
2aED22. Analysis of acoustic data acquisition instrumentation for underwater blast dredging. Brenton Wallin, Alex Stott, James Hill, Timothy Nohara, Ryan Fullan, Jon Morasutti, Brad Clark, Alexander Binder, and Michael Gardner (Ocean Eng., Univ. of Rhode Island, 30 Summit Ave., Narragansett, RI 02882, brentwallin@my.uri.edu)
A team of seniors from the University of Rhode Island was tasked with analyzing the acoustic data and evaluating the data acquisition systems used in Pacific Northwest National Laboratory’s (PNNL) study of blast dredging in the Columbia River. Throughout the semester, the students learned about the unique acoustic signatures of confined underwater blasts and the necessary specifications of systems used to record them. PNNL used two data acquisition systems. One was a tourmaline underwater blast sensor system created by PCB Piezotronics. The second was a hydrophone system using a Teledyne TC 4040 hydrophone, a Dytran inline charge amplifier, and a signal conditioner built for the blast sensor system. The students concluded that the data from the blast sensor system were reliable because the system was built by the company for this specific application and calibration sheets showed that the system worked properly. The hydrophone data were deemed unreliable because components were oriented in an unusual manner that led to improper data acquisition. A class of URI graduate students built a new hydrophone system that accurately recorded underwater dredge blasts performed in New York Harbor. This system costs a fraction of the price of the blast sensor system.
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 9/10, 8:00 A.M. TO 12:15 P.M.
Session 2aID
Archives and History and Engineering Acoustics: Historical Transducers
Steven L. Garrett, Chair
Grad. Prog. in Acoustics, Penn State, Applied Research Lab, P. O. Box 30, State College, PA 16804-0030
Chair’s Introduction—8:00
Invited Papers
8:05
2aID1. 75th Anniversary of the Shure Unidyne microphone. Michael S. Pettersen (Applications Eng., Shure Inc., 5800 W. Touhy Ave., Niles, IL 60714, pettersen_michael@shure.com)
2014 marks the 75th anniversary of the Shure model 55 microphone. Introduced in 1939 and still being manufactured today, the Shure Unidyne was the first unidirectional microphone using a single dynamic element. The presentation provides an overview of the Unidyne’s unique position in the history of 20th-century broadcast, politics, and entertainment, plus the amazing story of Benjamin Bauer, a 24-year-old immigrant from Ukraine who invented the Unidyne and earned the first of his more than 100 patents for audio technology. Rare Unidyne artifacts from the Shure Archive will be on display after the presentation, including prototypes fabricated by Ben Bauer.
8:25
2aID2. Ribbon microphones. Wesley L. Dooley (Eng., Audio Eng. Assoc., 1029 North Allen Ave, Pasadena, CA 91104, wes@ribbonmics.com)
The ribbon microphone was invented by Dr. Walter Schottky who described it in German Patent 434855C, issued December 21, 1924 to
Siemens & Halske (S&H) in Berlin. An earlier “Electro-Dynamic Loudspeaker” Patent which Schottky had written with Dr. Erwin Gerlach
described a compliant, lightweight, and ribbed aluminum membrane whose thinnest dimension was at right angles to a strong magnetic field.
Passing an audio frequency current through this membrane causes it to move and create sound vibrations. The December Patent describes
how this design functions either as a loudspeaker or a microphone. A 1930 S&H patent for ribbon microphone improvements describes how
they use internal resonant and ported chambers to extend frequency response past 4 kHz. RCA dramatically advanced ribbon microphone performance in 1931. They opened the ribbon to free air to create a consistent, air-damped, low-distortion, figure-eight with smooth 30–10,000
Hz response. RCA ribbon microphones became the performance leader for cinema, broadcast, live sound, and recording. Their 20–30,000 Hz RCA 44B and BX were manufactured from 1936 to 1955. It is the oldest design still used every day at major studios. Ribbon microphones are
increasingly used for contemporary recordings. Come hear why ribbon microphones, like phonograph records, are relevant to quality sound.
8:45
2aID3. Iconic microphonic moments in historic vocal recordings. Alexander U. Case (Sound Recording Technol., Univ. of Massachusetts Lowell, 35 Wilder St., Ste. 3, Lowell, MA 01854, alex@fermata.biz)
Microphone selection—the strategic pairing of microphone make and model with each sound to be recorded—is one of the most important decisions a sound engineer must make. The technical specifications of the microphone identify which transducers are capable of
functioning properly for any given recording task, but the ultimate decision is a creative one. The goal is for the performance capabilities
of the microphone to not only address any practical recording session challenges, but also flatter the sound of the instrument, whether in
pursuit of palpable realism or a fictionalized new timbre. The creative decision is informed, in part, by demonstrated success in prior
recordings, the most important of which are described for that essential pop music instrument: the voice.
9:05
2aID4. The WE 640AA condenser microphone. Gary W. Elko (mh Acoust., 25A Summit Ave., Summit, NJ 07901, gwe@mhacoustics.com)
In 1916 Edward Wente, working for Western Electric, an AT&T subsidiary, invented a microphone that was the foundation of the
modern condenser microphone. Wente’s early condenser microphone designs continued to be developed until Western Electric produced
the WE 361 in 1924 followed by the Model 394 condenser microphone in 1926. Western Electric used the WE 394 microphone as part
of the “Master Reference System” to rate audio transmission quality of the telephone network. The WE 394 was too large for some measurement purposes so in 1932 Bell Labs engineers H. Harrison and P. Flanders designed a smaller version. The diaphragm had a diameter of 0.6 in. However, this design proved too difficult to manufacture and F. Romanow, also at Bell Labs, designed the 640A “1 in.”
microphone in 1932. Years later it was discovered that the 640A sensitivity varied by almost 6 dB from −65 °C to 25 °C. To reduce the
thermal sensitivity, Bell Labs engineers M. Hawley and P. Olmstead carefully changed some of the 640A materials. The modified microphone was designated as the 640AA, which became the worldwide standard microphone for measuring sound pressure. This talk will
describe some more details of the history of the 640AA microphone.
9:25
2aID5. Reciprocity calibration of condenser microphones. Leo L. Beranek (Retired, 10 Longwood Dr., Westwood, MA 02090, beranekleo@ieee.org)
The theory of reciprocity began with Lord Rayleigh and was first well stated by S. Ballantine (1929). The first detailed use of reciprocity theory for the calibration of microphones was by R. K. Cook (1940). At the wartime Electro-Acoustic Laboratory at Harvard University, the need arose to calibrate a large number of Western Electric 640-AA condenser microphones. A reciprocity apparatus was
developed that connected the two microphones with an optimum shaped cavity that included a means for introducing hydrogen or helium to extend the frequency range. The apparatus was published by A. L. Dimattia and F. M. Wiener (1946). A number of things
resulted. The Harvard group, in 1941, found that the international standard of sound pressure was off by 1.2 dB—that standard was
maintained by the French Telephone Company and the Bell Telephone Laboratories and was based on measurements made with Thermophones. This difference was brought to the attention of those organizations and the reciprocity method of calibration was subsequently adopted by them resulting in the proper standard of sound pressure adopted around 1942. The one-inch condenser microphone
has subsequently become the worldwide standard for precision measurement of sound field pressures.
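For reference, the relations underlying reciprocity calibration take the following standard textbook form (this is not reproduced from the abstract itself, and the notation is ours): two microphones coupled by a small closed cavity of volume V at static pressure P0 (specific-heat ratio gamma) yield an electrical transfer impedance proportional to the product of their sensitivities, and three pairwise measurements isolate each sensitivity.

```latex
% T_{ij} = E_j / I_i is the measured open-circuit voltage of microphone j
% per unit driving current of microphone i; M is the pressure sensitivity;
% Z_t = \gamma P_0 / (j\omega V) is the acoustic transfer impedance of the
% small adiabatic cavity, so that T_{ij} = M_i M_j Z_t.
\[
  |M_1| \;=\; \sqrt{\frac{|T_{12}|\,|T_{13}|}{|T_{23}|}\cdot
                    \frac{\omega V}{\gamma P_0}}
\]
```

Introducing hydrogen or helium into the coupler raises its sound speed, pushing cavity resonances upward and extending the usable frequency range of this relation, which is the purpose of the gas inlet described above.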
9:45–10:00 Break
10:00
2aID6. Electret microphones. James E. West (ECE, Johns Hopkins Univ., 3400 N. Charles St., Barton Hall 105, Baltimore, MD
21218, jimwest@jhu.edu)
For nearly 40 years, condenser electret microphones have been the transducer of choice in almost every area of acoustics, including telephony, professional applications, hearing aids, and toys. More than 2 billion electret microphones are produced annually, primarily for
the communications and entertainment markets. E. C. Wente invented the condenser microphone in 1917 at Bell Labs while searching
for a replacement for the carbon microphone used in telephones; however, the necessary few hundred volt bias rendered the condenser
microphone unusable in telephony, but its acoustical characteristics were welcomed in professional and measurement applications. Permanently charged polymers (electrets) provided the necessary few hundred-volt bias, thus simplifying the mechanical and electrical
requirements for the condenser microphone and making it suitable for integration into the modern telephone. The introduction of inexpensive condenser microphones with matching frequency, phase, and impedance characteristics opened research opportunities for multiple microphone arrays. Array technology developed at Bell Labs will be presented in this talk.
10:20
2aID7. Meucci’s telephone transducers. Angelo J. Campanella (Acculab, Campanella Assoc., 3201 Ridgewood Dr., Hilliard, OH 43026, a.campanella@att.net)
Antonio Meucci (1809–1889) developed variable reluctance transducers from 1854 to 1876 and beyond after noticing that he could
hear voice sounds from paddle electrodes while he participated in electrotherapy of a migraine patient around 1844 while in Havana,
Cuba. He immigrated to Staten Island, NY, in 1850 and continued experimenting to develop a telephone. He found better success from
electromagnetics using materials evolved from the telegraph developed by Morse, as well as a non-metal diaphragm with an iron tab,
iron bars and a horseshoe shape. Artifacts from his residence on Staten Island are presently on display at the museum of his life on Staten
Island from 1850 to his death. Those artifacts, thought until now to be only models, were found to be wired and still operative. Tests
were performed in July 2011. Their electrical resistance is that expected for wire-wound variable reluctance transducers. Voice signals
were produced without any externally supplied operating current. At least one transducer was found to be also operable as a receiver and
was driven to produce voice sounds to the ear. Meucci’s life and works will be discussed and these test results will be demonstrated
including recordings from voice tests.
10:40
2aID8. The Fessenden Oscillator: The first sonar transducer. Thomas R. Howarth and Geoffrey R. Moss (U.S. Navy, 1176 Howell
St, B1346 R404A, Newport, RI 02841, thomas.howarth@navy.mil)
When the RMS Titanic sunk in 1912, there was a call placed forth by ship owners for inventors to offer solutions for ship collision
avoidance methods. Canadian born inventor Reginald A. Fessenden answered this call while working at the former Boston Submarine
Signal Company with the invention and development of the first modern transducer used in a sonar. The Fessenden oscillator was an edge-clamped circular metal plate with a radiating head facing the water on one side, while the interior side had a copper tube attached that
moved in and out of a fixed magnetic coil. The coil consisted of a direct-current (DC) winding to provide a magnetic field polarization
and an alternating-current (AC) coil winding to induce the current into the copper tube and thus translate the magnetic field polarization
to the radiating plate with vibrations that traveled from the radiating head into the water medium. The prototype and early model versions operated at 540 Hz. Later developments included adaptation of this same transducer for underwater communications and obstacle avoidance, with WW I retrofits onto British submarines for both transmitting and receiving applications, including mine detection. This presentation will discuss design details, including a modern numerical modeling effort.
11:00
2aID9. Historical review of underwater acoustic cylindrical transducer development in Russia for sonar arrays. Boris Aronov
(ATMC/ECE, Univ. of Massachusetts Dartmouth, Needham, MA) and David A. Brown (ATMC/ECE, Univ. of Massachusetts Dartmouth, 151 Martine St., Fall River, MA 02723, dbAcoustics@cox.net)
Beginning with the introduction of piezoelectric ceramics in the 1950’s, underwater acoustics transducer development for active sonar arrays proceeded in different directions in Russia (formerly USSR) than in the United States (US). The main sonar arrays in Russia
were equipped with cylindrical transducers, whereas in the US the implementation most often used extensional bar transducers of the classic Tonpilz design. The presentation focuses on the underlying objectives and human factors that shaped the preference for the widespread application of baffled cylindrical transducers in Russian arrays, the history of their development, and contributions to the theory of these transducers made by the pioneering developers.
11:20
2aID10. The phonodeik: Measuring sound pressure before electroacoustic transducers. Stephen C. Thompson (Graduate Program
in Acoust., Penn State Univ., N-249 Millennium Sci. Complex, University Park, PA 16802, sct12@psu.edu)
The modern ability to visualize sound pressure waveforms using electroacoustic transducers began with the development of the vacuum tube amplifier, and has steadily improved as better electrical amplification devices have become available. Before electrical amplification was available, however, a significant body of acoustic pressure measurements had been made using the phonodeik, a device
developed by Dayton C. Miller in the first decade of the twentieth century. The phonodeik employs acoustomechanical transduction to
rotate a small mirror that reflects an optical beam to visualize the pressure waveform. This presentation will review the device and some
of the discoveries made with it.
11:40
2aID11. A transducer not to be ignored: The siren. Julian D. Maynard (Phys., Penn State Univ., 104 Davey Lab, Box 231, University
Park, PA 16802, maynard@phys.psu.edu)
An historic transducer to which one should pay attention is the siren. While its early application was as a source for a musical instrument, the siren soon became the transducer of choice for long-range audible warning because of its high intensity and recognizable tone.
The components defining the siren include a solid stator and rotor, each with periodic apertures, and a compressed fluid (usually air but
could be other fluids). As the rotor rotates in close proximity to the stator, the resulting opening and closing of passageways through the apertures for the compressed fluid produces periodic sound waves in the surrounding fluid; usually a horn is used to enhance
the radiation efficiency. The high potential energy of the compressed fluid permits high intensity sound. Some sirens which received scientific study include that of R. Clark Jones (1946), a 50 horsepower siren with an efficiency of about 70%, and that of C. H. Allen and I.
Rudnick (1947), capable of ultrasonic frequencies and described as a “supersonic death ray” in the news media. Some design considerations, performance results, and applications for these sirens will be presented.
12:00–12:15 Panel Discussion
TUESDAY MORNING, 28 OCTOBER 2014
SANTA FE, 9:00 A.M. TO 11:40 A.M.
Session 2aMU
Musical Acoustics: Piano Acoustics
Nicholas Giordano, Chair
Physics, College of Sciences and Mathematics, Auburn University, Auburn, AL 36849
Invited Papers
9:00
2aMU1. The slippery path from piano key to string. Stephen Birkett (Systems Design Eng., Univ. of Waterloo, 250 University Ave.,
Waterloo, ON N2L 3G1, Canada, sbirkett@uwaterloo.ca)
Everything that contributes to the excitation of a piano string, from key input to hammer–string interaction, is both deterministic and
consistently repeatable. Sequences of identical experimental trials give results that are indistinguishable. The simplicity of this behavior
contrasts with the elusive goal of predicting input–output response and the extreme difficulty of accurate physical characterization. The
nature and complexity of the mechanisms and material properties involved, as well as the sensitivity of their parameterization, place serious obstacles in the way of the usual investigative tools. This paper discusses and illustrates the limitations of modeling and simulation
as applied to this problem, and the special considerations required for meaningful experimentation.
9:25
2aMU2. Coupling between transverse and longitudinal waves in piano strings. Nikki Etchenique, Samantha Collin, and Thomas R.
Moore (Dept. of Phys., Rollins College, 1000 Holt Ave., Winter Park, FL 32789, netchenique@rollins.edu)
It is known that longitudinal waves in piano strings noticeably contribute to the characteristic sound of the instrument. These waves
can be induced by directly exciting the motion with a longitudinal component of the piano hammer, or by the stretching of the string
associated with the transverse displacement. Longitudinal waves that are induced by the transverse motion of the string can occur at frequencies other than the longitudinal resonance frequencies, and the amplitude of the waves produced in this way is believed to vary quadratically with the amplitude of the transverse motion. We present the results of an experimental investigation that demonstrates the quadratic relationship between the magnitude of the longitudinal waves and the magnitude of the transverse displacement for steady-state, low-amplitude excitation. However, this relationship is only approximately correct under normal playing conditions.
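The quadratic scaling can be illustrated with synthetic data (not the measurements reported in this talk): if the longitudinal amplitude grows as the square of the transverse amplitude, a log-log fit returns an exponent near 2.

```python
import numpy as np

# Synthetic amplitudes: A_L proportional to A_T squared, with a small
# amount of multiplicative measurement noise (illustrative only).
a_t = np.linspace(0.1, 1.0, 20)            # transverse amplitudes
rng = np.random.default_rng(3)
a_l = 0.05 * a_t**2 * (1 + 0.02 * rng.standard_normal(a_t.size))

# Fit log(A_L) = slope * log(A_T) + const; slope estimates the exponent.
slope, intercept = np.polyfit(np.log(a_t), np.log(a_l), 1)
print(f"fitted power-law exponent: {slope:.2f}")
```

A fitted exponent that drifts away from 2 at large amplitudes would be one signature of the departure from quadratic behavior under normal playing conditions that the abstract notes.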
9:50
2aMU3. Microphone array measurements, high-speed camera recordings, and geometrical finite-differences physical modeling
of the grand piano. Rolf Bader, Florian Pfeifle, and Niko Plath (Inst. of Musicology, Univ. of Hamburg, Neue Rabenstr. 13, Hamburg
20354, Germany, R_Bader@t-online.de)
Microphone array measurements of a grand piano soundboard show similarities and differences between eigenmodes and forced oscillation patterns when notes are played on the instrument. During transients, the driving point of the string shows enhanced energy radiation, though not as prominent as with the harpsichord. Lower frequencies are radiated more strongly on the larger side of the soundboard wing shape, while higher frequencies are radiated more strongly on the smaller side. A separate region at the larger part of the wing shape, caused by geometrical boundary conditions, has a distinctly different radiation behavior. High-speed camera recordings of the strings show
energy transfer between strings of the same note. In physical models including hammer, strings, bridge, and soundboard, the hammer movement is essential to produce a typical piano sound. Different bridge designs and bridge models are compared with respect to how they enhance inharmonic sound components due to longitudinal-transverse coupling of the strings at the bridge.
10:15–10:35 Break
10:35
2aMU4. Adjusting the soundboard’s modal parameters without mechanical change: A modal active control approach. Adrien
Mamou-Mani (IRCAM, 1 Pl. Stravinsky, Paris 75004, France, adrien.mamou-mani@ircam.fr)
How do modes of soundboards affect the playability and the sound of string instruments? This talk will investigate this question experimentally, using modal active control. After identifying the modal parameters of a structure, modal active control allows adjustment of modal frequencies and damping via a feedback loop, without any mechanical changes. The potential of this approach for
musical acoustics research will be presented for three different instruments: a simplified piano, a guitar, and a cello. The effects of modal
active control of soundboards will be illustrated on attack, amplitude of sound partials, sound duration, playability, and “wolf tone”
production.
11:00
2aMU5. Modeling the influence of the piano hammer shank flexibility on the sound. Juliette Chabassier (Magique 3D, Inria, 200
Ave. de la vieille tour, Talence 33400, France, juliette.chabassier@inria.fr)
A nonlinear model for a vibrating Timoshenko beam in non-forced unknown rotation is derived from the virtual work principle
applied to a system consisting of a beam with a mass at its end. The system represents a flexible piano hammer shank coupled to a hammer head. A
novel energy-based numerical scheme is then provided and coupled to a global energy-preserving numerical solution for the whole piano
(strings, soundboard, and sound propagation in the air). The obtained numerical simulations show that the pianistic touch clearly influences the spectrum of the piano sound of equally loud isolated notes. These differences do not come from a possible shock excitation on
the structure, nor from a changing impact point, nor from a “longitudinal rubbing motion” on the string, since none of these features is modeled in our study.
11:25
2aMU6. Real-time tonal self-adaptive tuning for electronic instruments. Yijie Wang and Timothy Y. Hsu (School of Music, Georgia Inst. of Technol., 950 Marietta St. NW Apt 7303, Atlanta, GA 30318, yijiewang@gatech.edu)

A fixed tuning system cannot achieve just intonation on all intervals. A better approximation of just intonation is possible if the frequencies of notes are allowed to vary. Adaptive tuning is a class of methods that adjusts the frequencies of notes dynamically in order to maximize musical consonance. However, finding the optimal frequencies of notes directly from some definition of consonance has proven difficult and computationally expensive. Instead, this paper proposes that the current key of the music is both a good summary of past notes and a good predictor of future notes, which can facilitate adaptive tuning. A method is proposed that uses a hidden Markov model to detect the current key of the music and compute optimal frequencies of notes based on the current key. In addition, a specialized online machine learning method that enforces symmetry among diatonic keys is presented, which can potentially adapt the model to different genres of music. The algorithm operates in real time, is responsive to the notes played, and is applicable to various electronic instruments, such as MIDI pianos. This paper also presents comparisons between the proposed tuning system and conventional tuning systems.
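As a concrete illustration of the key-based retuning idea, the sketch below assigns each diatonic scale degree a just-intonation ratio relative to the detected key's tonic. This is a hypothetical sketch: the 5-limit ratios, the equal-temperament fallback for chromatic notes, and all names are our illustration, not the authors' algorithm (which also includes HMM key detection and online learning).

```python
# 5-limit just-intonation ratios for the major-scale degrees (semitones from
# the tonic); these particular ratios are one common choice, assumed here.
JUST_RATIOS = {0: 1.0, 2: 9/8, 4: 5/4, 5: 4/3, 7: 3/2, 9: 5/3, 11: 15/8}

def retuned_freq(midi_note, key_root=60, root_freq=261.63):
    """Frequency of a MIDI note, justly tuned relative to the key's tonic.

    key_root and root_freq (middle C here) would come from the key detector;
    both names are our invention, not the paper's API.
    """
    octave, degree = divmod(midi_note - key_root, 12)
    ratio = JUST_RATIOS.get(degree, 2 ** (degree / 12))  # 12-TET fallback
    return root_freq * ratio * 2 ** octave

# A just major third (5/4, ~386 cents) is flatter than the 12-TET third (400 cents).
print(retuned_freq(64))   # E4: 261.63 * 5/4 = 327.04 Hz (approximately)
```

Retuning the mapping whenever the detected key changes is what makes the scheme "self-adaptive" in real time.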
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 3/4, 9:25 A.M. TO 11:35 A.M.
Session 2aNSa
Noise and Psychological and Physiological Acoustics: New Frontiers in Hearing Protection I
William J. Murphy, Cochair
Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Institute for Occupational Safety and
Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998
Elliott H. Berger, Cochair
Occupational Health & Environmental Safety Division, 3M, 7911, Zionsville Rd., Indianapolis, IN 46268-1650
Chair’s Introduction—9:25
Invited Papers
9:30
2aNSa1. How long are inexperienced subjects "naive" for ANSI S12.6? Hilary Gallagher, Richard L. McKinley (Battlespace Acoust.
Branch, Air Force Res. Lab., 2610 Seventh St., Bldg. 441, Wright-Patterson AFB, OH 45433, richard.mckinley.1@us.af.mil), and Melissa A. Theis (ORISE, Air Force Res. Lab., Wright-Patterson AFB, OH)
ANSI S12.6-2008 describes the methods for measuring the real-ear attenuation of hearing protectors. Method A, trained-subject fit,
was intended to describe the capabilities of the devices fitted by thoroughly trained users while Method B, inexperienced-subject fit, was
intended to approximate the protection that can be attained by groups of informed users in workplace hearing conservation programs.
Inexperienced subjects are no longer considered "naïve" according to ANSI S12.6 after 12 or more sessions measuring the attenuation of earplugs or semi-insert devices. However, an inexperienced subject who has received high-quality video instructions may no longer be considered "naïve" or "inexperienced" even after just one session. AFRL conducted an ANSI S12.6-2008 Method B study to determine what effect, if any, high-quality instructions had on the performance of naïve or inexperienced subjects and the number of trials over which a subject could still be considered naïve or inexperienced. This experiment used ten subjects who completed three ANSI S12.6
measurements using the A-B-A training order and another ten subjects completed the study using the B-A-B training order (A = high quality video instructions, B = short "earplug pillow-pack" written instructions). The attenuation results and their implications for ANSI S12.6 will be discussed.
9:50
2aNSa2. Evaluation of variability in real-ear attenuation testing using a unique database—35 years of data from a single laboratory. Elliott H. Berger and Ronald W. Kieper (Personal Safety Div., 3M, 7911 Zionsville Rd., Indianapolis, IN 46268, elliott.berger@
mmm.com)
The gold standard in measuring hearing protector attenuation since the late 1950s has been real-ear attenuation at threshold (REAT). Though well understood and standardized both in the U.S. (ANSI S3.19-1974 and ANSI S12.6-2008) and internationally (ISO 4869-1:1990), and known to provide valid and reliable estimates of protection for the test panel being evaluated, an area that is not clearly defined is the variability of the test measurements within a given laboratory. The test standards do provide estimates of uncertainty, both within and between laboratories, based on limited test data and interlaboratory studies, but thus far no published within-laboratory data over numerous tests and years have been available to provide empirical support for variability statements. This paper provides information from a one-of-a-kind database from a single laboratory that has conducted nearly 2500 studies over a period of 35 years in a single facility, managed by the same director (the lead author). Repeat test data on a controlled set of samples of a foam earplug, a premolded earplug, and two different earmuffs, with one of the data sets comprising 25 repeat tests over that 35-year period, will be used to demonstrate the inherent variability of this type of human-subject testing.
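The REAT statistic described above is simple to state in code. The sketch below uses invented threshold numbers (not the 3M database) to show how per-test mean attenuation and the between-test spread, the within-laboratory variability of interest here, would be computed.

```python
import numpy as np

# Illustrative numbers only: thresholds in dB for three subjects (columns)
# on two repeat tests of the same protector (rows).
open_thr = np.array([[5.0, 8.0, 6.0],
                     [4.0, 9.0, 7.0]])
occl_thr = np.array([[35.0, 40.0, 33.0],
                     [33.0, 41.0, 36.0]])

# REAT = occluded threshold minus open threshold, averaged over the panel.
reat_per_test = (occl_thr - open_thr).mean(axis=1)
between_test_sd = reat_per_test.std(ddof=1)   # within-lab variability proxy
print(reat_per_test, between_test_sd)
```

With 25 repeats of one sample set, the same standard deviation taken over 25 per-test means is exactly the kind of empirical variability statement the abstract says is missing from the standards.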
10:10
2aNSa3. Sound field uncertainty budget for real-ear attenuation at threshold measurement per ANSI S12.6 standards. Jeremie Voix and Celine Lapotre (École de technologie supérieure, Université du Québec, 1100 Notre-Dame Ouest, Montréal, QC H3C 1K3, Canada, jeremie.voix@etsmtl.ca)

In many national and international standards, the attenuation of Hearing Protection Devices is rated according to a psychophysical method called Real-Ear Attenuation at Threshold (REAT), which averages, over a group of test subjects, the difference between the open and occluded auditory thresholds. In the ANSI S12.6 standard, these REAT tests are conducted in a diffuse sound field whose uniformity and directionality are assessed by two objective microphone measurements. While the ANSI S12.6 standard defines these two criteria, it does not link the microphone measurements to the actual variation of sound pressure level at the eardrum that may originate from natural head movements during testing. This presentation examines this issue with detailed measurements conducted in an ANSI S12.6-compliant audiometric booth using an Artificial Test Fixture (ATF). The sound pressure level variations were recorded for movements of the ATF along the three main spatial axes and in two rotation planes. From these measured variations and different hypothetical head-movement scenarios, various sound field uncertainty budgets were computed. These findings will be discussed with a view to their eventual inclusion in the uncertainty budget of a revised version of the ANSI S12.6 standard.
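Once per-axis and per-rotation SPL variations are measured, one standard way to form a budget is to combine the individual standard uncertainties in quadrature. Whether a revised standard would do exactly this is open; the sketch below is generic uncertainty arithmetic with invented values.

```python
import math

# Invented example values: standard uncertainties (dB) of the SPL at the
# eardrum position for each measured ATF movement.
contributions = {
    "translation x": 0.4,
    "translation y": 0.3,
    "translation z": 0.5,
    "rotation, plane 1": 0.6,
    "rotation, plane 2": 0.4,
}
u_combined = math.sqrt(sum(u * u for u in contributions.values()))
print(round(u_combined, 2))   # -> 1.01 dB combined standard uncertainty
```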
10:30
2aNSa4. Estimating effective noise dose when using hearing protection: Differences between ANSI S12.68 calculations and the auditory response measured with temporary threshold shifts. Hilary L. Gallagher, Richard L. McKinley (Battlespace Acoust., Air Force Res. Lab., AFRL/711HPW/RHCB, 2610 Seventh St., Wright-Patterson AFB, OH 45433-7901, hilary.gallagher.1@us.af.mil), Elizabeth A. McKenna (Ball Aerosp. and Technologies, Air Force Res. Lab., Wright-Patterson AFB, OH), and Melissa A. Theis (ORISE, Air Force Res. Lab., Wright-Patterson AFB, OH)

ANSI S12.6 describes the methods for measuring the real-ear attenuation at threshold of hearing protectors. ANSI S12.68 describes the methods for estimating the effective A-weighted sound pressure levels when hearing protectors are worn. In theory, the auditory response, as measured by temporary threshold shifts (TTS), to an unoccluded-ear noise exposure and an equivalent occluded-ear noise exposure should be similar. In a series of studies conducted at the Air Force Research Laboratory, human subjects were exposed to continuous noise with and without hearing protection. Ambient noise levels during the occluded-ear exposures were determined using ANSI S12.6 and ANSI S12.68. These equivalent noise exposures, as determined by the ANSI S12.68 "gold standard" octave-band method, produced significantly different auditory responses as measured with TTS. The methods and results from this study will be presented.
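A minimal sketch in the spirit of the ANSI S12.68 octave-band method: subtract the protector's band attenuation from the A-weighted octave-band levels and sum the remaining energy. All numeric values (band levels and attenuations) are illustrative, and the full S12.68 procedure includes details not reproduced here.

```python
import numpy as np

bands = [125, 250, 500, 1000, 2000, 4000, 8000]                # Hz
L_oct = np.array([85.0, 87.0, 90.0, 92.0, 91.0, 88.0, 84.0])   # band SPL, dB
A_wt  = np.array([-16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1])     # A-weighting, dB
apv   = np.array([20.0, 22.0, 25.0, 28.0, 32.0, 38.0, 40.0])   # assumed attenuation, dB

# Protected A-weighted level in each band, then an energy sum across bands.
protected = L_oct + A_wt - apv
L_eff = 10.0 * np.log10(np.sum(10.0 ** (protected / 10.0)))
print(round(L_eff, 1))   # effective A-weighted level at the ear -> 67.6 dB(A)
```

The abstract's point is that exposures equated by exactly this kind of calculation nevertheless produced different measured TTS.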
Contributed Papers
10:50
2aNSa5. Fit-testing, training, and timing—How long does it take to fit-test hearing protectors? Taichi Murata (Environ. Health Sci., Univ. of Michigan, School of Public Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226, ygo7@cdc.gov), Christa L. Themann, David C. Byrne, and William J. Murphy (Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Inst. for Occupational Safety and Health, Cincinnati, OH)

Hearing protector fit-testing is a best practice for hearing loss prevention programs and is gaining acceptance among US employers. Fit-testing quantifies the hearing protector attenuation achieved by individual workers and ensures that workers fit their protectors properly and receive adequate protection. Employers may be reluctant to conduct fit-testing because of
expenses associated with worker time away from the job, personnel to administer the testing, and acquisition of a fit-test system. During field and laboratory studies conducted by the National Institute for Occupational Safety and Health (NIOSH), timing data for the fit-test process with the NIOSH HPD Well-Fit™ system were analyzed. For workers completely naïve to fit-testing, the tests were completed within 15–20 minutes. Unoccluded test times were less than 4 minutes, and occluded tests required less than 3 minutes. A significant learning effect was seen for the psychoacoustic method of adjustment used by HPD Well-Fit, explaining the shorter test times as subjects progressed through the unoccluded and occluded conditions. Most of the workers required about 5 minutes of training time. Test times and attenuations were tester-dependent, indicating the need to provide training to staff administering fit-tests in the workplace.
11:05
2aNSa6. Intra-subject fit variability using field microphone-in-real-ear attenuation measurement for foam, pre-molded and custom molded earplugs. Jeremie Voix (École de technologie supérieure, Université du Québec, 1100 Notre-Dame Ouest, Montréal, QC H3C 1K3, Canada, jeremie.voix@etsmtl.ca), Cecile Le Cocq (École de technologie supérieure, Université du Québec, Montréal, QC, Canada), and Elliott H. Berger (E•A•RCAL Lab, 3M Personal Safety Div., Indianapolis, IN)

In recent years, the arrival of several field attenuation estimation systems (FAES) on the industrial marketplace has enabled better assessment of hearing protection in real-life noise environments. FAES measure the individual attenuation of a given hearing protection device (HPD) as fitted by the end-user, but they base predictions only on measurements taken over a few minutes and do not account for what may occur later in the field over months or years, as the earplug may be fitted slightly differently over time. This paper will use the field microphone-in-real-ear (F-MIRE) measurement technique to study in the laboratory how consistently a subject can fit and refit an HPD. A new metric, the intra-subject fit variability, will be introduced and quantified for three different earplugs (roll-down foam, premolded, and custom molded), as fitted by two types of test subjects (experienced and inexperienced). This paper will present the experimental process used and the statistical calculations performed to quantify intra-subject fit variability. In addition, data collected from two different laboratories will be contrasted and reviewed with respect to the impact of trained versus untrained test subjects.

11:20
2aNSa7. A new perceptive method to measure active insertion loss of active noise canceling headsets or hearing protectors by matching the timbre of two audio signals. Remi Poncot and Pierre Guiu (Parrot S.A. France, 15 rue de montreuil, Paris 75011, France, poncotremi@gmail.com)

Attenuation of passive hearing protectors is assessed either by the Real-Ear Attenuation at Threshold subjective method or by objective Measurements In the Real Ear. Neither method is practical for Active Noise Cancelling headsets. Alternative subjective methods based on loudness balance and masked hearing threshold techniques have been proposed. However, their results did not match objective measurements at low frequency, diverging in either direction. Additionally, they are relatively time-consuming, since the frequency points of interest are measured one after the other. This paper presents a novel subjective method based on timbre matching, which has the originality of involving different perceptive mechanisms than the previous ones did. The attenuation performance of ANC headsets is rated by the change in pressure level of eight harmonics when the active noise reduction functionality is switched on. All harmonics are played at once, and their levels are adjusted by the test subject until the same timbre is perceived in both passive and active modes. A test was carried out by a panel of people in diffuse noise field conditions to assess the performance of personal consumer headphones. Early results show that the method is as repeatable as MIRE and leads to close results.
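The intra-subject fit variability metric is not defined in the abstract; one plausible formalization (ours, not necessarily the authors') is the per-subject standard deviation of attenuation across repeated fit/refit cycles, sketched below with invented data.

```python
import numpy as np

# Invented attenuation data (dB): each row is one subject, each column one
# complete fit/refit cycle of the same earplug.
atten = np.array([[30.0, 27.0, 31.0, 29.0],   # consistent fitter
                  [22.0, 15.0, 25.0, 18.0]])  # inconsistent fitter
intra_subject_sd = atten.std(axis=1, ddof=1)  # per-subject refit variability
print(intra_subject_sd)
```

A low value means a few-minute FAES measurement is likely representative of long-term field use; a high value means it is not.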
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA E, 8:15 A.M. TO 11:20 A.M.
Session 2aNSb
Noise and Structural Acoustics and Vibration: Launch Vehicle Acoustics I
Kent L. Gee, Cochair
Brigham Young University, N243 ESC, Provo, UT 84602
Seiji Tsutsumi, Cochair
JEDI Center, JAXA, 3-1-1 Yoshinodai, Chuuou, Sagamihara 252-5210, Japan
Chair’s Introduction—8:15
Invited Papers
8:20
2aNSb1. Inclusion of source extent and coherence in a finite-impedance ground reflection model with atmospheric turbulence.
Kent L. Gee and Tracianne B. Neilsen (Dept. of Phys. and Astronomy, Brigham Young Univ., N243 ESC, Provo, UT 84602, kentgee@
byu.edu)
Acoustic data collected in static rocket tests are typically influenced by ground reflections. Furthermore, the partial coherence of the
ground interaction due to atmospheric turbulence can play a significant role for larger propagation distances. Because the rocket plume
is an extended radiator whose directionality is the result of significant source correlation, assessment of the impact of ground reflections
in the data must include these effects. In this paper, a finite impedance-ground, single-source interference approach [G. A. Daigle, J.
Acoust. Soc. Am. 65, 45–49 (1979)] that incorporates both amplitude and phase variations due to turbulence is extended to distributions
of correlated monopoles. The theory for obtaining the mean-square pressure from multiple correlated sources in the presence of atmospheric turbulence is described. The effects of source correlation and extent, ground effective flow resistivity, and turbulence parameters
are examined in terms of differences in relative sound pressure level across a realistic parameter space. Finally, the model prediction is
compared favorably against data from horizontal firings of large solid rocket motors. [Work supported by NASA MSFC and Blue Ridge
Research and Consulting, LLC.]
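A reduced sketch of the kind of two-path ground-interference calculation the abstract builds on, under loudly stated simplifications: a plane-wave reflection coefficient replaces the spherical-wave coefficient of Daigle's model, turbulence and source extent are omitted, and the normalized ground impedance value is invented.

```python
import numpy as np

c = 343.0                          # sound speed, m/s
hs, hr, d = 2.0, 1.5, 50.0         # source height, receiver height, range (m)
r1 = np.hypot(d, hr - hs)          # direct path
r2 = np.hypot(d, hr + hs)          # ground-reflected path (image source)
cos_th = (hs + hr) / r2            # cosine of incidence angle from the normal

Z = 10.0 - 10.0j                   # invented normalized ground impedance
R = (Z * cos_th - 1.0) / (Z * cos_th + 1.0)   # plane-wave reflection coeff.

f = np.linspace(50.0, 2000.0, 500)
k = 2.0 * np.pi * f / c
p = np.exp(1j * k * r1) / r1 + R * np.exp(1j * k * r2) / r2
rel_spl = 20.0 * np.log10(np.abs(p) * r1)     # level re free field, dB
print(rel_spl.max(), rel_spl.min())           # interference peaks and dips
```

The full model replaces the single source with a distribution of correlated monopoles and makes the amplitude and phase of each path random to represent turbulence.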
8:40
2aNSb2. Estimation of acoustic loads on a launch vehicle fairing. Mir Md M. Morshed (Dept. of Mech. Eng., Jubail Univ. College,
Jubail Industrial City, Jubail 10074, Saudi Arabia, morshedm@ucj.edu.sa), Colin H. Hansen, and Anthony C. Zander (School of Mech.
Eng., The Univ. of Adelaide, Adelaide, SA, Australia)
During the launch of space vehicles, a large external excitation is generated by acoustic and structural vibration. This is due to acoustic pressure fluctuations on the vehicle fairing caused by the engine exhaust gases. This external excitation drives the fairing structure and produces large acoustic pressure fluctuations inside the fairing cavity. The acoustic pressure fluctuations not only produce high noise levels inside the cavity but can also cause damage such as structural fatigue and damage to, or destruction of, the payload inside the fairing. This is an important problem because one trend in the aerospace industry is to use composite materials for the construction of launch vehicle fairings, which has resulted in large-scale weight reductions of launch vehicles but has increased the noise transmission into the fairing. This work investigates the nature of the external acoustic pressure distribution on a representative small launch vehicle fairing during liftoff. The acoustic pressure acting on a representative small launch vehicle fairing was estimated from the complex acoustic field generated by the rocket exhaust during liftoff using a non-unique source allocation technique that considered acoustic sources along the rocket engine exhaust flow. Numerical and analytical results for the acoustic loads on the fairing agree well.
9:00
2aNSb3. Prediction of acoustic environments from horizontal rocket firings. Clothilde Giacomoni and Janice Houston (NASA/
MSFC, NASA Marshall Space Flight Ctr., Bldg 4203, Cube 3128, MSFC, AL 35812, clothilde.b.giacomoni@nasa.gov)
In recent years, advances in research and engineering have led to more powerful launch vehicles which yield acoustic environments
potentially destructive to the vehicle or surrounding structures. Therefore, it has become increasingly important to be able to predict the
acoustic environments created by these vehicles in order to avoid structural and/or component failure. The current industry standard technique for predicting launch-induced acoustic environments was developed by Eldred in the early 1970s. Recent work has shown Eldred’s
technique to be inaccurate for current state-of-the-art launch vehicles. Due to the high cost of full-scale and even sub-scale rocket experiments, very little rocket noise data is available. Much of the work thought to be applicable to rocket noise has been done with heated jets.
A model to predict the acoustic environment due to a launch vehicle in the far-field was created. This was done using five sets of horizontally fired rocket data, obtained between 2008 and 2012. Through scaling analysis, it is shown that liquid and solid rocket motors exhibit
similar spectra at similar amplitudes. For these five data sets, the model is accurate to within 5 dB of the measured data.
9:20
2aNSb4. Acoustics research of propulsion systems. Ximing Gao (NASA Marshall Space Flight Ctr., Atlanta, Georgia) and Janice
Houston (NASA Marshall Space Flight Ctr., 650 S. 43rd St., Boulder, Colorado 80305, janice.d.houston@nasa.gov)
The liftoff phase induces high acoustic loading over a broad frequency range for a launch vehicle. These external acoustic environments are used in the prediction of the internal vibration responses of the vehicle and components. Present liftoff vehicle acoustic environment prediction methods utilize stationary data from previously conducted hold-down tests to generate 1/3 octave band Sound
Pressure Level (SPL) spectra. In an effort to update the accuracy and quality of liftoff acoustic loading predictions, non-stationary flight
data from the Ares I-X were processed in PC-Signal in two flight phases: simulated hold-down and liftoff. In conjunction, the Prediction
of Acoustic Vehicle Environments (PAVE) program was developed in MATLAB to allow for efficient predictions of sound pressure levels
(SPLs) as a function of station number along the vehicle using semi-empirical methods. This consisted of generating the Dimensionless
Spectrum Function (DSF) and Dimensionless Source Location (DSL) curves from the Ares I-X flight data. These are then used in the
MATLAB program to generate the 1/3 octave band SPL spectra. Concluding results show major differences in SPLs between the hold-down test data and the processed Ares I-X flight data, making the Ares I-X flight data more practical for future vehicle acoustic environment predictions.
9:40
2aNSb5. Acoustics associated with liquid rocket propulsion testing. Daniel C. Allgood (NASA SSC, Bldg. 3225, Stennis Space Ctr.,
MS 39529, Daniel.C.Allgood@nasa.gov)
Ground testing of liquid rocket engines is a necessary step towards building reliable launch vehicles. NASA Stennis Space Center
has a long history of performing both developmental and certification testing of liquid propulsion systems. During these test programs,
the propulsion test article, test stand infrastructure and the surrounding community can all be exposed to significant levels of acoustic
energy for extended periods of time. In order to ensure the safety of both personnel and equipment, predictions of these acoustic environments are conducted on a routine basis. This presentation will provide an overview of some recent examples in which acoustic analysis
has been performed. Validation of these predictions will be shown by comparing the predictions to acoustic data acquired during smalland full-scale engine hot-fire testing. Applications of semi-empirical and advanced computational techniques will be reviewed for both
sea-level and altitude test facilities.
10:00–10:20 Break
10:20
2aNSb6. Post-flight acoustic analysis of Epsilon launch vehicle at lift-off. Seiji Tsutsumi (JAXA’s Eng. Digital Innovation Ctr.,
JAXA, 3-1-1 Yoshinodai, Chuuou, Sagamihara, Kanagawa 252-5210, Japan, tsutsumi.seiji@jaxa.jp), Kyoichi Ui (Space Transportation
Mission Directorate, JAXA, Tsukuba, Japan), Tatsuya Ishii (Inst. of Aeronautical Technol., JAXA, Chofu, Japan), Shinichiro Tokudome
(Inst. of Space and Aeronautical Sci., JAXA, Sagamihara, Japan), and Kei Wada (Tokyo Office, Sci. Service Inc., Chuuou-ku, Japan)
Acoustic levels both inside and outside the fairing were measured on the first Epsilon Launch Vehicle (Epsilon-1). The obtained data show time-varying fluctuations due to the ascent of the vehicle. The equivalent stationary duration for such non-stationary flight data is determined based on the procedure described in NASA HDBK-7005. The launch pad used by the former M-V launcher was modified for the Epsilon based on Computational Fluid Dynamics (CFD) and 1/42-scale model tests. Although the launch pad is compact and no water injection system is installed, a 10 dB reduction in overall sound pressure level (OASPL) compared with the M-V is achieved due to the modification. The acoustic level inside the fairing satisfies the design requirement, showing that the acoustic design of the launch pad developed here is effective. Prediction of the acoustic levels based on CFD and subscale testing is also investigated by comparison with the flight measurements.
10:40
2aNSb7. Jet noise-based diagnosis of combustion instability in solid rocket motors. Hunki Lee, Taeyoung Park, Won-Suk Ohm
(Yonsei Univ., 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, South Korea, ohm@yonsei.ac.kr), and Dohyung Lee (Agency for Defense
Development, Daejeon, South Korea)
Diagnosis of combustion instability in a solid rocket motor usually involves in-situ measurements of pressure in the combustor, a
harsh environment that poses challenges in instrumentation and measurement. This paper explores the possibility of remote diagnosis of
combustion instability based on far-field measurements of rocket jet noise. Because of the large pressure oscillations associated with
combustion instability, the wave process in the combustor has many characteristic features of nonlinear acoustics such as shocks and
limit cycles. Thus the remote detection and characterization of instability can be performed by listening for the tell-tale signs of the combustor nonlinear acoustics, buried in the jet noise. Of particular interest is the choice of nonlinear acoustic measure (e.g., among skewness, bispectra, and Howell-Morfey Q/S) that best brings out the acoustic signature of instability from the jet noise data. Efficacy of
each measure is judged against the static test data of two tactical motors (one stable, the other unstable).
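Among the candidate measures named above, skewness is the easiest to sketch. The code below computes the skewness of the pressure time derivative, a common indicator of acoustic shock content, on synthetic signals (not rocket data): a sawtooth-like waveform with abrupt rises versus a Gaussian signal.

```python
import numpy as np

def skewness(x):
    """Third standardized moment."""
    x = x - x.mean()
    return float((x ** 3).mean() / (x ** 2).mean() ** 1.5)

n = 10000
t = np.arange(n) / n
shocked = 1.0 - (20.0 * t) % 1.0          # sawtooth: gradual fall, abrupt rise
dpdt = np.diff(shocked)                   # rare large positive jumps
gaussian = np.diff(np.random.default_rng(0).standard_normal(n))

print(skewness(dpdt))      # strongly positive: shock-like steepening
print(skewness(gaussian))  # near zero: no shock signature
```

The diagnosis idea is that instability-driven shocks push such nonlinearity measures of the far-field jet noise away from their stable-motor baselines.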
11:00
2aNSb8. Some recent experimental results concerning turbulent coanda wall jets. Caroline P. Lubert (Mathematics & Statistics,
James Madison Univ., 301 Dixie Ave., Harrisonburg, VA 22801, lubertcp@jmu.edu)
The Coanda effect is the tendency of a stream of fluid to stay attached to a convex surface, rather than follow a straight line in its
original direction. As a result, in such jets mixing takes place between the jet and the ambient air as soon as the jet issues from its exit
nozzle, causing air to be entrained. This air-jet mixture adheres to the nearby surface. Whilst devices employing the Coanda effect usually offer substantial flow deflection, and enhanced turbulence levels and entrainment compared with conventional jet flows, these prospective advantages are generally accompanied by significant disadvantages including a considerable increase in associated noise levels
and jet breakaway. Generally, the reasons for these issues are not well understood and thus the full potential offered by the Coanda effect
is yet to be realized. The development of a model for predicting the noise emitted by three-dimensional flows over Coanda surfaces
would suggest ways in which the noise could be reduced or attenuated. In this paper, the results of recent experiments on a 3-D turbulent
Coanda wall jet are presented. They include the relationship of SPL, shock cell distribution and breakaway to various flow parameters,
and predictions of the jet boundary.
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA C/D, 8:30 A.M. TO 11:30 A.M.
Session 2aPA
Physical Acoustics: Outdoor Sound Propagation
Kai Ming Li, Cochair
Mechanical Engineering, Purdue University, 140 South Martin Jischke, West Lafayette, IN 47907-2031
Shahram Taherzadeh, Cochair
Engineering & Innovation, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom
Contributed Papers
8:30
2aPA1. On the inversion of sound fields above a locally reacting ground for direct impedance deduction. Kai Ming Li and Bao N. Tong (Mech. Eng., Purdue Univ., 177 South Russel St., West Lafayette, IN 47907-2099, mmkmli@purdue.edu)

A complex root-finding algorithm is typically used to deduce the acoustic impedance of a locally reacting ground by inverting the measured sound fields. However, there is an issue of uniquely determining the impedance from a measurement of an acoustic transfer function. The boundary loss factor F, which is a complex function, is the source of this ambiguity. It is associated with the spherical wave reflection coefficient Q for the reflected sound field. These two functions depend on a complex parameter known as the numerical distance w. The inversion of F leading to multiple solutions of w can be identified as the root cause of the problem. To resolve this ambiguity, the zeros and saddle points of F are determined for a given source/receiver geometry and a known acoustic impedance. They are used to establish the basins containing all plausible solutions. The topography of Q is further examined in the complex w-plane. A method for identifying the family of solutions and selecting the physically meaningful branch is proposed. Validation is provided by numerical simulations as well as experimental data. The errors and uncertainties in the deduced impedance are quantified.

8:45
2aPA2. An improved method for direct impedance deduction of a locally reacting ground. Bao N. Tong and Kai Ming Li (Mech. Eng., Purdue Univ., 177 South Russel St., West Lafayette, IN 47907-2099, bntong@purdue.edu)

An accurate deduction of the acoustic impedance of a locally reacting ground depends on a precise measurement of sound fields at short ranges. However, measurement uncertainties exist in both the magnitude and the phase of the acoustic transfer function. With the standard method, accurate determination of the acoustic impedance can be difficult when the measured phases become unreliable, as in many outdoor conditions. An improved technique, which relies only on the magnitude information, has been developed. A minimum of two measurements at two source/receiver configurations is needed to determine the acoustic impedance. Even in the absence of measurement uncertainties, a more careful analysis suggests that a third independent measurement is often needed to give an accurate solution. When experimental errors are inevitably introduced, a selection of optimal geometry becomes necessary to reduce the sensitivity of the deduced impedance to small variations in the data. A graphical method is provided which offers greater insight into the deduction of impedance, and a downhill simplex algorithm has been developed to automate the procedure. Physical constraints are applied to limit the search region and to eliminate rogue solutions. Several case studies using indoor and outdoor data are presented to validate the proposed technique.

9:00
2aPA3. Wavelet-like models for random media in wave propagation simulations. D. Keith Wilson (Cold Regions Res. and Eng. Lab., U.S. Army Engineer Res. and Dev. Ctr., 72 Lyme Rd., Hanover, NH 03755-1290, D.Keith.Wilson@usace.army.mil), Chris L. Pettit (Aerosp. Eng. Dept., U.S. Naval Acad., Annapolis, MD), and Sergey N. Vecherin (Cold Regions Res. and Eng. Lab., U.S. Army Engineer Res. and Dev. Ctr., Hanover, NH)

Simulations of wave propagation and scattering in random media are often performed by synthesizing the media from Fourier modes, in which the phases are randomized and the amplitudes tailored to provide a prescribed spectrum. Although this approach is computationally efficient, it cannot capture organization and intermittency in random media, which impact higher-order statistical properties. As an alternative, we formulate a cascade model involving distributions of wavelet-like objects (quasi-wavelets or QWs). The QW model is constructed in a self-similar fashion, with the sizes, amplitudes, and numbers of offspring objects occurring at constant ratios between generations. The objects are randomly distributed in space according to a Poisson process. The QW model is formulated in static (time-invariant), steady-state, and non-steady versions. Many diverse natural and man-made environments can be synthesized, including turbulence, porous media, rock distributions, urban buildings, and vegetation. The synthesized media can then be used in simulations of wave propagation and scattering.
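The cascade construction can be sketched in one dimension. The following is our schematic simplification, not the authors' formulation: Gaussian-shaped bumps stand in for the quasi-wavelets, and the size, amplitude, and mean-count ratios between generations are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 2048)
field = np.zeros_like(x)

size, amp, mean_count = 0.2, 1.0, 3.0     # parent-generation parameters
for generation in range(5):
    # Poisson-distributed object count, uniformly random positions.
    for center in rng.uniform(0.0, 1.0, rng.poisson(mean_count)):
        field += amp * np.exp(-0.5 * ((x - center) / size) ** 2)
    size *= 0.5         # offspring are half the parent size...
    amp *= 0.7          # ...carry a fixed fraction of the amplitude...
    mean_count *= 4.0   # ...and are proportionally more numerous
print(field.std())      # one realization of a multiscale random medium
```

Unlike randomized Fourier modes, the localized objects let the synthesized medium exhibit clumping and intermittency.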
9:15
2aPA4. Space-time correlation of acoustic signals in a turbulent atmosphere. Vladimir E. Ostashev, D. Keith Wilson (U.S. Army Engineer Res.
and Development Ctr., 72 Lyme Rd., Hanover, NH 03755, vladimir.ostashev@colorado.edu), Sandra Collier (U.S. Army Res. Lab., Adelphi, MD),
and Sylvain Cheinet (French-German Res. Inst. of Saint-Louis, Saint-Louis,
France)
Scattering by atmospheric turbulence diminishes the correlation, in both
space and time, of acoustic signals. This decorrelation subsequently impacts
beamforming, averaging, and other techniques for enhancing signal-to-noise
ratio. Space-time correlation can be measured directly with a phased microphone array. In this paper, a general theory for the space-time correlation
function is presented. The atmospheric turbulence is modeled using the von Kármán spatial spectra of temperature and wind velocity fluctuations and locally frozen turbulence (i.e., Taylor's frozen turbulence hypothesis with convection velocity fluctuations). The theory developed is employed to
calculate and analyze the spatial and temporal correlation of acoustic signals
for typical regimes of an unstable atmospheric boundary layer, such as
mostly cloudy or sunny conditions with light, moderate, or strong wind. The
results obtained are compared with available experimental data.
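Measuring a space-time correlation with a microphone pair reduces to a lagged, normalized cross-correlation. The sketch below uses synthetic signals, not the paper's theory: a shared component reaches the second microphone five samples late, and independent noise plays the role of turbulence-induced decorrelation.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.standard_normal(20000)                     # shared propagating signal
mic1 = s[5:] + 0.3 * rng.standard_normal(19995)    # upstream microphone
mic2 = s[:-5] + 0.3 * rng.standard_normal(19995)   # downstream, delayed copy

def corr_at_lag(a, b, lag):
    """Normalized correlation of a(t) with b(t + lag), for lag >= 0."""
    a, b = a[: len(a) - lag], b[lag:]
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).mean() / (a.std() * b.std()))

rho = [corr_at_lag(mic1, mic2, lag) for lag in range(11)]
print(int(np.argmax(rho)))   # -> 5, the propagation delay in samples
```

The peak height (below 1 because of the added noise) is the quantity that turbulence theory predicts and that beamforming performance depends on.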
168th Meeting: Acoustical Society of America
2138
9:30
2aPA5. Characterization of wind noise by the boundary layer meteorology. Gregory W. Lyons and Nathan E. Murray (National Ctr. for Physical
Acoust., The Univ. of MS, 1 Coliseum Dr., University, MS 38677, gwlyons@go.olemiss.edu)
10:30
2aPA8. An investigation of wind-induced and acoustic-induced ground
motions. Vahid Naderyan, Craig J. Hickey, and Richard Raspet (National
Ctr. for Physical Acoust. and Dept. of Phys. and Astronomy, Univ. of MS,
NCPA, 1 Coliseum Dr., University, MS 38677, vnaderya@go.olemiss.edu)
The fluctuations in pressure generated by turbulent motions of the
atmospheric boundary layer are a principal noise source in outdoor acoustic
measurements. The mechanics of wind noise involve not only stagnation
pressure fluctuations at the sensor, but also shearing and self-interaction of
turbulence throughout the flow, particularly at low frequencies. The contributions of these mechanisms can be described by the boundary-layer meteorology. An experiment was conducted at the National Wind Institute’s
200-meter meteorological tower, located outside Lubbock, Texas in the
Llano Estacado region. For two days, a 44-element 400-meter diameter
array of unscreened NCPA-UMX infrasound sensors recorded wind noise
continuously, while the tower and a Doppler SODAR measured vertical profiles of the boundary layer. Analysis of the fluctuating pressure with the meteorological data shows that the statistical structure of wind noise depends
on both mean velocity distribution and buoyant stability. The root-mean-square pressure exhibits distinct scalings for stable and unstable stratification. Normalization of the pressure power spectral density depends on the
outer scales. In stable conditions, the kurtosis of the wind noise increases
with Reynolds number. Measures of noise intermittency are explored with
respect to the meteorology.
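The statistics discussed above (rms pressure, and kurtosis as an intermittency measure) reduce to a few lines of computation on a pressure record; the "gusty" record below is an invented amplitude-modulated stand-in, not the tower data:

```python
import numpy as np

def rms(p):
    """Root-mean-square of a pressure record about its mean."""
    p = np.asarray(p, dtype=float)
    return np.sqrt(np.mean((p - p.mean())**2))

def kurtosis(p):
    """Fourth standardized moment: 3.0 for a Gaussian record;
    larger values indicate intermittent, gusty noise."""
    p = np.asarray(p, dtype=float)
    d = p - p.mean()
    return np.mean(d**4) / np.mean(d**2)**2

rng = np.random.default_rng(0)
gaussian = rng.normal(size=100_000)
# Invented amplitude-modulated record standing in for gusty wind noise:
gusty = gaussian * (1.0 + 0.8*np.sin(np.linspace(0.0, 60.0, 100_000)))
```

Amplitude modulation raises the kurtosis well above the Gaussian value of 3, which is the signature of intermittency referred to in the abstract.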
9:45
2aPA6. Statistical moments for wideband acoustic signal propagation
through a turbulent atmosphere. Jericho E. Cain (US Army Res. Lab.,
1200 East West Hwy, Apt. 422, Silver Spring, MD 20910, jericho.cain@
gmail.com), Sandra L. Collier (US Army Res. Lab., Adelphi, MD),
Vladimir E. Ostashev, and David K. Wilson (U.S. Army Engineer Res. and
Development Ctr., Hanover, NH)
Developing methods for managing noise propagation, sound localization, sound classification, and for designing novel acoustic remote sensing
methods of the atmosphere requires a detailed understanding of the impact
that atmospheric turbulence has on acoustic propagation. In particular,
knowledge of the statistical moments of the sound field is needed. The first
statistical moment corresponds to the coherent part of the sound field and
is needed in beamforming applications. The second moment enables analysis
of the mean intensity of a pulse in a turbulent atmosphere. Numerical solutions to a set of recently derived closed form equations for the first and second order statistical moments of a wideband acoustic signal propagating in
a turbulent atmosphere with spatial fluctuations in the wind and temperature
fields are presented for typical regimes of the atmospheric boundary layer.
10:00–10:15 Break
10:15
2aPA7. Analysis of wind noise reduction by semi-porous fabric domes.
Sandra L. Collier (U.S. Army Res. Lab., 2800 Powder Mill Rd., RDRL-CIE-S, Adelphi, MD 20783-1197, sandra.l.collier4.civ@mail.mil), Richard
Raspet (National Ctr. for Physical Acoust., Univ. of MS, University, MS),
John M. Noble, W. C. Kirkpatrick Alberts (U.S. Army Res. Lab., Adelphi,
MD), and Jeremy Webster (National Ctr. for Physical Acoust., Univ. of MS,
University, MS)
For low frequency acoustics, the wind noise contributions due to turbulence may be divided into turbulence–sensor, turbulence–turbulence, and
turbulence–mean shear interactions. Here, we investigate the use of a semi-porous fabric dome for wind noise reduction in the infrasound region. Comparisons are made between experimental data and theoretical predictions
from a wind noise model [Raspet, Webster, and Naderyan, J. Acoust. Soc.
Am. 135, 2381 (2014)] that accounts for contributions from the three turbulence interactions.
Wind noise at low frequency is a problem in seismic surveys, which
reduces seismic image clarity. In order to find a solution for this problem,
we investigated the driving pressure perturbations on the ground surface
associated with wind-induced ground motions. The ground surface pressure
and shear stress at the air–ground interface were used to predict the displacement amplitudes of the horizontal and vertical ground motions as a
function of depth. The measurements were acquired at a site having a flat
terrain and low seismic ambient noise under windy conditions. Multiple triaxial geophones were deployed at different depths to study the induced
ground velocity as a function of depth. The measurements show that the
wind excites the horizontal components more than the vertical component on the
above-ground geophone due to direct interaction with the geophone. For
geophones buried flush with the ground surface and at various depths below
the ground, the vertical components of the velocity are greater than the horizontal components. There is a very small decrease in velocity with depth.
The results are compared to the acoustic-ground coupling case. [This work is
supported by USDA under award 58-6408-1-608.]
10:45
2aPA9. Using an electro-magnetic analog to study acoustic scattering in
a forest. Michelle E. Swearingen (US Army ERDC, Construction Eng. Res.
Lab., P.O. Box 9005, Champaign, IL 61826, michelle.e.swearingen@usace.
army.mil) and Donald G. Albert (US Army ERDC, Hanover, NH)
Using scale models can be a convenient method for investigating multiple scattering in complex environments, such as a forest. However, the
increased attenuation with increasing frequency limits the propagation distances available for such models. An electromagnetic analog is an alternative way to study multiple scattering from rigid objects, such as tree trunks.
This analog does not suffer from the intrinsic attenuation and allows for
investigation of a larger effective area. In this presentation, the results from
a 1:50 scale electromagnetic analog are compared to full-scale data collected in a forest. Further tests investigate propagation along multiple paths
through a random configuration of aluminum cylinders representing trees.
Special considerations and anticipated range of applicability of this analog
method are discussed.
11:00
2aPA10. Modeling of sound scattering by an obstacle located below a
hardbacked rigid porous medium. Yiming Wang and Kai Ming Li (Mech.
Eng., Purdue Univ., 177 South Russel St., West Lafayette, IN 47907-2031,
mmkmli@purdue.edu)
The boundary integral equation (BIE) formulation takes advantage of
the well-known Green’s function for the sound fields above a plane interface. It can then lead to a simplified numerical solution known as the boundary element method (BEM) that enables an accurate computation of sound
fields above the plane interface with the presence of obstacles of complex
shapes. The current study is motivated by the need to explore the acoustical
characteristics of a layer of sound absorption materials embedded with
equally spaced rigid inserts. In principle, this problem may be solved by a
standard finite element program but it is found more efficient to use the BIE
approach by discretizing only the boundary surfaces of the obstacles within
the medium. The formulation is facilitated by using accurate Green’s functions for computing the sound fields above and within a layer of rigid porous
medium. This paper reports a preliminary study to model the scattering of
sound by an obstacle placed within the layered rigid porous medium. The
two-dimensional Green’s functions will be derived and used for the development of a BEM model for computing the sound field above and within the
rigid porous medium due to the presence of an arbitrarily shaped obstacle.
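For reference, the plane-interface BIE formulation rests on standard free-field kernels; in two dimensions the Helmholtz Green's function is the Hankel-function kernel (textbook background, not the layered-medium derivation announced in the abstract):

```latex
% 2-D free-field Green's function of the Helmholtz equation,
% (\nabla^2 + k^2)\,G = -\delta(\mathbf{r}-\mathbf{r}_0):
G_{2\mathrm{D}}(\mathbf{r},\mathbf{r}_0)
  = \frac{i}{4}\, H_0^{(1)}\!\bigl(k\,\lvert \mathbf{r}-\mathbf{r}_0 \rvert\bigr).
% Above a rigid plane, the half-space Green's function adds the
% mirror-image source \mathbf{r}_0' of \mathbf{r}_0:
G_{\mathrm{rigid}}(\mathbf{r},\mathbf{r}_0)
  = G_{2\mathrm{D}}(\mathbf{r},\mathbf{r}_0)
  + G_{2\mathrm{D}}(\mathbf{r},\mathbf{r}_0').
```

The layered porous-medium Green's functions of the paper generalize the image term to frequency-dependent reflection kernels.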
11:15
2aPA11. Analysis of the Green’s function for a duct and cavity using
geometric image sources. Ambika Bhatta, Charles Thompson, and Kavitha
Chandra (Univ. of Massachusetts Lowell, 1 University Ave., Lowell, MA
01854, ambika_bhatta@student.uml.edu)
The presented work investigates the solution for the pressure response of a
point source in a two-dimensional waveguide. The methodology is based on
the one-dimensional analytical and numerical solution of a finite channel
response between two semi-infinite planes. The branch integrals representing
the reflection coefficient are implemented to evaluate the pressure amplitude
of the boundary effect. The approach addresses the validation of the
application of geometric image sources for finite boundaries. Consequently,
the 3D extension of the problem to a closed cavity is also investigated.
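The geometric image-source construction for a channel between two planes can be sketched as follows; this illustration assumes rigid walls and, for simplicity, uses the 3-D free-space kernel rather than the 2-D Hankel-function kernel of the waveguide problem:

```python
import numpy as np

def channel_pressure(x, y, src=(0.0, 0.3), h=1.0, k=20.0, n_images=300):
    """Pressure of a point source between two rigid planes y = 0 and
    y = h, built from geometric image sources.  Rigid walls reflect
    with coefficient +1, so every image at y = 2*n*h + ys or
    y = 2*n*h - ys carries unit strength.  The 3-D free-space kernel
    exp(i*k*r)/(4*pi*r) is used for simplicity."""
    xs, ys = src
    p = 0.0 + 0.0j
    for n in range(-n_images, n_images + 1):
        for y_img in (2.0*n*h + ys, 2.0*n*h - ys):
            r = np.hypot(x - xs, y - y_img)
            p += np.exp(1j*k*r) / (4.0*np.pi*r)
    return p
```

By construction the image set is mirror-symmetric about each wall, so the normal derivative of the pressure vanishes there, which is the rigid boundary condition the images are meant to enforce.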
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 1/2, 8:00 A.M. TO 10:00 A.M.
Session 2aSAa
Structural Acoustics and Vibration and Noise: Computational Methods in Structural Acoustics and
Vibration
Robert M. Koch, Cochair
Chief Technology Office, Naval Undersea Warfare Center, Code 1176 Howell Street, Bldg. 1346/4, Code 01CTO,
Newport, RI 02841-1708
Matthew Kamrath, Cochair
Acoustics, Pennsylvania State University, 717 Shady Ridge Road, Hutchinson, MN 55350
Invited Papers
8:00
2aSAa1. A radical technology for modeling target scattering. David Burnett (Naval Surface Warfare Ctr., Code CD10, 110 Vernon
Ave., Panama City, FL 32407, david.s.burnett@navy.mil)
NSWC PCD has developed a high-fidelity 3-D finite-element (FE) modeling system that computes acoustic color templates (target
strength vs. frequency and aspect angle) of single or multiple realistic objects (e.g., target + clutter) in littoral environments. High-fidelity means that 3-D physics is used in all solids and fluids, including even thin shells, so that solutions include not only all propagating
waves but also all evanescent waves, the latter critically affecting the former. Although novel modeling techniques have accelerated the
code by several orders of magnitude, NSWC PCD is now implementing a radically different FE technology, e.g., one thin-shell element
spanning 90° of a cylindrical shell. It preserves all the 3-D physics but promises to accelerate the code another two to three orders of
magnitude. The talk will briefly review the existing system and then describe the new technology.
8:20
2aSAa2. Faster frequency sweep methods for structural vibration and acoustics analyses. Kuangcheng Wu (Ship Survivability,
Newport News ShipBldg., 202 Schembri Dr., Yorktown, VA 23693, kc.wu@hii-nns.com) and Vincent Nguyen (Ship Survivability,
Newport News ShipBldg., Newport News, VA)
The design of large, complex structures typically requires knowledge of the mode shape and forced response near major resonances
to ensure deflection, vibration, and the resulting stress are kept below acceptable levels, and to guide design changes where necessary.
Finite element analysis (FEA) is commonly used to predict Frequency Response Functions (FRF) of the structure. However, as the complexity and detail of the structure grows, the system matrices, and the computational resources needed to solve them, get large. Furthermore, the need to use small frequency steps to accurately capture the resonant response peaks can drive up the number of FRF
calculations required. Thus, the FRF calculation can be computationally expensive for large structural systems. Several approaches have
been proposed that can significantly accelerate the overall process by approximating the frequency dependent response. Approximation
approaches based on Krylov Galerkin Projection (KGP) and Padé approximation calculate the forced response at only a few frequencies, then use the
response and its derivatives to reconstruct the FRF in-between the selected direct calculation points. This paper first validates the two
approaches with analytic solutions for a simply supported plate, and then benchmarks several numerical examples to demonstrate the accuracy and efficiency of the new approximate methods.
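The idea of reconstructing an FRF between direct calculation points from the response and its derivatives can be illustrated on a single-DOF oscillator; the series-division recurrence below is generic power-series algebra (a sketch, not the paper's KGP or Padé implementation), and all parameter values are invented:

```python
# Single-DOF oscillator FRF: H(w) = 1 / (k - m*w**2 + i*c*w)
# (illustrative parameters, not a ship structure model).
m, c, k = 1.0, 0.05, 100.0

def H(w):
    """Exact frequency response function."""
    return 1.0 / (k - m*w**2 + 1j*c*w)

def taylor_coeffs(w0, order):
    """Series coefficients of H about w0.  The denominator
    d(w0 + s) = d0 + d1*s + d2*s**2 is quadratic, so 1/d obeys the
    power-series division recurrence
    c_n = -(d1*c_{n-1} + d2*c_{n-2}) / d0."""
    d0 = k - m*w0**2 + 1j*c*w0
    d1 = -2.0*m*w0 + 1j*c
    d2 = -m
    coefs = [1.0/d0]
    for n in range(1, order + 1):
        prev2 = coefs[n-2] if n >= 2 else 0.0
        coefs.append(-(d1*coefs[n-1] + d2*prev2) / d0)
    return coefs

def H_series(w, w0, order=10):
    """Reconstruct H near w0 from its value and derivatives at w0."""
    s = w - w0
    return sum(cn * s**n for n, cn in enumerate(taylor_coeffs(w0, order)))
```

A few expansion centers then cover a frequency band, with fine-step FRF values reconstructed in between rather than solved directly.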
8:40
2aSAa3. Waves in continua with extreme microstructures. Paul E. Barbone (Mech. Eng., Boston Univ., 110 Cummington St., Boston, MA 02215, barbone@bu.edu)
The effective properties of a material may be generally defined as those that describe the limiting case where the wavelength of propagation is infinite compared to the characteristic scale of the microstructure. Generally, the limit of vanishingly small microstructural
scale in a heterogeneous elastic medium results in an effective homogeneous medium that is again elastic. We show that for materials
with extreme microstructures, the limiting effective medium can be quite exotic, including polar materials, or multiphase continuum.
These continuum models naturally give rise to unusual effective properties including negative or anisotropic mass. Though unusual,
these properties have straightforward interpretations in terms of the laws of classical mechanics. Finally, we discuss wave propagation
in these structures and find dispersion curves with multiple branches.
9:00
2aSAa4. A comparison of perfectly matched layers and infinite elements
for exterior Helmholtz problems. Gregory Bunting (Computational Solid
Mech. and Structural Dynam., Sandia National Labs., 709 Palomas Dr. NE,
Albuquerque, NM 87108, bunting.gregory@gmail.com), Arun Prakash
(Lyles School of Civil Eng., Purdue Univ., West Lafayette, IN), and
Timothy Walsh (Computational Solid Mech. and Structural Dynam., Sandia
National Labs., West Lafayette, IN)
Perfectly matched layers and infinite elements are commonly used for finite element simulations of acoustic waves on unbounded domains. Both
involve a volumetric discretization around the periphery of an acoustic
mesh, which itself surrounds a structure or domain of interest. Infinite elements have been a popular choice for these problems since the 1970s. Perfectly matched layers are a more recent technology that is gaining
popularity due to ease of implementation and effectiveness as an absorbing
boundary condition. In this study, we present massively parallel implementations of these two techniques, and compare their performance on a set of
representative structural–acoustic problems on exterior domains. We examine the conditioning of the linear systems generated by the two techniques
by examining the number of Krylov iterations needed for convergence to a
fixed solver tolerance. We also examine the effects of PML parameters,
exterior boundary conditions, and quadrature rules on the accuracy of the
solution. [Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of
Lockheed Martin Corporation, for the U.S. Department of Energy’s National
Nuclear Security Administration under contract DE-AC04-94AL85000.]
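The absorbing mechanism of a PML can be summarized by the standard complex coordinate stretch (textbook background, not the specific implementation benchmarked in the abstract):

```latex
\tilde{x} = x + \frac{i}{\omega}\int_0^{x}\sigma(s)\,ds ,
\qquad
e^{ik\tilde{x}} = e^{ikx}\,
  \exp\!\Bigl(-\tfrac{1}{c}\int_0^{x}\sigma(s)\,ds\Bigr),
```

so an outgoing wave decays inside the layer without reflection in the continuum limit (using k/ω = 1/c); discretization reintroduces small reflections, which is why the PML profile σ, exterior boundary conditions, and quadrature rules examined in the study matter.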
the effect of tire rotational speed on the natural frequencies of these various
mode types will also be discussed.
9:30
2aSAa6. Simulating sound absorption in porous material with the lattice Boltzmann method. Andrey R. da Silva (Ctr. for Mobility Eng.,
Federal Univ. of Santa Catarina, Rua Monsenhor Topp, 173, Florianópolis,
Santa Catarina 88020-500, Brazil, andrey.rs@ufsc.br), Paulo Mareze, and
Eric Brandão (Structure and Civil Eng., Federal Univ. of Santa Maria, Santa
Maria, RS, Brazil)
The development of porous materials that are able to absorb sound in
specific frequency bands has been an important challenge in acoustics
research. Thus, the development of new numerical techniques that allow one to
correctly capture the mechanisms of sound absorption can be seen as an important step toward developing new materials. In this work, the lattice Boltzmann
method is used to predict the sound absorption coefficient in porous material
with straight porous structure. Six configurations of porous material were
investigated, involving different thickness and porosity values. A very good
agreement was found between the numerical results and those obtained by
the analytical model provided in the literature. The results suggest that the
lattice Boltzmann model can be a powerful alternative to simulating viscous
sound absorption, particularly due to its reduced computational effort when
compared to traditional numerical methods.
9:15
2aSAa5. Improved model for coupled structural–acoustic modes of
tires. Rui Cao, Nicholas Sakamoto, and J. S. Bolton (Ray W. Herrick Labs.,
School of Mech. Eng., Purdue Univ., 177 S. Russell St., West Lafayette, IN
47907-2099, cao101@purdue.edu)
9:45
2aSAa7. Energy flow models for the out-of-plane vibration of horizontally curved beams. Hyun-Gwon Kil (Dept. of Mech. Eng., Univ. of
Suwon, 17, Wauan-gil, Bongdam-eup, Hwaseong-si, Gyeonggi-do 445-743,
South Korea, hgkil@suwon.ac.kr), Seonghoon Seo (Noise & Vib. CAE
Team, Hyundai Motor Co., Hwaseong-si, Gyeonggi-do, South Korea), Suk-Yoon Hong (Dept. of Naval Architecture and Ocean Eng., Seoul National
Univ., Seoul, South Korea), and Chan Lee (Dept. of Mech. Eng., Univ. of
Suwon, Hwaseong-si, Gyeonggi-do, South Korea)
Experimental measurements of tire tread band vibration have provided
direct evidence that higher order structural-acoustic modes exist in tires, not
just the well-known fundamental mode. These modes display both circumferential and radial pressure variations. The theory governing these modes has
thus been investigated. A brief recapitulation of the previously-presented
coupled tire-acoustical model based on a tensioned membrane approach will
be given, and then an improved tire-acoustical model with a ring-like shape
will be introduced. In the latter model, the effects of flexural and circumferential stiffness are considered, as is the role of curvature in coupling the various wave types. This improved model accounts for propagating in-plane
vibration in addition to the essentially structure-borne flexural wave and the
essentially airborne longitudinal wave accounted for in the previous model.
The longitudinal structure-borne wave “cuts on” at the tire’s circumferential
ring frequency. Explicit solutions for the structural and acoustical modes
will be given in the form of dispersion relations. The latter results will be
compared with measured dispersion relations, and the features associated
primarily with the higher order acoustic modes will be highlighted. Finally,
The purpose of this work is to develop energy flow models to predict the
out-of-plane vibration of horizontally curved beams in the mid- and high-frequency range. The dispersion relations are approximately separated into relations for the propagation of the flexural waves and the torsional waves
generating the out-of-plane vibration of the horizontally curved beams under
Kirchhoff-Love hypotheses. The energy flow models are based on the
energy governing equations for the flexural waves and the torsional waves
propagating in the curved beams. Those equations are derived to predict the
time- and locally space-averaged energy density and intensity in the curved
beams. Total values for the energy density and the intensity, as well as the contributions of each wave type to those values, are predicted. A verification
of the energy flow models for the out-of-plane vibration of the horizontally
curved beams is performed by comparing the energy flow solutions for the
energy density and the intensity with analytical solutions evaluated using
the wave propagation approach. The comparison shows that the energy flow
models can be effectively used to predict the out-of-plane vibration of the
horizontally curved beams in the mid- and high-frequency range.
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 1/2, 10:30 A.M. TO 11:40 A.M.
Session 2aSAb
Structural Acoustics and Vibration and Noise: Vehicle Interior Noise
Sean F. Wu, Chair
Mechanical Engineering, Wayne State University, 5050 Anthony Wayne Drive, College of Engineering Building, Rm. 2133,
Detroit, MI 48202
Chair’s Introduction—10:30
Invited Papers
10:35
2aSAb1. Structural–acoustic optimization of a pressurized, ribbed aircraft panel. Micah R. Shepherd and Stephen A. Hambric
(Appl. Res. Lab, Penn State Univ., PO Box 30, Mailstop 3220B, State College, PA 16801, mrs30@psu.edu)
A method to reduce the noise radiated by a ribbed aircraft panel excited by turbulent boundary layer flow is presented. To compute
the structural-acoustic response, a modal approach based on finite element/boundary element analysis was coupled to a turbulent boundary layer flow forcing function. A static pressure load was also applied to the panel to simulate cabin pressurization during flight. The radiated
sound power was then minimized by optimizing the horizontal and vertical rib location and rib cross section using an evolutionary
search algorithm. Nearly 10 dB of reduction was achieved by pushing the ribs to the edge of the panel, thus lowering the modal amplitudes excited by the forcing function. A static constraint was then included in the procedure using a low-frequency dynamic calculation
to approximate the static response. The constraint limited the amount of reduction that was achieved by the optimizer.
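The evolutionary search step can be sketched with a toy (1+1) strategy on a hypothetical objective that, mimicking the reported optimum, is smallest when ribs sit at the panel edges; the objective and every parameter here are invented (the paper's objective comes from the FE/BE model):

```python
import random

def power_proxy(ribs):
    """Hypothetical stand-in for radiated sound power: smallest when
    rib positions sit at the panel edges (|x| -> 1), mimicking the
    reported optimum.  Purely illustrative."""
    return sum((1.0 - abs(x))**2 for x in ribs)

def evolve(n_ribs=3, iters=4000, seed=42):
    """Minimal (1+1) evolutionary search: perturb all rib positions
    in [-1, 1], keep the mutant only if the objective improves, and
    slowly anneal the mutation size."""
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(n_ribs)]
    best_f = power_proxy(best)
    step = 0.2
    for _ in range(iters):
        cand = [min(1.0, max(-1.0, x + rng.gauss(0.0, step))) for x in best]
        f = power_proxy(cand)
        if f < best_f:
            best, best_f = cand, f
        step *= 0.998          # anneal the mutation size
    return best, best_f

ribs, power = evolve()
```

A static-deflection constraint, as in the paper, would enter here as a penalty or rejection rule in the acceptance test.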
11:00
2aSAb2. Extending interior near-field acoustic holography to visualize three-dimensional objective parameters of sound quality.
Huancai Lu (Mech. Eng., Zhejiang Univ. of Technol., 3649 Glenwood Ave., Windsor, ON N9E 2Y6, Canada, huancailu@zjut.edu.cn)
It is essential to understand that the ultimate goal of interior noise control is to improve the sound quality inside the vehicle, rather
than to suppress the sound pressure level. Therefore, the vehicle interior sound source localization and identification should be based on
the contributions of sound sources to the subjective and/or objective parameters of sound quality at targeted points, such as driver’s ear
positions. This talk introduces the visualization of three-dimensional objective parameters of sound quality based on interior near-field
acoustic holography (NAH). The methodology of mapping three-dimensional sound pressure distribution, which is reconstructed based
on interior NAH, to three-dimensional loudness is presented. The mathematical model of loudness defined by the ANSI standard is discussed. The numerical interior sound field, which is generated by a vibrating enclosure with known boundary conditions, is employed to
validate the methodology. In addition, the accuracy of the reconstructed loudness distribution is examined with the ANSI standard and a digital head. It is shown that the results of sound source localization based on three-dimensional loudness distribution are different from the
ones based on interior NAH.
Contributed Paper
11:25
2aSAb3. A comparative analysis of the Chicago Transit Authority’s
Red Line railcars. Chris S. Nottoli (Riverbank Acoust. Labs., 1145 Walter,
Lemont, IL 60439, cnottoli18@gmail.com)
A noise study was conducted on Chicago Transit Authority’s Red Line
railcars to assess the differences in interior sound pressure level between the
5000 series railcars and their predecessor, the 2400 series. The study took into
account potential variability associated with a rider’s location in the railcars,
above-ground and subway segments (between stations), and surveyed the
opinion of everyday Red Line riders as pertaining to perceived noise. The
test data demonstrated a 3–6 dB noise reduction, amid ongoing CTA
renovations, between the new rapid transit cars and their predecessors. Location
on the train influenced Leq(A) measurements, as reflections from adjacent
railcars induced higher noise levels. The new railcars also proved effective
in noise reduction throughout the subway segments, as the averaged Leq(A)
deviated by only 1 dB from that at above-ground rail stations. Additionally, this study
included an online survey that revealed a possible disconnect between
traditional methods of objective noise measurement and subjective noise
ratings.
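The Leq(A) comparisons above rest on energy averaging of levels; a minimal sketch (the level values below are invented, not the study's data):

```python
import math

def leq(levels_db):
    """Energy-average (Leq) of equal-duration A-weighted levels:
    convert to linear power, average, convert back to decibels."""
    powers = [10.0**(L/10.0) for L in levels_db]
    return 10.0 * math.log10(sum(powers) / len(powers))

# Illustrative per-segment Leq(A) values (not the CTA measurements):
old_cars = [78.0, 80.0, 84.0]
new_cars = [74.0, 76.0, 79.0]
reduction = leq(old_cars) - leq(new_cars)
```

Because the average is taken in the power domain, a single loud segment dominates the Leq, which is one reason objective Leq(A) and subjective ratings can diverge as the survey suggests.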
TUESDAY MORNING, 28 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 12:00 NOON
Session 2aSC
Speech Communication: Speech Production and Articulation (Poster Session)
Sam Tilsen, Chair
Cornell University, 203 Morrill Hall, Ithaca, NY 14853
Contributed Papers
2aSC1. Tongue motion characteristics during vowel production in older
children and adults. Jennell Vick, Michelle Foye (Psychol. Sci., Case
Western Reserve Univ., 11635 Euclid Ave., Cleveland, OH 44106, jennell@case.edu), Nolan Schreiber, Greg Lee (Elec. Eng. Comput. Sci., Case
Western Reserve Univ., Cleveland, OH), and Rebecca Mental (Psychol.
Sci., Case Western Reserve Univ., Cleveland, OH)
This study examined tongue movements in consonant-vowel-consonant
sequences drawn from real words in phrases as produced by 36 older children (three male and three female talkers at each age from 10 to 15 years)
and 36 adults. Movements of four points on the tongue were tracked at 400
Hz using the Wave Electromagnetic Speech Research System (NDI, Waterloo, ON, CA). The four points were tongue tip (TT; 1 cm from tip on midline), tongue body (TB; 3 cm from tip on midline), tongue right (TR; 2 cm
from tip on right lateral edge), and tongue left (TL; 2 cm from tip on left lateral edge). The phrases produced included the vowels /i/, /I/, /ae/, and /u/ in
words (i.e., “see,” “sit,” “cat,” and “zoo”). Movement measures included 3D
distance, peak and average speed, and duration of vowel opening and closing strokes. The horizontal curvature of the tongue was calculated at the trajectory speed minimum associated with the vowel production using a least-squares quadratic fit of the TR, TB, and TL positional coordinates. Symmetry of TR and TL vertical position was also calculated. Within-group comparisons were made between vowels and between-group comparisons were
made between children and adults.
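The movement measures described above (path distance, speeds, and a least-squares quadratic fit for horizontal curvature) can be sketched as follows, assuming marker trajectories in millimeters sampled at the study's 400 Hz rate:

```python
import numpy as np

def stroke_measures(traj, fs=400.0):
    """3-D path distance (mm), peak speed, and average speed (mm/s)
    of a marker trajectory of shape (n, 3) sampled at fs Hz."""
    traj = np.asarray(traj, dtype=float)
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    speed = steps * fs
    return steps.sum(), speed.max(), speed.mean()

def lateral_curvature(tr, tb, tl):
    """Least-squares quadratic fit z = a*y**2 + b*y + c through the
    (lateral, vertical) positions of the TR, TB, and TL markers;
    2*a approximates the horizontal curvature of the tongue surface."""
    y = np.array([tr[0], tb[0], tl[0]])   # lateral coordinates
    z = np.array([tr[1], tb[1], tl[1]])   # vertical coordinates
    a, b, c = np.polyfit(y, z, 2)
    return 2.0 * a
```

With three markers the quadratic fit is exact; the least-squares form also covers configurations with additional lateral markers.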
2aSC2. Experimental evaluation of the constant tongue volume hypothesis. Zisis Iason Skordilis, Vikram Ramanarayanan (Signal Anal. and Interpretation Lab., Dept. of Elec. Eng., Univ. of Southern California, 3710
McClintock Ave., RTH 320, Los Angeles, CA 90089, skordili@usc.edu),
Louis Goldstein (Dept. of Linguist, Univ. of Southern California, Los
Angeles, CA), and Shrikanth S. Narayanan (Signal Anal. and Interpretation
Lab., Dept. of Elec. Eng., Univ. of Southern California, Los Angeles, CA)
The human tongue is considered to be a muscular hydrostat (Kier and
Smith, 1985). As such, it is considered to be incompressible. This constant
volume hypothesis has been incorporated in various mathematical models
of the tongue, which attempt to provide insights into its dynamics (e.g., Levine et al., 2005). However, to the best of our knowledge, this hypothesis has
not been experimentally validated for the human tongue during actual
speech production. In this work, we attempt an experimental evaluation of
the constant tongue volume hypothesis. To this end, volumetric structural
Magnetic Resonance Imaging (MRI) was used. A database consisting of 3D
MRI images of subjects articulating continuants was considered. The subjects sustained contextualized vowels and fricatives (e.g., IY in “beet,” F in
“afa”) for 8 seconds in order for the 3D geometry to be collected. To segment the tongue and estimate its volume, we explored watershed (Meyer
and Beucher, 1990) and region growing (Adams and Bischof, 1994) techniques. Tongue volume was estimated for each lingual posture for each
subject. Intra-subject tongue volume variation was examined to determine if
there is sufficient statistical evidence for the validity of the constant volume
hypothesis. [Work supported by NIH and a USC Viterbi Graduate School
Ph.D. fellowship.]
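A minimal region-growing segmentation of the kind cited (Adams and Bischof, 1994) can be sketched on a synthetic volume; this flood-fill is illustrative, not the authors' pipeline:

```python
import numpy as np
from collections import deque

def region_grow_volume(vol, seed, thresh, voxel_mm3=1.0):
    """Flood-fill region growing from a seed voxel: grow across
    6-connected neighbors whose intensity is at least thresh, and
    return the segmented volume (voxel count times voxel volume)."""
    vol = np.asarray(vol)
    grown = np.zeros(vol.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = (x + dx, y + dy, z + dz)
            if (all(0 <= nb[i] < vol.shape[i] for i in range(3))
                    and not grown[nb] and vol[nb] >= thresh):
                grown[nb] = True
                queue.append(nb)
    return grown.sum() * voxel_mm3
```

Repeating such a segmentation over each lingual posture yields the per-posture volume estimates whose intra-subject variation the study examines.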
2aSC3. A physical figure model of tongue muscles. Makoto J. Hirayama
(Faculty of Information Sci. and Technol., Osaka Inst. of Technol., 1-79-1
Kitayama, Hirakata 573-0196, Japan, mako@is.oit.ac.jp)
To help in understanding tongue shape and motion, a physical figure
model of tongue muscles using a viscoelastic urethane rubber gel
was made by improving previous models. Compared to the previously presented
tongue shape models, the new model is constructed from a tongue body part (consisting of Transversus linguae, Verticalis linguae, Longitudinalis linguae superior, and Longitudinalis linguae inferior)
and individual extrinsic tongue muscle parts (consisting of Genioglossus anterior, Genioglossus posterior, Hyoglossus, Styloglossus, and Palatoglossus).
Therefore, each muscle’s shape, starting and ending points, and relation to other muscles and organs inside the mouth are more understandable than in
previous models. As the model is made from a viscoelastic material similar to
human skin, the tongue can be reshaped and moved by pulling or pushing parts of the tongue muscles by hand; that is, tongue shape and
motion can be simulated by hand. The proposed model is useful for
speech science education or for a future speaking robot using a realistic speech
mechanism.
2aSC4. Tongue width at rest versus tongue width during speech: A comparison of native and non-native speakers. Sunao Kanada and Ian Wilson
(CLR Phonet. Lab, Univ. of Aizu, Tsuruga, Ikki-machi, Aizuwakamatsu,
Fukushima 965-8580, Japan, m5181137@u-aizu.ac.jp)
Most pronunciation researchers do not focus on the coronal view. However, it is also important to observe, because the tongue is a muscular hydrostat. We
believe that some pronunciation differences between native speakers and second-language (L2) speakers could be due to differences in the coronal plane.
Understanding these differences could be a key to L2 learning and modeling.
It may be beneficial for pedagogical purposes and the results of this research
may contribute to the improvement of pronunciation of L2 English speakers.
An interesting way to look at native and L2 articulation differences is through
the pre-speech posture and inter-speech posture (ISP—rest position between
sentences). In this research, we compare native speakers to L2 speakers. We
measure how different those postures are from the median position of the
tongue during speech. We focus on movement of a side tongue marker in the
coronal plane, and we normalize for speaker size. We found that the mean
distance from pre-speech posture to speech posture is shorter for native English speakers (0.95 mm) than for non-native English speakers (1.62 mm). So,
native speakers are more efficient in their pre-speech posture. Results will
also be shown for distances from ISP to speech posture.
All posters will be on display from 8:00 a.m. to 12:00 noon. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 8:00 a.m. to 10:00 a.m. and authors of even-numbered papers will be at their posters
from 10:00 a.m. to 12:00 noon.
2aSC5. Intraglottal velocity and pressure measurements in a hemilarynx model. Liran Oren, Sid Khosla (Otolaryngol., Univ. of Cincinnati, PO
Box 670528, Cincinnati, OH 45267, orenl@ucmail.uc.edu), and Ephraim
Gutmark (Aerosp. Eng., Univ. of Cincinnati, Cincinnati, OH)
Determining the mechanisms of self-sustained oscillation of the vocal
folds requires characterization of intraglottal aerodynamics. Since most of
the intraglottal aerodynamic forces cannot be measured in a tissue model
of the larynx, most of the current understanding of vocal fold vibration
mechanism is derived from mechanical, analytical, and computational models. In the current study, intraglottal pressure measurements are taken in a
hemilarynx model and are compared with pressure values that are computed
from simultaneous velocity measurements. The results show that significant
negative pressure is formed near the superior aspect of the folds during closing, which is in agreement with previous measurements in a hemilarynx
model. Intraglottal velocity measurements show that the flow near the superior aspect separates from the glottal wall during closing and may develop
into a vortex, which further augments the magnitude of the negative pressure. The intraglottal pressure distributions are computed by solving the
pressure Poisson equation using the velocity field measurements and show
good agreement with the pressure measurements. The match between the
pressure computations and the pressure measurements validates the technique, which was also used in a previous study to estimate the intraglottal
pressure distribution in a full larynx model.
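The pressure Poisson computation described above can be illustrated with a minimal finite-difference sketch. This is a generic textbook formulation for 2D incompressible flow (Jacobi iteration, zero-pressure boundaries, and a synthetic velocity field standing in for the measured velocity data), not the authors' implementation:

```python
import numpy as np

def pressure_poisson(u, v, dx, dy, rho=1.2, iters=2000):
    """Solve the 2D incompressible pressure Poisson equation
    laplacian(p) = -rho * [(du/dx)^2 + 2*(du/dy)(dv/dx) + (dv/dy)^2]
    from a velocity field, with p = 0 held on the boundary."""
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    b = -rho * (dudx**2 + 2.0 * dudy * dvdx + dvdy**2)
    p = np.zeros_like(u)
    for _ in range(iters):  # Jacobi iteration on interior points only
        p[1:-1, 1:-1] = (
            (p[1:-1, 2:] + p[1:-1, :-2]) * dy**2
            + (p[2:, 1:-1] + p[:-2, 1:-1]) * dx**2
            - b[1:-1, 1:-1] * dx**2 * dy**2
        ) / (2.0 * (dx**2 + dy**2))
    return p

# Toy smooth velocity field on a 0.1 mm grid, standing in for PIV data
ny, nx = 32, 32
y, x = np.mgrid[0:ny, 0:nx] * 1e-4
u = np.sin(x / x.max() * np.pi)
v = -np.cos(y / y.max() * np.pi)
p = pressure_poisson(u, v, 1e-4, 1e-4)
```

The density value and boundary condition are illustrative assumptions; the point is only that pressure can be recovered up to a boundary-determined constant from a measured velocity field.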
2aSC6. Ultrasound study of diaphragm motion during tidal breathing
and speaking. Steven M. Lulich, Marguerite Bonadies (Speech and Hearing
Sci., Indiana Univ., 4789 N White River Dr., Bloomington, IN 47404, slulich@indiana.edu), Meredith D. Lulich (Southern Indiana Physicians, Indiana Univ. Health, Bloomington, IN), and Robert H. Withnell (Speech and
Hearing Sci., Indiana Univ., Bloomington, IN)
Studies of speech breathing by Ladefoged and colleagues (in the 1950s
and 1960s), and by Hixon and colleagues (in the 1970s, 1980s, and 1990s)
have substantially contributed to our understanding of respiratory mechanics
during speech. Even so, speech breathing is not well understood when contrasted with phonation, articulation, and acoustics. In particular, diaphragm
involvement in speech breathing has previously been inferred from inductive plethysmography and EMG, but it has never been directly investigated.
In this case study, we investigated diaphragm motion in a healthy adult
male during tidal breathing and conversational speech using real-time 3D
ultrasound. Calibrated inductive plethysmographic data were recorded
simultaneously for comparison with previous studies and in order to relate
lung volumes directly to diaphragm motion.
2aSC7. A gestural account of Mandarin tone sandhi. Hao Yi and Sam
Tilsen (Dept. of Linguist, Cornell Univ., 315-7 Summerhill Ln., Ithaca, NY
14850, hy433@cornell.edu)
Recently tones have been analyzed as articulatory gestures, which may
be coordinated with segmental gestures. Our data from electromagnetic
articulometry (EMA) show that a purportedly neutralized phonological contrast can nonetheless exhibit coordinative differences. We develop a model based
on gestural coupling to account for observed patterns. Mandarin Third Tone
Sandhi (e.g., Tone3 → T3S / _ Tone3) is perceptually neutralizing in that the
sandhi output (T3S) shares great similarity with Tone2. Despite both tones
having rising pitch contours, there exist subtle acoustic differences. However, the difference in underlying representation between T3S and Tone2
remains unclear. By presenting evidence from the alignment pattern
between tones and segments, we show that the acoustic differences between Tone2 and T3S arise out of differences in gestural organization. The temporal lag between the initiation of the vowel gesture and that of the tone gesture is shorter in T3S than in Tone2. We further argue that the underlying Tone3 is the source of the incomplete neutralization between Tone2 and
T3S. That is, despite the surface similarity, T3S is stored in the mental lexicon as Tone3.
2144
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
2aSC8. A real-time MRI investigation of anticipatory posturing in prepared responses. Sam Tilsen (Linguist, Cornell Univ., 203 Morrill Hall,
Ithaca, NY 14853, tilsen@cornell.edu), Pascal Spincemaille (Radiology,
Cornell Weill Medical College, New York, NY), Bo Xu (Biomedical Eng.,
Cornell Univ., New York, NY), Peter Doerschuk (Biomedical Eng., Cornell
Univ., Ithaca, NY), Wenming Luh (Human Ecology, Cornell Univ., Ithaca,
NY), Robin Karlin, Hao Yi (Linguist, Cornell Univ., Ithaca, NY), and Yi
Wang (Biomedical Eng., Cornell Univ., Ithaca, NY)
Speakers can anticipatorily configure their vocal tracts in order to facilitate the production of an upcoming vocal response. We find that this anticipatory articulation results in decoherence of articulatory movements that are
otherwise coordinated; moreover, speakers differ in the strategies they
employ for response anticipation. Real-time MRI images were acquired
from eight native English speakers performing a consonant-vowel response
task; the task was embedded in a 2×2 design, which manipulated preparation (whether speakers were informed of the target response prior to a go-signal) and postural constraint (whether the response was preceded by a prolonged vowel). Analyses of pre-response articulatory postures show that all
speakers exhibited anticipatory posturing of the tongue root in unconstrained responses. Some exhibited interactions between preparation and
constraint, such that anticipatory posturing was more extensive in prepared- vs. unprepared-unconstrained responses. Cross-speaker variation was also
observed in anticipatory posturing of the velum: some speakers raised the
velum in anticipation of non-nasal responses, while others failed to do so.
The results show that models of speech production must be flexible enough
to allow for gestures to be executed individually, and that speakers differ in
the strategies they employ for response initiation.
2aSC9. An airflow examination of the Czech trills. Ekaterina Komova
(East Asian Lang. and Cultures, Columbia Univ., New York, NY) and Phil
Howson (The Univ. of Toronto, 644B-60 Harbord St., Toronto, ON
M5S3L1, Canada, phil.howson@mail.utoronto.ca)
Previous studies have suggested that there is a difference between the
Czech trills /ř/ and /r/ with respect to the airflow required to produce each
trill. This study examines this question using an airflow meter. Five speakers
of Czech produced /ř/ and /r/ in the real words řád "order," pařát "talon,"
tvář "face," rád "like," paráda "great," and tvar "shape." Airflow data were
recorded using Macquirer. The data indicate a higher airflow during the production of /ř/ compared to /r/; /ř/ was produced with approximately 3 l/s
more than /r/. The increased airflow is necessary to cross the boundary from
laminar flow into turbulent flow and supports previous findings that /ř/ is
produced with breathy voice, which facilitates trilling during frication. The
data also suggest that one of the factors that makes the plain trill /r/ difficult
to produce is that the airflow required to produce a sonorous trill is tightly
constrained. The boundaries between trill production and the production of
frication are only a few l/s apart and thus require careful management of
the laryngeal mechanisms, which control airflow.
2aSC10. Comparison of tidal breathing and reiterant speech breathing
using whole body plethysmography. Marguerite Bonadies, Robert H. Withnell, and Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., 505 W
Lava Way, Apt. C, Bloomington, IN 47404, mcbonadi@umail.iu.edu)
Classic research in the field of speech breathing has found differences in
the characteristics of breathing patterns between speech respiration and tidal
breathing. Though much research has been done on speech breathing mechanisms, relatively little research has been done using the whole body plethysmograph. In this study, we sought to examine differences and similarities
between tidal respiration and breathing in reiterant speech using measures
obtained through whole-body plethysmography. We hypothesize that there are no significant differences between pulmonary measures in tidal respiration and in speech breathing. This study involves tidal breathing on a spirometer attached to the whole-body plethysmograph followed by reiterant speech
using the syllable /da/ while reading the first part of The Rainbow Passage.
Experimental measures include compression volumes during both breathing
tasks, and absolute lung volumes as determined from the spirometer and calibrated whole-body plethysmograph. These are compared with the pulmonary
subdivisions obtained from pulmonary function tests, including vital capacity,
functional residual capacity, and total lung volume.
2aSC11. An electroglottography examination of fricative and sonorous segments. Phil Howson (The Univ. of Toronto, 644B-60 Harbord St., Toronto, ON M5S3L1, Canada, phil.howson@mail.utoronto.ca)

It has been previously suggested that fricative production is marked by a
longer glottal opening as compared to sonorous segments. The present study
uses electroglottography (EGG) and acoustic measurements to test this hypothesis by examining the activity of the vocal cords during the articulation
of fricative and sonorant segments of English and Sorbian. Extended individual productions of the phonemes /s, z, ʃ, ʒ, m, n, r, l, a/, and of each phoneme in the context #Ca, were recorded from one English speaker and one Sorbian speaker. The open
quotient was calculated using MATLAB. H1-H2 measures were taken at 5% into
the vowel following each C and at 50% into the vowel. The results indicate
that the glottis is open longer during the production of fricatives than for sonorous segments. Furthermore, the glottis is slightly more open for the production of nasals and liquids than it is for vowels. These results suggest that a
longer glottal opening facilitates the increased airflow required to produce frication. This contrasts previous analyses which suggested that frication is primarily achieved through a tightened constriction. While a tighter constriction
may be necessary, the increased airflow velocity produced by a longer glottal
opening is critical for the production of frication.
2aSC12. SIPMI: Superimposing palatal profile from maxillary impression onto midsagittal articulographic data. Wei-rong Chen and Yuehchin Chang (Graduate Inst. of Linguist, National Tsing Hua Univ., 2F-5,
No. 62, Ln. 408, Zhong-hua Rd., Zhubei City, Hsinchu County-302,
Taiwan, waitlong75@gmail.com)
Palatal traces reconstructed by current technologies for real-time mid-sagittal articulatory tracking (e.g., EMA, ultrasound, rtMRI)
are mostly of low resolution and lack concrete anatomical/orthodontic reference points to serve as firm articulatory landmarks for determining places of articulation. The present study proposes a method for superimposing a physical
palatal profile, extracted from a maxillary impression, onto mid-sagittal articulatory data. The whole palatal/dental profile is first obtained by performing an alginate maxillary impression, and a plaster maxillary mold is made
from the impression. Then, the mold is either (1) cut into halves for hand-tracing or (2) 3D-scanned to extract a high-resolution mid-sagittal palatal
line. The mid-sagittal palatal line made from the maxillary mold is further subdivided into articulatory zones, following definitions of articulatory landmarks in the literature (e.g., Catford 1988) and referring to anatomical/orthodontic landmarks imprinted on the mold. Lastly, the high-resolution,
articulatorily divided palatal line can be superimposed, using a modified
Iterative Closest Point (ICP) algorithm, onto the reconstructed, low-resolution palatal traces in the real-time mid-sagittal articulatory data, so that
clearly divided places of articulation on the palate can be visualized together with articulatory movements. Evaluation results show that both hand-traced and 3D-scanned palatal profiles yield accurate superimpositions and satisfactory
visualizations of place of articulation in our EMA data.
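The abstract names a modified ICP algorithm; as a point of reference, a minimal generic rigid 2D ICP (nearest-neighbour matching plus a closed-form Procrustes/Kabsch update) can be sketched as follows. The toy contour and transform are illustrative stand-ins, not palate data or the authors' modification:

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal rigid 2D ICP: match each source point to its nearest
    destination point, then solve the optimal rotation/translation in
    closed form via SVD; repeat."""
    src = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force)
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment (Kabsch)
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
    return src

# Recover a known small rotation/translation of a toy "palate" contour
dst = np.column_stack([np.linspace(0, 5, 40), np.sin(np.linspace(0, 3, 40))])
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = (dst - [0.2, 0.1]) @ R_true.T
aligned = icp_2d(src, dst)
```

When the two point sets are related by a rigid transform and correspondences are recoverable, the iteration drives the source contour onto the destination.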
2aSC13. Waveform morphology of pre-speech brain electrical potentials. Silas Smith and Al Yonovitz (Dept. of Commun. Sci. and Disord., The
Univ. of Montana, The Univ of Montana, Missoula, MT 59812, silas.
smith@umconnect.umt.edu)
The inter- and intra-subject variations of the cortical responses before the
initiation of speech were recorded. These evoked potentials were acquired at a sample rate sufficient to capture both slow negative waves and faster neurogenic signals. The marking point for determining the pre-event
time epoch has been an EMG source. The data are typically acquired off-line
and later averaged. This research uses a vocal signal as the marking point,
and displays in real time the event-related potential. Subjects were 12 males
and females. Responses were recorded with silver–silver chloride electrodes positioned at Cz, using the earlobes as reference and ground. A biological
preamplifier was used to amplify the weak bioelectric signals 100,000 times.
Each time epoch was sampled at 20,000 samples/sec. The frequency response
of these amplifiers had a high-pass cutoff of 0.1 Hz and a low-pass cutoff of 3 kHz. One second of these signals was averaged over 100 trials just prior to the subject's initiation of the word "pool." Electrical brain potentials have proven to be
extremely useful for diagnosis, treatment, and research in the auditory system,
and are expected to be of equal importance for the speech system.
2aSC14. Acoustic correlates of bilingualism: Relating phonetic production to language experience and attitudes. Wai Ling Law (Linguist, Purdue Univ., Beering Hall, 00 North University St., West Lafayette, IN 47907,
wlaw@purdue.edu) and Alexander L. Francis (Speech, Lang. & Hearing
Sci., Purdue Univ., West Lafayette, IN)
Researchers tend to quantify degree of bilingualism according to age-related factors such as age of acquisition (Flege et al., 1999; Yeni-Komshian et al., 2000). However, previous research suggests that bilinguals may
also show different degrees of accent and patterns of phonetic interaction
between their first language (L1) and second language (L2) as a result of
factors such as the quantity and quality of L2 input (Flege & Liu, 2001),
amount of L1 vs. L2 use (Flege, et al. 1999), and attitude toward each
language (Moyer, 2007). The goal of this study is to identify gradient properties of speech production that can be related to gradient language experience and attitudes in a bilingual population that is relatively homogeneous
in terms of age-related factors. Native Cantonese-English bilinguals living
in Hong Kong produced near homophones in both languages under conditions emphasizing one language or the other on different days. Acoustic
phonetic variables related to phonological inventory differences between
the two languages, including lexical tone/stress, syllable length, nasality, fricative manner and voicing, release of stop, voice onset time, and vowel
quality and length, will be quantified and compared to results from a
detailed survey of individual speakers’ experience and attitudes toward the
two languages.
2aSC15. Dialectal variation in affricate place of articulation in Korean.
Yoonjung Kang (Ctr. for French and Linguist, Univ. of Toronto Scarborough, 1265 Military Trail, HW314, Toronto, ON M1C 1A4, Canada, yoonjung.kang@utoronto.ca), Sungwoo Han (Dept. of Korean Lang. and Lit.,
Inha Univ., Incheon, South Korea), Alexei Kochetov (Dept. of Linguist,
Univ. of Toronto, Toronto, ON, Canada), and Eunjong Kong (Dept. of English, Korea Aerosp. Univ., Goyang, South Korea)
The place of articulation (POA) of Korean affricates has been a topic of
much discussion in Korean linguistics. The traditional view is that the affricates were dental in the 15th century and then changed to a posterior coronal
place in most dialects of Korean but the anterior articulation is retained in
major dialects of North Korea, most notably Phyengan and Yukjin. However, recent instrumental studies on Seoul Korean and some impressionistic
descriptions of North Korean dialects cast doubt on the validity of this traditional view. Our study examines the POA of /c/ (lenis affricate) and /s/ (anterior fricative) before /a/ in Seoul Korean (26 younger and 32 older
speakers) and in two North Korean varieties, as spoken by ethnic Koreans in
China (14 Phyengan and 21 Yukjin speakers). The centre of gravity of the
frication noise of /c/ and /s/ was examined. The results show that in both
North Korean varieties, both sibilants are produced as anterior coronal and
comparable in their POA. In Seoul Korean, while the POA contrast shows a
significant interaction with age and gender, the affricate is consistently and
substantially more posterior than the anterior fricative across all speaker
groups. The results support the traditional description.
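The centre-of-gravity measure used above is the first spectral moment of the frication noise. The sketch below is a generic formulation, with synthetic band-limited noise standing in for sibilant recordings; the window choice, sampling rate, and band edges are illustrative assumptions:

```python
import numpy as np

def spectral_cog(x, fs):
    """Centre of gravity (first spectral moment) of a signal's amplitude
    spectrum, a standard measure for comparing sibilant place of
    articulation."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return np.sum(freqs * spec) / np.sum(spec)

# Sanity check: noise band-limited to 3-5 kHz should yield a COG near 4 kHz
rng = np.random.default_rng(0)
fs = 16_000
n = 16_384
noise = rng.standard_normal(n)
S = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
S[(freqs < 3000) | (freqs > 5000)] = 0.0
band = np.fft.irfft(S, n)
cog = spectral_cog(band, fs)
```

A more anterior (dental/alveolar) constriction concentrates frication energy higher in frequency, so anterior sibilants show a higher COG than posterior ones; that is the logic behind the comparison in the abstract.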
2aSC16. An articulatory study of high vowels in Mandarin produced by
native and non-native speakers. Chenhuei Wu (Dept. of Chinese Lang.
and Lit., National Hsinchu Univ. of Education, No. 521, Nanda Rd, Hsinchu
300, Taiwan, chenhueiwu@gmail.com), Weirong Chen (Graduate Inst. of
Linguist, National Tsing-hua Univ., Hsinchu, Taiwan), and Chilin Shih
(Dept. of Linguist, Univ. of Illinois at Urbana-Champaign, Urbana, IL)
This paper examined the articulatory properties of high vowels [i], [y],
and [u] in Mandarin produced by four Taiwanese Mandarin native speakers
and four English-speaking learners of Chinese (L2 learners), using an Electromagnetic Articulograph AG500. The articulatory positions of the tongue tip
(TT), the tongue body (TB), the tongue dorsum (TD), and the lips were investigated. The TT, TB, and TD of [y] produced by the L2 learners were further
back than those of the native speakers. In addition, the TD of [y] was higher
for the L2 learners than for the native speakers. Further comparison found that the tongue position of [y] was similar to that of [u] in L2 production. Regarding the lip positions, [y] and [u] were more protruded than [i] in the native production,
while there was no difference among these three vowels in the L2 production.
The findings suggest that most of the L2 learners were not aware that the
lingual target for [y] should be very similar to [i], with the lips more protruded
for [y] than for [i]. Some L2 learners pronounced [y] more like a
diphthong [iu] than a monophthong.
2aSC17. Production and perception training of /r l/ with native Japanese speakers. Anna M. Schmidt (School of Speech Path. & Aud., Kent
State Univ., A104 MSP, Kent, OH 444242, aschmidt@kent.edu)
Visual feedback with electropalatometry was used to teach accurate /r/
and /l/ to a native Japanese speaker. Perceptual differentiation of the phonemes did not improve. A new perceptual training protocol was developed
and tested.
2aSC18. Production of a non-phonemic variant in a second language:
Acoustic analysis of Japanese speakers’ production of American English flap. Mafuyu Kitahara (School of Law, Waseda Univ., 1-6-1 Nishiwaseda, Shinjuku-ku, Tokyo 1698050, Japan, kitahara@waseda.jp), Keiichi
Tajima (Dept. of Psych., Hosei Univ., Tokyo, Japan), and Kiyoko
Yoneyama (Dept. of English Lang., Daito Bunka Univ., Tokyo, Japan)
Second-language (L2) learners need to learn the sound system of an L2
so that they can distinguish L2 words. However, it is also instructive to learn
non-phonemic, allophonic variations, particularly if learners want to sound
native-like. The production of intervocalic /t d/ as an alveolar flap is a prime
example of a non-phonemic variation that is salient in American English
and presumably noticeable to many L2 learners. Yet, how well such non-phonemic variants are learned by L2 learners is a relatively under-explored
subject. In the present study, Japanese learners’ production of alveolar flaps
was investigated, to clarify how well learners can learn the phonetic environments in which flapping tends to occur, and how L2 experience affects
their performance. Native Japanese speakers who had lived in North
America for various lengths of time read a list of words and phrases that
contained a potentially flappable stop, embedded in a carrier sentence.
Preliminary results indicated that the rate of flapping varied considerably
across different words and phrases and across speakers. Furthermore, acoustic parameters such as flap closure duration produced by some speakers
showed intermediate values between native-like flaps and regular stops, suggesting that flapping is a gradient phenomenon. [Work supported by JSPS.]
2aSC19. A comparison of speaking rate consistency in native and nonnative speakers of English. Melissa M. Baese-Berk (Linguist, Univ. of Oregon, 1290 University of Oregon, Eugene, OR 97403, mbaesebe@uoregon.
edu) and Tuuli Morrill (Linguist, George Mason Univ., Fairfax, VA)
Non-native speech differs from native speech in many ways, including
overall longer durations and slower speech rates (Guion et al., 2000). Speaking rate also influences how listeners perceive speech, including perceived
fluency of non-native speakers (Munro & Derwing, 1998). However, it is
unclear what aspects of non-native speech and speaking rate might influence
perceived fluency. It is possible that in addition to differences in mean
speaking rate, there may be differences in the consistency of speaking rate
within and across utterances. In the current study, we use production data to
examine speaking rate in native and non-native speakers of English, and ask
whether native and non-native speakers differ in the consistency of their
speaking rate across and within utterances. We examined a corpus of read
speech, including isolated sentences and longer narrative passages. Specifically, we test whether the overall slower speech rate of non-native speakers
is coupled with an inconsistent speech rate that may result in less predictability in the produced speech signal.
2aSC20. Relative distances among English front vowels produced by
Korean and American speakers. Byunggon Yang (English Education,
Pusan National Univ., 30 Changjundong Keumjunggu, Pusan 609-735,
South Korea, bgyang@pusan.ac.kr)
This study examined the relative distances among English front vowels
in a message produced by 47 Korean and American speakers from an internet speech archive, in order to help Korean learners of English improve their pronunciation. The Euclidean distances in the vowel space of F1 and F2
were measured among the front vowel pairs. The first vowel pair [i-ɛ] was
set as the reference from which the relative distances of the other two vowel
pairs were measured in percent in order to compare the vowel sounds among
speakers of different vocal tract lengths. Results show that F1 values of the
front vowels produced by the Korean and American speakers increased
from the high front vowel to the low front vowel with differences among the
groups. The Korean speakers generally produced the front vowels with
smaller jaw openings than the American speakers did. Second, the relative
distance of the high front vowel pair [i-ɪ] showed a significant difference
between the Korean and American speakers, while that of the low front
vowel pair [ɛ-æ] did not. Finally, the Korean
speakers in the higher proficiency level produced the front vowels with
higher F1 values than those in the lower proficiency level.
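The relative-distance computation described above reduces to a few lines; the formant values below are invented placeholders, not measurements from the 47 speakers:

```python
import math

def euclid(v1, v2):
    """Euclidean distance between two vowels in the (F1, F2) plane, in Hz."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

# Placeholder formant values (Hz) for the four front vowels; illustrative only
vowels = {"i": (280, 2250), "I": (400, 1920), "E": (550, 1770), "ae": (690, 1660)}

ref = euclid(vowels["i"], vowels["E"])              # the [i-E] pair defines 100%
rel_i_I = 100.0 * euclid(vowels["i"], vowels["I"]) / ref
rel_E_ae = 100.0 * euclid(vowels["E"], vowels["ae"]) / ref
```

Expressing each pair as a percentage of the [i-ɛ] distance removes overall vowel-space size, which is the normalization the study uses to compare speakers with different vocal tract lengths.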
TUESDAY MORNING, 28 OCTOBER 2014
INDIANA F, 8:00 A.M. TO 11:45 A.M.
Session 2aUW
Underwater Acoustics: Signal Processing and Ambient Noise
Jorge E. Quijano, Chair
University of Victoria, 3800 Finnerty Road, A405, Victoria, BC V8P 5C2, Canada
8:00

2aUW1. Moving source localization and tracking based on data. Tsih C. Yang (Inst. of Undersea Technol., National Sun Yat-sen Univ., 70 Lien Hai Rd., Kaohsiung 80404, Taiwan, tsihyang@gmail.com)

Matched field processing (MFP) was introduced some time ago for source localization, based on the replica field for a hypothesized source location that best matches the acoustic data received on a vertical line array (VLA). A data-based matched-mode source localization method is introduced in this paper for a moving source, using mode wavenumbers and depth functions estimated directly from the data, without requiring any environmental acoustic information or assuming any propagation model to calculate the replica field. The method is in theory free of the environmental mismatch problem, since the mode replicas are estimated from the same data used to localize the source. Besides the estimation error due to the approximations made in deriving the data-based algorithms, the method has some inherent drawbacks: (1) it uses a smaller number of modes than theoretically possible, since some modes are not resolved in the measurements, and (2) the depth search is limited to the depths covered by the receivers. Using simulated data, it is found that the performance degradation due to the above approximations/limitations is marginal compared with the original matched-mode source localization method. Certain aspects of the proposed method have previously been tested against data. The key issues are discussed in this paper.

8:30

2aUW3. Test for eigenspace stationarity applied to multi-rate adaptive beamformer. Jorge E. Quijano (School of Earth and Ocean Sci., Univ. of Victoria, Bob Wright Ctr. A405, 3800 Finnerty Rd., Victoria, BC V8P 5C2, Canada, jorgess39@hotmail.com) and Lisa M. Zurk (Elec. and Comput. Eng. Dept., Portland State Univ., Portland, OR)

Array processing in the presence of moving targets is challenging, since the number of stationary data snapshots available for estimation of the data covariance is limited. For experimental scenarios that include a combination of fast-maneuvering loud interferers and quiet targets, the multi-rate adaptive beamformer (MRABF) can mitigate the effect of non-stationarity. In MRABF, the eigenspace associated with loud interferers is first estimated and removed, followed by application of adaptive beamforming techniques to the remaining, less variable, "target" subspace. Selection of the number of snapshots used for estimation of the interferer eigenspace is crucial to the operation of MRABF, since too few snapshots result in poor eigenspace estimation, while too many snapshots result in leakage of non-stationary interferer effects into the target subspace. In this work, an eigenvector-based test for data stationarity, recently developed in the context of very large arrays with snapshot deficiency, is used as a quantitative method to select the optimal number of snapshots for the estimation of the non-stationary eigenspace. The approach is demonstrated with simulated and experimental data from the Shallow Water Array Performance (SWAP) experiment.
8:15

2aUW2. Simultaneous localization of multiple vocalizing humpback whale calls in an ocean waveguide with a single horizontal array using the array invariant. Zheng Gong, Sunwoong Lee (Mech. Eng., Massachusetts Inst. of Technol., 5-435, 77 Massachusetts Ave., Cambridge, MA 02139, zgong@mit.edu), Purnima Ratilal (Elec. and Comput. Eng., Northeastern Univ., Boston, MA), and Nicholas C. Makris (Mech. Eng., Massachusetts Inst. of Technol., Cambridge, MA)

The array invariant method, previously derived for instantaneous range and bearing estimation of a broadband impulsive source in a horizontally stratified ocean waveguide [Lee and Makris, J. Acoust. Soc. Am. 119, 336–351 (2006)], is generalized to instantaneously and simultaneously localize multiple uncorrelated broadband noise sources that are not necessarily impulsive in the time domain. In an ideal Pekeris waveguide, we theoretically show that source range and bearing can be instantaneously obtained from beam-time migration lines measured with a horizontal array, through range- and bearing-dependent differences that arise between modal group speeds along the array. We also show that this theory is approximately valid in a horizontally stratified ocean waveguide. A transform, similar to the Radon transform, is employed to enable simultaneous localization of multiple uncorrelated broadband noise sources without ambiguity using the array invariant method. The method is applied to humpback whale vocalization data from the Gulf of Maine 2006 Experiment for humpback whale ranges up to tens of kilometers, where it is shown that accurate bearing and range estimation of multiple vocalizing humpback whales can be made simultaneously with little computational effort.

8:45

2aUW4. Design of a coprime array for the North Elba sea trial. Vaibhav Chavali, Kathleen E. Wage (Elec. and Comput. Eng., George Mason Univ., 4307 Ramona Dr., Apt. # H, Fairfax, VA 22030, vchavali@gmu.edu), and John R. Buck (Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth, Dartmouth, MA)

Vaidyanathan and Pal [IEEE Trans. Signal Process., 2011] proposed the use of Coprime Sensor Arrays (CSAs) to sample spatial fields using fewer elements than a Uniform Line Array (ULA) spanning the same aperture. A CSA consists of two interleaved uniform subarrays that are undersampled by coprime factors M and N. The subarrays are processed independently, and their scanned responses are then multiplied to obtain an unaliased output. Although the CSA achieves resolution comparable to that of a fully populated ULA, the CSA beampattern has higher sidelobes. Adhikari et al. [Proc. ICASSP, 2013] showed that extending the subarrays and applying spatial tapers could reduce CSA sidelobes. This paper considers the problem of designing a CSA for the North Elba Sea Trial described by Gingras [SACLANT Tech. Report, 1994]. The experimental dataset consists of receptions recorded by a 48-element vertical ULA in a shallow-water environment for two different source frequencies: 170 Hz and 335 Hz. This paper considers all possible coprime subsamplings for this array and selects the configuration that provides the best tradeoff between number of sensors and performance. Results are shown for both simulated and experimental data. [Work supported by ONR Basic Research Challenge Program.]
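The product processing that underlies a CSA can be sketched in a few lines: each undersampled subarray is scanned conventionally, and the two scanned responses are multiplied, which suppresses the grating lobes the subarrays do not share. The geometry below (M = 3, N = 4 on a half-wavelength grid) is an illustrative assumption, not the North Elba configuration:

```python
import numpy as np

def ula_response(positions, u, u0):
    """Conventional (delay-and-sum) scan of a line array with unit weights.
    positions are in half-wavelength units; u = sin(bearing)."""
    steer = np.exp(1j * np.pi * np.outer(positions, u - u0))
    return np.abs(steer.sum(axis=0)) / len(positions)

# Coprime pair M = 3, N = 4
M, N = 3, 4
pos_a = np.arange(N) * M      # subarray undersampled by M
pos_b = np.arange(M) * N      # subarray undersampled by N
u = np.linspace(-1, 1, 2001)
u0 = 0.3                      # look direction
csa = ula_response(pos_a, u, u0) * ula_response(pos_b, u, u0)
```

Because M and N are coprime, the grating lobes of the two subarrays coincide only outside the visible region, so the product pattern has a single mainlobe at the look direction.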
Contributed Papers
9:00

2aUW5. Localization of a high frequency source in a shallow ocean sound channel using frequency-difference matched field processing. Brian Worthmann (Appl. Phys., Univ. of Michigan, 3385 Oakwood St., Ann Arbor, MI 48104, bworthma@umich.edu), H. C. Song (Marine Physical Lab., Scripps Inst. for Oceanogr., Univ. of California - San Diego, La Jolla, CA), and David R. Dowling (Mech. Eng., Univ. of Michigan, Ann Arbor, MI)

Matched field processing (MFP) is an established technique for locating remote acoustic sources in known environments. Unfortunately, environment-to-propagation-model mismatch prevents successful application of MFP in many circumstances, especially those involving high-frequency signals. For beamforming applications, this problem was found to be mitigated through the use of a nonlinear array-signal-processing technique called frequency-difference beamforming (Abadi et al., 2012). Building on that work, this nonlinear technique was extended to MFP, where Bartlett ambiguity surfaces were calculated at frequencies two orders of magnitude lower than the propagated signal, where the detrimental effects of environmental mismatch are much reduced. In the Kauai Acomms MURI 2011 (KAM11) experiment, underwater signals of frequency 11.2 kHz to 32.8 kHz were broadcast 3 km through a 106-m-deep shallow-ocean sound channel and were recorded by a sparse 16-element vertical array. Using the ray-tracing code Bellhop as the propagation model, frequency-difference MFP was performed, and some degree of success was found in localizing the high-frequency source. In this presentation, the frequency-difference MFP technique is explained, and comparisons of this nonlinear MFP technique with conventional Bartlett MFP using both simulations and KAM11 experimental data are provided. [Sponsored by the Office of Naval Research.]

9:45

2aUW8. Space-time block code with equalization technology for underwater acoustic channels. Chunhui Wang, Xueli Sheng, Lina Fan, Jia Lu, and Weijia Dong (Sci. and Technol. on Underwater Acoust. Lab., College of Underwater Acoust. Engineering, Harbin Eng. Univ., Harbin 150001, China, 740443619@qq.com)
9:15
2aUW6. Transarctic acoustic telemetry. Hee-Chun Song (SIO, UCSD,
9500 Gilman Dr., La Jolla, CA 92093-0238, hcsong@mpl.ucsd.edu), Peter
Mikhalvesky (Leidos Holdings, Inc., Arlington, VA), and Arthur Baggeroer
(Mech. Eng., MIT, Cambridge, MA)
On April 9 and 13, 1999, two Arctic Climate Observation using Underwater Sound (ACOUS) tomography signals were transmitted from a 20.5-Hz
acoustic source moored at the Franz Victoria Strait to an 8-element, 525-m
vertical array at ice camp APLIS in the Chukchi Sea at a distance of approximately 2720 km. The transmitted signal was a 20-min long, 255-digit msequence that can be treated as a binary-phase shift-keying communication
signal with a data rate of 2 bits/s. The almost error-free performance using either spatial diversity (three elements) for a single transmission or temporal diversity (two transmissions) with a single element demonstrates the feasibility
of ice-covered trans-Arctic acoustic communications.
9:30
2aUW7. Performance of adaptive multichannel decision-feedback
equalization in the simulated underwater acoustic channel. Xueli Sheng,
Lina Fan (Sci. and Technol. on Underwater Acoust. Lab., Harbin Eng.
Univ., Harbin Eng. University Shuisheng Bldg. 803, Nantong St. 145, Harbin, Heilongjiang 150001, China, shengxueli@aliyun.com), Aijun Song,
and Mohsen Badiey (College of Earth, Ocean, and Environment, Univ. of
Delaware, Newark, DE)
Adaptive multichannel decision feedback equalization [M. Stojanovic, J.
Catipovic, and J. G. Proakis, J. Acoust. Soc. Am. 94, 1621–1631 (1993)] is
widely adopted to address the severe inter-symbol interference encountered
in the underwater acoustic communication channel. In this presentation, its
performance will be reported in the simulated communication channel provided by a ray-based acoustic model, for different ocean conditions and
source-receiver geometries. The ray model uses the Rayleigh parameter to
prescribe the sea surface effects on the acoustic signal. It also supports different types of sediment. The ray model output has been compared with the
experimental data and shows comparable results in transmission loss. We
will also compare against the performance of multichannel decision feedback equalization supported by existing ray models, for example,
BELLHOP.
2148
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
In order to combat the effects of multipath interference and fading for
underwater acoustic (UWA) channels, this paper investigates a scheme of
the combination of space-time block code (STBC) and equalization technology. STBC is used in this scheme to reduce the effects of fading for the
UWA channels, then equalization technology is used in this scheme to mitigate intersymbol interference. The performance of the scheme is analyzed
with Alamouti space-time coding for the UWA channels. Our simulations
indicate that the combination of STBC and equalization technology provides
lower bit error rates.
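For reference, the Alamouti scheme named in the abstract above works as sketched below. This is a generic textbook implementation of the 2x1 encoder and maximum-ratio combiner for a flat-fading channel (NumPy, noiseless case), not the authors' simulation code; the QPSK constellation and channel gains are illustrative assumptions.

```python
import numpy as np

def alamouti_encode(symbols):
    """Map a symbol stream onto two transmit antennas, two symbols per slot pair.

    Slot 1: antenna 1 sends s1, antenna 2 sends s2.
    Slot 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1).
    """
    s = np.asarray(symbols, dtype=complex).reshape(-1, 2)
    tx1 = np.column_stack([s[:, 0], -np.conj(s[:, 1])]).ravel()
    tx2 = np.column_stack([s[:, 1], np.conj(s[:, 0])]).ravel()
    return tx1, tx2

def alamouti_combine(r, h1, h2):
    """Combine two received slots (one receive antenna, channel static over the pair)."""
    r = np.asarray(r, dtype=complex).reshape(-1, 2)
    r1, r2 = r[:, 0], r[:, 1]
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2      # diversity gain of the 2x1 scheme
    return np.column_stack([s1_hat, s2_hat]).ravel() / gain

# QPSK symbols through an assumed noiseless flat-fading channel
rng = np.random.default_rng(0)
bits = rng.integers(0, 4, size=8)
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
h1, h2 = 0.8 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 1.1)
tx1, tx2 = alamouti_encode(syms)
rx = h1 * tx1 + h2 * tx2
est = alamouti_combine(rx, h1, h2)
```

Without noise the combiner recovers the transmitted symbols exactly; in the talk's setting, the equalizer would additionally remove the intersymbol interference that the flat-fading assumption ignores.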
10:00–10:15 Break
10:15
2aUW9. Robust focusing in time-reversal mirror with a virtual source
array. Gi Hoon Byun and Jea Soo Kim (Ocean Eng., Korea Maritime and Ocean Univ., Dongsam 2-dong, Yeongdo-gu, Busan, South Korea, knitpia77@gmail.com)
The effectiveness of time-reversal (TR) focusing has been demonstrated in various fields of ocean acoustics. In TR focusing, a probe source is required to achieve a coherent acoustic focus at the original probe source location. Recently, the need for a probe source has been partially relaxed by the introduction of the concept of a virtual source array (VSA) [S. C. Walker, Philippe Roux, and W. A. Kuperman, J. Acoust. Soc. Am. 125(6), 3828–3834 (2009)]. In this study, an adaptive time-reversal mirror (ATRM) based on the multiple-constraint method [J. S. Kim, H. C. Song, and W. A. Kuperman, J. Acoust. Soc. Am. 109(5), 1817–1825 (2001)] and a singular value decomposition (SVD) method are applied to a VSA for robust focusing. Numerical simulation results are presented and discussed.
10:30
2aUW10. Wind generated ocean noise in deep sea. Fenghua Li and
Jingyan Wang (State Key Lab. of Acoust., Inst. of Acoust., CAS, No. 21
Beisihuanxi Rd., Beijing 100190, China, lfh@mail.ioa.ac.cn)
Ocean noise is an important topic in underwater acoustics and has received much attention in recent decades. Ocean noise sources may include wind, biological sources, ships, earthquakes, and so on. This paper discusses measurements of ocean noise intensity in the deep sea during strong wind periods. During the experiment, the shipping density was low enough that wind-generated noise is believed to be the dominant effect in the observed frequency range. Analyses of the recorded noise data reveal that the wind-generated noise source has a strong dependence on wind speed and frequency. Based on the data, a wind-generated noise source model is presented. [Work supported by National Natural Science Foundation of China, Grant No. 11125420.]
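Wind-noise source models of the kind described above are commonly parameterized, at each frequency, as a level that grows linearly with the logarithm of wind speed. The sketch below fits such a hypothetical model with NumPy; the wind speeds and levels are invented for illustration and are not the authors' data.

```python
import numpy as np

# Hypothetical measurements at one frequency band:
# noise spectral level (dB re 1 uPa^2/Hz) versus wind speed (m/s)
wind_speed = np.array([5.0, 7.0, 10.0, 14.0, 20.0])
noise_level = np.array([52.1, 55.0, 58.2, 61.1, 64.0])

# Fit NL = a + b*log10(U); the slope b is the wind dependence in dB per decade
b, a = np.polyfit(np.log10(wind_speed), noise_level, 1)

def predicted_level(u):
    """Predicted noise spectral level (dB) at wind speed u (m/s)."""
    return a + b * np.log10(u)
```

A separate fit per frequency band then yields the frequency dependence that the abstract reports alongside the wind-speed dependence.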
10:45
2aUW11. Ocean ambient noise in the North Atlantic during 1966 and 2013–2014. Ana Sirovic, Sean M. Wiggins, John A. Hildebrand (Scripps Inst. of Oceanogr., UCSD, 9500 Gilman Dr. MC 0205, La Jolla, CA 92093-0205, asirovic@ucsd.edu), and Mark A. McDonald (Whale Acoust., Bellvue, CO)

Low-frequency ocean ambient noise has been increasing in many parts of the world's oceans as a result of increased shipping. Calibrated passive acoustic recordings were collected from June 2013 to March 2014 on the south side of Bermuda in the North Atlantic, at a location where ambient noise data were collected in 1966. Monthly and hourly mean power spectra (15–1000 Hz) were calculated, in addition to skewness, kurtosis, and percentile distributions. Average spectrum levels at 40 Hz, representing shipping noise, ranged from 78 to 80 dB re 1 μPa²/Hz, with a peak in March and a minimum in July and August. Values recorded during this recent period were similar to those recorded during 1966. This is different from trends observed in the Northern Pacific, where ocean ambient noise has been increasing; however, the location of this monitoring site was exposed only to shipping lanes to the south of Bermuda. At frequencies dominated by wind and waves (500 Hz), noise levels ranged from 55 to 66 dB re 1 μPa²/Hz, indicating that low sea states (2–3) prevailed during the summer and higher sea states (4–5) during the winter. A seasonally important contribution to ambient sound also came from marine mammals, such as blue and fin whales.

11:00

2aUW12. Adaptive passive fathometer processing of surface-generated noise received by a nested array. Junghun Kim and Jee W. Choi (Marine Sci. and Convergent Technol., Hanyang Univ., 1271 Sa-3-dong, Ansan 426-791, South Korea, Kimjh0927@hanyang.ac.kr)

Recently, a passive fathometer technique using surface-generated ambient noise has been applied to the estimation of bottom profiles. This technique performs beamforming of the ambient noise received by a vertical line array to estimate the sub-bottom layer structure as well as the water depth. In previous works, the surface-noise signal processing was performed with equally spaced line arrays, and the main topic of the research was the comparison of results estimated using several beamforming techniques. In this talk, results estimated from the ambient noise received by a nested vertical line array (called POEMS), which consists of 24 elements in four sub-bands, are presented. The measurements were made on the eastern coast (East Sea) of Korea. Four kinds of beamforming algorithms are applied to each sub-band, and nested array processing combining the sub-band signals was also performed to obtain the best result. The results are compared to the bottom profiles from the chirp sonar. [This research was supported by the Agency for Defense Development, Korea.]

11:15

2aUW13. Feasibility of low-frequency acoustic thermometry using deep ocean ambient noise in the Atlantic, Pacific, and Indian Oceans. Katherine F. Woolfe and Karim G. Sabra (Mech. Eng., Georgia Inst. of Technol., 672 Brookline St SW, Atlanta, GA 30310, katherine.woolfe@gmail.com)

Previous work has demonstrated the feasibility of passive acoustic thermometry using coherent processing of low-frequency ambient noise (1–40 Hz) recorded on triangular hydrophone arrays spaced ~130 km apart and located in the deep sound channel. These triangular arrays are part of hydroacoustic stations of the International Monitoring System operated by the Comprehensive Nuclear-Test-Ban Treaty Organization (Woolfe et al., J. Acoust. Soc. Am. 134, 3983). To understand how passive thermometry could potentially be extended to ocean-basin scales, we present a comprehensive study of the coherent components of low-frequency ambient noise recorded on five hydroacoustic stations located in the Atlantic, Pacific, and Indian Oceans. The frequency dependence and seasonal variability of the spatial coherence and directionality of the low-frequency ambient noise were systematically examined at each of the tested site locations. Overall, a dominant coherent component of the low-frequency noise was found to be caused by seasonal ice-breaking events at the poles for test sites that have line-of-sight paths to polar ice. These findings could be used to guide the placement of hydrophone arrays over the globe for future long-range passive acoustic thermometry experiments.

11:30

2aUW14. Ambient noise in the Arctic Ocean measured with a drifting vertical line array. Peter F. Worcester, Matthew A. Dzieciuch (Scripps Inst. of Oceanogr., Univ. of California, San Diego, 9500 Gilman Dr., 0225, La Jolla, CA 92093-0225, pworcester@ucsd.edu), John A. Colosi (Dept. of Oceanogr., Naval Postgrad. School, Monterey, CA), and John N. Kemp (Woods Hole Oceanographic Inst., Woods Hole, MA)

In mid-April 2013, a Distributed Vertical Line Array (DVLA) with 22 hydrophone modules over a 600-m aperture immediately below the subsurface float was moored near the North Pole. The top ten hydrophones were spaced 14.5 m apart. The distances between the remaining hydrophones increased geometrically with depth. Temperature and salinity were measured by thermistors in the hydrophone modules and ten Sea-Bird MicroCATs. The mooring parted just above the anchor shortly after deployment and subsequently drifted slowly south toward Fram Strait until it was recovered in mid-September 2013. The DVLA recorded low-frequency ambient noise (1953.125 samples per second) for 108 minutes six days per week. Previously reported noise levels in the Arctic are highly variable, with periods of low noise when the wind is low and the ice is stable and periods of high noise associated with pressure ridging. The Arctic is currently undergoing dramatic changes, including reductions in the extent and thickness of the ice cover, the amount of multiyear ice, and the size of the ice keels. The ambient noise data collected as the DVLA drifted will test the hypothesis that these changes result in longer and more frequent periods of low-noise conditions than experienced in the past.
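The coherent-noise processing underlying the passive thermometry of 2aUW13 (and, in spirit, the passive fathometer of 2aUW12) rests on a simple fact: averaging cross-correlations of diffuse noise received at two points builds up a peak at the inter-receiver travel time. The one-dimensional synthetic sketch below illustrates this; the sample rate, delay, and white-noise field are assumptions for illustration, not the experimental data.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0            # assumed sample rate (Hz)
delay = 25             # true inter-receiver travel time, in samples
nsnap, nsamp = 50, 1024

acc = np.zeros(2 * nsamp - 1)
for _ in range(nsnap):
    src = rng.standard_normal(nsamp + delay)   # one noise realization
    near = src[delay:]                         # receiver nearer the noise source
    far = src[:nsamp]                          # same field, heard 'delay' samples later
    acc += np.correlate(far, near, mode="full")
acc /= nsnap                                   # snapshot-averaged cross-correlation

lag = np.argmax(acc) - (nsamp - 1)             # lag of the emergent peak
travel_time = lag / fs                         # seconds between the receivers
```

In the thermometry application, repeated estimates of this travel time over months to years track sound-speed, and hence temperature, changes along the path.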
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 7/8, 1:00 P.M. TO 4:30 P.M.
Session 2pAA
Architectural Acoustics and Engineering Acoustics: Architectural Acoustics and Audio II
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Invited Papers
1:00
2pAA1. Defining home recording spaces. Sebastian Otero (Acustic-O, Laurel 14, San Pedro Martir, Tlalpan, Mexico, D.F. 14650,
Mexico, sebastian@acustic-o.com)
The idea of home recording has been widely used throughout the audio and acoustics community for some time. The effort and investment put into these projects fluctuate across such a wide spectrum that there is no clear way to unify the concept of a "home studio," making it difficult for acoustical consultants and clients to reach an understanding of each other's project goals. This paper analyzes different spaces which vary in terms of privacy, comfort, size, audio quality, budget, type of materials, acoustic treatments, types of projects developed, and equipment, but which can all be called "home recording spaces," in order to develop a more specific classification of these environments.
1:20
2pAA2. Vibrato parameterization. James W. Beauchamp (School of Music and Elec. & Comput. Eng., Univ. of Illinois at Urbana-Champaign, 1002 Eliot Dr., Urbana, IL 61801-6824, jwbeauch@illinois.edu)
In an effort to improve the quality of synthetic vibrato, many musical instrument tones with vibrato have been analyzed and
frequency-vs-time curves have been parameterized in terms of a time-varying offset and a time-varying vibrato depth. Results for variable mean F0 and instrument are presented. Whereas vocal vibrato appears to sweep out the resonance characteristic of the vocal tract,
as shown by amplitude-vs-frequency curves for the superposition of a range of harmonics, amplitude-vs-frequency curves for instruments are dominated by hysteresis effects that obscure their interpretation in terms of resonance characteristics. Nevertheless, there is a
strong correlation between harmonic amplitude and frequency modulations. An effort is being made to parameterize this effect in order
to provide efficient and expressive synthesis of vibrato tones with independent control of vibrato rate and tone duration.
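The offset-plus-depth parameterization of the frequency-vs-time curve can be illustrated on a synthetic F0 track. The decomposition below (moving-average offset over one vibrato period, magnitude of the analytic signal of the residual as depth) is a generic sketch under assumed parameter values, not necessarily the author's algorithm.

```python
import numpy as np

fs = 200.0                      # frame rate of the F0 track (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
rate = 5.5                      # vibrato rate (Hz), assumed
offset_true = 440.0 + 10.0 * t  # slowly drifting mean F0 (Hz)
depth_true = 6.0 + 2.0 * t      # widening vibrato depth (Hz)
f0 = offset_true + depth_true * np.sin(2 * np.pi * rate * t)

# Time-varying offset: moving average over roughly one vibrato period
period = int(round(fs / rate))
offset_est = np.convolve(f0, np.ones(period) / period, mode="same")

# Time-varying depth: magnitude of the analytic signal of the residual
residual = f0 - offset_est
spec = np.fft.fft(residual)
n = len(spec)
spec[n // 2 + 1:] = 0           # zero negative frequencies...
spec[1:n // 2] *= 2             # ...and double positive ones -> analytic signal
depth_est = np.abs(np.fft.ifft(spec))
```

Away from the track edges, `offset_est` and `depth_est` recover the drifting mean F0 and the widening depth, giving the two time-varying parameters the abstract describes.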
1:40
2pAA3. Get real: Improving acoustic environments in video games. Yuri Lysoivanov (Recording Arts, Tribeca Flashpoint Media
Arts Acad., 28 N. Clark St. Ste. 500, Chicago, IL 60602, yuri.lysoivanov@tfa.edu)
As processing power grows the push for realism in video games continues to expand. However, techniques for generating realistic
acoustic environments in games have often been limited. Using examples from major releases, this presentation will take a historical perspective on interactive environment design, discuss current methods for modeling acoustic environments in games and suggest specific
cases where acoustic expertise can provide an added layer to the interactive experience.
2:00
2pAA4. Applications of telematic mixing consoles in networked audio for musical performance, spatial audio research, and
sound installations. David J. Samson and Jonas Braasch (Rensselaer Polytechnic Inst., 1521 6th Ave., Apt. 303, Troy, NY 12180, samsod2@rpi.edu)
In today’s technologically driven world, the ability to connect across great distance via Internet Protocol is more important than
ever. As the technology evolves, so do the art and science that rely upon it for collaboration and growth. Developing a state-of-the-art system for flexible and efficient routing of networked audio provides a platform for experimental musicians, researchers, and artists
to create freely without the restrictions imposed by traditional telepresence. Building on previous development and testing of a telematic
mixing console while addressing critical issues with the current platform and current practice, the console allows for the integration of
high-quality networked audio into computer assisted virtual environments (CAVE systems), sound and art installations, and other audio
driven research projects. Through user study, beta testing, and integration into virtual audio environments, the console has evolved to
meet the demand for power and flexibility critical to multi-site collaboration with high-quality networked audio. Areas of concern
addressed in development are computational efficiency, system latency, routing architecture, and results of continued user study.
2:20
2pAA5. Twenty years of electronic architecture in the Hilbert Circle Theatre. Paul Scarbrough (Akustiks, 93 North Main St., South
Norwalk, CT 06854, pscarbrough@akustiks.com) and Steve Barbar (E-coustic Systems, Belmont, MA)
In 1984, the Circle Theatre underwent a major renovation, transforming the original 3000+ seat venue into a 1780-seat hall with reclaimed internal volume dedicated to a new lobby and an orchestra rehearsal space. In 1996, a LARES acoustic enhancement system
replaced the original electronic architecture system, and has been used in every performance since that time. We will discuss details of
the renovation, the incorporation of the electronic architecture with other acoustical treatments, system performance over time, and plans
for the future.
2:40
2pAA6. Equalization and compression—Friends or foes? Alexander U. Case (Sound Recording Technol., Univ. of Massachusetts
Lowell, 35 Wilder St., Ste. 3, Lowell, MA 01854, alex@fermata.biz)
These two essential signal processors have overlapping capabilities. Tuning a sound system for any function requires complementary
interaction between equalization and compression. The timbral impact of compression is indirect, and can be counterintuitive. A deeper
understanding of compression parameters, particularly attack and release, clarifies the connection between compression and tone and
makes coordination with equalization more productive.
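As a concrete illustration of why attack and release dominate the timbral result, here is a minimal feed-forward compressor sketch: a static dB-domain gain computer followed by one-pole attack/release smoothing. It is a generic textbook design with illustrative parameter values, not a model of any particular device.

```python
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=50.0):
    """Feed-forward compressor: static gain curve plus attack/release ballistics."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)
    # Static gain computer: above threshold, output rises only 1/ratio dB per dB
    over = np.maximum(level_db - threshold_db, 0.0)
    target_gain_db = -over * (1.0 - 1.0 / ratio)
    # One-pole smoothing: fast when gain must fall (attack), slow on recovery (release)
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    gain_db = np.empty_like(x)
    g = 0.0
    for n, tgt in enumerate(target_gain_db):
        a = a_att if tgt < g else a_rel
        g = a * g + (1 - a) * tgt
        gain_db[n] = g
    return x * 10 ** (gain_db / 20.0)

fs = 48000
t = np.arange(0, 0.2, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)   # full-scale sine, 20 dB over threshold
y = compress(x, fs)
```

Shortening the attack flattens transients before the static curve fully applies; lengthening the release lets gain reduction linger, which is exactly the indirect, sometimes counterintuitive timbral effect the abstract refers to.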
3:00–3:15 Break
3:15

2pAA7. Analysis of room acoustical characteristics by plane wave decomposition using spherical microphone arrays. Jin Yong Jeon, Muhammad Imran, and Hansol Lim (Dept. of Architectural Eng., Hanyang Univ., 17 Haengdang-dong, Seongdong-gu, Seoul, 133791, South Korea, jyjeon@hanyang.ac.kr)

Room acoustical characteristics have been investigated through the temporal and spatial structures of room impulse responses (IRs) at different audience positions in real halls. A 32-channel spherical microphone array is used for the measurement process. Specular and diffusive reflections in the IRs have been visualized in the temporal domain with sound-field decomposition analysis. For the plane wave decomposition, spherical harmonics are used. A beamforming technique is also employed to make directional measurements and for the spatio-temporal characterization of the sound field. The directional measurements by beamforming produce impulse responses for different directions to characterize the sound. From the estimated spatial characterization, the reflective surfaces of the hall responsible for specular and diffusive reflections are identified.

3:30

2pAA8. Comparing the acoustical nature of a compressed earth block residence to a traditional wood-framed residence. Daniel Butko (Architecture, The Univ. of Oklahoma, 830 Van Vleet Oval, Norman, OK 73019, butko@ou.edu)

Various lost, misunderstood, or abandoned materials and methods throughout history can serve as viable options in today's impasse of nature and mankind. Similar to the 19th-century resurgence of concrete, there is a developing interest in earth as an architectural material capable of dealing with unexpected fluctuations and rising climate changes. Studying the acoustical nature of earthen construction can also serve as a method of application beyond aesthetics and thermal comfort. Innovations using Compressed Earth Block (CEB) have been developed and researched over the past few decades, and CEB has recently been the focus of a collaborative team of faculty and students at a NAAB-accredited College of Architecture, an ABET-accredited College of Engineering, and a local chapter of Habitat for Humanity. The multidisciplinary research project resulted in the design and simultaneous construction of both a CEB residence and a conventionally wood-framed version of equal layout, area, volume, apertures, and roof structure on adjacent sites to prove the structural, thermal, economic, and acoustical value of CEB as a viable residential building material. This paper presents acoustical measurements of both residences, such as STC, OITC, TL, NC, FFT, frequency responses, and background noise levels prior to occupancy.

3:45

2pAA9. A case study of a high-end residential condominium building acoustical design and field performance testing. Erik J. Ryerson and Tom Rafferty (Acoust., Shen Milsom & Wilke, LLC, 2 North Riverside Plaza, Ste. 1460, Chicago, IL 60606, eryerson@smwllc.com)

A high-end multi-owner condominium building complex consists of 314 units configured in a central tower with a 39-story central core, as well as 21- and 30-story side towers. A series of project-specific acoustical design considerations related to horizontal and vertical acoustical separation between condominium units, as well as background noise control for building HVAC systems, were developed for the project construction documents and later field tested to confirm conformance with the acoustical design criteria. This paper presents the results of these building-wide field tests as well as a discussion of pitfalls encountered during design, construction, and post-construction.
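The directional scanning described in 2pAA7 above can be illustrated, in simplified form, by a delay-and-sum (Bartlett) scan over plane-wave directions for a 32-element open spherical array. The geometry, analysis frequency, noise level, and arrival direction below are assumptions for illustration; the spherical-harmonic decomposition itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
c = 343.0                  # speed of sound (m/s)
f = 2000.0                 # analysis frequency (Hz), assumed
k = 2 * np.pi * f / c
r = 0.15                   # array radius (m), illustrative

# 32 microphones quasi-uniformly placed on a sphere (Fibonacci spiral)
i = np.arange(32)
z = 1 - 2 * (i + 0.5) / 32
phi = np.pi * (1 + 5 ** 0.5) * i
mics = r * np.column_stack([np.sqrt(1 - z**2) * np.cos(phi),
                            np.sqrt(1 - z**2) * np.sin(phi), z])

def steering(u):
    """Plane-wave phases at the mics for unit propagation direction u."""
    return np.exp(1j * k * (mics @ u))

# Simulated reflection arriving from azimuth 40 deg, elevation 20 deg, plus noise
az0, el0 = np.radians(40.0), np.radians(20.0)
u0 = np.array([np.cos(el0) * np.cos(az0), np.cos(el0) * np.sin(az0), np.sin(el0)])
p = steering(u0) + 0.05 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))

# Delay-and-sum scan over a coarse direction grid; the peak marks the reflection
best, best_dir = -np.inf, None
for az in np.arange(0.0, 360.0, 5.0):
    for el in np.arange(-80.0, 81.0, 5.0):
        a, e = np.radians(az), np.radians(el)
        u = np.array([np.cos(e) * np.cos(a), np.cos(e) * np.sin(a), np.sin(e)])
        resp = np.abs(np.vdot(steering(u), p)) / 32
        if resp > best:
            best, best_dir = resp, (az, el)
```

Applying such a scan in short time windows of a measured IR yields the directional impulse responses the abstract describes; the spherical-harmonic formulation refines this with mode-strength compensation for the rigid sphere.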
4:00
2pAA10. Innovative ways to make cross laminated timber panels
sound-absorptive. Banda Logawa and Murray Hodgson (Mech. Eng., Univ.
of Br. Columbia, 2160-2260 West Mall, Vancouver, BC, Canada, logawa_
b@yahoo.com)
Cross Laminated Timber (CLT) panels typically consist of several glued
layers of wooden boards with orthogonally alternating directions. This
cross-laminating process allows CLT panels to be used as load-bearing plate
elements similar to concrete slabs. However, they are very sound-reflective,
which can lead to concerns about acoustics. Growing interest in applications
of CLT panels as building materials in North America has initiated much
current research on their acoustical properties. This project is aimed at
investigating ways to improve the sound-absorption characteristics of the
panels by integrating arrays of Helmholtz-resonator (HR) absorbers into the
panels and establishing design guidelines for CLT-HR absorber panels for
various room-acoustical applications. To design the new prototype panels,
several efforts have been made to measure and analyze the sound-absorption
characteristics of the exposed CLT surfaces in multiple buildings in British
Columbia, investigate suitable methods and locations to measure both normal- and random-incidence sound-absorption characteristics, study the current manufacturing method of CLT panels, create acoustic models of CLT-HR absorber panels with various shapes and dimensions, and evaluate the
sound absorption performance of prototype panels. This paper will report
progress on this work.
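For reference, the resonance frequency of a single Helmholtz-resonator cell of the kind proposed above follows the standard lumped-element formula f0 = (c / 2π) · sqrt(S / (V · L_eff)), with an end-corrected neck length. The dimensions below are illustrative, not taken from the CLT-HR prototypes.

```python
import numpy as np

def helmholtz_f0(c, neck_radius, neck_length, cavity_volume):
    """Resonance frequency (Hz) of a Helmholtz resonator.

    Uses f0 = c/(2*pi) * sqrt(S / (V * L_eff)), with the common
    end correction L_eff = L + 1.7*a for a flanged circular neck.
    """
    S = np.pi * neck_radius ** 2            # neck cross-section (m^2)
    L_eff = neck_length + 1.7 * neck_radius # end-corrected neck length (m)
    return c / (2 * np.pi) * np.sqrt(S / (cavity_volume * L_eff))

# Illustrative cell: 10-mm neck radius, 20-mm neck, 0.5-L cavity
f0 = helmholtz_f0(343.0, 0.010, 0.020, 0.5e-3)
```

Tuning cells to different f0 values across a panel is what lets an HR array absorb over a usefully wide band rather than at a single frequency.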
2p TUE. PM
Contributed Papers
4:15

2pAA11. Investigating the persistence of sound frequencies in indoor television decors. Mohsen Karami (Dept. of Media Eng., IRIB Univ., No. 8, Dahmetry 4th Alley, Bahar Ave., Kermanshah, Kermanshah 6718839497, Iran, mohsenkarami.ir@gmail.com)

Adding decor to a television studio creates semi-enclosed spaces, reduces the absorption of the sound waves striking the studio surfaces, and changes the frequencies at which sound energy is absorbed. To address this issue, reverberation times were measured in various studios, for twelve decors used in IRIB channel programs, using pink-noise playback with a B&K 2260 analyzer in accordance with the ISO 3382 standard. The survey shows that the values obtained for all of the decors exhibit a persistence of high frequencies, and that this effect occurred regardless of the shape of the decor and the studio.
TUESDAY AFTERNOON, 28 OCTOBER 2014
LINCOLN, 1:25 P.M. TO 5:00 P.M.
Session 2pAB
Animal Bioacoustics: Topics in Animal Bioacoustics II
Cynthia F. Moss, Chair
Psychological and Brain Sci., Johns Hopkins Univ., 3400 N. Charles St., Ames Hall 200B, Baltimore, MD 21218
Chair’s Introduction—1:25
Contributed Papers
1:30
2pAB1. Amplitude shifts in the cochlear microphonics of Mongolian
gerbils created by noise exposure. Shin Kawai and Hiroshi Riquimaroux
(Life and Medical Sci., Doshisha Univ., 1-3 Miyakotani, Tatara, Kyotanabe
610-0321, Japan, hrikimar@mail.doshisha.ac.jp)
The Mongolian gerbil (Meriones unguiculatus) was used to evaluate the effects of intense noise exposure on the functions of the hair cells. Cochlear microphonics (CM) served as an index of hair-cell function. The purpose of this study was to determine which frequency was most damaged by noise exposure and to examine relationships between that frequency and the animal's behaviors. We measured the growth and recovery of temporary shifts in the amplitude of the CM. The CM was recorded from the round window. Test stimuli were tone bursts (1–45 kHz in half-octave steps) with a duration of 50 ms (5-ms rise/fall times). The subject was exposed to broadband noise (0.5 to 60 kHz) at 90 dB SPL for 5 minutes. Threshold shifts were measured for the test tone bursts from immediately after the exposure up to 120 minutes after the exposure. The findings showed a reduction in CM amplitude after the noise exposure. A particularly large reduction was produced around 22.4 kHz, whereas little reduction was observed around 4 kHz.
1:45
2pAB2. Detection of fish calls by using the small underwater sound recorder. Ikuo Matsuo (Tohoku Gakuin Univ., Tenjinzawa 2-1-1, Izumi-ku, Sendai 9813193, Japan, matsuo@cs.tohoku-gakuin.ac.jp), Tomohito Imaizumi, and Tomonari Akamatsu (National Res. Inst. of Fisheries Eng., Fisheries Res. Agency, Kamisu, Japan)

Passive acoustic monitoring has been widely used for surveys of marine mammals. The method can be applied to any sound-producing creatures in the ocean. Many fish, including croakers, grunts, and snappers, produce species-specific low-frequency sounds associated with courtship and spawning behavior in chorus groups. In this paper, acoustic data accumulated by an autonomous small underwater recorder were used for sound detection analysis. The recorder was set on the sea floor off the coast of Choshi, Japan (35°40′55″ N, 140°49′14″ E). The observed signals include not only the target fish calls (white croaker) but also calls of other marine life and noise from vessels. We tried to extract the target fish calls from these sounds. First, recordings were processed with a bandpass filter (400–2400 Hz) to eliminate low-frequency noise contamination. Second, a low-frequency filter was applied to extract the envelope of the waveform and identify high-intensity sound units, which are possibly fish calls. Third, parameter tuning was conducted to fit the detection of the target fish calls using absolute received intensity and duration. With this method, 28,614 fish calls could be detected from the signals observed during 130 hours. Compared with manually identified fish calls, the correct-detection and false-alarm rates were 0.88 and 0.03, respectively. [This work was supported by CREST, JST.]
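The three-step detector described above (band-pass filtering, envelope extraction, then intensity and duration thresholds) can be sketched generically as follows. The sample rate, thresholds, and the synthetic "call" are illustrative stand-ins for the recorder data; only the 400–2400 Hz band is taken from the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 11025                              # assumed recorder sample rate (Hz)
t = np.arange(0, 3.0, 1 / fs)

rng = np.random.default_rng(3)
x = 0.02 * rng.standard_normal(t.size)  # background noise
call = (t > 1.0) & (t < 1.3)            # synthetic 0.3-s "call" at 1 kHz
x[call] += 0.3 * np.sin(2 * np.pi * 1000 * t[call])

# 1) band-pass 400-2400 Hz to suppress low-frequency noise
b, a = butter(4, [400, 2400], btype="bandpass", fs=fs)
y = filtfilt(b, a, x)

# 2) envelope via the analytic signal, smoothed by a short moving average
env = np.abs(hilbert(y))
env = np.convolve(env, np.ones(256) / 256, mode="same")

# 3) intensity threshold, then keep only events long enough to be calls
above = (env > 0.1).astype(int)
starts = np.flatnonzero(np.diff(np.concatenate(([0], above))) == 1)
ends = np.flatnonzero(np.diff(np.concatenate((above, [0]))) == -1)
events = [(s, e) for s, e in zip(starts, ends) if (e - s) / fs >= 0.1]
```

In practice the intensity and duration limits are exactly the parameters tuned against the manually identified calls to trade correct detections against false alarms.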
2:00
2pAB3. Changes in note order stereotypy during learning in two species
of songbird, measured with automatic note classification. Benjamin N.
Taft (Landmark Acoust. LLC, 1301 Cleveland Ave., Racine, WI 53405,
ben.taft@landmarkacoustics.com)
In addition to mastering the task of performing the individual notes of a song, many songbirds must also learn to produce each note in a stereotyped order. As a bird practices its new song, it may perform thousands of bouts, providing a rich source of information about how note phonology and note-type order change during learning. A combination of acoustic landmark descriptions, neural-network and hierarchical-clustering classifiers, and Markov models of note order made it possible to measure note-order stereotypy in two species of songbird. Captive swamp sparrows (Melospiza georgiana, 11 birds, 92063 notes/bird) and wild tree swallows (Tachycineta bicolor, 18 birds, 448 syllables/bird) were recorded during song development. The predictability of swamp sparrow note order showed a significant increase during the month-long recording period (F(1,162) = 9977, p < 0.001). Note-order stereotypy in tree swallows also increased by a significant amount over a month-long field season (Mann-Whitney V = 12, p < 0.001). Understanding changes in song stereotypy can improve our knowledge of vocal learning, performance, and cultural transmission.
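Note-order predictability of the kind measured above can be quantified from a first-order Markov model, for example as the mean probability of each note type's most likely successor: 1.0 for a fully stereotyped song, near 1/n for random ordering. The note sequences below are hypothetical; this is a generic sketch, not the study's analysis code.

```python
import numpy as np

def transition_matrix(seq, n_types):
    """First-order Markov transition probabilities from a note-type sequence."""
    counts = np.zeros((n_types, n_types))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1            # avoid dividing empty rows
    return counts / rows

def stereotypy(seq, n_types):
    """Mean max row probability: 1.0 = fully stereotyped order, ~1/n = random."""
    P = transition_matrix(seq, n_types)
    used = np.unique(seq[:-1])     # score only note types that actually occur
    return P[used].max(axis=1).mean()

rng = np.random.default_rng(4)
stereotyped = np.tile([0, 1, 2, 3], 50)       # fixed A-B-C-D song, 200 notes
random_order = rng.integers(0, 4, size=200)   # no preferred note order
```

Tracking this statistic bout by bout over a recording period gives exactly the kind of rising stereotypy curve the abstract reports.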
3:15
2pAB4. Plugin architecture for creating algorithms for bioacoustic signal
processing software. Christopher A. Marsh, Marie A. Roch (Dept. of Comput. Sci., San Diego State Univ., 5500 Campanile Dr., San Diego, CA 921827720, cmarsh@rohan.sdsu.edu), and David K. Mellinger (Cooperative Inst.
for Marine Resources Studies, Oregon State Univ., Newport, OR)
2pAB7. Temporal patterns in detections of sperm whales (Physeter
macrocephalus) in the North Pacific Ocean based on long-term passive
acoustic monitoring. Karlina Merkens (Protected Species Div., NOAA Pacific Islands Fisheries Sci. Ctr., NMFS/PIFSC/PSD/Karlina Merkens, 1845
Wasp Blvd., Bldg. 176, Honolulu, HI 96818, karlina.merkens@noaa.gov),
Anne Simonis (Scripps Inst. of Oceanogr., Univ. of California San Diego,
La Jolla, CA), and Erin Oleson (Protected Species Div., NOAA Pacific
Islands Fisheries Sci. Ctr., Honolulu, HI)
There are several acoustic monitoring software packages that allow for
the creation and execution of algorithms that automate detection, classification, and localization (DCL). Algorithms written for one program are generally not portable to other programs, and usually must be written in a specific
programming language. We have developed an application programming
interface (API) that seeks to resolve these issues by providing a plugin
framework for creating algorithms for two acoustic monitoring packages:
Ishmael and PAMGuard. This API will allow new detection, classification,
and localization algorithms to be written for these programs without requiring knowledge of the monitoring software’s source code or inner workings,
and lets a single implementation run on either platform. The API also allows
users to write DCL algorithms in a wide variety of languages. We hope that
this will promote the sharing and reuse of algorithm code. [Funding from
ONR.]
2:30
2pAB5. Acoustic detection of migrating gray, humpback, and blue
whales in the coastal, northeast Pacific. Regina A. Guazzo, John A. Hildebrand, and Sean M. Wiggins (Scripps Inst. of Oceanogr., Univ. of California, San Diego, 9450 Gilman Dr., #80237, La Jolla, CA 92092, rguazzo@
ucsd.edu)
Many large cetaceans of suborder Mysticeti make long annual migrations along the California coast. A bottom-mounted hydrophone was
deployed in shallow water off the coast of central California and recorded
during November 2012 to September 2013. The recording was used to
determine the presence of blue whales, humpback whales, and gray whales.
Gray whale calls were further analyzed and the number of calls per day and
per hour were calculated. It was found that gray whales make their migratory M3 calls at a higher rate than previously observed. There were also
more M3 calls recorded at night than during the day. This work will be continued to study the patterns and interactions between species and compared
with shore-based survey data.
2:45
2pAB6. Importing acoustic metadata into the Tethys scientific workbench/database. Sean T. Herbert (Marine Physical Lab., Scripps Inst. of
Oceanogr., 8237 Lapiz Dr., San Diego, CA 92126, sth.email@gmail.com)
and Marie A. Roch (Comput. Sci., San Diego State Univ., San Diego,
CA)
Tethys is a temporal-spatial scientific workbench/database created to
enable the aggregation and analysis of acoustic metadata from recordings
such as animal detections and localizations. Tethys stores data in a specific
format and structure, but researchers produce and store data in various formats. Examples of storage formats include spreadsheets, relational databases, or comma-separated value (CSV) text files. Thus, one aspect of the
Tethys project has been to focus on providing options to allow data import
regardless of the format in which it is stored. Data import can be accomplished in one of two ways. The first is translation, which transforms source
data from other formats into the format Tethys uses. Translation does not
require any programming, but rather the specification of an import map
which associates the researcher’s data with Tethys fields. The second
method is a framework called Nilus that enables detection and localization
algorithms to create Tethys formatted documents directly. Programs can either be designed around Nilus, or be modified to make use of it, which does
require some programming. These two methods have been used to successfully import over 4.5 million records into Tethys. [Work funded by NOPP/
ONR/BOEM.]
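The import-map translation described above can be sketched in a few lines of Python; the column and field names below are purely illustrative and do not reflect Tethys's actual schema.

```python
import csv
import io

# Hypothetical import map associating a researcher's CSV columns (keys)
# with Tethys-style detection fields (values). These names are
# illustrative only, not Tethys's real schema.
IMPORT_MAP = {
    "SpeciesID": "species",
    "Start": "start_utc",
    "End": "end_utc",
    "Call": "call_type",
}

def translate(csv_text, import_map):
    """Translate CSV rows into records keyed by the target fields."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {target: row[source] for source, target in import_map.items()}
        for row in reader
    ]

csv_text = (
    "SpeciesID,Start,End,Call\n"
    "Gg,2013-01-05T02:11:00Z,2013-01-05T02:11:04Z,whistle\n"
)
records = translate(csv_text, IMPORT_MAP)
```

The point of the import map is exactly this decoupling: the researcher specifies the association once, and no programming is needed for each new source format.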
3:00–3:15 Break
2153
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Sperm whales (Physeter macrocephalus), a long-lived, cosmopolitan
species, are well suited for long-term studies, and their high amplitude echolocation signals make them ideal for passive acoustic monitoring. NOAA’s
Pacific Islands Fisheries Science Center has deployed High-frequency
Acoustic Recording Packages (200 kHz sampling rate) at 13 deep-water
locations across the central and western North Pacific Ocean since 2005.
Recordings from all sites were manually analyzed for sperm whale signals,
and temporal patterns were examined on multiple scales. There were sperm
whale detections at all sites, although the rate of detection varied by location, with the highest rate at Wake Island (15% of samples), and the fewest
detections at sites close to the equator (<1%). Only two locations (Saipan
and Pearl and Hermes Reef) showed significant seasonal patterns, with more
detections in early spring and summer than in late summer or fall. There
were no significant patterns relating to lunar cycles. Analysis of diel variation revealed that sperm whales were detected more during the day and
night compared to dawn and dusk at most sites. The variability shown in
these results emphasizes the importance of assessing basic biological patterns and variations in the probability of detection before progressing to further analysis, such as density estimation, where the effects of uneven
sampling effort could significantly influence results.
3:30
2pAB8. Automatic detection of tropical fish calls recorded on moored
acoustic recording platforms. Maxwell B. Kaplan, T. A. Mooney (Biology, Woods Hole Oceanographic Inst., 266 Woods Hole Rd., MS50, Woods
Hole, MA 02543, mkaplan@whoi.edu), and Jim Partan (Appl. Ocean Phys.
and Eng., Woods Hole Oceanographic Inst., Woods Hole, MA)
Passive acoustic recording of biological sound production on coral reefs
can help identify spatial and temporal differences among reefs; however,
the contributions of individual fish calls to overall trends are often overlooked. Given that the diversity of fish call types may be indicative of fish
species diversity on a reef, quantifying these call types could be used as a
proxy measure for biodiversity. Accordingly, automatic fish call detectors
are needed because long acoustic recorder deployments can generate large
volumes of data. In this investigation, we report the development and performance of two detectors—an entropy detector, which identifies troughs in
entropy (i.e., uneven distribution of entropy across the frequency band of interest, 100–1000 Hz), and an energy detector, which identifies peaks in root
mean square sound pressure level. Performance of these algorithms is
assessed against a human identification of fish sounds recorded on a coral
reef in the US Virgin Islands in 2013. Results indicate that the entropy and
energy detectors, respectively, have false positive rates of 9.9% and 9.9%
with false negative rates of 28.8% and 31.3%. These detections can be used
to cluster calls into types, in order to assess call type diversity at different
reefs.
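The two detector ideas above (troughs in spectral entropy within the 100–1000 Hz band, and peaks in RMS level) can be sketched on a synthetic recording with an artificial tonal "call"; frame sizes, band edges, and thresholds here are illustrative, not the authors' settings.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice a 1-D signal into overlapping frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def spectral_entropy(frames, fs, fmin=100.0, fmax=1000.0):
    """Shannon entropy of the normalized power spectrum in a band."""
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(frames.shape[1], 1.0 / fs)
    p = spec[:, (freqs >= fmin) & (freqs <= fmax)]
    p = p / p.sum(axis=1, keepdims=True)
    return -(p * np.log2(p + 1e-12)).sum(axis=1)

fs = 4000
t = np.arange(fs) / fs                       # 1 s of signal
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(fs)           # background noise
burst = (t >= 0.4) & (t < 0.5)
# tone placed on an FFT bin (406.25 Hz = bin 26 of a 256-pt FFT) to keep
# the example clean
x[burst] += np.sin(2 * np.pi * 406.25 * t[burst])

frames = frame_signal(x, 256, 128)
H = spectral_entropy(frames, fs)             # entropy detector statistic
L = np.sqrt((frames ** 2).mean(axis=1))      # energy (RMS) detector statistic
ent_hits = np.where(H < H.mean() - 2 * H.std())[0]   # entropy troughs
eng_hits = np.where(L > L.mean() + 2 * L.std())[0]   # energy peaks
```

Both statistics flag the frames containing the tonal burst; on real reef recordings the two detectors differ mainly in their false-alarm behavior, as the reported rates suggest.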
3:45
2pAB9. Social calling behavior in Southeast Alaskan humpback whales
(Megaptera novaeangliae): Communication and context. Michelle Fournet (Dept. of Fisheries and Wildlife, Oregon State Univ., 425 SE Bridgeway
Ave., Corvallis, OR 97333, mbellalady@gmail.com), Andrew R. Szabo
(Alaska Whale Foundation, Petersburg, AK), and David K. Mellinger (Cooperative Inst. for Marine Resources Studies, Oregon State Univ., Newport,
OR)
Across their range humpback whales (Megaptera novaeangliae) produce
a wide array of vocalizations including song, foraging vocalizations, and a
variety of non-song vocalizations known as social calls. This study investigates the social calling behavior of Southeast Alaskan humpback whales
from a sample of 299 vocalizations paired with 365 visual surveys collected
168th Meeting: Acoustical Society of America
over a three-month period on a foraging ground in Frederick Sound in
Southeast Alaska. Vocalizations were classified using visual-aural analysis,
statistical cluster analyses, and discriminant function analysis. The relationship between vocal behavior and spatial-behavioral context was analyzed
using a Poisson log-linear regression (PLL). Preliminary results indicate
that some call types were commonly produced while others were rare, and
that the greatest variety of calling occurred when whales were clustered.
Moreover, calling rates in one vocal class, the pulsed (P) vocal class, were
negatively correlated with mean nearest-neighbor distance, indicating that P
calling rates increased as animals clustered, suggesting that the use of P
calls may be spatially mediated. The data further suggest that vocal behavior
varies based on social context, and that vocal behavior trends toward complexity as the potential for social interactions increases. [Work funded by
Alaska Whale Foundation and ONR.]
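The Poisson log-linear regression used above can be illustrated with a small numpy-only fit by iteratively reweighted least squares, on synthetic counts; the coefficients and the distance covariate below are invented for the example.

```python
import numpy as np

def poisson_loglinear_fit(X, y, n_iter=30):
    """Fit E[y] = exp(X @ beta) by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())      # start at the intercept-only model
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        # Newton/IRLS step: (X^T diag(mu) X) delta = X^T (y - mu)
        delta = np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
        beta += delta
    return beta

# Synthetic example: call counts fall off as nearest-neighbor distance grows.
rng = np.random.default_rng(1)
n = 400
dist = rng.uniform(0.1, 2.0, n)              # mean nearest-neighbor distance
X = np.column_stack([np.ones(n), dist])
y = rng.poisson(np.exp(2.0 - 0.8 * dist))    # true intercept 2.0, slope -0.8
beta = poisson_loglinear_fit(X, y)
```

A negative fitted slope, as recovered here, is the pattern the abstract reports for P calls: rates increase as nearest-neighbor distance shrinks.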
4:00
2pAB10. First measurements of humpback whale song received sound
levels recorded from a tagged calf. Jessica Chen, Whitlow W. L. Au
(Hawaii Inst. of Marine Biology, Univ. of Hawaii at Manoa, 46-007 Lilipuna Rd., Kaneohe, HI 96744, jchen2@hawaii.edu), and Adam A. Pack
(Departments of Psych. and Biology, Univ. of Hawaii at Hilo, Hilo, HI)
There is increasing concern over the potential ecological effects from
high levels of oceanographic anthropogenic noise on marine mammals. Current US NOAA regulations on received noise levels as well as the Draft
Guidance for Assessing the Effect of Anthropogenic Sound on Marine
Mammals are based on limited studies conducted on few species. For the
regulations to be effective, it is important to first understand what whales
hear and their received levels of natural sounds. This novel study presents
the measurement of sound pressure levels of humpback whale song received
at a humpback whale calf in the wintering area of Maui, Hawaii. This individual was tagged with an Acousonde acoustic and data recording tag, which captured vocalizations from a singing male escort associated with the calf
and its mother. Although differences in behavioral reaction to anthropogenic
versus natural sounds have yet to be quantified, this represents the first
known measurement of sound levels that a calf may be exposed to naturally
from conspecifics. These levels can also be compared to calculated humpback song source levels. Despite its recovering population, the humpback
whale is an endangered species, and understanding its acoustic environment is important for continued regulation and protection.
4:15
2pAB11. Seismic airgun surveys and vessel traffic in the Fram Strait
and their contribution to the polar soundscape. Sharon L. Nieukirk,
Holger Klinck, David K. Mellinger, Karolin Klinck, and Robert P. Dziak
(Cooperative Inst. for Marine Resources Studies, Oregon State Univ., 2030
SE Marine Sci. Dr., Newport, OR 97365, sharon.nieukirk@oregonstate.
edu)
Low-frequency (<1 kHz) noise associated with human offshore activities has increased dramatically over the last 50 years. Of special interest are
areas such as the Arctic where anthropogenic noise levels are relatively low
but could change dramatically, as sea ice continues to shrink and trans-polar
shipping routes open. In 2009, we began an annual deployment of two calibrated autonomous hydrophones in the Fram Strait to record underwater ambient sound continuously for one year at a sampling rate of 2 kHz. Ambient
noise levels were summarized via long-term spectral average plots and
reviewed for anthropogenic sources. Vessel traffic data were acquired from
the Automatic Identification System (AIS) archive and ship density was
estimated by weighting vessel tracklines by vessel length. Background noise
levels were dominated by sounds from seismic airguns during spring,
summer, and fall months; during summer these sounds were recorded at all
hours of the day and on all days of the month. Ship density in the Fram Strait
peaked in late summer and increased every year. Future increases in ship
traffic and seismic surveys coincident with melting sea ice will increase ambient noise levels, potentially affecting the numerous species of acoustically
active whales using this region.
4:30
2pAB12. Using the dynamics of mouth opening in echolocating bats to
predict pulse parameters among individual Eptesicus fuscus. Laura N.
Kloepper, James A. Simmons (Dept. of Neurosci., Brown Univ., 185 Meeting St. Box GL-N, Brown University Providence, RI 02912, laura_kloepper@brown.edu), and John R. Buck (Elec. and Comput. Eng., Univ. of
Massachusetts Dartmouth, Dartmouth, MA)
The big brown bat (Eptesicus fuscus) produces echolocation sounds in
its larynx and emits them through its open mouth. Individual mouth-opening
cycles last for about 50 ms, with the sound produced in the middle, when
the mouth is approaching or reaching maximum gape angle. In previous
work, the mouth gape-angle at pulse emission only weakly predicted pulse
duration and the terminal frequency of the first-harmonic FM downsweep.
In the present study, we investigated whether the dynamics of mouth opening around the time of pulse emission predict additional pulse waveform
characteristics. Mouth angle openings for 24 ms before and 24 ms after
pulse emission were compared to pulse waveform parameters for three big
brown bats performing a target detection task. In general, coupling to the air
through the mouth seems less important than laryngeal factors for determining acoustic parameters of the broadcasts. Differences in mouth opening dynamics and pulse parameters among individual bats highlight this relation.
[Supported by NSF and ONR.]
4:45
2pAB13. Investigating whistle characteristics of three overlapping populations of false killer whales (Pseudorca crassidens) in the Hawaiian
Islands. Yvonne M. Barkley, Erin Oleson (NOAA Pacific Islands Fisheries
Sci. Ctr., 1845 Wasp Blvd., Bldg. 176, Honolulu, HI 96818, yvonne.barkley@noaa.gov), and Julie N. Oswald (Bio-Waves, Inc., Encinitas, CA)
Three genetically distinct populations of false killer whales (Pseudorca
crassidens) reside in the Hawaiian Archipelago: two insular populations
(one within the main Hawaiian Islands [MHI] and the other within the
Northwestern Hawaiian Islands [NWHI]), and a wide-ranging pelagic population with a distribution overlapping the two insular populations. The
mechanisms that created and maintain the separation among these populations are unknown. To investigate the distinctiveness of whistles produced
by each population, we adapted the Real-time Odontocete Call Classification Algorithm (ROCCA) whistle classifier to classify false killer whale
whistles to population based on 54 whistle measurements. A total of 911 whistles
from the three populations were included in the analysis. Results show that
the MHI population is vocally distinct, with up to 80% of individual whistles correctly classified. The NWHI and pelagic populations achieved
between 48 and 52% correct classification for individual whistles. We evaluated the sensitivity of the classifier to the input whistle measurements to
determine which variables are driving the classification results. Understanding how these three populations differ acoustically may improve the efficacy
of the classifier and create new acoustic monitoring approaches for a difficult-to-study species.
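Per-population correct-classification rates of the kind reported above can be computed as a simple confusion summary; the labels and predictions below are toy values, not ROCCA output.

```python
import numpy as np

def per_class_rates(y_true, y_pred, labels):
    """Percentage of whistles from each population correctly classified."""
    return {lab: 100.0 * np.mean(y_pred[y_true == lab] == lab) for lab in labels}

# Toy example for the three populations (values invented for illustration).
y_true = np.array(["MHI"] * 5 + ["NWHI"] * 4 + ["pelagic"] * 4)
y_pred = np.array(["MHI", "MHI", "MHI", "MHI", "NWHI",      # 4/5 MHI correct
                   "NWHI", "NWHI", "MHI", "pelagic",        # 2/4 NWHI correct
                   "pelagic", "pelagic", "NWHI", "MHI"])    # 2/4 pelagic correct
rates = per_class_rates(y_true, y_pred, ["MHI", "NWHI", "pelagic"])
```

This per-class view, rather than a single overall accuracy, is what distinguishes the vocally distinct MHI population (high rate) from the near-chance NWHI and pelagic populations.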
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA G, 1:45 P.M. TO 4:15 P.M.
Session 2pAO
Acoustical Oceanography: General Topics in Acoustical Oceanography
John A. Colosi, Chair
Department of Oceanography, Naval Postgraduate School, 833 Dyer Road, Monterey, CA 93943
Contributed Papers
1:45
2pAO1. Analysis of sound speed fluctuations in the Fram Strait near
Greenland during summer 2013. Kaustubha Raghukumar, John A. Colosi
(Oceanogr., Naval Postgrad. School, 315B Spanagel Hall, Monterey, CA
93943, kraghuku@nps.edu), and Peter F. Worcester (Scripps Inst. of Oceanogr., Univ. of California San Diego, San Diego, CA)
We analyze sound speed fluctuations in roughly 600 m deep polar waters
from a recent experiment. The Thin-ice Arctic Acoustics Window
(THAAW) experiment was conducted in the waters of Fram Strait, east of
Greenland, during the summer of 2013. A drifting acoustic mooring that
incorporated environmental sensors measured temperature and salinity over
a period of four months, along a 500 km north-south transect. We examine
the relative contributions of salinity-driven polar internal wave activity and of
temperature/salinity variability along isopycnal surfaces (spice) to sound
speed perturbations in the Arctic. Both internal-wave and spice effects are
compared against the more general deep water PhilSea09 measurements.
Additionally, internal wave spectra, energies, and modal bandwidth are
compared against the well-known Garrett-Munk spectrum. Given the resurgence of interest in polar acoustics, we believe that this analysis will help
parameterize sound speed fluctuations in future acoustic propagation
models.
2:00
2pAO2. Sound intensity fluctuations due to mode coupling in the presence of nonlinear internal waves in shallow water. Boris Katsnelson (Marine GeoSci., Univ. of Haifa, Mt Carmel, Haifa 31905, Israel, katz@phys.vsu.ru), Valery Grogirev (Phys., Voronezh Univ., Voronezh, Russian Federation), and James Lynch (WHOI, Woods Hole, MA)
Intensity fluctuations of low-frequency LFM signals (band 270–330 Hz) were observed in the Shallow Water 2006 experiment in the presence of a moving train of about seven separate nonlinear internal waves crossing the acoustic track at an angle of ~80°. It is shown that the spectrum of the sound intensity fluctuations, calculated over the radiation period (about 7.5 minutes), contains a few peaks corresponding to a predominant frequency of ~6.5 cph (and its harmonics) and a small peak at a comparatively high frequency, about 30 cph, which the authors interpret as a manifestation of horizontal refraction. The values of these frequencies are in accordance with the theory of mode coupling and horizontal refraction on moving nonlinear internal waves developed earlier by the authors. [Work was supported by BSF.]
2:15
2pAO3. A comparison of measured and forecast acoustic propagation
in a virtual denied area characterized by a heterogeneous data collection asset-network. Yong-Min Jiang and Alberto Alvarez (Res. Dept.,
NATO-STO-Ctr. for Maritime Res. and Experimentation, Viale San Bartolomeo 400, La Spezia 19126, Italy, jiang@cmre.nato.int)
The fidelity of sonar performance predictions depends on the model
used and the quantity and quality of the environmental information that is
available. To investigate the impact of the oceanographic information collected by a heterogeneous and near-real time adaptive network of robots in a
simulated access denied area, a field experiment (REP13-MED) was conducted by CMRE during August 2013 in an area (70 × 81 km) located offshore La Spezia (Italy), in the Ligurian Sea. The sonar performance assessment makes use of acoustic data recorded by a vertical line array at
source—receiver ranges from 0.5 to 30 km. Continuous wave pulses at multiple frequencies (300–600 Hz) were transmitted at two source depths, 25
and 60 meters, at each range. At least 60 pings were collected for each
source depth to build up the statistics of the acoustic received level and
quantify the measurement uncertainty. A comparison of the acoustic transmission loss measured and predicted using an ocean prediction model
(ROMS) assimilating the observed oceanographic data is presented, and the
performance of the observational network is evaluated. [Work funded by
NATO–Allied Command Transformation]
2:30
2pAO4. Performance assessment of a short hydrophone array for
seabed characterization using natural-made ambient noise. Peter L.
Nielsen (Res. Dept., STO-CMRE, V.S. Bartolomeo 400, La Spezia 19126,
Italy, nielsen@cmre.nato.int), Martin Siderius, and Lanfranco Muzi (Dept.
of Elec. and Comput. Eng., Portland State Univ., Portland, OR)
The passive acoustic estimate of seabed properties using natural-made
ambient noise received on a glider-equipped hydrophone array provides the
capability to perform long duration seabed characterization surveys on
demand in denied areas. However, short and compact arrays associated with
gliders are limited to a few hydrophones and small aperture. Consequently,
these arrays exhibit lower resolution of the estimated seabed properties, and
the reliability of the environmental estimates may be questionable. The
objective of the NATO-STO CMRE sea trial REP14-MED (conducted west
of Sardinia, Mediterranean Sea) is to evaluate the performance of a prototype glider array with eight hydrophones in a line and variable hydrophone
spacing for seabed characterization using natural-made ambient noise. This
prototype array is deployed vertically above the seabed together with a 32-element reference vertical line array. The arrays are moored at different sites
with varying sediment properties and stratification. The seabed reflection
properties and layering structure at these sites are estimated from ambient
noise using both arrays and the results are compared to assess the glider
array performance. Synthetic extension of the glider array is performed to
enhance resolution of the bottom properties, and the results are compared
with those from the longer reference array.
2:45
2pAO5. Species classification of individual fish using the support vector
machine. Atsushi Kinjo, Masanori Ito, Ikuo Matsuo (Tohoku Gakuin Univ.,
Tenjinzawa 2-1-1, Izumi-ku, Sendai, Miyagi 981-3193, Japan, atsushi.
kinjo@gmail.com), Tomohito Imaizumi, and Tomonari Akamatsu (Fisheries Res. Agency, National Res. Inst. of Fisheries Eng., Hasaki, Ibaraki,
Japan)
Fish species classification using echo-sounders is important for fisheries. In the case of a fish school of mixed species, it is necessary to classify
individual fish species by isolating echoes from multiple fish. A broadband
signal, which offered the advantage of high range resolution, was applied to
detect individual fish for this purpose. The positions of fish were estimated
from the time differences of arrival by using the split-beam system. The target strength (TS) spectrum of an individual fish echo was computed from the
isolated echo and the estimated position. In this paper, the Support Vector
Machine was introduced to classify fish species by using these TS spectra.
In addition, it is well known that the TS spectra are dependent on not only
fish species but also fish size. Therefore, it is necessary to classify both fish
species and size by using these features. We tried to classify two species
and two sizes of schools. Subject species were chub mackerel (Scomber
japonicus) and Japanese jack mackerel (Trachurus japonicus). We calculated classification rates while limiting the training data, frequency bandwidth, and tilt angles. The best classification rate obtained was 71.6%.
[This research was supported by JST, CREST.]
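As an illustration of SVM-based classification from TS spectra, here is a minimal linear SVM trained by Pegasos-style sub-gradient descent on synthetic "spectra"; a real analysis would use a library SVM (and typically a nonlinear kernel) on measured TS data.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Minimal linear SVM via Pegasos-style SGD; labels y must be in {-1,+1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (X[i] @ w + b)
            w *= 1.0 - eta * lam            # regularization shrinkage
            if margin < 1:                  # hinge-loss sub-gradient update
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

# Synthetic TS "spectra": two species with energy peaks in different bands.
rng = np.random.default_rng(3)
n, bins = 100, 32
X = rng.standard_normal((2 * n, bins))
X[:n, 5:10] += 2.0     # species A: low-band feature
X[n:, 20:25] += 2.0    # species B: high-band feature
y = np.concatenate([np.ones(n), -np.ones(n)])
w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

Extending this to the two-species, two-size problem in the abstract would mean training one classifier per class pair (or a multiclass SVM) on shape- and size-sensitive spectral features.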
3:00–3:15 Break
3:15
2pAO6. Waveform inversion of ambient noise cross-correlation functions in a coastal ocean environment. Xiaoqin Zang, Michael G. Brown,
Neil J. Williams (RSMAS, Univ. of Miami, 4600 Rickenbacker Cswy.,
Miami, FL 33149, xzang@rsmas.miami.edu), Oleg A. Godin (ESRL,
NOAA, Boulder, CO), Nikolay A. Zabotin, and Liudmila Zabotina (CIRES,
Univ. of Colorado, Boulder, CO)
Approximations to Green’s functions have been obtained by cross-correlating concurrent records of ambient noise measured on near-bottom instruments at 5 km range in a 100 m deep coastal ocean environment. Inversion
of the measured cross-correlation functions presents a challenge as neither
ray nor modal arrivals are temporally resolved. We exploit both ray and
modal expansion of the wavefield to address the inverse problem using two
different parameterizations of the seafloor structure. The inverse problem is
investigated by performing an exhaustive search over the relevant parameter
space to minimize the integrated squared difference between computed and
measured correlation function waveforms. To perform the waveform-based
analysis described, it is important that subtle differences between correlation
functions and Green’s functions are accounted for. [Work supported by NSF
and ONR.]
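The core idea of noise interferometry, that the cross-correlation of two noise records peaks at the inter-receiver travel time, can be sketched with synthetic data; the delay, sampling rate, and noise levels below are invented for the example.

```python
import numpy as np

fs = 1000.0
n = 2 ** 15
rng = np.random.default_rng(4)
# A common noise wavefield arrives at receiver B 0.5 s after receiver A;
# each receiver also sees independent local noise.
source = rng.standard_normal(n)
delay = int(0.5 * fs)
rec_a = source + 0.5 * rng.standard_normal(n)
rec_b = np.roll(source, delay) + 0.5 * rng.standard_normal(n)

# Frequency-domain cross-correlation (circular, adequate for this sketch)
ccf = np.fft.irfft(np.fft.rfft(rec_a).conj() * np.fft.rfft(rec_b))
lag_s = np.argmax(ccf) / fs   # recovered inter-receiver travel time
```

In the field case described above, the subtle differences between such a CCF and the true Green's function are exactly what the waveform inversion must account for.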
3:30
2pAO7. Application of time reversal to acoustic noise interferometry in
shallow water. Boris Katsnelson (Marine GeoSci., Univ. of Haifa, 1, Universitetskaya sq, Voronezh 394006, Russian Federation, katz@phys.vsu.ru),
Oleg Godin (Univ. of Colorado, Boulder, CO), Jixing Qin (State Key Lab,
Inst. of Acoust., Beijing, China), Nikolai Zabotin, Liudmila Zabotina (Univ.
of Colorado, Boulder, CO), Michael Brown, and Neil Williams (Univ. of
Miami, Miami, FL)
The two-point cross-correlation function (CCF) of diffuse acoustic noise
approximates the Green’s function, which describes deterministic sound
propagation between the two measurement points. Similarity between CCFs
and Green’s functions motivates application to acoustic noise interferometry
of the techniques that were originally developed for remote sensing using
broadband, coherent compact sources. Here, time reversal is applied to
CCFs of the ambient and shipping noise measured in 100-m-deep water
in the Straits of Florida. Noise was recorded continuously for about six days
at three points near the seafloor by pairs of hydrophones separated by 5.0,
9.8, and 14.8 km. In numerical simulations, a strong focusing occurs in the
vicinity of one hydrophone when the Green’s function is back-propagated
from the other hydrophone, with the position and strength of the focus being
sensitive to density, sound speed, and attenuation coefficient in the bottom. Values of these parameters in the experiment are estimated by optimizing focusing of the back-propagated CCFs. The results are consistent with the values of the seafloor parameters evaluated independently by other means.
3:45
2pAO8. Shear wave inversion in a shallow coastal environment. Gopu
R. Potty (Dept. of Ocean Eng., Univ. of Rhode Island, 115 Middleton Bldg.,
Narragansett, RI 02882, potty@egr.uri.edu), Jennifer L. Giard (Marine
Acoust., Inc., Middletown, RI), James H. Miller, Christopher D. P. Baxter
(Dept. of Ocean Eng., Univ. of Rhode Island, Narragansett, RI), Marcia J.
Isakson, and Benjamin M. Goldsberry (Appl. Res. Labs., The Univ. of
Texas at Austin, Austin, TX)
Estimation of the shear properties of seafloor sediments in littoral waters is important in modeling the acoustic propagation and predicting the strength of sediments for geotechnical applications. One of the promising approaches to estimate shear speed is by using the dispersion of seismo-acoustic interface (Scholte) waves that travel along the water-sediment boundary. The propagation speed of the Scholte waves is closely related to the shear wave speed over a depth of 1–2 wavelengths into the seabed. A geophone system for the measurement of these interface waves, along with an inversion scheme that inverts the Scholte wave dispersion data for sediment shear speed profiles, has been developed. The components of this inversion scheme are a genetic algorithm and a forward model based on a dynamic stiffness matrix approach. The effects of the assumptions of the forward model on the inversion, particularly the shear wave depth profile, will be explored using a finite element model. The results obtained from a field test conducted in very shallow waters in Davisville, RI, will be presented. These results are compared to historic estimates of shear speed and recently acquired vibracore data. [Work sponsored by ONR, Ocean Acoustics.]
4:00
2pAO9. The effects of pH on acoustic transmission loss in an estuary.
James H. Miller (Ocean Eng., Univ. of Rhode Island, URI Bay Campus, 215
South Ferry Rd., Narragansett, RI 02882, miller@egr.uri.edu), Laura Kloepper (Neurosci., Brown Univ., Providence, RI), Gopu R. Potty (Ocean Eng.,
Univ. of Rhode Island, Narragansett, RI), Arthur J. Spivack, Steven
D’Hondt, and Cathleen Turner (Graduate School of Oceanogr., Univ. of
Rhode Island, Narragansett, RI)
Increasing atmospheric CO2 will cause the ocean to become more acidic
with pH values predicted to be more than 0.3 units lower over the next 100
years. These lower pH values have the potential to reduce the absorption
component of transmission loss associated with dissolved boron. Transmission loss effects have been well studied for deep water where pH is relatively stable over time-scales of many years. However, estuarine and coastal
pH can vary daily or seasonally by about 1 pH unit and cause fluctuations in
one-way acoustic transmission loss of 2 dB over a range of 10 km at frequencies of 1 kHz or higher. These absorption changes can affect the sound
pressure levels received by animals due to identifiable sources such as
impact pile driving. In addition, passive and active sonar performance in
these estuarine and coastal waters can be affected by these pH fluctuations.
Absorption changes in these shallow water environments offer a potential
laboratory to study their effect on ambient noise due to distributed sources
such as shipping and wind. We introduce an inversion technique based on
perturbation methods to estimate the depth-dependent pH profile from measurements of normal mode attenuation. [Miller and Potty supported by ONR
322OA.]
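The pH sensitivity of low-frequency absorption enters through the boric-acid relaxation term; a sketch using the Francois–Garrison (1982) form of that term follows. The coefficients are transcribed from memory of the paper and should be verified against it before any quantitative use.

```python
import math

def boric_acid_absorption(f_khz, temp_c, salinity, depth_m, ph):
    """Boric-acid component of seawater absorption [dB/km], after the
    Francois-Garrison (1982) model (coefficients from memory; verify
    against the paper before quantitative use)."""
    c = 1412.0 + 3.21 * temp_c + 1.19 * salinity + 0.0167 * depth_m  # m/s
    theta = temp_c + 273.0
    f1 = 2.8 * math.sqrt(salinity / 35.0) * 10.0 ** (4.0 - 1245.0 / theta)  # kHz
    a1 = (8.86 / c) * 10.0 ** (0.78 * ph - 5.0)    # dB/km/kHz
    return a1 * f1 * f_khz ** 2 / (f1 ** 2 + f_khz ** 2)

# One-way absorption over 10 km at 1 kHz for a 1-pH-unit swing,
# roughly the daily/seasonal estuarine variation cited above.
hi = boric_acid_absorption(1.0, 15.0, 35.0, 10.0, 8.1) * 10.0
lo = boric_acid_absorption(1.0, 15.0, 35.0, 10.0, 7.1) * 10.0
delta_db = hi - lo
```

Because only the A1 coefficient depends on pH in this term, a 1-unit pH change scales the boric-acid absorption by a factor of 10^0.78, which is the mechanism behind the transmission-loss fluctuations described above.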
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA A/B, 1:30 P.M. TO 5:45 P.M.
Session 2pBA
Biomedical Acoustics: Quantitative Ultrasound II
Michael Oelze, Chair
UIUC, 405 N Mathews, Urbana, IL 61801
Contributed Papers
1:30
2pBA1. Receiver operating characteristic analysis for the detectability
of malignant breast lesions in acousto-optic transmission ultrasound
breast imaging. Jonathan R. Rosenfield (Dept. of Radiology, The Univ. of
Chicago, 5316 South Dorchester Ave., Apt. 423, Chicago, IL 60615, jrosenfield@uchicago.edu), Jaswinder S. Sandhu (Santec Systems Inc., Arlington
Heights, IL), and Patrick J. La Rivière (Dept. of Radiology, The Univ. of
Chicago, Chicago, IL)
Conventional B-mode ultrasound imaging has proven to be a valuable
supplement to x-ray mammography for the detection of malignant breast
lesions in premenopausal women with high breast density. We have developed a high-resolution transmission ultrasound breast imaging system
employing a novel acousto-optic (AO) liquid crystal detector to enable rapid
acquisition of full-field breast ultrasound images during routine cancer
screening. In this study, a receiver operating characteristic (ROC) analysis
was performed to assess the diagnostic utility of our prototype system.
Using a comprehensive system model, we simulated the AO transmission
ultrasound images expected for a 1-mm malignant lesion contained within a
dense breast consisting of 75% normal breast parenchyma and 25% fat tissue. A Gaussian noise model was assumed with SNRs ranging from 0 to 30.
For each unique SNR, an ROC curve was constructed and the area under the
curve (AUC) was computed to assess the lesion detectability of our system.
For SNRs in excess of 10, the analysis revealed AUCs greater than 0.8983,
thus demonstrating strong detectability. Our results indicate the potential for
using an imaging system of this kind to improve breast cancer screening
efforts by reducing the high false negative rate of mammography in premenopausal women.
1:45
2pBA2. 250-MHz quantitative acoustic microscopy for assessing human
lymph-node microstructure. Daniel Rohrbach (Lizzi Ctr. for Biomedical
Eng., Riverside Res., 156 William St., 9th Fl., New York City, NY 10038,
drohrbach@RiversideResearch.org), Emi Saegusa-Beecroft (Dept. of General Surgery, Univ. of Hawaii and Kuakini Medical Ctr., Honolulu, HI),
Eugene Yanagihara (Kuakini Medical Ctr., Dept. of Pathol., Honolulu, HI),
Junji Machi (Dept. of General Surgery, Univ. of Hawaii and Kuakini Medical Ctr., Honolulu, HI), Ernest J. Feleppa, and Jonathan Mamou (Lizzi Ctr.
for Biomedical Eng., Riverside Res., New York, NY)
We employed quantitative acoustic microscopy (QAM) to measure
acoustic properties of tissue microstructure. Thirty-two QAM datasets were acquired from 2 fresh and 11 deparaffinized, 12-µm-thick lymph-node samples obtained from cancer patients. Our custom-built acoustic microscope was equipped with an F-1.16, 250-MHz transducer having a 160-MHz bandwidth to acquire reflected signals from the tissue and a substrate that intimately contacted the tissue. QAM images with a spatial resolution of 7 µm
were generated of attenuation (A), speed of sound (SOS), and acoustic impedance (Z). Samples then were stained using hematoxylin and eosin,
imaged by light microscopy, and co-registered to QAM images. The spatial
resolution and contrast of QAM images were sufficient to distinguish tissue
regions consisting of lymphocytes, fat cells and fibrous tissue. Average
properties for lymphocyte-dominated tissue were 1552.6 ± 30 m/s for SOS, 9.53 ± 3.6 dB/MHz/cm for A, and 1.58 ± 0.08 Mrayl for Z. Values for Z obtained from fresh samples agreed well with those obtained from 12-µm
sections from the same node. Such 2D images provide a basis for developing improved ultrasound-scattering models underlying quantitative ultrasound methods currently used to detect cancerous regions within lymph
nodes. [NIH Grant R21EB016117.]
2:00
2pBA3. Detection of sub-micron lipid droplets using transmission-mode
attenuation measurements in emulsion phantoms and liver. Wayne
Kreider, Ameen Tabatabai (CIMU, Appl. Phys. Lab., Univ. of Washington,
1013 NE 40th St., Seattle, WA 98105, wkreider@uw.edu), Adam D. Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA), and Yak-Nam
Wang (CIMU, Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
In liver transplantation, donor liver viability is assessed by both the
amount and type of fat present in the organ. General guidelines dictate that
livers with more than 30% fat should not be transplanted; however, a lack of
available donor organs has led to the consideration of livers with more fat.
As a part of this process, it is desirable to distinguish micro-vesicular fat
(<1 µm droplets) from macro-vesicular fat (~10 µm droplets). A method of
evaluating the relative amounts of micro- and macro-fat is proposed based
on transmission-mode ultrasound attenuation measurements. For an emulsion of one liquid in another, attenuation comprises both intrinsic losses in
each medium and excess attenuation associated with interactions between
media. Using an established coupled-phase model, the excess attenuation
associated with a monodisperse population of lipid droplets was calculated
with physical properties representative of both liver tissue and dairy products. Calculations predict that excess attenuation can exceed intrinsic attenuation and that a well-defined peak in excess attenuation at 1 MHz should
occur for droplets around 0.8 µm in diameter. Such predictions are consistent with preliminary transmission-mode measurements in dairy products.
[Work supported by NIH grants EB017857, EB007643, EB016118, and T32
DK007779.]
2:15
2pBA4. Using speckle statistics to improve attenuation estimates for
cervical assessment. Viksit Kumar and Timothy Bigelow (Mech. Eng.,
Iowa State Univ., 4112 Lincoln Swing St., Unit 113, Ames, IA 50014, vkumar@iastate.edu)
Quantitative ultrasound parameters such as attenuation can be used to observe microstructural changes in the cervix. To obtain a better estimate of attenuation, speckle properties can be used to classify which attenuation estimates are valid and conform to theory. For fully developed speckle from a single scatterer type, the Rayleigh distribution models the signal envelope; but in tissue, as the number of scatterer types increases and the speckle becomes unresolved, the Rayleigh model fails. The gamma distribution has been shown empirically to be the best fit among candidate distributions. Because more than one scatterer type is present in our work, a mixture of gamma distributions was used. The expectation-maximization (EM) algorithm was used to estimate the mixture parameters, and on that basis tissue types with different scattering properties were segmented from one another. Attenuation estimates were then calculated only for tissues of the same scattering type. Sixty-seven women underwent transvaginal scans, and attenuation estimates were calculated after segregation of tissues on a scattering basis. Attenuation was seen to decrease as the time of delivery approached.
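The abstract gives no implementation details, but the segmentation step it describes (fitting a gamma mixture to envelope samples with EM, then assigning each sample to its most probable component) can be sketched as follows. This is a minimal illustration, not the authors' code: the moment-matching M-step, the initialization, and all names are our own assumptions.

```python
import numpy as np
from scipy.stats import gamma

def gamma_mixture_em(x, k=2, n_iter=300):
    """Fit a k-component gamma mixture to envelope samples x with EM.

    The M-step uses a weighted method-of-moments update for the gamma
    shape/scale parameters (an approximation to the exact ML step).
    """
    # Initialize components from k chunks of the sorted data.
    parts = np.array_split(np.sort(x), k)
    shapes = np.array([p.mean() ** 2 / p.var() for p in parts])
    scales = np.array([p.var() / p.mean() for p in parts])
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        dens = np.stack([w * gamma.pdf(x, a, scale=s)
                         for w, a, s in zip(weights, shapes, scales)])
        dens += 1e-300                      # guard against all-zero columns
        resp = dens / dens.sum(axis=0)
        # M-step: weighted moment matching per component.
        for j in range(k):
            m = np.average(x, weights=resp[j])
            v = np.average((x - m) ** 2, weights=resp[j])
            shapes[j], scales[j] = m * m / v, v / m
            weights[j] = resp[j].mean()
    labels = resp.argmax(axis=0)            # segment by most likely component
    return weights, shapes, scales, labels
```

In the scheme the abstract outlines, attenuation estimates would then be computed separately over samples sharing a label.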
2:30
2pBA5. Using two-dimensional impedance maps to study weak scattering in isotropic random media. Adam Luchies and Michael Oelze (Elec.
and Comput. Eng., Univ. of Illinois at Urbana-Champaign, 405 N Matthews
Ave, Urbana, IL 61801, luchies1@illinois.edu)
An impedance map (ZM) is a computational tool for studying weak scattering in soft tissues. Currently, three-dimensional (3D) ZMs are created
from a series of adjacent histological slides that have been stained to emphasize acoustic scattering structures. The 3D power spectrum of the 3DZM
may be related to quantitative ultrasound parameters such as the backscatter
coefficient. However, constructing 3DZMs is expensive, both in terms of
computational time and financial cost. Therefore, the objective of this study
was to investigate using two-dimensional (2D) ZMs to estimate 3D power
spectra. To estimate the 3D power spectrum using 2DZMs, the autocorrelations of 2DZMs extracted from a volume were estimated and averaged. This
autocorrelation estimate was substituted into the 3D Fourier transform that
assumes radial symmetry to estimate the 3D power spectrum. Simulations
were conducted on sparse collections of spheres and ellipsoids to validate
the proposed method. Using a single slice that intersected approximately 75
particles, a mean absolute error of 1.1 dB and 1.5 dB was achieved for spherical and ellipsoidal particles, respectively. The results from the simulations suggest that 2DZMs can provide accurate estimates of the power spectrum and are a feasible alternative to the 3DZM approach.
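As a concrete illustration of the final step, the 3D Fourier transform of a radially symmetric autocorrelation B(r) reduces to a single radial integral, Phi(k) = (4*pi/k) * integral of r*B(r)*sin(k*r) dr over r from 0 to infinity. A minimal numerical sketch follows; the grid, the Gaussian test correlation with length scale L, and the function names are our own assumptions, not taken from the study.

```python
import numpy as np

def power_spectrum_3d_isotropic(r, B, k):
    """3D power spectrum of an isotropic medium from its radial
    autocorrelation B(r), via Phi(k) = (4*pi/k) * int r*B(r)*sin(k*r) dr.

    r must be a uniform grid starting at (or near) 0; k holds values > 0.
    """
    dr = r[1] - r[0]
    w = np.full(r.size, dr)                      # trapezoidal weights
    w[0] = w[-1] = 0.5 * dr
    integrand = r * B * np.sin(np.outer(k, r))   # shape (len(k), len(r))
    return 4.0 * np.pi * (integrand @ w) / k

# Sanity check against a known transform pair: a Gaussian correlation
# B(r) = exp(-r^2/(2 L^2)) has Phi(k) = (2 pi)^(3/2) L^3 exp(-k^2 L^2 / 2).
L = 1.0
r = np.linspace(0.0, 12.0, 4001)
B = np.exp(-r**2 / (2.0 * L**2))
k = np.array([0.5, 1.0, 2.0])
numeric = power_spectrum_3d_isotropic(r, B, k)
exact = (2.0 * np.pi) ** 1.5 * L**3 * np.exp(-k**2 * L**2 / 2.0)
```

In the method described above, B(r) would be the averaged autocorrelation estimated from the 2DZMs rather than an analytic test function.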
2:45
2pBA6. Backscatter coefficient estimation using tapers with gaps. Adam
Luchies and Michael Oelze (Elec. and Comput. Eng., Univ. of Illinois at
Urbana-Champaign, 405 N Matthews Ave., Urbana, IL 61801, luchies1@
illinois.edu)
When using the backscatter coefficient (BSC) to estimate quantitative
ultrasound (QUS) parameters such as the effective scatterer diameter (ESD)
and the effective acoustic concentration (EAC), it is necessary to assume
that the interrogated medium contains diffuse scatterers. Structures that invalidate this assumption can significantly affect the estimated BSC parameters in terms of increased bias and variance and decrease performance when
classifying disease. In this work, a method was developed to mitigate the
effects of non-diffuse echoes, while preserving as much signal as possible
for obtaining diffuse scatterer property estimates. Specially designed tapers
with gaps that avoid signal truncation were utilized for this purpose. Experiments from physical phantoms were used to evaluate the effectiveness of
the proposed BSC estimation methods. The mean squared error (MSE) between measured and theoretical BSCs had an average value of approximately 1.0 and 0.2 when using a Hanning taper and a PR taper, respectively,
with six gaps. The BSC error due to amplitude bias was smallest for PR
tapers with time-bandwidth product Nx = 1. The BSC error due to shape
bias was smallest for PR tapers with Nx = 4. These results suggest using
different taper types for estimating ESD versus EAC.
3:00
2pBA7. Application of the polydisperse structure function to the characterization of solid tumors in mice. Aiguo Han and William D. O’Brien
(Elec. and Comput. Eng., Univ. of Illinois at Urbana-Champaign, 405 N.
Mathews Ave., Urbana, IL 61801, han51@uiuc.edu)
A polydisperse structure function model has been developed for modeling ultrasonic scattering from dense scattering media. The polydisperse structure function is incorporated into a fluid-filled sphere scattering model to
model the backscattering coefficient (BSC) of solid tumors in mice. Two
types of tumors were studied: a mouse sarcoma (Engelbreth-Holm-Swarm
[EHS]) and a mouse carcinoma (4T1). The two kinds of tumors had significantly different microstructures. The carcinoma had a uniform distribution
of carcinoma cells. The sarcoma had cells arranged in groups usually containing less than 20 cells per group, causing an increased scatterer size and
size distribution. Excised tumors (13 EHS samples and 15 4T1 samples)
were scanned using single-element transducers covering the frequency
range 11–105 MHz. The BSC was estimated using a planar reference technique. The model was fit to the experimental BSC using a least-squares fit.
The mean scatterer radius and the Schulz width factor (which characterizes
the width of the scatterer size distribution) were estimated. The results
showed significantly higher scatterer size estimates and wider scatterer size
distribution estimates for EHS than for 4T1, consistent with the observed
difference in microstructure of the two types of tumors. [Work supported by
NIH CA111289.]
3:15–3:30 Break
3:30
2pBA8. Experimental comparison of methods for measuring backscatter coefficient using single element transducers. Timothy Stiles and
Andrew Selep (Phys., Monmouth College, 700 E Broadway Ave., Monmouth, IL 61462, tstiles@monmouthcollege.edu)
The backscatter coefficient (BSC) has promise as a diagnostic aid. However, measurements of the BSC of soft-tissue-mimicking materials have proven difficult; results obtained on the same samples by various laboratories differ by up to two orders of magnitude. This study compares methods of data
analysis using data acquired from the same samples using single element
transducers, with a frequency range of 1 to 20 MHz and pressure focusing
gains between 5 and 60. The samples consist of various concentrations of
milk in agar with scattering from glass microspheres. Each method utilizes a reference spectrum from a planar reflector, but the methods differ in their diffraction and attenuation correction algorithms. Results from four methods of diffraction
correction and three methods of attenuation correction are compared to each
other and to theoretical predictions. Diffraction correction varies from no
correction to numerical integration of the beam throughout the data acquisition region. Attenuation correction varies from limited correction for the
attenuation up to the start of the echo acquisition window, to correcting for
attenuation within a numerical integration of the beam profile. Results indicate that the best agreement with theory is achieved by the methods that utilize numerical integration of the beam profile.
3:45
2pBA9. Numerical simulations of ultrasound-pulmonary capillary
interaction. Brandon Patterson (Mech. Eng., Univ. of Michigan, 626 Spring
St., Apt. #1, Ann Arbor, MI 48103-3200, awesome@umich.edu), Douglas
L. Miller (Radiology, Univ. of Michigan, Ann Arbor, MI), David R. Dowling, and Eric Johnsen (Mech. Eng., Univ. of Michigan, Ann Arbor, MI)
Although lung hemorrhage (LH) remains the only bioeffect of non-contrast, diagnostic ultrasound (DUS) proven to occur in mammals, a fundamental understanding of DUS-induced LH remains lacking. We hypothesize
that the fragile capillary beds near the lung's surface may rupture as a result
of ultrasound-induced strains and viscous stresses. We perform simulations
of DUS waves propagating in tissue (modeled as water) and impinging on a
planar lung surface (modeled as air) with hemispherical divots representing
individual capillaries (modeled as water). Experimental ultrasound pulse
waveforms of frequencies 1.5–7.5 MHz are used for the simulation. A high-order accurate discontinuity-capturing scheme solves the two-dimensional,
compressible Navier-Stokes equations to obtain velocities, pressures,
stresses, strains, and displacements in the entire domain. The mechanics of
the capillaries are studied for a range of US frequencies and amplitudes.
Preliminary results indicate a strong dependence of the total strain on the
capillary size relative to the wavelength.
4:00
2pBA10. Acoustic radiation force due to nonaxisymmetric sound beams
incident on spherical viscoelastic scatterers in tissue. Benjamin C. Treweek, Yurii A. Ilinskii, Evgenia A. Zabolotskaya, and Mark F. Hamilton
(Appl. Res. Labs., Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX
78758, btreweek@utexas.edu)
The theory for acoustic radiation force on a viscoelastic sphere of arbitrary size in tissue was extended recently to account for nonaxisymmetric
incident fields [Ilinskii et al., POMA 19, 045004 (2013)]. A spherical harmonic expansion was used to describe the incident field. This work was specialized at the spring 2014 ASA meeting to focused axisymmetric sound
beams with various focal spot sizes and a scatterer located at the focus. The emphasis of the present contribution is nonaxisymmetric fields, either through moving the scatterer off the axis of an axisymmetric beam or through explicitly defining a nonaxisymmetric beam. This is accomplished via angular spectrum decomposition of the incident field, spherical wave expansions of the resulting plane waves about the center of the scatterer, Wigner D-matrix transformations to express these spherical waves in a coordinate system with the polar axis aligned with the desired radiation force component, and finally integration over solid angle to obtain spherical wave amplitudes as required in the theory. Various scatterer sizes and positions relative to the focus are considered, and the effects of changing properties of both the scatterer and the surrounding tissue are examined. [Work supported by the ARL:UT McKinney Fellowship in Acoustics.]

4:15

2pBA11. Convergence of Green's function-based shear wave simulations in models of elastic and viscoelastic soft tissue. Yiqun Yang (Dept. of Elec. and Comput. Eng., Michigan State Univ., East Lansing, MI, yiqunyang.nju@gmail.com), Matthew Urban (Dept. of Physiol. and Biomedical Eng., Mayo Clinic College of Medicine, Rochester, MN), and Robert McGough (Dept. of Elec. and Comput. Eng., Michigan State Univ., East Lansing, MI)

Green's functions effectively simulate shear waves produced by an applied acoustic radiation force in elastic and viscoelastic soft tissue. In an effort to determine the optimal parameters for these simulations, the convergence of Green's function-based calculations is evaluated for realistic spatial distributions of the initial radiation force "push." The input to these calculations is generated by FOCUS, the "Fast Object-oriented C++ Ultrasound Simulator," which computes the approximate intensity fields generated by a Philips L7-4 ultrasound transducer array for both focused and unfocused beams. The radiation force in the simulation model, which is proportional to the simulated intensity, is applied for 200 µs, and the resulting displacements are calculated with the Green's function model. Simulation results indicate that, for elastic media, convergence is achieved when the intensity field is sampled at roughly one-tenth of the wavelength of the compressional component that delivers the radiation force "push." Aliasing and oscillation artifacts are observed in the model for an elastic medium at lower sampling rates. For viscoelastic media, spatial sampling rates as low as two samples per compressional wavelength are sufficient due to the low-pass filtering effects of the viscoelastic medium. [Supported in part by NIH Grants R01EB012079 and R01DK092255.]

4:30

2pBA12. Quantifying mechanical heterogeneity of breast tumors using quantitative ultrasound elastography. Tengxiao Liu (Dept. of Mech., Aerosp. and Nuclear Eng., Rensselaer Polytechnic Inst., Troy, NY), Olalekan A. Babaniyi (Mech. Eng., Boston Univ., Boston, MA), Timothy J. Hall (Medical Phys., Univ. of Wisconsin, Madison, WI), Paul E. Barbone (Mech. Eng., Boston Univ., 110 Cummington St., Boston, MA 02215, barbone@bu.edu), and Assad A. Oberai (Dept. of Mech., Aerosp. and Nuclear Eng., Rensselaer Polytechnic Inst., Troy, NY)

Heterogeneity is a hallmark of cancer, whether one considers the genotype of cancerous cells, the composition of their microenvironment, the distribution of blood and lymphatic microvasculature, or the spatial distribution of the desmoplastic reaction. It is logical to expect that this heterogeneity in the tumor microenvironment will lead to spatial heterogeneity in its mechanical properties. In this study, we seek to quantify the mechanical heterogeneity within malignant and benign tumors using ultrasound-based elasticity imaging. By creating in vivo elastic modulus images for ten human subjects with breast tumors, we show that the Young's modulus distribution in cancerous breast tumors is more heterogeneous than in tumors that are not malignant, and that this signature may be used to distinguish malignant breast tumors. Our results complement the view of cancer as a heterogeneous disease by demonstrating that mechanical properties within cancerous tumors are also spatially heterogeneous. [Work supported by NIH, NSF.]

4:45

2pBA13. Convergent field elastography. Michael D. Gray, James S. Martin, and Peter H. Rogers (School of Mech. Eng., Georgia Inst. of Technol., 771 Ferst Dr. NW, Atlanta, GA 30332-0405, michael.gray@me.gatech.edu)

An ultrasound-based system for non-invasive estimation of soft tissue shear modulus will be presented. The system uses a nested pair of transducers to provide force generation and motion measurement capabilities. The outer annular element produces a ring-like ultrasonic pressure field distribution. This in turn produces a ring-like force distribution in soft tissue, whose response is primarily observable as a shear wave field. A second ultrasonic transducer nested inside the annular element monitors the portion of the shear field that converges to the center of the force distribution pattern. Propagation speed is estimated from shear displacement phase changes resulting from dilation of the forcing radius. Forcing beams are modulated in order to establish shear speed frequency dependence. Prototype system data will be presented for depths of 10–14 cm in a tissue phantom, using drive parameters within diagnostic ultrasound safety limits. [Work supported by ONR and the Neely Chair in Mechanical Engineering, Georgia Institute of Technology.]

5:00

2pBA14. Differentiation of benign and malignant breast lesions using Comb-Push Ultrasound Shear Elastography. Max Denis, Mohammad Mehrmohammadi (Physiol. and Biomedical Eng., Mayo Clinic, 200 First St. SW, Rochester, MN 55905, denis.max@mayo.edu), Duane Meixner, Robert Fazzio (Radiology-Diagnostic, Mayo Clinic, Rochester, MN), Shigao Chen, Mostafa Fatemi (Physiol. and Biomedical Eng., Mayo Clinic, Rochester, MN), and Azra Alizad (Physiol. and Biomedical Eng., Mayo Clinic, Rochester, MN)

In this work, the results from our Comb-Push Ultrasound Shear Elastography (CUSE) assessment of suspicious breast lesions are presented. The elasticity values of the breast lesions are correlated with histopathological findings to evaluate their diagnostic value in differentiating between malignant and benign breast lesions. A total of 44 patients diagnosed with suspicious breast lesions were evaluated using CUSE prior to biopsy. All patient study procedures were conducted according to the protocol approved by the Mayo Clinic Institutional Review Board (IRB). Our cohort consisted of 27 malignant and 17 benign breast lesions. The results indicate an increase in shear wave velocity in both benign and malignant lesions compared to normal breast tissue. Furthermore, the Young's modulus is significantly higher in malignant lesions. An optimal cut-off value of 80 kPa for the Young's modulus was observed from the receiver operating characteristic (ROC) curve. This is concordant with published cut-off values of elasticity for suspicious breast lesions. [This work is supported in part by grants 3R01CA148994-04S1 and 5R01CA148994-04 from the NIH.]

5:15

2pBA15. Comparison between diffuse infrared and acoustic transmission over the human skull. Qi Wang, Namratha Reganti, Yutoku Yoshioka, Mark Howell, and Gregory T. Clement (BME, LRI, Cleveland Clinic, 9500 Euclid Ave., Cleveland, OH 44195, qiqiwang83@gmail.com)

Skull-induced distortion and attenuation present a challenge to both transcranial imaging and therapy. Whereas therapeutic procedures have been successful in offsetting aberration using prior CT scans, this approach is impractical for imaging. In an effort to provide a simplified means for aberration correction, we have been investigating the use of diffuse infrared light as an indicator of acoustic properties. Infrared wavelengths were specifically selected for tissue penetration; however, this preliminary study was performed through bone alone in a transmission mode to facilitate comparison with acoustic measurements. The inner surface of a half human skull, cut along the sagittal midline, was illuminated using an infrared heat lamp, and images of the outer surface were acquired with an IR-sensitive camera. A range of source angles was acquired and averaged to eliminate source bias. Acoustic measurements were likewise obtained over the surface with a source (1 MHz, 12.7-mm diameter) oriented parallel to the skull surface and a hydrophone receiver (1-mm PVDF). Preliminary results reveal a positive correlation between sound speed and optical intensity, whereas poor correlation is observed between acoustic amplitude and optical intensity. [Work funded under NIH R01EB014296.]
5:30
2pBA16. A computerized tomography system for transcranial ultrasound imaging. Sai Chun Tang (Dept. of Radiology, Harvard Med. School,
221 Longwood Ave., Rm. 521, Boston, MA 02115, sct@bwh.harvard.edu)
and Gregory T. Clement (Dept. of Biomedical Eng., Cleveland Clinic,
Cleveland, OH)
Hardware for tomographic imaging presents both challenge and opportunity for simplification when compared with traditional pulse-echo imaging
systems. Specifically, point diffraction tomography does not require simultaneous powering of elements, in theory allowing just a single transmit
channel and a single receive channel to be coupled with a switching or multiplexing network. In our ongoing work on transcranial imaging, we have
developed a 512-channel system designed to transmit and/or receive a high
voltage signal from/to arbitrary elements of an imaging array. The overall
design follows a hierarchy of modules including a software interface, microcontroller, pulse generator, pulse amplifier, high-voltage power converter,
switching mother board, switching daughter board, receiver amplifier, analog-to-digital converter, peak detector, memory, and USB communication.
Two pulse amplifiers are included, each capable of producing up to 400 Vpp via power MOSFETs. Switching is based around mechanical relays that
allow passage of 200 V, while still achieving switching times of under 2 ms,
with an operating frequency ranging from below 100 kHz to 10 MHz. The
system is demonstrated through ex vivo human skulls using 1 MHz transducers. The overall system design is applicable to planned human studies in
transcranial image acquisition, and may have additional tomographic applications for other materials necessitating a high signal output. [Work was
supported by NIH R01 EB014296.]
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA C/D, 2:45 P.M. TO 3:30 P.M.
Session 2pEDa
Education in Acoustics: General Topics in Education in Acoustics
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
Contributed Papers
2:45
2pEDa1. @acousticsorg: The launch of the Acoustics Today Twitter feed.
Laura N. Kloepper (Dept. of Neurosci., Brown Univ., 185 Meeting St. Box
GL-N, Providence, RI 02912, laura_kloepper@brown.edu) and Daniel Farrell
(Web Development office, Acoust. Society of America, Melville, NY)
Acoustics Today has recently launched its Twitter feed, @acousticsorg.
Come learn how we plan to spread the mission of Acoustics Today, promote
the science of acoustics, and connect with acousticians worldwide! We will
also discuss proposed upcoming social media initiatives and how you, an
ASA member, can help contribute. This presentation will include an
extended question period in order to gather feedback on how Acoustics
Today can become more involved with social media.
3:00
2pEDa2. Using Twitter for teaching. William Slaton (Phys. & Astronomy,
The Univ. of Central Arkansas, 201 Donaghey Ave., Conway, AR 72034,
wvslaton@uca.edu)
The social media microblogging platform Twitter is an ideal avenue to learn about new science in the field of acoustics, as well as to share that new-found information with students. As a user discovers a network of science bloggers and journalists to follow, the amount of science uncovered grows. Conversations between science writers and the scientists themselves enhance this learning opportunity. Several examples of using Twitter for teaching will be presented.

3:15

2pEDa3. Unconventional opportunities to recruit future science, technology, engineering, and math scholars. Roger M. Logan (Teledyne, 12338 Westella, Houston, TX 77077, rogermlogan@sbcglobal.net)

Pop culture conventions provide interesting and unique opportunities to inspire the next generation of STEM contributors. Literary, comic, and anime conventions are a few examples of this type of event. This presentation will provide insights into these venues, as well as how to get involved and help communicate that careers in STEM can be fun and rewarding.
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA C/D, 3:30 P.M. TO 4:00 P.M.
Session 2pEDb
Education in Acoustics: Take 5’s
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
For a Take-Five session no abstract is required. We invite you to bring your favorite acoustics teaching ideas. Choose from the following: short demonstrations, teaching devices, or videos. The intent is to share teaching ideas with your colleagues. If possible, bring a brief, descriptive handout with enough copies for distribution. Spontaneous inspirations are also welcome. Sign up at the door for a five-minute slot before the session starts. If you have more than one demo, sign up for two consecutive slots.

TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA E, 1:55 P.M. TO 5:00 P.M.
Session 2pID
Interdisciplinary: Centennial Tribute to Leo Beranek's Contributions in Acoustics
William J. Cavanaugh, Cochair
Cavanaugh Tocci Assoc. Inc., 3 Merifield Ln., Natick, MA 01760-5520
Carl Rosenberg, Cochair
Acentech Incorporated, 33 Moulton Street, Cambridge, MA 02138
Chair's Introduction—1:55
Invited Papers

2:00

2pID1. Leo Beranek's role in the Acoustical Society of America. Charles E. Schmid (10677 Manitou Pk. Blvd., Bainbridge Island, WA 98110, cechmid@att.net)

Leo Beranek received the first 75th anniversary certificate issued by the Acoustical Society of America, commemorating his long-time association with the Society, at the joint ASA/ICA meeting in Montreal in 2013. Both the Society and Leo have derived mutual benefits from this long and fruitful association. Leo has held many important leadership roles in the ASA. He served as vice president (1949–1950) and president (1954–1955), chaired the Z24 Standards Committee (1950–1953), organized meetings (he was an integral part of the Society's 25th, 50th, and 75th anniversary meetings), served as associate editor (1950–1959), authored three books sold via the ASA, published 75 peer-reviewed JASA papers, and presented countless papers at ASA meetings. Much of his work has been recognized by the Society, which presented him with the R. Bruce Lindsay Award (1944), the Wallace Clement Sabine Award (1961), the Gold Medal (1975), and an Honorary Fellowship (1994). He has participated in the Acoustical Society Foundation and donated generously to it. He has been an inspiration for younger Society members (which include all of us on this occasion celebrating his 100th birthday).

2:15

2pID2. Leo Beranek's contributions to noise control. George C. Maling (INCE Foundation, 60 High Head Rd., Harpswell, ME 04079, INCEUSA@aol.com) and William W. Lang (INCE Foundation, Poughkeepsie, NY)

Leo Beranek has made contributions to noise control for many years, beginning with projects during World War II when he was at Harvard University. Later, at MIT, he taught a course (6.35) that included noise control, and he ran MIT summer courses on the subject. His book, Noise Reduction, was published during that time. Additional books followed. Noise control became an important part of the consulting work at Bolt Beranek and Newman. Two projects are of particular interest: the efforts to silence a wind tunnel in Cleveland,
Ohio, and the differences in noise emissions and perception as the country entered the jet age. Leo was one of the founders of the Institute of Noise Control Engineering, and served as its charter president. Much of the success of the Institute is due to his early leadership.
He has also played an important role in noise policy, beginning in the late 1960s and, in particular, with the passage of the Noise Control
Act of 1972. This work continued into the 1990s with the formation of the “Peabody Group,” and cooperation with the National Academy of Engineering in the formation of noise policy.
2:30
2pID3. Beranek’s porous material model: Inspiration for advanced material analysis and design. Cameron J. Fackler and Ning
Xiang (Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., 110 8th St., Greene Bldg., Troy, NY 12180, facklc@
rpi.edu)
In 1942, Leo Beranek presented a model for predicting the acoustic properties of porous materials [J. Acoust. Soc. Am. 13, 248
(1942)]. Since then, research into many types of porous materials has grown into a broad field. In addition to Beranek’s model, many
other models for predicting the acoustic properties of porous materials in terms of key physical material parameters have been developed. Following a brief historical review, this work concentrates on studying porous materials and microperforated panels—pioneered
by one of Beranek’s early friends and fellow students, Dah-You Maa. Utilizing equivalent fluid models, porous material and microperforated panel theories have recently been unified. In this work, the Bayesian inference framework is applied to single- and multilayered porous and microperforated materials. Bayesian model selection and parameter estimation are used to guide the analysis and design of
innovative multilayer acoustic absorbers.
2:45
2pID4. Technology, business, and civic visionary. David Walden (retired from BBN, 12 Linden Rd., East Sandwich, MA 02537, dave@walden-family.com)
In high school and college, Leo Beranek was already developing the traits of an entrepreneur. At Bolt Beranek and Newman he built
a culture of innovation. He and his co-founders also pursued a policy of looking for financial returns, via diversification and exploitation
of intellectual property, beyond their initial acoustics-based professional services business. In particular, in 1956–1957 Leo recruited
J.C.R. Licklider to help BBN move into the domain of computers. In time, information sciences and computing became as significant a
business for BBN as acoustics. While BBN did innovative work in many areas of computing, perhaps the most visible area was with the
technology that became the Internet. In 1969, Leo left day-to-day management of BBN, although he remained associated with the company for many more years. Beyond BBN, Leo worked, often in a leadership role, with a variety of institutions to improve civic life and culture
around Boston.
3:00
2pID5. Leo Beranek and concert hall acoustics. Benjamin Markham (Acentech Inc, 33 Moulton St., Cambridge, MA 02138, bmarkham@acentech.com)
Dr. Leo Beranek's pioneering concert hall research and project work have left an indelible impression on the study and practice of concert hall design. Working as both scientist and practitioner simultaneously for most of his 60+ years in the industry, his accomplishments include dozens of published papers on concert hall acoustics, several seminal books on the subject, and consulting credit for
numerous important performance spaces. This paper will briefly outline a few of his key contributions to the field of concert hall acoustics (including his work regarding audience absorption, the loudness parameter G, the system of concert hall metrics and ratings that he
developed, and other contributions), his project work (including the Tanglewood shed, Philharmonic Hall, Tokyo Opera City concert
hall, and others), and his role as an inspiration for other leaders in the field. His work serves as the basis, the framework, the inspiration,
or the jumping-off point for a great deal of current concert hall research, as evidenced by the extraordinarily high frequency with which
his work is cited; this paper will conclude with some brief remarks on the future of concert hall research that will build on Dr. Beranek’s
extraordinary career.
3:15
2pID6. Concert hall acoustics: Recent research. Leo L. Beranek (Retired, 10 Longwood Dr., Westwood, MA 02090, beranekleo@
ieee.org)
Recent research on concert hall acoustics is reviewed. Discussed are (1) the ten acoustically top-rated halls; (2) listeners' acoustical preferences; (3) how musical dynamics are enhanced by hall shape; (4) the effect of seat upholstering on sound strength and hall dimensions; and (5) recommended minimum and maximum hall dimensions and audience capacities in shoebox, surround, and fan-shaped halls.
3:30
2pID7. Themes of thoughts and thoughtfulness. Carl Rosenberg (Acentech Inc., 33 Moulton St., Cambridge, MA 02138, crosenberg@acentech.com) and William J. Cavanaugh (Cavanaugh/Tocci, Sudbury, MA)
In preparing and compiling the background for the issue of Acoustics Today commemorating Leo Beranek's 100th birthday, the authors found some consistent themes in Leo's work and his contributions to the colleagues and scholars with whom he worked. These themes were particularly evident in the many "sidebars" solicited from over three dozen friends and colleagues. The authors discuss these patterns and share insights on the ways in which Leo was most influential. There will be opportunities for audience participants to share their thoughts and birthday greetings with Leo.
3:45–4:15 Panel Discussion
4:15–5:00 Celebration
2162
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2162
TUESDAY AFTERNOON, 28 OCTOBER 2014
SANTA FE, 1:00 P.M. TO 3:50 P.M.
Session 2pMU
Musical Acoustics: Synchronization Models in Musical Acoustics and Psychology
Rolf Bader, Chair
Institute of Musicology, University of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany
Invited Papers
1:00
2pMU1. Models and findings of synchronization in musical acoustics and music psychology. Rolf Bader (Inst. of Musicology, Univ.
of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany, R_Bader@t-online.de)
Synchronization is a crucial mechanism in musical tone production and perception. In wind instruments, the overtone series of notes synchronizes to nearly perfect harmonic relations due to nonlinear effects and turbulence at the driving mechanism, although the overblown pitches of flutes or horns may differ considerably from such a simple harmonic relation. Organ pipes close to each other synchronize in pitch through the interaction of their sound pressures. With violins, the sawtooth motion appears because of a synchronization of the stick-slip interaction with the string length. All these models are complex systems that also show bifurcations in the form of multiphonics, biphonation, or subharmonics. On the perception and music production side, models of synchronization, such as the free-energy principle (modeling perception by minimizing surprise and adapting to the physical parameters of sound production), neural nets of timbre, tone, or rhythm perception, or synergetic models of rhythm production, are generally much better suited to modeling music perception than simplified linear models.
1:20
2pMU2. One glottal airflow—Two vocal folds. Ingo R. Titze (National Ctr. for Voice and Speech, Univ. of Utah, 156 South Main St.,
Ste. 320, Salt Lake City, UT 84101-3306, ingo.titze@utah.edu) and Ingo R. Titze (Dept. of Commun. Sci. and Disord., Univ. of Iowa,
Iowa City, IA)
Vocalization for speech and singing involves self-sustained oscillation between a stream of air and a pair of vocal folds. Each vocal
fold has its own set of natural frequencies (modes of vibration) governed by the viscoelastic properties of tissue layers and their boundary conditions. Due to asymmetry, the modes of left and right vocal folds are not always synchronized. The common airflow between
them can entrain the modes, but not always in a 1:1 ratio. Examples of bifurcations are given for human and animal vocalization, as well
as from computer simulation. Vocal artists may use desynchronization for special vocal effects. Surgeons who repair vocal folds make
decisions about the probability of regaining synchronization when one vocal fold is injured. Complete desynchronization, allowing only
one vocal fold to oscillate, may be a better strategy in some cases than attempting to achieve symmetry.
1:40
2pMU3. Synchronization of organ pipes—Experimental facts and theory. Markus W. Abel and Jost L. Fischer (Inst. for Phys. and
AstroPhys., Potsdam Univ., Karl/Liebknecht Str. 24-25, Potsdam 14469, Germany, markus.abel@physik.uni-potsdam.de)
Synchronization of musical instruments has attracted attention due to its important implications for sound production in musical instruments and for technological applications. In this contribution, we show new results on the interaction of two coupled organ pipes: we present a new experiment in which the pipes were positioned in a plane with varying distance, briefly refer to a corresponding description in terms of a reduced model, and finally show numerical simulations that are in full agreement with the measurements. Experimentally, the 2D setup allows the observation of a new phenomenon: a synchronization/desynchronization transition at regular distances of the pipes. The developed model consists essentially of a self-sustained oscillator with nonlinear, delayed coupling. The nonlinearity reflects the complicated interaction of the emitted acoustic waves with the jet exiting at the organ pipe mouth, and the delay term accounts for the wave propagation. Synchronization is clear evidence for the importance of nonlinearities in music and continues to be a source of astonishing results.
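The class of model invoked here, self-sustained oscillators with nonlinear coupling, can be sketched in a few lines. The snippet below is a toy illustration, not the authors' reduced model: it omits the delay term, couples two slightly detuned Van der Pol oscillators diffusively, and shows frequency locking once the coupling is switched on. All parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

def coupled_vdp(detune, k, mu=1.0, dt=1e-3, steps=400_000):
    """Semi-implicit Euler integration of two diffusively coupled
    Van der Pol oscillators with natural frequencies 1 and 1 + detune."""
    w1sq, w2sq = 1.0, (1.0 + detune) ** 2
    x1, v1, x2, v2 = 1.0, 0.0, -0.5, 0.0
    out1, out2 = np.empty(steps), np.empty(steps)
    for i in range(steps):
        a1 = mu * (1 - x1 * x1) * v1 - w1sq * x1 + k * (x2 - x1)
        a2 = mu * (1 - x2 * x2) * v2 - w2sq * x2 + k * (x1 - x2)
        v1 += dt * a1
        v2 += dt * a2
        x1 += dt * v1
        x2 += dt * v2
        out1[i], out2[i] = x1, x2
    return out1, out2

def mean_freq(x, dt=1e-3):
    """Crude frequency estimate: upward zero crossings per unit time,
    counted on the second half of the record (transient discarded)."""
    x = x[len(x) // 2:]
    ups = np.count_nonzero((x[:-1] < 0) & (x[1:] >= 0))
    return ups / (len(x) * dt)

x1, x2 = coupled_vdp(detune=0.10, k=0.0)   # uncoupled: the two "pipes" differ in pitch
y1, y2 = coupled_vdp(detune=0.10, k=0.4)   # coupled: the pitches pull together and lock
```

With k = 0 the two estimated frequencies differ by a few percent; with k = 0.4 they essentially coincide, the elementary analogue of two neighboring pipes pulling each other into tune.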
2:00
2pMU4. Nonlinear coupling mechanisms in acoustic oscillator systems which can lead to synchronization. Jost Fischer (Dept. for
Phys. and Astronomy, Univ. of Potsdam, Karl-Liebknecht-Str 24/25, Potsdam, Brandenburg 14476, Germany, jost.fischer@uni-potsdam.de) and Markus Abel (Ambrosys GmbH, Potsdam, Germany)
We present results on the coupling mechanisms in wind-driven, self-sustained acoustic oscillators. Such systems are found in engineering applications, such as gas burners, and, more beautifully, in musical instruments. We find that both the coupling and the oscillators are nonlinear in character, which can lead to synchronization. We demonstrate our ideas using one of the oldest and most complex musical devices: organ pipes. Building on questions from preceding works, the elements of the sound generation are identified using detailed
experimental and theoretical studies, as well as numerical simulations. From these results, we derive the nonlinear coupling mechanisms of the mutual interaction of organ pipes. This leads to a nonlinearly coupled acoustic oscillator model based on aeroacoustic and fluid-dynamical first principles. The model calculations are compared with the experimental results from preceding works. It appears that the sound generation and the coupling mechanisms are properly described by the developed nonlinear coupled model of self-sustained oscillators. In particular, we can explain the unusual nonlinear shape of the Arnold tongues of the coupled two-pipe system. Finally, we show the power of modern CFD simulations with a 2D simulation of two mutually interacting organ pipes, i.e., the compressible Navier-Stokes equations are numerically solved.
2:20–2:40 Break
2:40
2pMU5. Auditory-inspired pitch extraction using a synchrony capture filterbank. Ramdas Kumaresan, Vijay Kumar Peddinti (Dept. of Elec. and Comput. Eng., Univ. of Rhode Island, Kelley A216, 4 East Alumni Ave., Kingston, RI 02881, kumar@ele.uri.edu), and
Peter Cariani (Hearing Res. Ctr. & Dept. of Biomedical Eng., Boston Univ., Boston, MA)
The question of how harmonic sounds in speech and music produce strong, low pitches at their fundamental frequencies, F0’s, has
been of theoretical and practical interest to scientists and engineers for many decades. Currently, the best auditory models for F0 pitch (e.g., Meddis & Hewitt, 1991) are based on bandpass filtering (cochlear mechanics), half-wave rectification and low-pass filtering (hair
cell transduction, synaptic transmission), channel autocorrelations (all-order interspike interval distributions) aggregated into a summary
autocorrelation, followed by an analysis that determines the most prevalent interspike intervals. As a possible alternative to explicit autocorrelation computations, we propose a model that uses an adaptive Synchrony Capture Filterbank (SCFB) in which channels in a filterbank neighborhood are driven exclusively (captured) by the dominant frequency components closest to them. Channel outputs are then adaptively phase-aligned with respect to a common time reference to compute a Summary Phase-Aligned Function (SPAF), aggregated across all channels, from which F0 can then be easily extracted. Possible relations to brain rhythms and phase-locked loops are discussed. [Work supported by AFOSR FA9550-09-1-0119. Invited to the special session on Synchronization in Musical Acoustics and Music Psychology.]
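The conventional baseline the authors contrast against, summary autocorrelation after half-wave rectification, is compact enough to sketch. The toy below is illustrative only (the SCFB/SPAF model itself is not reproduced, and the signal parameters are assumptions): it recovers a 200 Hz missing-fundamental pitch from a complex containing only upper harmonics.

```python
import numpy as np

def summary_acf_f0(x, fs, fmin=80.0, fmax=500.0):
    """Half-wave rectify (crude hair-cell stage), then pick the lag of the
    largest autocorrelation value within the plausible pitch range."""
    x = np.maximum(x, 0.0)           # half-wave rectification
    x = x - x.mean()
    lags = np.arange(int(fs // fmax), int(fs // fmin) + 1)
    r = np.array([np.dot(x[:-L], x[L:]) for L in lags])
    return fs / lags[np.argmax(r)]

fs = 16000
t = np.arange(int(0.2 * fs)) / fs
harmonics = sum(np.cos(2 * np.pi * 200.0 * h * t) for h in (3, 4, 5))  # 600, 800, 1000 Hz
f0 = summary_acf_f0(harmonics, fs)   # low pitch near 200 Hz despite no 200 Hz component
```

The common 5 ms periodicity of the 600/800/1000 Hz partials dominates the autocorrelation, which is exactly the behavior the interspike-interval models formalize.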
3:00
2pMU6. Identification of sound sources in soundscape using acoustic, psychoacoustic, and music parameters. Ming Yang and Jian
Kang (School of Architecture, Univ. of Sheffield, Western Bank, Sheffield S10 2TN, United Kingdom, arp08my@sheffield.ac.uk)
This paper explores the possibility of automatic identification/classification of environmental sounds by analyzing sound with a number of acoustic, psychoacoustic, and music parameters, including loudness, pitch, timbre, and rhythm. Sound recordings of single sources labeled in four categories, i.e., water, wind, birdsong, and urban sounds (including street music, mechanical sounds, and traffic noise), are automatically identified with machine learning and mathematical models, including artificial neural networks and discriminant functions, based on the results of the psychoacoustic/music measures. The identification accuracies are above 90% for all four categories. Moreover, based on these techniques, identification of construction noise sources against the general urban background noise is explored, using the large construction project of the London Bridge Station redevelopment as a case study site.
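As a flavor of how parameter-based identification of this kind can work, here is a deliberately small sketch. It is not the authors' feature set or models (they use loudness, pitch, timbre, and rhythm measures with neural networks and discriminant functions); it uses three toy descriptors and a nearest-centroid rule to separate tonal (birdsong-like) from broadband (traffic-like) sources on synthetic signals.

```python
import numpy as np

def features(x, fs):
    """Toy descriptors: RMS level, spectral centroid (Hz), zero-crossing rate."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    centroid = float((freqs * spec).sum() / spec.sum())
    zcr = float(np.mean(np.abs(np.diff(np.sign(x)))) / 2.0)
    return np.array([float(np.sqrt(np.mean(x ** 2))), centroid, zcr])

def nearest_centroid(train, labels, sample):
    """Classify by distance to per-class mean feature vectors (z-scored)."""
    X = np.array(train)
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
    Z = (X - mu) / sd
    z = (np.array(sample) - mu) / sd
    classes = sorted(set(labels))
    cents = {c: Z[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
             for c in classes}
    return min(classes, key=lambda c: float(np.linalg.norm(z - cents[c])))

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
tonal = [np.sin(2 * np.pi * f * t) for f in (300.0, 440.0, 600.0)]  # birdsong-like
broad = [rng.standard_normal(fs) for _ in range(3)]                 # traffic-like
train = [features(x, fs) for x in tonal + broad]
labels = ["tonal"] * 3 + ["broadband"] * 3
```

A held-out 500 Hz tone lands near the tonal centroid (low centroid and zero-crossing rate), while fresh noise lands near the broadband one; real soundscape classifiers differ mainly in using far richer, perceptually grounded features.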
Contributed Papers
3:20
2pMU7. Neuronal synchronization of musical large-scale form. Lenz Hartmann (Institute of Systematic Musicology, Universität Hamburg, Feldstrasse 59, Hamburg 20357, Germany, lenz.hartmann@gmx.de)
Musical form is taken in this study as the structural aspects of music ranging over several bars, a combination of all the elements that constitute a piece of music, such as pitch, rhythm, or timbre. In an EEG study, 25 participants each listened three times to roughly the first four minutes of a piece of electronic dance music, and ERP grand averages were calculated. Correlations within one-second time windows between the ERPs of all electrodes, and therefore of different brain regions, are used as a measure of synchronization between these areas. Local maxima corresponding to strong synchronization show up at expectancy points of boundaries in the musical form. A modified FFT analysis of the ERPs, averaged over all trials and channels and taking only the ten highest peaks into consideration, shows the strongest brain activity at frequencies in the gamma band (about 40–60 Hz) and in the beta band (about 20–30 Hz).
3:35
2pMU8. Nonlinearities and self-organization in the sound production of the Rhodes piano. Malte Muenster, Florian Pfeifle, Till Weinrich, and Martin Keil (Systematic Musicology, Univ. of Hamburg, Pilatuspool 19, Hamburg 20355, Germany, m.muenster@arcor.de)
Over the last five decades the Rhodes piano has become a common keyboard instrument, played in musical genres as diverse as jazz, funk, fusion, and pop. Its sound production has not been studied in detail before. The sound is produced by a mechanically driven tuning-fork-like system that changes the magnetic flux of an electromagnetic pickup. The mechanical part of the tone production consists of a small-diameter tine of stiff spring steel and a brass tone bar, which is strongly coupled to the tine and acts as a resonator. The system is an example of strong generator-resonator coupling: the tine acts as a generator, forcing the tone bar to vibrate at the tine's fundamental frequency. Despite its very different and much lower eigenfrequencies, the tone bar is enslaved by the tine. The tine is of lower spatial dimension, less damped, and behaves nearly linearly; the geometry of the tone bar is much more complex, of higher dimension, and more strongly damped. The vibrations of the two parts are perfectly in phase or in anti-phase, pointing to quasi-synchronization behavior. Moreover, the tone bar is responsible for the timbre of the initial transient: it adds the glockenspiel-like sound to the transient and extends the sustain. The sound production is discussed as a synergetic, self-organizing system, leading to a very precise harmonic overtone structure and characteristic initial transients that enhance the variety of musical performance.
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 3/4, 1:25 P.M. TO 3:45 P.M.
Session 2pNSa
Noise and Psychological and Physiological Acoustics: New Frontiers in Hearing Protection II
Elliott H. Berger, Cochair
Occupational Health & Environmental Safety Division, 3M, 7911, Zionsville Rd., Indianapolis, IN 46268-1650
William J. Murphy, Cochair
Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Institute for Occupational Safety and
Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998
Chair’s Introduction—1:25
Invited Paper
1:30
2pNSa1. Comparison of impulse peak insertion loss measured with gunshot and shock tube noise sources. William J. Murphy
(Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Inst. for Occupational Safety and Health, 1090
Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998, wjm4@cdc.gov), Elliott H. Berger (Personal Safety Div., E-A-RCAL lab,
3M, Indianapolis, IN), and William A. Ahroon (US Army Aeromedical Res. Lab., US Army, Fort Rucker, AL)
The National Institute for Occupational Safety and Health, in cooperation with scientists from 3M and the U.S. Army Aeromedical Research Laboratory, conducted a series of impulse peak insertion loss (IPIL) tests of the acoustic test fixtures from the Institut de Saint-Louis (ISL) with a .223-caliber rifle and two different acoustic shock tubes. The Etymotic Research ETYPlugs™ earplug, the 3M™ TacticalPro™ communication headset, and the dual-protector combination were tested with all three impulse noise sources. The spectra, IPIL,
and the reduction of different damage risk criteria will be presented. The spectra from the noise sources vary considerably with the rifle
having peak energy at about 1000 Hz. The shock tubes had peak levels around 125 and 250 Hz. The IPIL values for the rifle were greater
than those measured with the two shock tubes. The shock tubes had comparable IPIL results except at 150 dB for the dual protector condition. The treatment of the double protection condition is complicated because the earmuff reduces the shock wave and reduces the
effective level experienced by the earplug. For the double protection conditions, bone conduction presents a potential limiting factor for
the effective attenuation that can be achieved by hearing protection.
Contributed Paper
1:50
2pNSa2. Evaluation of level-dependent performance of in-ear hearing protection devices using an enclosed sound source. Theodore F. Argo and G. Douglas Meegan (Appl. Res. Assoc., Inc., 7921 Shaffer Parkway, Littleton, CO 80127, targo@ara.com)
Hearing protection devices are increasingly designed with the capability to protect against impulsive sound. Current methods used to test protection from impulsive noise, such as blasts and gunshots, suffer from various drawbacks and complex, manual experimental procedures. For example, the use of a shock tube to emulate blast waves typically produces a blast wind of much higher magnitude than that generated by an explosive, a specific but important inconsistency between the test conditions and the final application. Shock tube test procedures are also very inflexible and provide only minimal insight into the function and performance of advanced electronic hearing protection devices, which may have relatively complex responses as a function of amplitude and frequency content. To address the issue of measuring the amplitude-dependent attenuation provided by a hearing protection device, a method using a compression driver attached to an enclosed waveguide was developed. The hearing protection device is placed at the end of the waveguide, and its response to impulsive and frequency-dependent signals at calibrated levels is measured. Comparisons to shock tube and standard frequency response measurements will be discussed.
Invited Papers
2:05
2pNSa3. Exploration of flat hearing protector attenuation and sound detection in noise. Christian Giguère (Audiology/SLP Program, Univ. of Ottawa, 451 Smyth Rd., Ottawa, ON K1H 8M5, Canada, cgiguere@uottawa.ca) and Elliott H. Berger (Personal Safety Div., 3M, Indianapolis, IN)
Flat-response devices are a class of hearing protectors with nearly uniform attenuation across frequency. These devices can protect the individual wearer while maintaining the spectral balance of the surrounding sounds. This is typically achieved by reducing the muffling effect of conventional hearing protectors, which provide larger attenuation at high frequencies than at low, especially with
earmuffs. Flat hearing protectors are often recommended when good speech communication or sound perception is essential, especially
for wearers with high-frequency hearing loss, to maintain audibility at all frequencies. However, while flat-response devices are
described in some acoustical standards, the tolerance limits defining flatness are largely unspecified and relatively little is known about the exact conditions under which such devices can be beneficial. The purpose of this study is to gain insight into the interaction
between the spectrum of the noise, the shape of the attenuation-frequency response, and the hearing loss configuration on detection
thresholds using a psychoacoustic model of sound detection in noise.
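The interaction described here can be pictured with a minimal power-spectrum detection model. Everything below is an illustrative sketch with invented octave-band numbers, not the model or data of the study: the protected noise level is the band noise minus the attenuation, and a tone is taken as detectable once it exceeds both that masked level (offset by a criterion signal-to-noise ratio) and the wearer's absolute threshold.

```python
import numpy as np

# Illustrative octave-band values (dB); none are from the study.
bands   = np.array([125, 250, 500, 1000, 2000, 4000, 8000])
noise   = np.array([85.0, 84, 82, 80, 78, 76, 74])      # workplace noise spectrum
flat    = np.full(7, 20.0)                              # "flat" uniform attenuation
conv    = np.array([8.0, 12, 18, 25, 32, 38, 40])       # conventional muff-like curve
hearing = np.array([10.0, 8, 6, 8, 15, 35, 45])         # mild high-frequency loss

def detection_threshold(band_hz, atten, criterion=-4.0):
    """Level (dB, referred to the unprotected ear) a tone in the given band
    must reach to be detected: it must beat the protected noise floor
    (power-spectrum masking, shifted by a criterion signal-to-noise ratio)
    and remain above the wearer's absolute hearing threshold."""
    i = int(np.where(bands == band_hz)[0][0])
    masked = noise[i] - atten[i] + criterion   # masked threshold under the protector
    at_ear = max(masked, hearing[i])           # audibility limited by hearing loss too
    return at_ear + atten[i]                   # refer back to the free field
```

With these invented numbers, the flat device yields a slightly lower (better) detection threshold at 4 kHz for the high-frequency-impaired listener, the kind of noise-spectrum/attenuation-shape/hearing-loss interaction the study quantifies systematically.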
2:25
2pNSa4. Electronic sound transmission hearing protectors and horizontal localization: Training/adaptation effects. John Casali
(Auditory Systems Lab, Virginia Tech, 250 Durham Hall, Blacksburg, VA 24061, jcasali@vt.edu) and Martin Robinette (U.S. Army
Public Health Command, U.S. Army, Aberdeen Proving Ground, MD)
Auditory situation awareness is known to be affected by some hearing protectors, even advanced electronic devices. A horizontal
localization task was employed to determine how use/training with electronic sound transmission hearing protectors affected auditory
localization ability, as compared to open-ear. Twelve normal-hearing participants performed baseline localization testing in a hemianechoic field in three listening conditions: open-ear, in-the-ear (ITE) device (Etymotic EB-15), and over-the-ear (OTE) device (Peltor
ComTac II). Participants then wore either the ITE or OTE protector for 12, almost daily, one-hour training sessions. Post-training, participants again underwent localization testing with all three conditions. A computerized custom software-hardware interface presented
localization sounds and collected accuracy and timing measures. ANOVA and post hoc statistical tests revealed that pre-training localization performance with either the ITE or OTE protector was significantly worse (p<0.05) than open-ear performance. After training
with any given listening condition, performance in that condition improved, in part from a practice effect. However, post-training localization showed near equal performance between the open-ear and the protector on which training occurred. Auditory learning, manifested as significant localization accuracy improvement, occurred for the training device, but not for the non-training device, i.e., no
crossover benefit from the training device to the non-training device occurred.
Contributed Papers
2:45
2pNSa5. Measuring effective detection and localization performance of
hearing protection devices. Richard L. McKinley (Battlespace Acoust.,
Air Force Res. Lab., 2610 Seventh St., AFRL/711HPW/RHCB, Wright-Patterson AFB, OH 45433-7901, richard.mckinley.1@us.af.mil), Eric R.
Thompson (Ball Aerosp. and Technologies, Air Force Res. Lab., WrightPatterson AFB, OH), and Brian D. Simpson (Battlespace Acoust., Air Force
Res. Lab., Wright-Patterson AFB, OH)
Awareness of the surrounding acoustic environment is essential to the
safety of persons. However, the use of hearing protection devices can degrade the ability to detect and localize sounds, particularly quiet sounds.
There are ANSI/ASA standards describing methods for measuring attenuation, insertion loss, and speech intelligibility in noise for hearing protection
devices, but currently there are no standard methods to measure the effects
of hearing protection devices on localization and/or detection performance.
A method for measuring the impact of hearing protectors on effective detection and localization performance has been developed at AFRL. This
method measures the response time in an aurally aided visual search task
where the presentation levels are varied. The performance with several
level-dependent hearing protection devices will be presented.
3:00
2pNSa6. Personal alert safety system localization field tests with firefighters. Joelle I. Suits, Casey M. Farmer, Ofodike A. Ezekoye (Dept. of
Mech. Eng., The Univ. of Texas at Austin, 204 E Dean Keeton St., Austin,
TX 78712, jsuits@utexas.edu), Mustafa Z. Abbasi, and Preston S. Wilson
(Dept. of Mech. Eng. and Appl. Res. Labs., The Univ. of Texas at Austin,
Austin, TX)
When firefighters get lost or incapacitated on the fireground, there is little time to find them. This project has focused on a contemporary device
used in this situation, the Personal Alert Safety System. We have studied the
noises on the fireground (i.e., chainsaws, gas powered ventilation fans,
pumper trucks) [J. Acoust. Soc. Am. 134, 4221 (2013)], how the fire environment affects sound propagation [J. Acoust. Soc. Am. 134, 4218 (2013)],
and how firefighter personal protective equipment (PPE) affects human
hearing [POMA 19, 030054 (2013)]. To put all these pieces together, we
have traveled to several fire departments across the country conducting tests
to investigate how certain effects manifest themselves when firefighters
search for the source of a sound. We tasked firefighters to locate a target
sound in various acoustic environments while their vision was obstructed
and while wearing firefighting PPE. We recorded how long it took them to
find the source, what path they took, when they first heard the target sound,
and the frequency content and sound pressure level of the acoustic environment. The results will be presented in this talk. [Work supported by U.S.
Department of Homeland Security Assistance to Firefighters Grants
Program.]
3:15
2pNSa7. Noise level from burning articles on the fireground. Mustafa Z.
Abbasi, Preston S. Wilson (Appl. Res. Labs. and Dept. of Mech. Eng., Univ.
of Texas at Austin, 204 E Dean Keeton St., Austin, TX 78751, mustafa_
abbasi@utexas.edu), and Ofodike A. Ezekoye (Dept. of Mech. Eng., The
Univ. of Texas at Austin, Austin, TX)
Firefighters encounter an extremely difficult environment due to the
presence of heat, smoke, falling debris etc. If one of them needs rescue, an
audible alarm is used to alert others of their location. This alarm, known as
the Personal Alert Safety System (PASS) alarm, has been part of firefighter
gear since the early 1980s. The PASS has been enormously successful, but a
review of National Institute for Occupational Safety and Health (NIOSH) firefighter fatality reports suggests that there are instances when the
alarm is not heard or not localized. In the past, we have studied fireground
noise from various pieces of gear such as chainsaws and fans, etc., to understand the soundscape present during a firefighting operation. However, firefighters and other interested parties have raised the issue of noise caused by
the fire itself. The literature shows that buoyancy-controlled, non-premixed
flames aerodynamically oscillate in the 10–16 Hz range, depending on the
diameter of the fuel base. Surprisingly, few acoustic measurements have
been made even for these relatively clean fire conditions. However, experience suggests that burning items do create sounds, most likely from the decomposition of the material as it undergoes pyrolysis (turning into gaseous fuel and char). This paper will present noise measurements from
various burning articles as well as characterization of the fire to understand
this noise source.
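The diameter dependence cited above is usually expressed through the empirical puffing-frequency correlation for buoyant diffusion flames, roughly f ≈ 1.5/√D with D the fuel-base diameter in meters and f in Hz. The snippet below is a quick illustrative check (the correlation is approximate, and the example diameters are assumptions) that centimeter-scale fuel bases land in the quoted 10–16 Hz band:

```python
import math

def puffing_frequency(d_m):
    """Approximate empirical puffing frequency of a buoyancy-controlled,
    non-premixed flame: f ~ 1.5 / sqrt(D), D in meters, f in Hz."""
    return 1.5 / math.sqrt(d_m)

for d in (0.01, 0.02, 0.5):
    print(f"D = {d:5.2f} m  ->  f ~ {puffing_frequency(d):5.1f} Hz")
```

The same formula gives the familiar 1–2 Hz puffing of meter-scale pool fires, so the 10–16 Hz range quoted in the abstract corresponds to small fuel bases.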
3:30
2pNSa8. Bacterial attachment and insertion loss of earplugs used long-term in the noisy workplace. Jinro Inoue, Aya Nakamura, Yumi Tanizawa, and Seichi Horie (Dept. of Health Policy and Management, Univ. of Occupational and Environ. Health, Japan, 1-1 Iseigaoka, Yahatanishi-ku, Kitakyushu, Fukuoka 807-8555, Japan, j-inoue@med.uoeh-u.ac.jp)
In real noisy workplaces, workers often use the same earplugs for a long time. We assessed the bacterial attachment and the insertion loss of 197 pairs of earplugs collected from 6 companies. The total viable counts and the presence of Staphylococcus aureus were examined with 3M Petrifilm, and the insertion losses were evaluated with a GRAS 45CB acoustic test fixture. We detected greater viable counts in the foam earplugs than in the premolded earplugs. Staphylococcus aureus was detected in 10 foam earplugs (5.1%). Deterioration of insertion loss was found only in deformed earplugs; workplace conditions such as the presence of dust or the use of oily liquids might cause this deterioration. We observed no correlation between bacterial attachment and the insertion loss of the earplugs, and neither was related to the duration of use.
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 9/10, 1:00 P.M. TO 4:20 P.M.
Session 2pNSb
Noise and Structural Acoustics and Vibration: Launch Vehicle Acoustics II
R. Jeremy Kenny, Cochair
Marshall Flight Center, NASA, Huntsville, AL 35812
Tracianne B. Neilsen, Cochair
Brigham Young University, N311 ESC, Provo, UT 84602
Chair’s Introduction—1:00
Invited Papers
1:05
2pNSb1. Comparison of the acoustical emissions of multiple full-scale rocket motors. Michael M. James, Alexandria R. Salton
(Blue Ridge Res. and Consulting, 29 N Market St., Ste. 700, Asheville, NC 28801, michael.james@blueridgeresearch.com), Kent L.
Gee, and Tracianne B. Neilsen (Phys. and Astronomy, Brigham Young Univ., Provo, UT)
Development of next-generation space flight vehicles has prompted a renewed focus on rocket sound source characterization and
near-field propagation modeling. Improved measurements of the sound near the rocket plume are critical for direct determination of the
acoustical environment in both the near field and the far field. They are also crucial inputs to empirical models and for validating computational
aeroacoustics models. Preliminary results from multiple measurements of static horizontal firings of Alliant Techsystems motors including the GEM-60, Orion 50S XLG, and the Reusable Solid Rocket Motor (RSRM) performed in Promontory, UT, are analyzed and compared. The usefulness of scaling by physical parameters such as nozzle diameter, velocity, and overall sound power is demonstrated.
The sound power spectra, directional characteristics, distribution along the exhaust flow, and pressure statistical metrics are examined
over the multiple motors. These data sets play an important role in formulating more realistic sound source models, improving acoustic load estimations, and aiding the development of next-generation space flight vehicles.
1:25
2pNSb2. Low-dimensional acoustic structures in the near-field of clustered rocket nozzles. Andres Canchero, Charles E. Tinney
(Aerosp. Eng. and Eng. Mech., The Univ. of Texas at Austin, 210 East 24th St., WRW-307, 1 University Station, C0600, Austin, TX
78712-0235, andres.canchero@utexas.edu), Nathan E. Murray (National Ctr. for Physical Acoust., Univ. of MS, Oxford, MS), and
Joseph H. Ruf (NASA Marshall Space Flight Ctr., Huntsville, AL)
The plume and acoustic field produced by clusters of two and four rocket nozzles are visualized by way of retroreflective shadowgraphy. Both steady-state and transient operations of the nozzles (start-up and shut-down) were conducted in the fully anechoic chamber
and open jet facility of The University of Texas at Austin. The laboratory scale rocket nozzles comprise thrust-optimized parabolic
(TOP) contours, which during start-up, experience free shock separated flow, restricted shock separated flow, and an “end-effects
regime” prior to flowing full. Shadowgraphy images are first compared with several RANS simulations during steady operations. A
proper orthogonal decomposition (POD) of various regions in the shadowgraphy images is then performed to elucidate the prominent
features residing in the supersonic annular flow region, the acoustic near field and the interaction zone that resides between the nozzle
plumes. Synchronized surveys of the acoustic loads produced in close vicinity to the rocket clusters are compared to the low-order shadowgraphy images in order to identify the various mechanisms within the near-field that are responsible for generating sound.
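Snapshot POD of an image sequence, the decomposition applied here to the shadowgraphy frames, reduces to an SVD of mean-subtracted, flattened snapshots. The sketch below is a generic minimal version on synthetic data, not the authors' processing chain:

```python
import numpy as np

def snapshot_pod(frames):
    """Snapshot POD via SVD. frames: array (n_snapshots, ny, nx).
    Returns per-mode fluctuation-energy fractions and spatial modes."""
    n = frames.shape[0]
    X = frames.reshape(n, -1).astype(float)
    X = X - X.mean(axis=0)              # remove the mean (steady) field
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)    # energy captured by each mode
    modes = Vt.reshape(-1, *frames.shape[1:])
    return energy, modes

rng = np.random.default_rng(1)
pattern = rng.standard_normal((32, 32))        # one coherent spatial structure
amp = np.sin(np.linspace(0, 8 * np.pi, 100))   # its oscillation in time
frames = amp[:, None, None] * pattern + 0.01 * rng.standard_normal((100, 32, 32))
energy, modes = snapshot_pod(frames)           # mode 0 recovers the structure
```

When, as here, a single coherent structure dominates the fluctuations, the first mode captures nearly all of the energy; in the shadowgraphy application, the leading modes isolate the prominent features of the annular flow, the near field, and the inter-plume interaction zone.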
1:45
2pNSb3. Experimental study on lift-off acoustic environments of launch vehicles by scaled cold jet. Hiroki Ashida, Yousuke Takeyama (Integrated Defence & Space Systems, Mitsubishi Heavy Industries, Ltd., 10, Oye-cho, Minato-ku, Nagoya City, Aichi 455-8515,
Japan, hiroki1_ashida@mhi.co.jp), Kiyotaka Fujita, and Aki Azusawa (Technol. & Innovation Headquarters, Mitsubishi Heavy Industries, Ltd., Aichi, Japan)
Mitsubishi Heavy Industries (MHI) has been operating the current Japanese flagship launch vehicles H-IIA and H-IIB and developing the next flagship launch vehicle, H-X. The H-X concept is to be affordable, convenient, and comfortable for payloads, including mitigation of the acoustic environment during launch. Acoustic measurements were conducted using a scaled GN2 cold jet and an aperture plate to facilitate understanding of the lift-off acoustic source and to take appropriate measures against it without the use of water injection. It was seen that the level of vehicle acoustics in the high-frequency range depends on the amount of interference between the jet and the plate, and that enlargement of the aperture is effective for acoustic mitigation.
2:05
2pNSb4. Detached-eddy simulations of rocket plume noise at lift-off. A. Lyrintzis, V. Golubev (Aerosp. Eng., Embry-Riddle Aeronautical Univ., Daytona Beach, FL), K. Kurbatski (Aerosp. Eng., Embry-Riddle Aeronautical Univ., Lebanon, New Hampshire), E. Osman (Aerosp. Eng., Embry-Riddle Aeronautical Univ., Denver, Colorado), and Reda Mankbadi (Aerosp. Eng., Embry-Riddle Aeronautical Univ., 600 S. Clyde Morris Blvd., Daytona Beach, FL 32129, Reda.Mankbadi@erau.edu)
The three-dimensional turbulent flow and acoustic field of a supersonic jet impinging on a solid plate at different inclination angles
is studied computationally using the general-purpose CFD code ANSYS Fluent. A pressure-based coupled solver formulation with the second-order weighted central-upwind spatial discretization is applied. Hot jet thermal condition is considered. Acoustic radiation of
impingement tones is simulated using a transient time-domain formulation. The effects of turbulence in steady state are modeled by the
SST k-ω turbulence model. The Wall-Modeled Large-Eddy Simulation (WMLES) model is applied to compute transient solutions. The
near-wall mesh on the impingement plate is fine enough to resolve the viscosity-affected near-wall region all the way to the laminar sublayer. Inclination angle of the impingement plate is parameterized in the model for automatic re-generation of the mesh and results. The
transient solution reproduces the mechanism of impingement tone generation by the interaction of large-scale vortical structures with
the impingement plate. The acoustic near field is directly resolved by the Computational Aeroacoustics (CAA) to accurately propagate
impingement tone waves to near-field microphone locations. Results show the effect of the inclination angle on sound level pressure
spectra and overall sound pressure level directivities.
2:25
2pNSb5. Large-eddy simulations of impinging over-expanded supersonic jet noise for launcher applications. Julien Troyes, François Vuillot (DSNA, Onera, BP72, 29 Ave. de la Div. Leclerc, Châtillon Cedex 92322, France, julien.troyes@onera.fr), and Hadrien
Lambare (DLA, CNES, Paris, France)
During the lift-off phase of a space launcher, powerful rocket motors generate a harsh acoustic environment on the launch pad. Following the blast waves created at ignition, jet noise is a major contributor to the acoustic loads received by the launcher and its payload. Recent simulations performed at ONERA to compute the noise emitted by solid rocket motors at lift-off conditions are described. Far-field noise prediction is achieved by associating an LES solution of the jet flow with an acoustic surface integral method. The computations are carried out with the in-house codes CEDRE for the LES solution and KIM for the Ffowcs Williams & Hawkings porous surface integration method. The test case is that of a gas generator, fired vertically onto a 45-degree inclined flat plate whose impingement point is located 10 diameters from the nozzle exit. Computations are run for varied numerical conditions, such as turbulence modeling along the plate and different porous surface locations and types. Results are discussed and compared with experimental acoustic measurements obtained by CNES at the MARTEL facility.
2:45–3:05 Break
3:05
2pNSb6. Scaling metrics for predicting rocket noise. Gregory Mack, Charles E. Tinney (Ctr. for AeroMech. Res., The Univ. of Texas
at Austin, ASE/EM, 210 East 24th St., Austin, TX 78712, cetinney@utexas.edu), and Joseph Ruf (Combustion and Flow Anal. Team,
ER42, NASA Marshall Space Flight Ctr., Huntsville, AL)
Several years of research at The University of Texas at Austin concerning the sound field produced by large area-ratio rocket nozzles are presented [Baars et al., AIAA J. 50(1) (2012); Baars and Tinney, Exp. Fluids 54(1468) (2013); Donald et al., AIAA J. 52(7) (2013)]. The focus of these studies is on developing an in-depth understanding of the various acoustic mechanisms that form during start-up of rocket engines and how they may be rendered less efficient in the generation of sound. The test articles comprise geometrically scaled replicas of large area-ratio nozzles and are tested in a fully anechoic chamber under various operating conditions. A framework for scaling laboratory-scale nozzles is presented by combining established methods with new methodologies [Mayes, NASA TN D-21 (1959); Gust, NASA TN D-1999 (1964); Eldred, NASA SP-8072 (1972); Sutherland, AIAA Paper 1993-4383 (1993); Varnier, AIAA J. 39(10) (2001); James et al., Proc. Acoust. Soc. Amer. 18(3aNS) (2012)]. In particular, both hot and cold flow tests, comprising single-, three-, and four-nozzle clusters, are reported. An effort to correct for geometric scaling is also presented.
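As a minimal illustration of one ingredient common to such scaling frameworks (a sketch under stated assumptions, not the authors' actual procedure): if the jet exit velocity U is matched between model and full scale, the Strouhal number St = fD/U is preserved, so a spectral feature at frequency f shifts with the inverse of the nozzle diameter D.

```python
# Frequency mapping under Strouhal-number similarity, St = f*D/U, assuming
# equal exit velocity U at model and full scale (an illustrative assumption).
def scaled_frequency(f_model_hz: float, d_model_m: float, d_full_m: float) -> float:
    # St preserved: f_model * d_model / U = f_full * d_full / U
    return f_model_hz * d_model_m / d_full_m

# Hypothetical numbers: a 10 kHz peak measured on a 5 cm exit-diameter model
# maps to 200 Hz for a 2.5 m full-scale nozzle.
full_scale_peak = scaled_frequency(10_000.0, 0.05, 2.5)
```

Real frameworks (e.g., Eldred, NASA SP-8072) add corrections for temperature, velocity, and multi-nozzle effects that simple geometric scaling omits.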
2168
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2168
3:25
2pNSb7. Acoustic signature characterization of a sub-scale rocket launch. David Alvord and K. Ahuja (Aerosp. & Acoust. Technologies Div., Georgia Tech Res. Inst., 7220 Richardson Rd., Smyrna, GA 30080, david.alvord@gtri.gatech.edu)
Georgia Tech Research Institute (GTRI) conducted a flight test of a sub-scale rocket in 2013 outside Talladega, Alabama, to acquire
the launch acoustics produced. The primary objective of the test was to characterize the acquired data during a sub-scale launch and
compare it with heritage launch data from the STS-1 Space Shuttle flight. Neither launch included acoustic suppression; however, there
were differences in the ground geometry. STS-1 launched from the Mobile Launch Platform at Pad 39B with the RS-25 liquid engines
and Solid Rocket Boosters (SRBs) firing into their respective exhaust ducts and flame trench, while the GTRI flight test vehicle launched
from a flat reflective surface. The GTRI launch vehicle used a properly scaled Solid Rocket Motor (SRM); therefore, the primary analysis will focus on SRM/SRB-centric acoustic events. Differences in the Ignition Overpressure (IOP) wave signature that arise from these geometry differences will be addressed. Additionally, the classic liftoff acoustics “football shape” is preserved between the full- and sub-scale flights. The launch signatures will be compared, with note taken of specific launch acoustic events that are more easily investigated with sub-scale launch data or that supplement current sub-scale static hotfire testing.
3:45
2pNSb8. Infrasonic energy from orbital launch vehicles. W. C. Kirkpatrick Alberts, John M. Noble, and Stephen M. Tenney (US Army Res. Lab., 2800 Powder Mill, Adelphi, MD 20783, kirkalberts@verizon.net)
Large, heavy-lift rockets have significant acoustic and infrasonic energy that can often be detected from a considerable distance. These sounds, under certain environmental conditions, can propagate hundreds of kilometers from the launch location. Thus, ground-based infrasound arrays can be used to monitor the low frequencies emitted by these large rocket launches. Multiple launches and static engine tests have been successfully recorded over many years using small infrasound arrays at various distances from the launch location. Infrasonic measurements using a 20 m array and parabolic equation modeling of a recent launch of an Aries V rocket at Wallops Island, Virginia, will be discussed.
Contributed Paper
4:05
2pNSb9. Influence of source level, peak frequency, and atmospheric
absorption on nonlinear propagation of rocket noise. Michael F. Pearson
(Phys., Brigham Young Univ., 560 W 700 S, Lehi, UT 84043, m3po22@
gmail.com), Kent L. Gee, Tracianne B. Neilsen, Brent O. Reichman (Phys.,
Brigham Young Univ., Provo, UT), Michael M. James (Blue Ridge Res.
and Consulting, Asheville, NC), and Alexandria R. Salton (Blue Ridge Res.
and Consulting, Asheville, NC)
Nonlinear propagation effects in rocket noise have been previously
shown to be significant [M. B. Muhlestein et al. Proc. Mtgs. Acoust.
(2013)]. This paper explores the influence of source level, peak frequency,
and ambient atmospheric conditions on predictions of nonlinear propagation. An acoustic pressure waveform measured during a full-scale solid rocket motor firing is numerically propagated via a generalized Burgers equation model for atmospheric conditions representative of plausible spaceport locations. Cases are explored where the overall sound pressure level and peak frequency have been scaled to model engines of different scale or thrust.
The predicted power spectral densities and overall sound pressure levels,
both flat and A-weighted, are compared for nonlinear and linear propagation
for distances up to 30 km. The differences in overall level suggest that further research to appropriately include nonlinear effects in launch vehicle
noise models is worthwhile.
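For orientation only (this is the textbook plane-wave estimate, not the generalized Burgers model the paper uses): the shock formation distance of an initially sinusoidal plane wave, x̄ = ρ₀c₀³/(βωp₀), shows directly why source level and peak frequency matter — doubling either one halves the distance over which waveform steepening accumulates.

```python
import math

# Plane-wave shock formation distance x_bar = rho0*c0**3 / (beta*omega*p0).
# rho0, c0, beta are nominal sea-level air values (assumptions for illustration).
def shock_formation_distance(p0_pa: float, f_hz: float,
                             rho0: float = 1.21, c0: float = 343.0,
                             beta: float = 1.2) -> float:
    omega = 2.0 * math.pi * f_hz
    return rho0 * c0**3 / (beta * omega * p0_pa)

d_low = shock_formation_distance(p0_pa=1000.0, f_hz=100.0)
d_high = shock_formation_distance(p0_pa=2000.0, f_hz=100.0)  # doubled amplitude
```

Doubling the source amplitude halves the shock formation distance; the same holds for doubling the peak frequency.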
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA C/D, 1:00 P.M. TO 2:40 P.M.
Session 2pPA
Physical Acoustics and Education in Acoustics: Demonstrations in Acoustics
Uwe J. Hansen, Cochair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
Murray S. Korman, Cochair
Physics Department, U.S. Naval Academy, 572 C Holloway Road, Chauvenet Hall Room 295, Annapolis, MD 21402
Chair’s Introduction—1:00
Invited Papers
1:05
2pPA1. Sharing experiments for home and classroom demonstrations. Thomas D. Rossing (Stanford Univ., Music, Stanford, CA
94305, rossing@ccrma.stanford.edu)
In the third edition of The Science of Sound, we included a list of “Experiments for Home, Laboratory, and Classroom Demonstrations” at the end of each chapter. Some of the demonstrations are done by the instructor in class, some are done by students for extra credit, and some are intended to be done at home. We describe a representative number of these, many of which can be done without special equipment.
1:30
2pPA2. A qualitative demonstration of the behavior of the human cochlea. Andrew C. Morrison (Dept. of Natural Sci., Joliet Junior
College, 1215 Houbolt Rd., Joliet, IL 60431, amorrison@jjc.edu)
Demonstrations of the motion of the basilar membrane in the human cochlea designed by Keolian [J. Acoust. Soc. Am. 101, 1199–1201 (1997)], Tomlinson et al. [J. Acoust. Soc. Am. 121, 3115 (2007)], and others provide a way for students in a class to visualize the
behavior of the basilar membrane and explore the physical mechanisms leading to many auditory phenomena. The designs of Keolian
and Tomlinson are hydrodynamic. A non-hydrodynamic apparatus has been designed that can be constructed with commonly available
laboratory supplies and items readily available at local hardware stores. The apparatus is easily set up for demonstration purposes and is
compact for storing between uses. The merits and limitations of this design will be presented.
1:55
2pPA3. Nonlinear demonstrations in acoustics. Murray S. Korman (Phys. Dept., U.S. Naval Acad., 572 C Holloway Rd., Chauvenet
Hall Rm. 295, Annapolis, MD 21402, korman@usna.edu)
The world is nonlinear, and in presenting demonstrations in acoustics, one often has to consider the effects of nonlinearity. In this
presentation the nonlinear effects are made to be very pronounced. The nonlinear effects of standing waves on a lightly stretched string
(which is also very elastic) lead to wave shape distortion, mode jumping and hysteresis effects in the resonant behavior of a tuning curve
near a resonance. The effects of hyperelasticity in a rubber string are discussed. Two-dimensional systems such as a vibrating rectangular or circular drum-head are well known. The nonlinear effects of standing waves on a lightly stretched hyperelastic membrane make an
interesting and challenging study. Here, tuning curve behavior demonstrates that there is softening of the system for slightly increasing
vibration amplitudes followed by stiffening of the system at larger vibration amplitudes. The hysteretic behavior of the tuning curve for
sweeping from lower to higher frequencies and then from higher to lower frequencies (for the same drive amplitude) is demonstrated.
Lastly, the nonlinear effects of a column of soil or fine granular material loading a thin elastic circular clamped plate are demonstrated
near resonance. Here again, the nonlinear highly asymmetric tuning curve behavior is demonstrated.
2:20–2:40 Audience Interaction
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 1/2, 2:00 P.M. TO 3:50 P.M.
Session 2pSA
Structural Acoustics and Vibration, Signal Processing in Acoustics, and Engineering Acoustics: Nearfield
Acoustical Holography
Sean F. Wu, Chair
Mechanical Engineering, Wayne State University, 5050 Anthony Wayne Drive, College of Engineering Building, Rm 2133,
Detroit, MI 48202
Chair’s Introduction—2:00
Invited Papers
2:05
2pSA1. Transient nearfield acoustical holography. Sean F. Wu (Mech. Eng., Wayne State Univ., 5050 Anthony Wayne Dr., College
of Eng. Bldg., Rm. 2133, Detroit, MI 48202, sean_wu@wayne.edu)
This paper presents the general formulations for reconstructing the transient acoustic field generated by an arbitrary object
with a uniformly distributed surface velocity in free space. These formulations are derived from the Kirchhoff-Helmholtz integral theory
that correlates the transient acoustic pressure at any field point to those on the source surface. For a class of acoustic radiation problems
involving an arbitrarily oscillating object with a uniformly distributed surface velocity, for example, a loudspeaker membrane, the normal surface velocity is frequency dependent but is spatially invariant. Accordingly, the surface acoustic pressure is expressible as the
product of the surface velocity and the quantity that can be solved explicitly by using the Kirchhoff-Helmholtz integral equation. This
surface acoustic pressure can be correlated to the field acoustic pressure using the Kirchhoff-Helmholtz integral formulation. Consequently, it is possible to use nearfield acoustic holography to reconstruct acoustic quantities in entire three-dimensional space based on a
single set of acoustic pressure measurements taken in the near field of the target object. Examples of applying these formulations to
reconstructing the transient acoustic pressure fields produced by various arbitrary objects are demonstrated.
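The Kirchhoff-Helmholtz integral underlying these formulations can be written, in one common frequency-domain convention (a sketch for orientation; the sign convention and exact form used by the author may differ):

```latex
p(\mathbf{x},\omega) = \oint_S \left[\, p(\mathbf{y},\omega)\,
    \frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n_y}
  + \mathrm{i}\rho_0 \omega\, v_n(\mathbf{y},\omega)\, G(\mathbf{x},\mathbf{y})
  \right] \mathrm{d}S_y ,
\qquad
G(\mathbf{x},\mathbf{y}) = \frac{e^{-\mathrm{i}k\,|\mathbf{x}-\mathbf{y}|}}{4\pi\,|\mathbf{x}-\mathbf{y}|} ,
```

where the normal derivative of the surface pressure has been replaced by the normal velocity $v_n$ via Euler's equation. For a spatially uniform $v_n$, it factors out of the second term — the property the abstract exploits to express the surface pressure as the product of the surface velocity and an explicitly solvable quantity.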
2:30
2pSA2. A multisource-type representation statistically optimized near-field acoustical holography method. Alan T. Wall (Battlespace Acoust. Branch, Air Force Res. Lab., Bldg. 441, Wright-Patterson AFB, OH 45433, alantwall@gmail.com), Kent L. Gee, and Tracianne B. Neilsen (Dept. of Phys. and Astronomy, Brigham Young Univ., Provo, UT)
A reduced-order approach to near-field acoustical holography (NAH) that accounts for sound fields generated by multiple spatially
separated sources of different types is presented. In this method, an equivalent wave model (EWM) of a given field is formulated based
on rudimentary knowledge of source types and locations. The statistically optimized near-field acoustical holography (SONAH) algorithm is utilized to perform the NAH projection after the formulation of the multisource EWM. The combined process is called multisource-type representation SONAH (MSTR SONAH). This method is used to reconstruct simulated sound fields generated by
combinations of multiple source types. It is shown that MSTR SONAH can successfully reconstruct the near field pressures in multisource environments where other NAH methods result in large errors. The MSTR SONAH technique can be extended to general sound
fields where the shapes and locations of sources and scattering bodies are known.
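The idea of an equivalent wave model can be sketched as a least-squares fit of elementary wave functions to the hologram pressures (an illustration only: the plane-wave basis and geometry below are invented, and SONAH's actual formulation uses a regularized kernel rather than a bare least-squares solve):

```python
import numpy as np

# Hologram line: 24 microphone positions; plane-wave basis over 9 directions.
k = 2.0 * np.pi * 500.0 / 343.0                   # wavenumber at 500 Hz in air
x = np.linspace(0.0, 1.0, 24)                     # measurement positions (m)
angles = np.linspace(-np.pi / 2, np.pi / 2, 9)    # wave arrival directions
basis = np.exp(1j * k * np.outer(x, np.sin(angles)))  # elementary wave functions

# Synthesize measured pressures from two basis waves, then fit coefficients.
c_true = np.zeros(9, dtype=complex)
c_true[2], c_true[6] = 1.0, 0.5j
p = basis @ c_true
c_hat, *_ = np.linalg.lstsq(basis, p, rcond=None)  # least-squares EWM fit
```

With the coefficients in hand, the field is reconstructed on any other surface by re-evaluating the same basis there; choosing source-appropriate basis functions per region is the essence of the multisource EWM.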
2:55
2pSA3. Bayesian regularization applied to real-time near-field acoustic holography. Thibaut Le Magueresse (MicrodB, 28 chemin du petit bois, Ecully 69131, France, thibaut-le-magueresse@microdb.fr), Jean-Hugh Thomas (Laboratoire d’Acoustique de l’Université du Maine, Le Mans, France), Jérôme Antoni (Laboratoire Vibrations Acoustique, Villeurbanne, France), and Sébastien Paillasseur (MicrodB, Ecully, France)
Real-time near-field acoustic holography is used to recover non-stationary acoustic sources using a planar microphone array. In the direct problem, describing propagation requires the convolution of the spatial spectrum of the source under study with a known impulse response. When the convolution operator is replaced with a matrix product, the propagation operator is rewritten in Toeplitz matrix form. Solving the inverse problem is based on a singular value decomposition of this propagator, and Tikhonov regularization is used to stabilize the solution. The purpose here is to study the regularization process. The formulation of this problem in the Tikhonov sense estimates the solution from knowledge of the propagation model, the measurements, and the regularization parameter. This parameter is calculated by making a compromise between fidelity to the real measured data and fidelity to the available a priori information. A new regularization parameter based on a Bayesian approach is introduced to maximize the information taken into account. Comparisons of the results with the L-curve and generalized cross-validation methods are presented. The superiority of the Bayesian parameter is observed for the reconstruction of a non-stationary experimental source using real-time near-field acoustic holography.
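The Tikhonov step described above can be sketched via the SVD of the propagator, where each singular component sᵢ is damped by the filter factor sᵢ²/(sᵢ²+λ²) (a generic sketch with a made-up random propagator, not MicrodB's implementation):

```python
import numpy as np

def tikhonov_solve(G, p, lam):
    """Tikhonov-regularized solution of G q = p via SVD.

    Each singular component is multiplied by s_i / (s_i**2 + lam**2),
    i.e., the ordinary inverse 1/s_i damped by the factor s_i**2/(s_i**2+lam**2).
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    filt = s / (s**2 + lam**2)          # regularized inverse singular values
    return Vt.T @ (filt * (U.T @ p))

# Synthetic test: recover a source vector from noisy "measurements".
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 32))        # stand-in propagation matrix
q_true = rng.standard_normal(32)
p = G @ q_true + 0.01 * rng.standard_normal(64)
q_hat = tikhonov_solve(G, p, lam=0.1)
```

The whole point of the abstract is how λ is chosen; the L-curve, generalized cross-validation, and the proposed Bayesian criterion are competing rules for picking it.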
Contributed Papers
3:20
2pSA4. Acoustic building infiltration measurement system. Ralph T. Muehleisen, Eric Tatara (Decision and Information Sci., Argonne National Lab., 9700 S. Cass Ave., Bldg. 221, Lemont, IL 60439, rmuehleisen@anl.gov), Ganesh Raman, and Kanthasamy Chelliah (Mater., Mech., and Aerosp. Eng., Illinois Inst. of Technol., Chicago, IL)
Building infiltration is a significant portion of the heating and cooling load of buildings and accounts for nearly 4% of the total energy use in the United States. Current measurement methods for locating and quantifying infiltration in commercial buildings to apply remediation are very limited. In this talk, the development of a new measurement system, the Acoustic Building Infiltration Measurement System (ABIMS), is presented. ABIMS uses Nearfield Acoustic Holography (NAH) to measure the sound field transmitted through a section of the building envelope. These data are used to locate and quantify the infiltration sites of a building envelope section. The basic theory of ABIMS operation and results from computer simulations are presented.
3:35
2pSA5. Reversible quasi-holographic line-scan processing for acoustic imaging and feature isolation of transient scattering. Daniel Plotnick, Philip L. Marston, David J. Zartman (Phys. and Astronomy, Washington State Univ., 1510 NW Turner Dr., Apartment 4, Pullman, WA 99163, dsplotnick@gmail.com), and Timothy M. Marston (Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
Transient acoustic scattering data from objects obtained using a one-dimensional line scan or two-dimensional raster scan can be processed via a linear quasi-holographic method [K. Baik, C. Dudley, and P. L. Marston, J. Acoust. Soc. Am. 130, 3838–3851 (2011)] in a way that is reversible, allowing isolation of spatially or temporally dependent features [T. M. Marston et al., in Proc. IEEE Oceans 2010]. Unlike nearfield holography, the subsonic wavenumber components are suppressed in the processing. Backscattering data collected from a collocated source/receiver (monostatic scattering) and scattering involving a stationary source and mobile receiver (bistatic) may be processed in this manner. Distinct image features such as those due to edge diffraction, specular reflection, and elastic effects may be extracted in the image domain and then reverse processed to allow examination of those features in time and spectral domains. Multiple objects may also be isolated in this manner and clutter may be removed [D. J. Zartman, D. S. Plotnick, T. M. Marston, and P. L. Marston, Proceedings of Meetings on Acoustics 19, 055011 (2013) http://dx.doi.org/10.1121/1.4800881]. Experimental examples comparing extracted features with physical models will be discussed and demonstrations of signal enhancement in an at-sea experiment, TREX13, will be shown. [Work supported by ONR.]
TUESDAY AFTERNOON, 28 OCTOBER 2014
MARRIOTT 5, 1:00 P.M. TO 5:00 P.M.
Session 2pSC
Speech Communication: Segments and Suprasegmentals (Poster Session)
Olga Dmitrieva, Chair
Purdue University, 640 Oval Drive, Stanley Coulter 166, West Lafayette, IN 47907
All posters will be on display from 1:00 p.m. to 5:00 p.m. To allow contributors an opportunity to see other posters, contributors of odd-numbered papers will be at their posters from 1:00 p.m. to 3:00 p.m. and contributors of even-numbered papers will be at their posters
from 3:00 p.m. to 5:00 p.m.
Contributed Papers
2pSC1. Interactions among lexical and discourse characteristics in
vowel production. Rachel S. Burdin, Rory Turnbull, and Cynthia G. Clopper (Linguist, The Ohio State Univ., 1712 Neil Ave., 222 Oxley Hall,
Columbus, OH 43210, burdin@ling.osu.edu)
Various factors are known to affect vowel production, including word
frequency, neighborhood density, contextual predictability, mention in the
discourse, and audience. This study explores interactions between all five of
these factors on vowel duration and dispersion. Participants read paragraphs
that contained target words which varied in predictability, frequency, and
density. Each target word appeared twice in the paragraph. Participants read
each paragraph twice: as if they were talking to a friend (“plain speech”)
2172
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
and as if they were talking to a hearing-impaired or non-native interlocutor
(“clear speech”). Measures of vowel duration and dispersion were obtained.
Results from the plain speech passages revealed that second mention and
more predictable words were shorter than first mention and less predictable
words, and that vowels in second mention and low density words were less
peripheral than in first mention and high density words. Interactions
between frequency and mention, and density and mention, were also
observed, with second mention reduction only occurring in low density and
low frequency words. We expect to observe additional effects of speech
style, with clear speech vowels being longer and more dispersed than plain
speech vowels, and that these effects will interact with frequency, density,
predictability, and mention.
2pSC2. Phonetic correlates of phonological quantity of Yakut. Lena Vasilyeva, Juhani Järvikivi, and Anja Arnhold (Dept. of Linguist, Univ. of Alberta, Edmonton, AB T6G 2E7, Canada, lvasilye@ualberta.ca)
We investigated vowel quantity in Yakut (Sakha), a Turkic language spoken in Siberia by over 400,000 speakers in the Republic of Sakha (Yakutia) in the Russian Federation. Yakut is a quantity language; all vowel and consonant phonemes have short and long contrastive counterparts. The study aims at revealing the acoustic characteristics of the binary quantity distinction in vowels. We used two sets of data: (1) a female native Yakut speaker read a 200-word list containing disyllabic nouns and verbs with four different combinations of vowel length in the two syllables (short–short, short–long, long–short,
and long–long) and a list of 50 minimal pairs differing only in vowel length;
(2) Spontaneous speech data from 9 female native Yakut speakers (aged 19–
77), 200 words with short vowels and 200 words with long vowels, were
extracted for analysis. Acoustic measurements of the short and long vowels’ f0, duration, and intensity were made. Mixed-effects models showed a
significant durational difference between long and short vowels for both data
sets. However, the preliminary results indicated that, unlike in quantity languages like Finnish and Estonian, there was no consistent effect of f0 as the
phonetic correlate in Yakut vowel quantity distinction.
2pSC3. Acoustic and perceptual characteristics of vowels produced by
self-identified gay and heterosexual male speakers. Keith Johnson (Linguist, Univ. of California, Berkeley, Berkeley, CA) and Erik C. Tracy
(Psych., Univ. of North Carolina Pembroke, PO Box 1510, Pembroke, NC
28372, erik.tracy@uncp.edu)
Prior research (Tracy & Satariano, 2011) investigated the perceptual characteristics of gay and heterosexual male speech; it was discovered that listeners primarily relied on vowels to identify sexual orientation. Using single-word utterances produced by those same speakers, the current study examined
both the acoustic characteristics of vowels, such as pitch, duration, and the
size of the vowel space, and how these characteristics relate to the perceived
sexual orientation of the speaker. We found a correlation between pitch and
perceived sexual identity for vowels produced by heterosexual speakers—
higher f0 was associated with perceptual “gayness.” We did not find this correlation for gay speakers. Vowel duration did not reliably distinguish gay and
heterosexual speakers, but speakers who produced longer vowels were perceived as gay and speakers who produced shorter vowels were perceived as
heterosexual. The size of the vowel space did not reliably differ between gay
and heterosexual speakers. However, speakers who produced a larger vowel
space were perceived as more gay-sounding than speakers who produced a
smaller vowel space. The results suggest that listeners rely on these acoustic
characteristics when asked to determine a male speaker’s sexual orientation,
but that the stereotypes that they seem to rely upon are inaccurate.
2pSC4. Acoustic properties of the vowel systems of Bolivian Quechua/
Spanish bilinguals. Nicole Holliday (Linguist, New York Univ., 10 Washington Pl., New York, NY 10003, nrh245@nyu.edu)
This paper describes the vowel systems of Quechua/Spanish bilinguals in
Cochabamba, Bolivia, and examines these systems to illuminate variation
between phonemic and allophonic vowels in this Quechua variety. South Bolivian Quechua is described as phonemically trivocalic, and Bolivian Spanish
is described as pentavocalic (Cerrón-Palomino 1994). Although South Bolivian Quechua has three vowel categories, Quechua uvular stop consonants promote high vowel lowering, effectively lowering /i/ and /u/ toward the space
otherwise occupied by /e/ and /o/ respectively, producing a system with five
surface vowels but three phonemic vowels (Buckley 2000). The project was
conducted with eleven Quechua/Spanish bilinguals from the Cochabamba
department in Bolivia. Subjects participated in a Spanish to Quechua oral
translation task and a word list task in Spanish. Results indicate that Quechua/
Spanish bilinguals maintain separate vowel systems. In the Spanish vowel
systems, each vowel occupies its own space in height and backness. A one-way ANOVA reveals that /i/ is higher and fronter than /e/, and /u/ is higher than /o/ (p<0.05). The Quechua vowel systems are somewhat more variable, with
substantial overlap between /i/ and /e/, and between /u/ and /o/. Potential
explanations for this result include lexical conditioning, speaker literacy
effects, and differences in realizations of phonemic versus allophonic vowels.
2pSC5. Cue integration in the perception of fricative-vowel coarticulation in Korean. Goun Lee and Allard Jongman (Linguist, The Univ. of
Kansas, 1541 Lilac Ln., Blake Hall, Rm. 427, Lawrence, KS 66045-3129,
cconni@ku.edu)
Korean distinguishes two fricatives—fortis [s’] and non-fortis [s]. Perception of this distinction was tested in two different vowel contexts, with
three types of stimuli (consonant-only, vowel-only, or consonant-vowel
sequences) (Experiment 1). The relative contribution of consonantal and
vocalic cues was also examined with cross-spliced stimuli (Experiment 2).
Listeners’ weighting of 7 perceptual cues—spectral mean (initial 60%, final
40%), vowel duration, H1-H2* (onset, mid), and cepstral peak prominence
(onset, mid)—was examined. The data demonstrate that identification performance was heavily influenced by vowel context and listener performance
was more accurate in the /a/ vowel context than in the /i/ vowel context. In
addition, the type of stimulus presented changed the perceptual cue weighting. When presented with conflicting cues, listener decisions were driven by
the vocalic cues in the /a/ vowel context. These results suggest that perceptual cues associated with breathy phonation are the primary cues for fricative identification in Korean.
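The first cue listed, the spectral mean, can be computed as a power-weighted average frequency (a minimal sketch on a synthetic signal; actual studies window the fricative noise and split it into initial and final portions):

```python
import numpy as np

def spectral_mean(x, fs):
    """Power-weighted average frequency (spectral centroid) of a signal."""
    spec = np.abs(np.fft.rfft(x)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # bin frequencies in Hz
    return np.sum(freqs * spec) / np.sum(spec)

# Sanity check on a pure 4 kHz tone sampled at 16 kHz: the centroid sits
# at the tone frequency.
fs = 16_000
t = np.arange(4096) / fs
tone = np.sin(2 * np.pi * 4000.0 * t)
```

For fricatives, a higher spectral mean generally indicates a more anterior constriction and more intense high-frequency frication noise.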
2pSC6. Voicing, devoicing, and noise measures in Shanghainese voiced
and voiceless glottal fricatives. Laura L. Koenig (Haskins Labs and Long
Island Univ., 300 George St., New Haven, CT 06511, koenig@haskins.yale.
edu) and Lu-Feng Shi (Haskins Labs and Long Island Univ., Brooklyn, New
York)
Shanghainese has a rather rare voicing distinction between the glottal fricatives /h/ and /ɦ/. We evaluate the acoustic characteristics of this contrast in ten male and ten female speakers of the urban Shanghainese dialect. Participants produced 20 CV words with a mid/low central vowel in a short carrier phrase. All legal consonant-tone combinations were used: /h/ preceded high, low, and short tones whereas /ɦ/ preceded low and short tones. Preliminary analyses suggested that the traditional “voiced” and “voiceless” labels for these sounds are not always phonetically accurate; hence we measure the duration of any voicing break relative to the entire phrase, as well as the harmonics-to-noise ratio (HNR) over time. We expect longer relative voiceless durations and lower HNR measures for /h/ compared to /ɦ/. A question of interest is whether any gender differences emerge. A previous study on American English [Koenig, 2000, JSLHR 43, 1211–1228] found that men phonated through their productions of /h/ more often than women, and interpreted that finding in terms of male-female differences in vocal fold characteristics. A language that contrasts /h/ and /ɦ/ might minimize any such gender variation. Alternatively, the contrast might be realized in slightly different ways in men and women.
2pSC7. Incomplete neutralization of sibilant consonants in Penang
Mandarin: A palatographic case study. Ting Huang, Yueh-chin Chang,
and Feng-fan Hsieh (Graduate Inst. of Linguist, National Tsing Hua Univ.,
Rm. B306, HSS Bldg., No. 101, Section 2, Kuang-Fu Rd., Hsinchu City
30013, Taiwan, funting.huang@gmail.com)
It has been anecdotally observed that the three-way contrasts in Standard
Chinese are reduced to two-way contrasts in Penang Mandarin (PM). PM is
a variety of Mandarin Chinese spoken in Penang of Malaysia, which is
influenced by Penang Hokkien. This work shows that the alleged neutralization of contrasts is incomplete (10 consonants × 3 vowel contexts × 5 speakers). More specifically, alveopalatal [ɕ] may range from the postalveolar zone (73.33%) to the alveolar zone (26.67%), and so does retroflex [ʂ] (46.67% vs. 46.67%). [s] and [n] are apical (or [+anterior]) coronals. The goal of
this study is three-fold: (i) to describe the places of articulation of PM coronals and the patterns of ongoing sound changes, (ii) to show the neutralization of place contrasts is incomplete whereby constriction length remains
distinct for these sibilant sounds, and (iii) to demonstrate different coarticulatory patterns of consonants in variant vowel contexts. The intricate division of coronal consonants does not warrant a precise constriction location
on the upper palate. These PM data lend support to Ladefoged and Wu’s
(1984) observation that it is not easy to pin down a clear-cut boundary
between dental and alveolar stops, and between alveolar and palatoalveolar
fricatives.
2pSC8. Final voicing and devoicing in American English. Olga Dmitrieva (Linguistics/School of Lang. and Cultures, Purdue Univ., 100 North
University St., Beering Hall, Rm. 1289, West Lafayette, IN 47907, odmitrie@purdue.edu)
strident fricatives /v/ and /dh/, and 40–50% of /t/ closures and releases. Further quantification of landmark modification patterns will provide useful information about the processing of surface phonetic variation.
English is typically described as a language in which voicing contrast is
not neutralized in word-final position. However, a tendency towards (at least partial) devoicing of final voiced obstruents in English has been reported in previous studies (e.g., Docherty (1992) and references therein). In the present study, we examine a number of acoustic correlates of obstruent voicing and the robustness with which each one is able to differentiate between voiced and voiceless obstruents in word-final position in speech recorded from twenty native speakers of the Mid-Western dialect of American
English. The examined acoustic properties include preceding vowel duration, closure or frication duration, duration of the release portion, and duration of voicing during the obstruent closure, frication, and release. Initial
results indicate that final voiced obstruents are significantly different from
the voiceless ones in terms of preceding vowel duration and closure/frication duration. However, release duration for stops does not appear to correlate with voicing in an equally reliable fashion. A well-pronounced
difference in terms of closure voicing between voiced and voiceless final
stops is significantly reduced in fricative consonants, which indicates a tendency towards neutralization of this particular correlate of voicing in the
word-final fricatives of American English.
2pSC9. An analysis of the singleton-geminate contrast in Japanese fricatives and stops. Christopher S. Rourke and Zack Jones (Linguist, The Ohio
State Univ., 187 Clinton St., Columbus, OH 43202, rourke.16@osu.edu)
Previous acoustic analyses of the singleton-geminate contrast in Japanese have focused primarily on read speech. The present study instead analyzed the lengths of singleton and geminate productions of word-medial
fricatives and voiceless stops in spontaneous monologues from the Corpus
of Spontaneous Japanese (Maekawa, 2003). The results of a linear mixed
effects regression model mirrored previous findings in read speech that the
geminate effect (the durational difference between geminates and singletons)
of stops is significantly larger than that of fricatives. This study also found a
large range of variability in the geminate effect size between talkers. The
size of the geminate effect for fricatives and for voiceless stops was found to be weakly correlated across talkers, suggesting that both might be related to other rate-associated production differences between individuals. This suggestion was
evaluated by exploring duration differences associated with talker age and
gender. While there was no relationship between age and duration, males
produced shorter durations than females for both fricatives and stops. However, the size of the geminate effect was not related to the gender of the
speaker. The cause of these individual differences may be related to sound
perception. Future research will investigate the cause of these individual differences in geminate effect size.
2pSC10. Quantifying surface phonetic variation using acoustic landmarks as feature cues. Jeung-Yoon Choi and Stefanie Shattuck-Hufnagel
(Res. Lab. of Electronics, MIT, 50 Vassar St., Rm. 36-523, Cambridge, MA
02139, sshuf@mit.edu)
Acoustic landmarks, which are abrupt spectral changes associated with
certain feature sequences in spoken utterances, are highly informative and
have been proposed as the initial analysis stage in human speech perception,
and for automatic speech recognition (Stevens, JASA 111(4), 2002, 1872–
1891). These feature cues and their parameter values also provide an
effective tool for quantifying systematic context-governed surface phonetic
variation (Shattuck-Hufnagel and Veilleux, ICPhS XVI, 2007, 925–928).
However, few studies have provided landmark-based information about the
full range of variation in continuous communicative speech. The current
study examines landmark modification patterns in a corpus of map-task-elicited speech, hand-annotated for whether the landmarks were realized as
predicted from the word forms or modified in context. Preliminary analyses
of a single conversation (400 s, one speaker) show that the majority of landmarks (about 84%) exhibited the canonical form predicted from their lexical
specifications, and that modifications were distributed systematically across
segment types. For example, 90% of vowel landmarks (at amplitude/F1
peaks) were realized as predicted, but only 70% of the closures for non-strident fricatives /v/ and /ð/, and 40–50% of /t/ closures and releases. Further quantification of landmark modification patterns will provide useful information about the processing of surface phonetic variation.
2pSC11. Age- and gender-related variation in voiced stop prenasalization in Japanese. Mieko Takada (Aichi Gakuin Univ., Nisshin, Japan), Eun Jong Kong (Korea Aerosp. Univ., Goyang-City, South Korea), Kiyoko Yoneyama (Daito Bunka Univ., 1-9-1 Takashimadaira, Itabashi-ku, Tokyo 175-8571, Japan, yoneyama@ic.daito.ac.jp), and Mary E. Beckman (Ohio State Univ., Columbus, OH)
Modern Japanese is generally described as having phonologically voiced
(versus voiceless) word-initial stops. However, phonetic details vary across
dialects and age groups; in Takada’s (2011) measurements of recordings of 456 talkers from multiple generations across five dialects, Osaka-area speakers and older speakers in the Tokyo area (Tokyo, Chiba, Saitama, and Kanagawa prefectures) typically show pre-voicing (lead VOT), but younger speakers show many “devoiced” (short-lag VOT) values, a tendency that is especially pronounced among younger Tokyo-area females. There is also variation in the duration of the voice bar, with very long values (up to −200 ms lead VOT) observed in the oldest female speakers. Spectrograms of such tokens show faint formants during the stop closure, suggesting a velum-lowering gesture that vents supraglottal air pressure to sustain vocal fold vibration. Further evidence of pre-nasalization in older Tokyo-area females comes from comparing amplitude trajectories for the voice bar
to amplitude trajectories during nasal consonants, adapting a method proposed by Burton, Blumstein, and Stevens (1972) for exploring phonemic
pre-nasalization contrasts. Differences in trajectory shape patterns between
the oldest males and females and between older and younger females are
like the differences that Kong, Syrika, and Edwards (2012) observed across
Greek dialects.
2pSC12. An acoustic comparison of dental and retroflex sibilants in
Chinese Mandarin and Taiwan Mandarin. Hanbo Yan and Allard Jongman (Linguist, Univ. of Kansas, 1732 Anna Dr., Apt. 11, Lawrence, KS
66044, yanhanbo@ku.edu)
Mandarin has both dental and retroflex sibilants. While the Mandarin
varieties spoken in China and Taiwan are often considered the same, native
speakers of Mandarin can tell the difference between the two. One obvious
difference is that between the retroflex ([ʂ], [tʂ], [tʂʰ]) and dental sibilants ([s], [ts], [tsʰ]). This study investigates the acoustic properties of the sibilants of Chinese Mandarin and Taiwan Mandarin. Eight native speakers
each of Chinese and Taiwan Mandarin produced the six target sibilants in
word-initial position. A number of acoustic parameters, including spectral
moments and duration, were analyzed to address two research questions: (a)
which parameters distinguish the dental and retroflex in each type of Mandarin; (b) is there a difference between Chinese and Taiwan Mandarin?
Results show that retroflex sibilants have a lower M1 and M2, and a higher
M3 than dental sibilants in each language. Moreover, Chinese Mandarin has
significantly larger M1, M2, and M3 differences than Taiwan Mandarin.
This pattern suggests that, in contrast to Chinese Mandarin, Taiwan Mandarin is merging the retroflex sibilants in a dental direction.
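The spectral moments (M1–M3) compared above are standard summary statistics of a fricative spectrum. As an editorial illustration only (the Hann window and power-spectrum weighting are our assumptions, not the authors’ analysis script), they can be computed from a noise frame as:

```python
import numpy as np

def spectral_moments(frame, fs):
    """First three spectral moments of a windowed noise frame.

    Returns (M1, M2, M3): centroid in Hz, variance in Hz^2, and
    dimensionless skewness, treating the normalized power spectrum
    as a probability distribution over frequency.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    p = spectrum ** 2
    p = p / p.sum()                                 # normalize to a distribution
    m1 = np.sum(freqs * p)                          # M1: centroid
    m2 = np.sum((freqs - m1) ** 2 * p)              # M2: variance
    m3 = np.sum((freqs - m1) ** 3 * p) / m2 ** 1.5  # M3: skewness
    return m1, m2, m3
```

A retroflex token with energy concentrated lower in the spectrum would then show the reported pattern directly: lower M1 and M2, higher M3, than a dental token.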
2pSC13. Statistical relationships between phonological categories and
acoustic-phonetic properties of Korean consonants. Noah H. Silbert
(Commun. Sci. & Disord., Univ. of Cincinnati, 3202 Eden Ave., 344 French
East Bldg., Cincinnati, OH 45267, noah.silbert@uc.edu) and Hanyong Park
(Linguist, Univ. of Wisconsin, Milwaukee, WI)
The mapping between segmental contrasts and acoustic-phonetic properties is complex and many-to-many. Contrasts are often cued by multiple
acoustic-phonetic properties, and acoustic-phonetic properties typically provide information about multiple contrasts. Following the approach of de
Jong et al. (2011, JASA 129, 2455), we analyze multiple native speakers’
repeated productions of Korean obstruents using a hierarchical multivariate
statistical model of the relationship between multidimensional acoustics and
phonological categories. Specifically, we model the mapping between categories and multidimensional acoustic measurements from multiple repetitions of 14 Korean obstruent consonants produced by 20 native speakers (10 male, 10 female) in onset position in monosyllables. The statistical model allows us to analyze distinct within- and between-speaker sources of variability in consonant production, and model comparisons allow us to assess the utility of complexity in the assumed underlying phonological category system. In addition, by using the same set of acoustic measurements for the current project’s Korean consonants and the English consonants analyzed by de Jong et al., we can model the within- and between-language acoustic similarity of phonological categories, providing a quantitative basis for predictions about cross-language phonetic perception.
2pSC14. Corpus testing a fricative discriminator: Or, just how invariant is this invariant? Philip J. Roberts (Faculty of Linguist, Univ. of
Oxford, Ctr. for Linguist and Philology, Walton St., Oxford OX1 2HG,
United Kingdom, philip.roberts@ling-phil.ox.ac.uk), Henning Reetz (Institut für Phonetik, Goethe-Universität Frankfurt, Frankfurt am Main, Germany), and Aditi Lahiri (Faculty of Linguist, Univ. of Oxford, Oxford,
United Kingdom)
Acoustic cues to the distinction between sibilant fricatives are claimed
to be invariant across languages. Evers et al. (1998) present a method for
distinguishing automatically between [s] and [ʃ], using the slope of regression lines over separate frequency ranges within a DFT spectrum. They
report accuracy rates in excess of 90% for fricatives extracted from recordings of minimal pairs in English, Dutch and Bengali. These findings are
broadly replicated by Maniwa et al. (2009), using VCV tokens recorded in
the lab. We tested the algorithm from Evers et al. (1998) against tokens of
fricatives extracted from the TIMIT corpus of American English read
speech, and the Kiel corpora of German. We were able to achieve similar
accuracy rates to those reported in previous studies, with the following caveats: (1) the measure relies on being able to perform a DFT for frequencies
from 0 to 8 kHz, so that a minimum sampling rate of 16 kHz is necessary
for it to be effective, and (2) although the measure draws a similarly clear
distinction between [s] and [ʃ] to those found in previous studies, the threshold value between the two sounds is sensitive to the dynamic range of the
input signal.
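A slope-based discriminator of this kind can be sketched as follows. This is an editorial illustration, not the authors’ code: the band edges (0–2.5 kHz and 2.5–8 kHz) follow the description above, but the windowing and decision statistic are our assumptions.

```python
import numpy as np

def band_slope(freqs, spec_db, lo, hi):
    """Least-squares slope (dB per kHz) of the spectrum between lo and hi Hz."""
    sel = (freqs >= lo) & (freqs < hi)
    return np.polyfit(freqs[sel] / 1000.0, spec_db[sel], 1)[0]

def sibilant_measure(frame, fs):
    """Slope-difference measure in the spirit of Evers et al. (1998).

    Regression lines are fit over a low band (0-2.5 kHz) and a high band
    (2.5-8 kHz) of the dB spectrum; [ʃ]-like spectra, with energy
    concentrated lower, score higher than [s]-like ones.
    """
    if fs < 16000:
        # Mirrors the caveat above: the 8 kHz band edge requires fs >= 16 kHz.
        raise ValueError("need fs >= 16 kHz to evaluate the 0-8 kHz bands")
    windowed = frame * np.hanning(len(frame))
    spec_db = 20 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-12)
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    return band_slope(freqs, spec_db, 0, 2500) - band_slope(freqs, spec_db, 2500, 8000)
```

The abstract’s second caveat (sensitivity to dynamic range) would surface here in the choice of threshold on this measure, which is deliberately left unset.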
2pSC15. Discriminant variables for plosive- and fricative-type single
and geminate stops in Japanese. Shigeaki Amano and Kimiko Yamakawa
(Faculty of Human Informatics, Aichi Shukutoku Univ., 2-9 Katahira, Nagakute, Aichi 480-1197, Japan, psy@asu.aasa.ac.jp)
Previous studies suggested that a plosive-type geminate stop in Japanese
is discriminated from a single stop with variables of stop closure duration
and subword duration that spans from the mora preceding the geminate stop
to the vowel following the stop. However, this suggestion does not apply to
a fricative-type geminate stop that does not have a stop closure. To overcome this problem, this study proposes Inter-Vowel Interval (IVI) and Successive Vowel Interval (SVI) as discriminant variables. IVI is the duration
between the end of the vowel preceding the stop and the beginning of the
vowel following the stop. SVI is the duration between the beginning of the
vowel preceding the stop and the end of the vowel following the stop. When
discriminant analysis was conducted between single and geminate stops of
plosive and fricative types using IVI and SVI as independent variables, the
discriminant ratio was very high (99.5%, n = 368). This result indicates that
IVI and SVI are the general variables that represent acoustic features distinguishing Japanese single and geminate stops of both plosive and fricative
types. [This study was supported by JSPS KAKENHI Grant Numbers
24652087, 25284080, 26370464 and by Aichi-Shukutoku University Cooperative Research Grant 2013-2014.]
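The two-variable discriminant analysis described above can be sketched with a Fisher linear discriminant over (IVI, SVI) pairs. The code and the toy durations in the usage below are editorial illustrations, not the authors’ procedure or data:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fit a two-class Fisher linear discriminant.

    X0, X1: (n_samples, 2) arrays of (IVI, SVI) durations in ms for
    singleton and geminate tokens. Returns projection weights w and a
    midpoint decision threshold c on the projected scores.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    c = w @ (m0 + m1) / 2.0
    return w, c

def classify(X, w, c):
    """Label tokens 0 (singleton) or 1 (geminate) by projected score."""
    return (X @ w > c).astype(int)
```

With well-separated duration distributions (geminates having longer IVI and SVI), such a two-variable rule classifies nearly all tokens correctly, which is the sense in which a 99.5% discrimination rate is attainable.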
2pSC16. Perceptual affinity of Mandarin palatals and retroflexes. Yung-hsiang Shawn Chang (Dept. of English, National Taipei Univ. of Technol., Zhongxiao E. Rd., Sec. 3, No. 1, Taipei 106, Taiwan, shawnchang@ntut.edu.tw)
Mandarin palatals [tɕ, tɕʰ, ɕ], which only occur before [i, y] vowels, are in complementary distribution with the alveolars [ts, tsʰ, s], the velars [k, kʰ, x],
and the retroflexes [tʂ, tʂʰ, ʂ]. Upon investigating perceptually motivated accounts for the phonological representation of the palatals, Wan (2010) reported that Mandarin palatals were considered more similar to the alveolars than the velars, whereas Lu (2014) found categorical results for the palatal-alveolar discrimination. The current study extended the investigation to the perceptual affinity between Mandarin palatals and retroflexes by having 15 native listeners identify two 8-step resynthesized [ʂ-s] continua (adapted from Chang et al. (2013)) cross-spliced with [i, y] vowels, respectively. To avoid phonotactic restrictions from biasing perception, all listeners were trained before the experiment on identifying the [ɕi, ʂi, si] and [ɕy, ʂy, sy] syllables produced by a phonetician. The results showed that all resynthesized stimuli, though lacking palatal-appropriate vocalic transitions, were subject to palatal perception. In particular, two intermediate steps along the [ʂi-si] continuum and five along the [ʂy-sy] continuum were identified as palatal syllables by over 70% of the listeners. The results suggest that Mandarin palatals could be identified with both the retroflexes and alveolars based on perceptual affinity.
2pSC17. Perceptual distinctiveness of dental vs. palatal sibilants in different vowel contexts. Mingxing Li (Linguist, The Univ. of Kansas, 1541
Lilac Ln., Blake Hall 427, Lawrence, KS 66045-3129, mxlistar@gmail.com)
This paper reports a similarity rating experiment and a speeded AX discrimination experiment to test the perceptual distinctiveness of dental vs.
palatal sibilants in different vowel contexts. The stimuli were pairs of CV
sequences where the onsets were [s, ts, tsʰ] vs. [ɕ, tɕ, tɕʰ], as in Mandarin Chinese, and the vowels were [i, a, o]; the durations of the consonants and vowels were set to values close to those in natural speech; the inter-stimulus interval was set at 100 ms to facilitate responses based on psychoacoustic similarity. A significant effect of vowel context was observed in the similarity ratings by 20 native American English speakers, whereby the dental vs. palatal sibilants were judged to be least distinct in the [i] context. A similar pattern was observed in the speeded AX discrimination, whereby the [i] context induced slower “different” responses than the other vowels. In general, this study supports the view that the perceptual distinctiveness of a consonant pair may vary with vowel context. Moreover, the experimental results match the typological pattern of dental vs. palatal sibilants across Chinese dialects, where contrasts like /si, tsi, tsʰi/ vs. /ɕi, tɕi, tɕʰi/ are often enhanced by vowel allophony.
2pSC18. Phonetic correlates of stance-taking. Valerie Freeman, Richard
Wright, Gina-Anne Levow (Linguist, Univ. of Washington, Box 352425,
Seattle, WA 98195, valerief@uw.edu), Yi Luan (Elec. Eng., Univ. of Washington, Seattle, WA), Julian Chan (Linguist, Univ. of Washington, Seattle,
WA), Trang Tran, Victoria Zayats (Elec. Eng., Univ. of Washington, Seattle, WA), Maria Antoniak (Linguist, Univ. of Washington, Seattle, WA),
and Mari Ostendorf (Elec. Eng., Univ. of Washington, Seattle, WA)
Stance, or a speaker’s attitudes or opinions about the topic of discussion,
has been investigated textually in conversation and discourse analysis and
in computational models, but little work has focused on its acoustic-phonetic properties. This is a difficult problem, given that stance is a complex
activity that must be expressed along with several other types of meaning
(informational, social, etc.) using the same acoustic channels. In this presentation, we begin to identify some acoustic indicators of stance in natural
speech using a corpus of collaborative conversational tasks which have been
hand-annotated for stance strength (none, weak, moderate, and strong) and
polarity (positive, negative, and neutral). A preliminary analysis of 18 dyads
completing two tasks suggests that increases in stance strength are correlated with increases in speech rate and pitch and intensity medians and
ranges. Initial results for polarity also suggest correlations with speech rate
and intensity. Current investigations center on local modulations in pitch
and intensity, durational and spectral differences between stressed and
unstressed vowels, and disfluency rates in different stance conditions. Consistent male/female differences are not yet apparent but will also be examined further.
2pSC19. Compounds in modern Greek. Angeliki Athanasopoulou and
Irene Vogel (Linguist and Cognit. Sci., Univ. of Delaware, 125 East Main
St., Newark, DE 19716, angeliki@udel.edu)
The difference between compounds and phrases has been studied extensively in English (e.g., Farnetani, Torsello, & Cosi, 1988; Plag, 2006; Štekauer, Zimmermann, & Gregová, 2007). However, little is known about the analogous difference in Modern Greek (Tzakosta, 2009). Greek compounds (Ralli, 2003) form a single phonological word and thus contain only one primary stress; the individual words lose their primary stress. The present study is the first acoustic investigation of the stress properties of Greek compounds and phrases. Native speakers of Greek produced ten novel adjective + noun compounds and their corresponding phrases (e.g., phrase: [kocino dodi] “a red tooth” vs. compound: [kocinododis] “someone with red teeth”) in the sentence corresponding to “The XXX is at the top/bottom of the screen.” Preliminary results confirm the earlier descriptive claims that compounds have only a single stress, while phrases have one on each word. Specifically, the first word (i.e., the adjective) in compounds is reduced in F0 (101 Hz), duration (55 ms), and intensity (64 dB) compared to phrases (F0 = 117 Hz, duration = 85 ms, and intensity = 67 dB). Also, both words are very similar on all of the measures in phrases. The second word (i.e., the noun) is longer than the first word, possibly indicating phrase-final lengthening.
2pSC22. The role of prosody in English sentence disambiguation. Taylor L. Miller (Linguist & Cognit. Sci., Univ. of Delaware, 123 E Main St., Newark, DE 19716, tlmiller@udel.edu)
Only certain ambiguous sentences can be perceptually disambiguated. Some researchers argue that this is due to syntactic structure (Lehiste 1973, Price 1991, Kang & Speer 2001), while others argue that prosodic structure is responsible (Nespor & Vogel 1986, henceforth N&V; Hirschberg & Avesani 2000). The present study further tests the role of prosodic constituents in sentence disambiguation in English. Target sentences were recorded in disambiguating contexts; twenty subjects listened to the recordings and chose one of two meanings. Following N&V’s experimental design with Italian, the meanings of each target structurally corresponded to different syntactic constituents and varied with respect to phonological phrases (φ) and intonational phrases (I). The results confirm N&V’s Italian findings: listeners are only able to disambiguate sentences with different prosodic constituent structures (p < 0.05); those differing in (I) but not (φ) have the highest success rate, 86% (e.g., [When danger threatens your children]I [call the police]I vs. [When danger threatens]I [your children call the police]I). As reported elsewhere (e.g., Lehiste 1973), we also observed a meaning bias in some cases (e.g., in “Julie ordered some large knife sharpeners,” listeners preferred “large [knife sharpeners],” but in “Jill owned some gold fish tanks,” they preferred “[goldfish] tanks”).
2pSC20. Evoked potentials during voice error detection at register
boundaries. Anjli Lodhavia (Dept. of Commun. Disord. and Sci., Rush
Univ., 807 Reef Court, Wheeling, IL 60090, alodhavia1@gmail.com), Sona
Patel (Dept. of Speech-Lang. Pathol., Seton Hall Univ., South Orange, NJ),
Saul Frankford (Dept. of Commun. Sci. and Disord., Northwestern Univ.,
Tempe, Arizona), Oleg Korzyukov, and Charles R. Larson (Dept. of Commun. Sci. and Disord., Northwestern Univ., Evanston, IL)
Singers require great effort to avoid vocal distortion at register boundaries, as they are trained to diminish the prominence of register breaks. We
examined neural mechanisms underlying voice error detection in singers at
their register boundaries. We hypothesized that event-related potentials
(ERPs), reflecting brain activity, would be larger if a singer’s pitch was
unexpectedly shifted toward, rather than away from, their register break.
Nine trained singers sustained a musical note for ~3 seconds near their
modal register boundaries. As the singers sustained these notes, they heard
their voice over headphones shift in pitch (±400 cents, 200 ms) either toward or away from the register boundary. This procedure was repeated for
200 trials. The N1 and P2 ERP amplitudes for three central electrodes (FCz,
Cz, Fz) were computed from the EEGs of all participants. Results of a multivariate analysis of variance for shift direction (+400c, −400c) and register
(low, high) showed significant differences in N1 and P2 amplitude for direction at the low boundary of modal register, but not the high register boundary. These results may suggest increased neural activity in singers when
trying to control the voice when crossing the lower register boundary.
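For reference, a pitch shift specified in cents corresponds to a frequency ratio of 2^(cents/1200), so the ±400-cent perturbations above are about four semitones (ratio ≈ 1.26). A small editorial sketch of that conversion (the 220 Hz example note is our assumption):

```python
def shift_ratio(cents: float) -> float:
    """Frequency ratio corresponding to a pitch shift in cents (1200 cents = one octave)."""
    return 2.0 ** (cents / 1200.0)

# A +400-cent shift applied to a sustained 220 Hz note:
f0 = 220.0
shifted = f0 * shift_ratio(400)  # ~277.18 Hz, four equal-tempered semitones up
```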
2pSC21. The articulatory tone-bearing unit: Gestural coordination of
lexical tone in Thai. Robin P. Karlin and Sam Tilsen (Linguist, Cornell
Univ., 103 W Yates St., Ithaca, NY 14850, karlin.robin@gmail.com)
Recently, tones have been analyzed as articulatory gestures that can coordinate with segmental gestures. In this paper, we show that the tone gestures
that make up a HL contour tone are differentially coordinated with articulatory gestures in Thai syllables, and that the coordinative patterns are influenced by the segments and moraic structure of the syllables. The
autosegmental approach to lexical tone describes tone as a suprasegment that
must be associated to some tone-bearing unit (TBU); in Thai, the language of
study, the proposed TBU is the mora. Although the autosegmental account
largely describes the phonological patterning of tones, it remains unclear how
the abstract representation of tone is implemented. An electromagnetic articulograph (EMA) study of four speakers of Thai was conducted to examine the
effects of segment type and moraic structure on the coordination of tone gestures. In a HL contour tone, tone gestures behave similarly to consonant gestures, and show patterns of coordination with gestures that correspond to
moraic segments. However, there is also a level of coordination between the
H and L tone gestures. Based on these results, a model of TBUs is proposed
within the Articulatory Phonology framework that incorporates tone-segment
coordination as well as tone-tone coordination.
2pSC23. Perceptual isochrony and prominence in spontaneous speech. Tuuli Morrill (Linguist, George Mason Univ., 4400 University Dr., 3E4, Fairfax, VA, tmorrill@msu.edu), Laura Dilley (Commun. Sci. and Disord., Michigan State Univ., East Lansing, MI), and Hannah Forsythe (Linguist, Michigan State Univ., East Lansing, MI)
While it has been shown that stressed syllables do not necessarily occur
at equal time intervals in speech (Cummins, 2005; Dauer, 1983), listeners
frequently perceive stress as occurring regularly, a phenomenon termed perceptual isochrony (Lehiste, 1977). A number of studies have shown that in
controlled experimental materials, a perceptually isochronous sequence of
stressed syllables generates expectations which affect word segmentation
and lexical access in subsequent speech (e.g., Dilley & McAuley, 2008).
The present research used the Buckeye Corpus of Conversational Speech
(Pitt et al., 2007) to address two main questions: (1) What acoustic and linguistic factors are associated with the occurrence of perceptual isochrony?
and (2) What are the effects of perceptually isochronous speech passages on
the placement of prominence in subsequent speech? In particular, we investigate the relationship between perceptual isochrony and lexical items traditionally described as “unstressed” (e.g., grammatical function words),
testing whether these words are more likely to be perceived and/or produced
as prominent when they are preceded and/or followed by a perceptually isochronous passage. These findings will contribute to our understanding of the
relationship between acoustic correlates of phrasal prosody and lexical perception. [Research partially supported by NSF CAREER Award BCS
0874653 to L. Dilley.]
2pSC24. French listeners’ processing of prosodic focus. Jui Namjoshi
(French, Univ. of Illinois at Urbana-Champaign, 2090 FLB, MC-158, S.
Mathews Ave, Urbana, IL 61801, namjosh2@illinois.edu)
Focus in French, typically conveyed by syntax (e.g., clefting) together with prosody, can be signaled by prosody alone (contrastive pitch accents on the first syllable of focused constituents, cf. nuclear pitch accents on the last non-reduced syllable of the Accentual Phrase) (Féry, 2001; Jun & Fougeron, 2000). Do French listeners, like L1-English listeners (Ito & Speer, 2008), use contrastive accents to anticipate upcoming referents? Twenty French listeners completed a visual-world eye-tracking experiment. Cross-spliced, amplitude-neutralized stimuli included context (1) and critical (2) sentences in a 2×2 design, with accent on the object (nuclear/contrastive) and the person’s information status (new/given) as within-subject variables (see (1)-(2)). Average amplitudes and durations for object words were 67 dB and 0.68 s for contrastive accents, and 63.8 dB and 0.56 s for nuclear accents, respectively. Mixed-effects models showed a significant accent-by-information-status interaction on competitor fixation proportions in the post-disambiguation time window (p < 0.05). Contrastive accents yielded lower competitor fixation proportions with a given person than with a new person, suggesting that contrastive accents constrain lexical competition in French. (1) Clique sur le macaRON de Marie-Hélène. (2) Puis clique sur le chocoLAT/CHOcolat de Marie-Hélène/Jean-Sébastien. (nuclear/contrastive accent, given/new person) “(Then) Click on the macaron/chocolate of Marie-Hélène/Jean-Sébastien.”
2pSC25. Prominence, contrastive focus and information packaging in
Ghanaian English discourse. Charlotte F. Lomotey (Texas A&M University-Commerce, 1818D Hunt St., Commerce, TX 75428, cefolatey@yahoo.com)
Contrastive focus refers to the coding of information that is contrary to
the presuppositions of the interlocutor. Thus, in everyday speech, speakers
employ prominence to mark contrastive focus such that it gives an alternative answer to an explicit or implicit statement provided by the previous discourse or situation (Rooth, 1992), and plays an important role in facilitating
language understanding. Although contrastive focus has been investigated in native varieties of English, little is known about its realization in non-native varieties, including that of Ghana. The present study investigates how contrastive focus is marked
with prosodic prominence in Ghanaian English, and how such a combination creates understanding among users of this variety. To achieve this, data
consisting of 6½ hours of English conversations from 200 Ghanaians were
analyzed using both auditory and acoustic means. Results suggest that Ghanaians tend to shift the contrastive focus from the supposed focused syllable
onto the last syllable of the utterance, especially when that syllable ends the
utterance. Although such tendencies may shift the focus of the utterance, the
data suggest that listeners do not seem to have any problem with speakers’
packaging of such information.
2pSC26. The representation of tone 3 sandhi in Mandarin: A psycholinguistic study. Yu-Fu Chien and Joan Sereno (Linguist, The Univ. of Kansas, 1407 W 7th St., Apt. 18, Lawrence, KS 66044-6716, whouselefthand@gmail.com)
In Mandarin, tone 3 sandhi is a tonal alternation phenomenon in which a
tone 3 syllable changes to a tone 2 syllable when it is followed by another
tone 3 syllable. Thus, the initial syllable of Mandarin bisyllabic sandhi
words is tone 3 underlyingly but becomes tone 2 on the surface. An auditory-auditory priming lexical decision experiment was conducted to
investigate how Mandarin tone 3 sandhi words are processed by Mandarin
native listeners. The experiment examined prime-target pairs, with monosyllabic primes and bisyllabic Mandarin tone 3 sandhi targets. Each tone sandhi
target word was preceded by one of three corresponding monosyllabic
primes: a tone 2 prime (Surface-Tone overlap) (chu2-chu3li3), a tone 3
prime (Underlying-Tone overlap) (chu3-chu3li3), or a control prime (Baseline condition) (chu1-chu3li3). In order to assess the contribution of frequency of occurrence, 15 High Frequency and 15 Low Frequency sandhi
target words were used. Thirty native speakers of Mandarin participated.
Results showed that tone 3 sandhi targets elicited significantly stronger
facilitation effects in the Underlying-Tone condition than in the Surface-Tone condition, with little effect of frequency of occurrence. The data will
be discussed in terms of lexical access and the nature of the representation
of Mandarin words.
2pSC27. Perception of sound symbolism in mimetic stimuli: The voicing
contrast in Japanese and English. Kotoko N. Grass (Linguist, Univ. of
Kansas, 9953 Larsen St., Overland Park, KS 66214, nakata.k@ku.edu) and
Joan Sereno (Linguist, Univ. of Kansas, Lawrence, KS)
Sound symbolism refers to a systematic relationship between the sound of a word and its meaning. The current study investigated whether the voicing contrast between voiced /d, g, z/ and voiceless /t,
k, s/ consonants systematically affects categorization of Japanese mimetic
stimuli along a number of perceptual and evaluative dimensions. For the
nonword stimuli, voicing of consonants was also manipulated, creating a
continuum from voiced to voiceless endpoints (e.g., [gede] to [kete]), in
order to examine the categorical nature of the perception. Both Japanese
native speakers and English native speakers, who had no knowledge of Japanese, were examined. Stimuli were evaluated on size (big–small) and shape
(round–spiky) dimensions as well as two evaluative dimensions (good–bad,
graceful–clumsy). In the current study, both Japanese and English listeners
associated voiced sounds with largeness, badness, and clumsiness and voiceless sounds with smallness, goodness, and gracefulness. For the shape
dimension, however, English and Japanese listeners showed contrastive categorization, with English speakers associating voiced stops with roundness
and Japanese listeners associating voiced stops with spikiness. Interestingly,
sound symbolism was very categorical in nature. Implications of the current
data for theories of sound symbolism will be discussed.
TUESDAY AFTERNOON, 28 OCTOBER 2014
INDIANA F, 1:00 P.M. TO 4:30 P.M.
Session 2pUW
Underwater Acoustics: Propagation and Scattering
Megan S. Ballard, Chair
Applied Research Laboratories, The University of Texas at Austin, P.O. Box 8029, Austin, TX 78758
Contributed Papers
1:00
1:30
2pUW1. Low frequency propagation experiments in Currituck Sound.
Richard D. Costley (GeoTech. and Structures Lab., U.S. Army Engineer
Res. & Development Ctr., 3909 Halls Ferry Rd., Vicksburg, MS 39180,
dan.costley@usace.army.mil), Kent K. Hathaway (Coastal & Hydraulics Lab., US Army Engineer Res. & Development Ctr., Duck, NC), Andrew McNeese, Thomas G. Muir (Appl. Res. Lab., Univ. of Texas at Austin, Austin, TX), Eric Smith (GeoTech. and Structures Lab., U.S. Army Engineer Res. & Development Ctr., Vicksburg, MS), and Luis De Jesus Diaz (GeoTech. and Structures Lab., U.S. Army Engineer Res. & Development Ctr.,
Vicksburg, MS)
2pUW3. Results from a scale model acoustic propagation experiment
over a translationally invariant wedge. Jason D. Sagers (Environ. Sci.
Lab., Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd.,
Austin, TX 78758, sagers@arlut.utexas.edu)
In water depths on the order of a wavelength, sound propagates with
considerable involvement of the bottom, whose velocities and attenuation
vary with depth into the sediment. In order to study propagation in these
types of environments, experiments were conducted in Currituck Sound on
the Outer Banks of North Carolina using a Combustive Sound Source (CSS)
and bottom mounted hydrophones and geophones as receivers. The CSS
was deployed at a depth of approximately 1 meter and generated transient
signals, several wavelengths long, at frequencies around 300 Hz. The results
are used to determine transmission loss in water depths of approximately 3
meters, as well as to examine the generation and propagation of Sholte type
interface waves. The measurements are compared to numerical models generated with a two-dimensional finite-element code. [Work supported by the
U.S. Army Engineer Research and Development Center. Permission to publish was granted by Director, Geotechnical & Structures Laboratory.]
1:15
2pUW2. Three-dimensional acoustic propagation effect in subaqueous
sand dune field. Andrea Y. Chang, Chi-Fang Chen (Dept. of Eng. Sci. and
Ocean Eng., National Taiwan Univ., No. 1, Sec. 4, Roosevelt Rd., Taipei
10617, Taiwan, yychang@ntu.edu.tw), Linus Y. Chiu (Inst. of Undersea
Technol., National Sun Yat-sen Univ., Kaohsiung, Taiwan), Emily Liu
(Dept. of Eng. Sci. and Ocean Eng., National Taiwan Univ., Taipei, Taiwan), Ching-Sang Chiu, and Davis B. Reeder (Dept. of Oceanogr., Naval
Postgrad. School, Monterey, CA)
Very large subaqueous sand dunes are discovered on the upper continental slope of the northern South China Sea in water depth of 160–600 m,
which composed of fine to medium sand. The amplitude and the crest-tocrest wavelength of sand dunes are about 5–15 m and 200–400 m, respectively. This topographic feature could causes strong acoustic scattering,
mode coupling, and out-of- plane propagation effects, which consequently
result in sound energy redistribution within ocean waveguide. This research
focus on the three-dimensional propagation effects (e.g., horizontal refraction) induced by the sand dunes in the South China Sea, which are expected
as the angle of propagation relative to the bedform crests decreases. The
three-dimensional propagation effects are studied by numerical modeling
and model-data comparison. For numerical modeling, the in-situ topographic data of subaqueous sand dune and sound speed profiles were inputted to calculate the acoustic fields, which were further decomposed into
mode fields to show the modal horizontal refraction effects. The modeling
results were manifested by data observations. [This work is sponsored by
the Ministry of Science and Technology of Taiwan.]
2178
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
A 1:7500 scale underwater acoustic propagation experiment was conducted in a laboratory tank to investigate three-dimensional (3D) propagation effects, with the objective of providing benchmark quality data for
comparison with numerical models. A computer controlled positioning system accurately moves the receiving hydrophone in 3D space while a stationary source hydrophone emits band-limited pulse waveforms between 200
kHz and 1 MHz. The received time series can be post-processed to estimate
travel time, transmission loss, and vertical and horizontal arrival angle. Experimental results are shown for a 1.22 2.13 m bathymetric part possessing both a flat bottom bathymetry and a translationally invariant wedge with
a 10 slope. Comparisons between the experimental data and numerical
models are also shown. [Work supported by ONR.]
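The transmission-loss estimates mentioned in the preceding abstracts reduce, at their core, to referencing received pressure to the level 1 m from the source. The sketch below is illustrative only, not the authors' processing chain; the function name and inputs are hypothetical:

```python
import numpy as np

def transmission_loss_db(p_rms, p_ref_rms):
    # TL(r) = -20 log10(|p(r)| / |p(1 m)|), in dB relative to the level
    # 1 m from the source; p_rms may be a scalar or an array of ranges.
    return -20.0 * np.log10(np.asarray(p_rms) / p_ref_rms)

# Spherical spreading check: pressure falls as 1/r, so at 100 m the
# loss relative to 1 m should be 40 dB.
tl = transmission_loss_db(0.01, 1.0)
print(round(float(tl), 3))  # -> 40.0
```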
1:45
2pUW4. Numerical modeling of measurements from an underwater
scale-model tank experiment. Megan S. Ballard and Jason D. Sagers
(Appl. Res. Labs., The Univ. of Texas at Austin, P.O. Box 8029, Austin, TX
78758, meganb@arlut.utexas.edu)
Scale-model tank experiments are beneficial because they offer a controlled environment in which to make underwater acoustic propagation
measurements, which is helpful when comparing measured data to calculations from numerical propagation models. However, to produce agreement
with the measured data, experimental details must be carefully included in
the model. For example, the frequency-dependent transmitting and receiving
sensitivity and vertical directionality of both hydrophones must be included.
In addition, although it is possible to measure the geometry of the tank
experiment, including water depth and source and receiver positions, positional uncertainty exists due to the finite resolution of the measurements.
The propagated waveforms from the experiment can be used to resolve
these parameters using inversion techniques. In this talk, model-data comparisons of measurements made in a 1:7500 scale experiment are presented.
The steps taken to produce agreement between the measured and modeled
data are discussed in detail for both range-independent and range-dependent
configurations.
2:00
2pUW5. A normal mode inner product to account for acoustic propagation over horizontally variable bathymetry. Charles E. White, Cathy Ann
Clark (Naval Undersea Warfare Ctr., 1176 Howell St., Newport, RI 02841,
charlie.e.white@navy.mil), Gopu Potty, and James H. Miller (Ocean Eng.,
Univ. of Rhode Island, Narragansett, RI)
This talk will consider the conversion of normal mode functions over local variations in bathymetry. Mode conversions are accomplished through an inner product, which enables the modes comprising the field at each range-dependent step to be written as a function of those in the preceding step. The efficiency of the method results from maintaining a stable number
2:15
2pUW6. An assessment of the effective density fluid model for backscattering from rough poroelastic interfaces. Anthony L. Bonomo, Nicholas P.
Chotiros, and Marcia J. Isakson (Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78713, anthony.bonomo@gmail.com)
The effective density fluid model (EDFM) was developed to approximate the behavior of sediments governed by Biot’s theory of poroelasticity.
Previously, it has been shown that the EDFM predicts reflection coefficients
and backscattering strengths that are in close agreement with those of the
full Biot model for the case of a homogeneous poroelastic half-space. However, it has not yet been determined to what extent the EDFM can be used in
place of the full Biot model for other cases. In this work, the finite element
method is used to compare the backscattering strengths predicted using the
EDFM with the predictions of the full Biot model for three cases: a homogeneous poroelastic half-space with a rough interface, a poroelastic layer
overlying an elastic half-space with both interfaces rough, and an inhomogeneous poroelastic half-space consisting of a shear modulus gradient with a
rough interface. [Work supported by ONR, Ocean Acoustics.]
2:30
2pUW7. Scattering by randomly rough surfaces. I. Analysis of slope
approximations. Patrick J. Welton (Appl. Res. Lab., The Univ. of Texas at
Austin, 1678 Amarelle St., Thousand Oaks, CA 91320-5971, patrickwelton@verizon.net)
Progress in numerical methods now allows scattering in two dimensions
to be computed without resort to approximations. However, scattering by
three-dimensional random surfaces is still beyond the reach of current numerical techniques. Within the restriction of the Kirchhoff approximation
(single scattering) some common approximations used to predict scattering
by randomly rough surfaces will be examined. In this paper, two widely
used approximate treatments for the surface slopes will be evaluated and
compared to the exact slope treatment.
2:45–3:00 Break
3:00
2pUW8. Scattering by randomly rough surfaces. II. Spatial spectra
approximations. Patrick J. Welton (Appl. Res. Lab., The Univ. of Texas at
Austin, 1678 Amarelle St., Thousand Oaks, CA 91320-5971, patrickwelton@verizon.net)
The spatial spectrum describing a randomly rough surface is crucial to
the theoretical analysis of the scattering behavior of the surface. Most of the
models assume that the surface displacements are a zero-mean process. It is
shown that a zero-mean process requires that the spatial spectrum vanish
when the wavenumber is zero. Many of the spatial spectra models used in
the literature do not meet this requirement. The impact of the zero-mean
requirement on scattering predictions will be discussed, and some spectra
models that meet the requirement will be presented.
3:15
2pUW9. Scattering by randomly rough surfaces. III. Phase approximations. Patrick J. Welton (Appl. Res. Lab., The Univ. of Texas at Austin,
1678 Amarelle St., Thousand Oaks, CA 91320-5971, patrickwelton@verizon.net)
In the limit as the roughness vanishes, the solution for the pressure scattered by a rough surface of infinite extent should reduce to the image solution. Approximate image solutions for an infinite, pressure-release plane surface are studied for an omnidirectional source using the 2nd, 3rd, and 4th order phase approximations. The results are compared to the exact image solution to examine the effects of the phase approximations. The result based on the 2nd order (Fresnel phase) approximation reproduces the image solution for all geometries. Surprisingly, the results for the 3rd and 4th order phase approximations are never better than the Fresnel result, and are substantially worse for most geometries. This anomalous behavior is investigated and the cause is found to be the multiple stationary phase points produced by the 3rd and 4th order phase approximations.

3:30

2pUW10. Role of binding energy (edge-to-face contact of mineral platelets) in the acoustical properties of oceanic mud sediments. Allan D. Pierce (Retired, PO Box 339, 399 Quaker Meeting House Rd., East Sandwich, MA 02537, allanpierce@verizon.net) and William L. Siegmann (Mathematical Sci., Rensselaer Polytechnic Inst., Troy, NY)

A theory for mud sediments presumes a card-house model, in which the platelets arrange themselves in a highly porous configuration; electrostatic forces prevent face-to-face contacts. The primary type of contact is where the edge of one platelet touches a face of another. Such contacts are not likewise prevented by electrostatic forces because of the van der Waals (vdW) forces between the molecular structures within the two platelets. A quantitative assessment of such forces is given, taking into account the atomic composition and crystalline structure of the platelets and proceeding from the London theory of interaction between non-polar molecules. Double integration over both platelets leads to a quantitative and simple prediction for the potential energy of the vdW interaction as a function of the edge-to-face separation distance. At moderate nanoscale distances, the resulting force is attractive and is much larger than the electrostatic repulsion force. But at very close (touching) distances, the intermolecular force also becomes repulsive, so that there is a minimum potential energy, which is identified as the binding energy. This finite binding energy, given a finite environmental temperature, leads to some statistical mechanical implications. Among the acoustical implications is a relaxation mechanism for the attenuation of acoustic waves propagating through mud.

3:45

2pUW11. Near bottom self-calibrated measurement of normal reflection coefficients by an integrated deep-towed camera/acoustical system. Linus Chiu, Chau-Chang Wang, Hsin-Hung Chen (Inst. of Undersea Technol., National Sun Yat-sen Univ., No. 70, Lienhai Rd., Kaohsiung 80424, Taiwan, linus@mail.nsysu.edu.tw), Andrea Y. Chang (Asia-Pacific Ocean Res. Ctr., National Sun Yat-sen Univ., Kaohsiung, Taiwan), and Chung-Ray Chu (Inst. of Undersea Technol., National Sun Yat-sen Univ., Kaohsiung, Taiwan)

Normal incidence echo data (bottom reflections) can provide acoustic reflectivity estimates used to predict sediment properties with seabed sediment models. The accuracy of the normal reflection coefficient measurement is therefore critical to the bottom inversion result. A deep-towed camera platform with an acoustical recording system, developed by the Institute of Undersea Technology, National Sun Yat-sen University, Taiwan, is capable of photographically surveying the seafloor at close range while acquiring sound data. Real-time data transfer of both the photography (optics) and the reflection measurement (acoustics) can be implemented at the same site simultaneously. The deep-towed camera was used near the bottom in several experiments in the southwestern sea off Taiwan in 2014 to acquire acoustic LFM signals sent by a surface shipboard source as incident signals, as well as the seafloor reflections, at frequency bands within 4–6 kHz. The error produced by the roll-off of vehicle altitude (propagation loss) can be compensated and thereby eliminated, which makes this a near-bottom self-calibrated measurement of the normal reflection coefficient. The collected reflection coefficients were used to invert the sediment properties with the Effective Density Fluid Model (EDFM), corroborated by coring and camera images. [This work is sponsored by the Ministry of Science and Technology of Taiwan.]
of modes throughout the calculation of the acoustic field. A verification of
the inner product is presented by comparing results from its implementation
in a simple mode model to that of a closed-form solution for the acoustic
wedge environment. A solution to the more general problem of variable bottom slope, which involves a decomposition of bathymetric profiles into a
sequence of wedge environments, will also be discussed. The overall goal of
this research is the development and implementation of a rigorous shallow
water acoustic propagation solution which executes in a time window to
support tactical applications.
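The inner-product mode-conversion step described above can be illustrated numerically. The sketch below is an idealized reconstruction, not the authors' code: it assumes pressure-release-surface, rigid-bottom modes, integrates over the water column shared by two adjacent range steps, and checks that identical depths yield an identity coupling matrix.

```python
import numpy as np

def modes(depth, n_modes, z):
    # Idealized waveguide modes (pressure-release surface, rigid bottom),
    # normalized so the depth integral of phi_m^2 equals 1.
    m = np.arange(1, n_modes + 1)
    return np.sqrt(2.0 / depth) * np.sin((m[None, :] - 0.5) * np.pi * z[:, None] / depth)

def coupling_matrix(d1, d2, n_modes, nz=4001):
    # C[m, n] = integral of phi_m^(1)(z) * phi_n^(2)(z) over the water
    # column common to the two range steps (trapezoidal rule weights).
    z = np.linspace(0.0, min(d1, d2), nz)
    w = np.full(nz, z[1] - z[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    return np.einsum("z,zm,zn->mn", w, modes(d1, n_modes, z), modes(d2, n_modes, z))

# With no depth change the modes are orthonormal, so the coupling matrix
# is (numerically) the identity and the field passes through unchanged.
C = coupling_matrix(100.0, 100.0, 5)
print(np.allclose(C, np.eye(5), atol=1e-4))  # -> True
```

In a stepwise wedge calculation, the amplitude vector at each step would be multiplied by such a matrix to re-expand the field in the next step's modes.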
4:00

2pUW12. Backscattering from an obstacle immersed in an oceanic waveguide covered with ice. Natalie S. Grigorieva (St. Petersburg State Electrotech. Univ., 5 Prof. Popova Str., St. Petersburg 197376, Russian Federation, nsgrig@natalie.spb.su), Daria A. Mikhaylova, and Dmitriy B. Ostrovskiy (JSC “Concern Oceanpribor”, St. Petersburg, Russian Federation)

The presentation describes the theory and implementation issues of modeling the backscattering from an obstacle immersed in a homogeneous, range-independent waveguide covered with ice. The obstacle is assumed to be a spherical rigid or fluid body. The bottom of the waveguide and the ice cover are fluid, attenuating half-spaces. The properties of the ice cover and the scatterer may coincide. To calculate the scattering coefficients of a sphere [R. M. Hackman et al., J. Acoust. Soc. Am. 84, 1813–1825 (1988)], a normal mode evaluation is applied. The number of normal modes forming the backscattered field is determined by the given directivity of the source. The obtained analytical expression for the backscattered field is applied to evaluate its dependence on source frequency, water layer depth, bottom and ice properties, and the distance between the source and the obstacle. Two cases are analyzed and compared: when the upper boundary of the waveguide is sound-soft and when the water layer is covered with ice. Computational results are obtained over a wide frequency range, 8–12 kHz, for the conditions of a shallow water testing area. [Work supported by Russian Ministry of Educ. and Sci., Grant 02.G25.31.0058.]

4:15

2pUW13. Emergence of striation patterns in acoustic signals reflected from dynamic surface waves. Youngmin Choo, Woojae Seong (Seoul National Univ., 1, Gwanak-ro, Gwanak-gu, Seoul 151-744, South Korea, sks655@snu.ac.kr), and Heechun Song (Scripps Inst. of Oceanogr., Univ. of California, San Diego, CA)

A striation pattern can emerge in high-frequency acoustic signals interacting with dynamic surface waves. The striation pattern is analyzed using a ray tracing algorithm for both a sinusoidal and a rough surface. With a source or receiver close to the surface, it is found that parts of the surface on either side of the specular reflection point can be illuminated by rays, resulting in time-varying later arrivals in the channel impulse response that form the striation pattern. In contrast to wave focusing associated with surface wave crests, the striation occurs due to reflection off convex sections around troughs. Simulations with a sinusoidal surface show both upward (advancing) and downward (retreating) striation patterns that depend on the surface-wave traveling direction and the location of the illuminated area. In addition, the striation length is determined mainly by the depth of the source or receiver, whichever is closer in range to the illuminated region. Even with a rough surface, the striation emerges in both directions. However, broadband (7–13 kHz) simulations in shallow water indicate that the longer striation in one direction is likely to be pronounced against a quiet noise background, as observed in at-sea experimental data. The simulation is extended to various surface wave spectra and shows consistent patterns.
TUESDAY EVENING, 28 OCTOBER 2014
8:00 P.M. TO 9:30 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Tuesday, Wednesday, and Thursday
evenings. On Tuesday the meetings will begin at 8:00 p.m., except for Engineering Acoustics which will hold its meeting starting at
4:30 p.m. On Thursday evening, the meetings will begin at 7:30 p.m.
These are working, collegial meetings. Much of the work of the Society is accomplished by actions that originate and are taken in these
meetings including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially invited to
attend these meetings and to participate actively in the discussion.
Committees meeting on Tuesday are as follows:
Engineering Acoustics (4:30 p.m.) – Santa Fe
Acoustical Oceanography – Indiana G
Architectural Acoustics – Marriott 7/8
Physical Acoustics – Indiana C/D
Speech Communication – Marriott 3/4
Structural Acoustics and Vibration – Marriott 1/2
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 7/8, 8:20 A.M. TO 11:45 A.M.
Session 3aAA
Architectural Acoustics and Noise: Design and Performance of Office Workspaces in High Performance
Buildings
Kenneth P. Roy, Chair
Armstrong World Industries, 2500 Columbia Ave., Lancaster, PA 17604
Chair’s Introduction—8:20
Invited Papers
8:25
3aAA1. Architecture and acoustics … form and function—What comes 1st? Kenneth Roy (Armstrong World Industries, 2500
Columbia Ave., Lancaster, PA 17604, kproy@armstrong.com)
When I first studied architecture, it was expected that “form fits function” was pretty much a mantra for design. But is that the case today, or has it ever been where acoustics are concerned? Numerous post-occupancy studies of worker satisfaction with office IEQ indicate that things are not as they should be. And, as a matter of fact, high performance green buildings seem to fare much worse than normal office buildings when acoustic quality is considered. So what are we doing wrong? Maybe the Gensler Workplace Study and other related studies could shed light on what is wrong and how we might think differently about office design. From an acoustician’s viewpoint it is all about “acoustic comfort,” meaning the right amount of intelligibility, privacy, and distraction for the specific work function. Times change and work functions change, so maybe we should be looking for a new mantra … like “function drives form.” We may also want to consider that office space may need to include a “collaboration zone” where teaming takes place, a “focus zone” where concentrated thought can take place, and a “privacy zone” where confidential discussions can take place. Each of these requires different architecture and acoustic performance.
8:45
3aAA2. Acoustics in collaborative open office environments. John J. LoVerde, Samantha Rawlings, and David W. Dong (Veneklasen
Assoc., 1711 16th St., Santa Monica, CA 90404, jloverde@veneklasen.com)
Historically, acoustical design for open office environments focuses on creating workspaces that maximize speech privacy and minimize aural distractions. Hallmark elements of the traditional open office environment include barriers, sound-absorptive surfaces, and
consideration of workspace orientation, size, and background sound level. In recent years, development of “collaborative” office environments has been desired, which creates an open work setting, allowing immediate visual and aural communication between team
members. This results in reducing the size of workstations, lowering barriers, and reducing distance between occupants. Additionally, group meeting areas have become more open, with the popularization of “huddle zones” where small groups hold meetings in an
open space adjacent to workstations rather than within enclosed conference rooms. Historically, this type of office environment would
have poor acoustical function, with limited speech privacy between workstations and minimal attenuation of distracting noises, leading
to occupant complaints. However, these collaborative open office environments function satisfactorily and seem to be preferred by occupants and employers alike. This paper investigates the physical acoustical parameters of collaborative open office spaces.
9:05
3aAA3. Lessons learned in reconciling high performance building design with acoustical comfort. Valerie Smith and Ethan Salter
(Charles M. Salter Assoc., 130 Sutter St., Fl. 5, San Francisco, CA 94104, valerie.smith@cmsalter.com)
In today’s diverse workplace, “the one size fits all” approach to office design is becoming less prevalent. The indoor environmental
quality of the workplace is important to owners and occupants. Architects are developing innovative ways to encourage interaction and
collaboration while also increasing productivity. Many of these ideas are at odds with the traditional acoustical approaches used for
office buildings. Employees are asking for, and designers are incorporating, amenities such as kitchens, game rooms, and collaboration
spaces into offices. Architects and end users are becoming increasingly aware of acoustics in their environment. The U.S. General Services Administration (GSA) research documents, as well as those from other sources, discusses the importance of acoustics in the workplace. Private companies are also creating acoustical standards documents for use in design of new facilities. As more buildings strive to
achieve sustainable benchmarks (whether corporate, common green building rating systems such as LEED, or code-required) the understanding of the need for acoustical items (such as sound isolation, speech privacy, and background noise) also become critical. The challenge is how to reconcile sustainable goals with acoustical features. This presentation discusses several of the approaches that our firm
has recently used in today’s modern office environment.
Contributed Papers
9:25

3aAA4. I can see clearly now, but can I also hear clearly now too? Patricia Scanlon, Richard Ranft, and Stephen Lindsey (Longman Lindsey, 1410 Broadway, Ste. 508, New York, NY 10018, patricias@longmanlindsey.com)

The trend in the corporate workplace has been away from closed plan gypsum board offices to open plan workstations and offices with glass fronts, sliding doors, and clerestories or glass fins in the wall between offices. These designs are often a kit of parts supplied by manufacturers, who offer minimal information on the sound transmission that will be achieved in practice. This results in end users who are often misled into believing they will enjoy a certain level of speech privacy in their offices. Our presentation will discuss the journey from benchmarking the NIC rating of an existing office construction, to reviewing the STC ratings for various glass front options, to evaluating details including the door frame, door seals, and the intersection between office demising walls and front partition systems. We will then present how this information is transferred to the client, allowing them to make an informed decision on the construction requirements for their new space. We will highlight the difference in acoustical environment between what one might expect from reading manufacturers’ literature and what is typically achieved in practice.

9:40

3aAA5. Acoustics in an office building. Sergio Beristain (IMA, ESIME, IPN, P.O. Box 12-1022, Narvarte, Mexico City 03001, Mexico, sberista@hotmail.com)

New building techniques tend to make better use of materials, temperature control, and energy, as well as costs. A building company had to plan the adaptation of a very old building in order to install private offices of different sizes on each floor, taking advantage of a large solid construction to reduce building time, total weight, etc., while at the same time fulfilling new requirements related to comfort, general quality, functionality, and economy. Among several other topics, sound and vibration had to be considered during the process, including noise control and speech privacy, because a combination of private rooms and open plan offices was needed, as well as limiting environmental vibrations. Aspects such as the use of lightweight materials and the installation of many climate conditioning systems also had to be addressed throughout the project in the search for a long-lasting, low-maintenance construction.
9:55–10:10 Break
Invited Papers
10:10
3aAA6. A case history in architectural acoustics: Security, acoustics, the protection of personally identifiable information (PII),
and accessibility for the disabled. Donna A. Ellis (The Div. of Architecture and Eng., The Social Security Administration, 415 Riggs
Ave., Severna Park, MD 21146, Donna.a.ellis@ssa.gov)
This paper discusses the re-design of a field office to enhance the protection of Personally Identifiable Information (PII), physical security, and accessibility for the disabled at the Social Security Administration (SSA) field office in Roxbury, MA. The study and its results can be used at federal, civil, and private facilities where transaction-window type public interviews occur. To protect the public and its staff, the SSA has mandated heightened security requirements in all field offices. The increased security measures include installation of barrier walls to provide separation between the public and private zones, maximized lines of sight, and increased speech privacy for the protection of PII. This paper discusses the Speech Transmission Index (STI) measurement method used to determine the post-construction intelligibility of speech through the transaction window, the acoustical design of the windows and their surrounding area, and how appropriate acoustic design helps safeguard personal and sensitive information so that it may be securely communicated verbally, while also improving access for the disabled community, especially the hearing impaired.
10:30
3aAA7. High performance medical clinics: Evaluation of speech privacy in open-plan offices and examination rooms. Steve Pettyjohn (The Acoust. & Vib. Group, Inc., 5765 9th Ave., Sacramento, CA, spettyjohn@acousticsandvibration.com)

Speech privacy evaluations of open plan doctors’ offices and examination rooms were done at two clinics, one in Las Vegas and the other in El Dorado Hills. The buildings were designed to put doctors closer to patients and for cost savings. ASTM E1130, ASTM E336, and NRC guidelines were used to evaluate these spaces. For E1130, sound is produced at the source location with calibrated speakers, and measurements are then made at receiver positions. The speaker faces the receiver. Only open plan furniture separated the source from the receiver. The examination rooms used partial-height walls with a single layer of gypsum board on each face. Standard doors without seals were used. CAC 40 rated ceiling tiles were installed. The cubicle furniture included sound absorption and was 42 to 60 in. tall. The Privacy Index was quite low, ranging from 30 to 66%. The NIC rating of the walls without doors ranged from 38 to 39, giving PI ratings of 83 to 84%. With a door, the NIC rating was 30 to 31, with PI ratings of 72%. These results do not meet the requirements of the Facility Guidelines Institute or ANSI S12 Working Group 44.
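The Privacy Index figures quoted above are defined from the Articulation Index: PI = (1 - AI) × 100%. A simplified sketch of that relationship follows; equal band weights are an assumption made here for illustration (ASTM E1130 specifies frequency-dependent one-third-octave weights), and the function names are hypothetical:

```python
def articulation_index(band_snr_db, weights=None):
    # Each band's speech-to-noise ratio is shifted by +12 dB, clipped to
    # the 0-30 dB contribution range, scaled to 0-1, and combined as a
    # weighted sum (weights sum to 1). Equal weights are a simplification.
    if weights is None:
        weights = [1.0 / len(band_snr_db)] * len(band_snr_db)
    return sum(w * min(max((snr + 12.0) / 30.0, 0.0), 1.0)
               for snr, w in zip(band_snr_db, weights))

def privacy_index(band_snr_db, weights=None):
    # ASTM E1130: PI = (1 - AI) * 100%.
    return (1.0 - articulation_index(band_snr_db, weights)) * 100.0

# Speech 15 dB below the background in every band: no band contributes,
# so AI = 0 and privacy is complete.
print(privacy_index([-15.0] * 15))  # -> 100.0
```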
10:50
3aAA8. Exploring the impacts of consistency in sound masking. Niklas Moeller and Ric Doedens (K.R. Moeller Assoc. Ltd., 1050
Pachino Court, Burlington, ON L7L 6B9, Canada, rdoedens@logison.com)
Electronic sound masking systems control the noise side of the signal-to-noise ratio in interior environments. Their effectiveness
relates directly to how consistently the specified masking curve is achieved. Current system specifications generally allow a relatively
wide range in performance, in large part reflecting expectations set by legacy technologies. This session presents a case study of sound
masking measurements and speech intelligibility calculations conducted in office spaces. These are used as a foundation to discuss the
impacts of local inconsistencies in the masking sound and to begin a discussion of appropriate performance requirements for masking
systems.
11:10
3aAA9. Evaluating the effect of prominent tones in noise on human task performance. Joonhee Lee and Lily M. Wang (Durham
School of Architectural Eng. and Construction, Univ. of Nebraska - Lincoln, 1110 S. 67th St., Omaha, NE 68182-0816, joonhee.lee@
huskers.unl.edu)
Current noise guidelines for the acoustic design of offices generally specify limits on loudness and sometimes spectral shape, but do
not typically address the presence of tones in noise as may be generated by building services equipment. Numerous previous studies
indicate that the presence of prominent tones is a significant source of deteriorating indoor environmental quality. Results on how prominent tones in background noise affect human task performance, though, are less conclusive. This paper presents results from recent studies at Nebraska on how tones in noise may influence task performance in a controlled office-like environment. Participants were asked
to complete digit span tasks as a measure of working memory capacity, while exposed to assorted noise signals with tones at varying frequencies and tonality levels. Data on the percent correct and reaction time in which participants responded to the task are analyzed statistically. The results can provide guidance for setting limits on the tonality levels in offices and other spaces in which building users must
be task-productive.
11:30
3aAA10. Optimal design of multi-layer microperforated sound absorbers. Nicholas Kim, Yutong Xue, and J. S. Bolton (Ray W. Herrick Labs.,
School of Mech. Eng., Purdue Univ., 177 S. Russell St., West Lafayette, IN,
kim505@purdue.edu)
Microperforated polymer films can offer an effective solution when it
is desired to design fiber-free sound absorption systems. The acoustic performance of the film is determined by hole size and shape, by the surface
porosity, by the mass per unit area of the film, and by the depth of the backing air layer. Single sheets can provide good absorption over a one- to two-octave range, but if absorption over a broader range is desired, it is necessary to use multilayer treatments. Here the design of a multilayer sound
absorption system is described, where the film is considered to have a finite
mass per unit area and also to have conical perforations. It will be shown
that it is possible to design compact absorbers that yield good performance
over the whole speech interference range. In the course of the optimization
it has been found that there is a tradeoff between cone angle and surface porosity. The design of lightweight, multilayer functional absorbers will also
be described, and it will be shown, for example, that it is possible to design
systems that simultaneously possess good sound absorption and barrier
characteristics.
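The single-sheet behavior described above can be sketched with Maa's classical model for a microperforated panel with cylindrical holes and a rigid-walled backing cavity; the conical-perforation, finite-mass formulation discussed in the talk is more involved. The panel parameters below are illustrative assumptions only.

```python
import numpy as np

RHO, C, ETA = 1.21, 343.0, 1.81e-5  # air density (kg/m^3), sound speed (m/s), viscosity (Pa s)

def mpp_absorption(f, d=0.4e-3, t=0.4e-3, sigma=0.01, D=0.05):
    """Normal-incidence absorption coefficient of a microperforated panel
    (hole diameter d, thickness t, porosity sigma) backed by an air cavity of
    depth D, following Maa's model for cylindrical perforations."""
    w = 2 * np.pi * f
    k = d * np.sqrt(w * RHO / (4 * ETA))              # perforate constant
    r = (32 * ETA * t) / (sigma * RHO * C * d**2) * (
        np.sqrt(1 + k**2 / 32) + np.sqrt(2) / 32 * k * d / t)   # normalized resistance
    m = (w * t) / (sigma * C) * (1 + 1 / np.sqrt(9 + k**2 / 2) + 0.85 * d / t)
    x = m - 1 / np.tan(w * D / C)                      # reactance incl. cavity
    return 4 * r / ((1 + r)**2 + x**2)

f = np.linspace(100, 3000, 300)
alpha = mpp_absorption(f)
print(f"peak alpha {alpha.max():.2f} near {f[np.argmax(alpha)]:.0f} Hz")
```

For these parameters the absorption peaks near the panel-cavity resonance and falls off over roughly an octave to either side, which is the single-sheet bandwidth limitation that motivates multilayer treatments.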
168th Meeting: Acoustical Society of America
2183
3a WED. AM
WEDNESDAY MORNING, 29 OCTOBER 2014
LINCOLN, 8:25 A.M. TO 11:45 A.M.
Session 3aAB
Animal Bioacoustics: Predator–Prey Relationships
Simone Baumann-Pickering, Cochair
Scripps Institution of Oceanography, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093
Ana Sirovic, Cochair
Scripps Institution of Oceanography, 9500 Gilman Drive MC 0205, La Jolla, CA 92093-0205
Chair’s Introduction—8:25
Invited Papers
8:30
3aAB1. Breaking the acoustical code of ants: The social parasite’s pathway. Francesca Barbero, Luca P. Casacci, Emilio Balletto,
and Simona Bonelli (Life Sci. and Systems Biology, Univ. of Turin, Via Accademia Albertina 13, Turin 10123, Italy, francesca.barbero@unito.it)
Ant colonies represent a well-protected and stable environment (temperature, humidity) where essential resources are stored (e.g.,
the ants themselves, their brood, stored food). To maintain their social organization, ants use a variety of communication channels, such
as the exchange of chemical and tactile signals, as well as caste specific stridulations (Casacci et al. 2013 Current Biology 23, 323–327).
By intercepting and manipulating their host’s communication code, about 10,000 arthropod species live as parasites and exploit ant
nests. Here, we review results of our studies on Maculinea butterflies, a group of social parasites which mimic the stridulations produced
by their host ants to promote (i) their retrieval into the colony (adoption: Sala et al. 2014, PLoS ONE 9(4), e94341), (ii) their survival
inside the nest/brood chambers (integration: Barbero et al. 2009, J. Exp. Biol. 212, 4084–4090), or (iii) their achievement of the highest
possible social status within the colony’s hierarchy (full integration: Barbero et al. 2009, Science 323, 782–785). We strongly believe
that the study of acoustic communication in ants will bring about significant advances in our understanding of the complex mechanisms
underlying the origin, evolution, and stabilization of many host–parasite relationships.
8:50
3aAB2. How nestling birds acoustically monitor parents and predators. Andrew G. Horn and Martha L. Leonard (Biology, Dalhousie Univ., Life Sci. Ctr., 1355 Oxford St., PO Box 15000, Halifax, NS B3H 4R2, Canada, aghorn@dal.ca)
The likelihood that nestling songbirds survive until leaving the nest depends largely on how well they are fed by parents and how well
they escape detection by predators. Both factors in turn are affected by the nestling’s begging display, a combination of gaping, posturing,
and calling that stimulates feedings from parents but can also attract nest predators. If nestlings are to be fed without being eaten themselves, they must beg readily to parents but avoid begging when predators are near the nest. Here we describe experiments to determine
how nestling tree swallows, Tachycineta bicolor, use acoustic cues to detect the arrival of parents with food and to monitor the presence of
predators, in order to beg optimally relative to their need for food. We also discuss how their assessments vary in relation to two constraints:
their own poor perceptual abilities and ambient background noise. Together with similar work on other species, our research suggests that
acoustically conveyed information on predation risk has been an important selective force on parent-offspring communication. More generally, how birds acoustically monitor their environment to avoid predation is an increasingly productive area of research.
9:10
3aAB3. Acoustic preferences of frog-biting midges in response to intra- and inter-specific signal variation. Ximena Bernal (Dept.
of Biological Sci., Purdue Univ., 915 W. State St., West Lafayette, IN 47906, xbernal@purdue.edu)
Eavesdropping predators and parasites intercept mating signals emitted by their prey and hosts, gaining information that increases the
effectiveness of their attack. This kind of interspecific eavesdropping is widespread across taxonomic groups and sensory modalities. In
this study, sound traps and a sound imaging device system were used to investigate the acoustic preferences of frog-biting midges, Corethrella spp. (Corethrellidae). In these midges, females use the advertisement call produced by male frogs to localize them and obtain
a blood meal. As in mosquitoes (Culicidae), a closely related family, female midges require blood from their host for egg production.
The acoustic preferences of the midges were examined in the wild in response to intra- and interspecific call variation. When responding
to call variation in túngara frogs (Engystomops pustulosus), frogs producing vocalizations with higher call complexity and call rate were
preferentially attacked by the midges. Túngara frog calls were also preferred by frog-biting midges over the calls produced by a sympatric frog of similar size, the hourglass frog (Dendropsophus ebraccatus). The role of call site selection in multi-species aggregations is
explored in relation to the responses of frog-biting midges. In addition, the use of acoustic traps and sound imaging devices to investigate
eavesdropper–victim interactions is discussed.
9:30
3aAB4. Foraging among acoustic clutter and competition: Vocal behavior of paired big brown bats. Michaela Warnecke (Psychol. and Brain Sci., The Johns Hopkins Univ., 3400 N Charles St., Baltimore, MD 21218, michaela.warnecke@jhu.edu), Chen Chiu, Wei Xian (Psychol. and Brain Sci., The Johns Hopkins Univ., Baltimore, MD), Clement Cechetto (AGROSUP, Inst. Nationale Superieur des Sci. Agronomique, Dijon, France), and Cynthia F. Moss (Psychol. and Brain Sci., The Johns Hopkins Univ., Baltimore, MD)
In their natural environment, big brown bats forage for small insects in open spaces, as well as in the presence of acoustic clutter. While searching and hunting for prey, these bats experience sonar interference not only from densely cluttered environments, but also from the calls of other conspecifics foraging close by. Previous work has shown that when two bats fly in a relatively open environment, one of them may go silent for extended periods of time (Chiu et al., 2008), which may serve to minimize such sonar interference between conspecifics. Additionally, big brown bats have been shown to adjust frequency characteristics of their vocalizations to avoid acoustic interference from conspecifics (Chiu et al., 2009). It remains an open question, however, in what way environmental clutter and the presence of conspecifics influence the bat's call behavior. By recording multichannel audio and video data of bats engaged in insect capture in an open and a cluttered space, we quantified the bats' vocal behavior. Bats were flown individually and in pairs in an open and a cluttered room, and the results of this study shed light on the strategies animals employ to negotiate a complex and dynamic environment.
9:45
3aAB5. Sensory escape from a predator–prey arms race: Low amplitude biosonar beats moth hearing. Holger R. Goerlitz (Acoust. and Functional Ecology, Max Planck Inst. for Ornithology, Eberhard-Gwinner-Str., Seewiesen 82319, Germany, hgoerlitz@orn.mpg.de), Hannah M. ter Hofstede (Biological Sci., Dartmouth College, Hanover, NH), Matt Zeale, Gareth Jones, and Marc W. Holderied (School of Biological Sci., Univ. of Bristol, Bristol, United Kingdom)
Ultrasound-sensitive ears evolved in many nocturnal insects, including some moths, to detect bat echolocation calls and evade capture. Although there is evidence that some bats emit echolocation calls that are inconspicuous to eared moths, it is difficult to determine whether this was an adaptation to moth hearing or originally evolved for a different purpose. Here we present the first example of an echolocation counterstrategy to overcome prey hearing at the cost of reduced detection distance, providing an example of a predator outcompeting its prey despite the life–dinner principle. Aerial-hawking bats generally emit high-amplitude echolocation calls to maximize detection range. Using comparative acoustic flight-path tracking of free-flying bats, we show that the barbastelle, Barbastella barbastellus, emits calls that are 10 to 100 times lower in amplitude than those of other aerial-hawking bats. Model calculations demonstrate that only bats emitting such low-amplitude calls hear moth echoes before their calls are conspicuous to moths. Using moth neurophysiology in the field and fecal DNA analysis, we confirm that the barbastelle remains undetected by moths until close and preys mainly on eared moths. This adaptive stealth echolocation allows the barbastelle to access food resources that are difficult for high-intensity bats to catch.
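The model calculations mentioned above can be illustrated with a simple sonar-equation comparison: the moth detects the bat's call after one-way spreading, while the bat detects the moth's echo after two-way spreading plus a target strength. All numerical values below (thresholds, target strength, absorption, source levels) are rough illustrative assumptions, not parameters from the study.

```python
import math

ABSORB = 1.0  # atmospheric absorption, dB/m (rough value near 30-40 kHz)

def max_range(level_budget, spreading):
    """Largest r with level_budget - spreading*(20*log10(r) + ABSORB*r) >= 0,
    found by bisection (spreading: 1 = one-way, 2 = two-way)."""
    f = lambda r: level_budget - spreading * (20 * math.log10(r) + ABSORB * r)
    lo, hi = 0.01, 1000.0
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return lo

def detection_ranges(source_level, moth_thresh=70.0, bat_thresh=0.0, target_strength=-30.0):
    """(bat's echo-detection range, moth's call-detection range) in meters."""
    bat = max_range(source_level + target_strength - bat_thresh, spreading=2)
    moth = max_range(source_level - moth_thresh, spreading=1)
    return bat, moth

for sl in (94, 120):  # "stealth" vs. typical aerial-hawking source level, dB SPL at 1 m
    bat, moth = detection_ranges(sl)
    print(sl, round(bat, 1), round(moth, 1))
```

With these toy numbers the outcome flips between the two source levels: at the high source level the moth detects the bat well before the bat hears an echo, whereas at the low source level the bat hears the echo before its call is conspicuous to the moth, which is the qualitative point of the abstract.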
10:00–10:20 Break
Invited Papers
10:20
3aAB6. Cues, creaks, and decoys: Using underwater acoustics to study sperm whale interactions with the Alaskan black cod
longline fishery. Aaron Thode (SIO, UCSD, 9500 Gilman Dr, MC 0238, La Jolla, CA 92093-0238, athode@ucsd.edu), Janice Straley
(Univ. of Alaska, Southeast, Sitka, AK), Lauren Wild (Sitka Sound Sci. Ctr., Sitka, AK), Jit Sarkar (SIO, UCSD, La Jolla, CA), Victoria
O’Connell (Sitka Sound Sci. Ctr., Sitka, AK), and Dan Falvey (Alaska Longline Fisherman’s Assoc., Sitka, AK)
For decades off SE Alaska, sperm whales have located longlining fishing vessels and removed, or “depredated,” black cod from the
hauls. In 2004, the Southeast Alaska Sperm Whale Avoidance Project (SEASWAP) began deploying passive acoustic recorders on longline fishing gear in order to identify acoustic cues that may alert whales to fishing activity. It was found that when hauling, longlining
vessels generate distinctive cavitation sounds, which served to attract whales to the haul site. The combined use of underwater recorders
and video cameras also confirmed that sperm whales generated “creak/buzz” sounds while depredating, even under good visual conditions. By deploying recorders with federal sablefish surveys over two years, a high correlation was found between sperm whale creak
rate detections and visual evidence for depredation. Thus passive acoustics is now being used as a low-cost, remote sensing method to
quantify depredation activity in the presence and absence of various deterrents. Two recent developments will be discussed in detail: the
development and field testing of acoustic “decoys” as a potential means of attracting animals away from locations of actual fishing activity, and the use of “TadPro” cameras to provide combined visual and acoustic observations of longline deployments. [Work supported
by NPRB, NOAA, and BBC.]
10:40
3aAB7. Follow the food: Effects of fish and zooplankton on the behavioral ecology of baleen whales. Joseph Warren (Stony Brook
Univ., 239 Montauk Hwy, Southampton, NY 11968, joe.warren@stonybrook.edu), Susan E. Parks (Dept. of Biology, Syracuse Univ.,
Syracuse, NY), Heidi Pearson (Univ. of Alaska, Southeast, Juneau, AK), and Kylie Owen (Univ. of Queensland, Gatton, QLD,
Australia)
Active acoustics were used to collect information on the type, distribution, and abundance of baleen whale prey species such as zooplankton and fish at fine spatial (sub-meter) and temporal (sub-minute) scales. Unlike other prey measurement methods, scientific
echosounder surveys provide prey data at a resolution similar to what a predator must detect in order to forage efficiently. Data from
several studies around the world show that differences in prey type or distribution result in distinctly different baleen whale foraging
behaviors. Humpback whales in coastal waters of Australia altered their foraging pattern depending on the presence and abundance of
baitfish or krill. In Southeast Alaska, humpback whales foraged cooperatively or independently depending on prey type and abundance.
Humpback whales in the Northwest Atlantic with multiple prey species available foraged on an energetically costly (and presumably
rewarding) species. The vertical and horizontal movements of North Atlantic right whales in Cape Cod Bay were strongly correlated
with very dense aggregations of copepods. In all of these cases, active acoustics were used to estimate numerical densities of the prey,
which provides quantitative information about the energy resource available to foraging animals.
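The final step described above, converting acoustic backscatter into numerical density, follows the standard echo-integration relation of fisheries acoustics: divide the linear volume backscattering coefficient by the backscattering cross-section of one individual. The target strength value below is an illustrative placeholder, not one from the studies described.

```python
def number_density(sv_db, ts_db):
    """Animals per cubic meter from volume backscattering strength Sv (dB re 1 m^-1)
    and the target strength TS (dB re 1 m^2) of a single individual, assuming
    one dominant scatterer type in the sampled volume."""
    return 10 ** (sv_db / 10) / 10 ** (ts_db / 10)

# Example: a krill-like layer at Sv = -70 dB with individual TS = -85 dB
print(number_density(-70.0, -85.0))  # about 31.6 animals per m^3
```

Multiplying such densities by an energy content per individual gives the "energy resource available to foraging animals" that the abstract refers to.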
Contributed Papers
11:00
3aAB8. Association of low oxygen waters with the depths of acoustic
scattering layers in the Gulf of California and implications for the success of Humboldt squid (Dosidicus gigas). David Cade (BioSci., Stanford
Univ., 120 Oceanview Boulevard, Pacific Grove, CA 93950, davecade@
stanford.edu) and Kelly J. Benoit-Bird (CEOAS, Oregon State Univ., Corvallis, OR)
The ecology in the Gulf of California has undergone dramatic changes
over the past century as Humboldt squid (Dosidicus gigas) have become a
dominant predator in the region. The vertical overlap between acoustic scattering layers, which consist of small pelagic organisms that make up the
bulk of D. gigas prey, and regions of severe hypoxia have led to a hypothesis linking the shoaling of oxygen minimum zones over the past few decades
to compression of acoustic scattering layers, which in turn would promote
the success of D. gigas. We tested this hypothesis by looking for links
between specific oxygen values and acoustic scattering layer boundaries.
We applied an automatic layer detection algorithm to shipboard
echosounder data from four cruises in the Gulf of California. We then used
CTD data and a combination of logistic modeling, contingency tables, and
linear correlations with parameter isolines to determine which parameters
had the largest effects on scattering layer boundaries. Although results were
inconsistent, we found scattering layer depths to be largely independent of
the oxygen content in the water column, and the recent success of D. gigas
in the Gulf of California is therefore not likely to be attributable to the
effects of shoaling oxygen minimum zones on acoustic scattering layers.
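The automatic layer detection step described above can be reduced to a minimal sketch: threshold a volume-backscatter profile and keep contiguous depth runs. The study's actual algorithm is not specified here and is certainly more sophisticated; the threshold, bin sizes, and synthetic profile below are invented for illustration.

```python
import numpy as np

def detect_layers(sv_db, depths, threshold_db=-75.0, min_bins=3):
    """Return (top, bottom) depth pairs for contiguous runs where Sv exceeds
    the threshold over at least min_bins consecutive depth bins."""
    above = sv_db > threshold_db
    layers, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_bins:
                layers.append((float(depths[start]), float(depths[i - 1])))
            start = None
    if start is not None and len(above) - start >= min_bins:
        layers.append((float(depths[start]), float(depths[-1])))
    return layers

# Synthetic profile: -90 dB background with one layer between 200 and 300 m
depths = np.arange(0, 500, 10.0)
sv = np.full(depths.shape, -90.0)
sv[(depths >= 200) & (depths <= 300)] = -65.0
print(detect_layers(sv, depths))  # [(200.0, 300.0)]
```

Layer boundaries extracted this way can then be compared against CTD-derived oxygen isolines, as in the analysis the abstract describes.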
11:15
3aAB9. Understanding the relationship between ice, primary producers,
and consumers in the Bering Sea. Jennifer L. Miksis-Olds (Appl. Res.
Lab, Penn State, PO Box 30, Mailstop 3510D, State College, PA 16804,
jlm91@psu.edu) and Beth A. Stauffer (Office of Res. and Development,
US Environ. Protection Agency, Washington, DC)
Technology has progressed to a level that allows investigations of
trophic-level interactions over time scales of months to years that were
previously intractable. A combination of active and passive acoustic technology has been integrated into sub-surface moorings on the Eastern Bering
Sea shelf, and seasonal transition measurements were examined to better
understand how interannual variability of hydrographic conditions, phytoplankton biomass, and acoustically derived consumer abundance and community structure are related. Ocean conditions were significantly different in
2012 compared to relatively similar conditions in 2009, 2010, and 2011.
Differences were largely associated with variations in sea ice extent, thickness, retreat timing, and water column stratification. There was a high
degree of variability in the relationships between different classes of consumers and hydrographic condition, and evidence for intra-consumer interactions and trade-offs between different size classes was apparent.
Phytoplankton blooms in each year stimulated different components of the
consumer population. Acoustic technology now provides the opportunity to
explore the ecosystem dynamics in a remote, ice-covered region that was
previously limited to ship-board measurements during ice-free periods. The
new knowledge we are gaining from remote, long-term observations is
resulting in a re-examination of previously proposed ecosystem theories
related to the Bering Sea.
11:30
3aAB10. Temporal and spatial patterns of marine soundscape in a
coastal shallow water environment. Shane Guan (Office of Protected
Resources, National Marine Fisheries Service, 1315 East-West Hwy.,
SSMC-3, Ste. 13728, Silver Spring, MD 20910, shane.guan@noaa.gov),
Tzu-Hao Lin (Inst. of Ecology & Evolutionary Biology, National Taiwan
Univ., Taipei, Taiwan), Joseph F. Vignola (Dept. of Mech. Eng., The Catholic Univ. of America, Washington, DC), Lien-Siang Chou (Inst. of Ecology
& Evolutionary Biology, National Taiwan Univ., Taipei, Taiwan), and John
A. Judge (Dept. of Mech. Eng., The Catholic Univ. of America, Washington, DC)
Underwater acoustic recordings were made at two coastal shallow water
locations, Yunlin (YL) and Waishanding (WS), off Taiwan between June
and December 2012. The purpose of the study was to establish soundscape
baselines and characterize the acoustic habitat of the critically endangered
Eastern Taiwan Strait Chinese white dolphin by investigating: (1) major
contributing sources that dominate the soundscape, (2) temporal, spatial,
and spectral patterns of the soundscape, and (3) correlations of known sources and their potential effects on dolphins. Results show that choruses from
croaker fish (family Sciaenidae) were the dominant sound sources in the 1.2–
2.4 kHz frequency band at both locations at night, and that noise from container ships in the 150–300 Hz frequency band defines the relatively higher
broadband sound levels at YL. In addition, extreme temporal variations in
the 150–300 Hz frequency band were observed at WS, which were shown to
be linked to the tidal cycle and current velocity. Furthermore, croaker choruses are found to be most intense around the time of high tide at night, but
not so around the time of low tide. These results illustrate interrelationships
among different biotic, abiotic, and anthropogenic environmental elements
that shape the unique fine-scale soundscape in a coastal environment.
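Band-limited levels like the 1.2–2.4 kHz chorus band and the 150–300 Hz shipping band discussed above can be computed by integrating spectral power over each band. The sketch below uses a plain windowed periodogram and synthetic data; calibration to absolute underwater levels (dB re 1 µPa) is omitted, and the signal is invented for illustration.

```python
import numpy as np

def band_level_db(x, fs, f_lo, f_hi):
    """Relative level (dB) of signal power in [f_lo, f_hi] Hz, via a one-sided periodogram."""
    spec = np.fft.rfft(x * np.hanning(len(x)))
    psd = np.abs(spec) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 10 * np.log10(psd[band].sum())

# Synthetic "chorus": a 1.8 kHz tone buried in weak broadband noise
fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 1800 * t) + 0.01 * rng.standard_normal(fs)

chorus = band_level_db(x, fs, 1200, 2400)    # croaker chorus band
shipping = band_level_db(x, fs, 150, 300)    # container-ship band
print(chorus - shipping)  # the tone band dominates by tens of dB
```

Tracking such band levels over time against tide and current records is, in outline, how the temporal patterns reported in the abstract can be quantified.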
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA E, 8:00 A.M. TO 11:55 A.M.
Session 3aAO
Acoustical Oceanography, Underwater Acoustics, and Education in Acoustics: Education in Acoustical
Oceanography and Underwater Acoustics
Andone C. Lavery, Cochair
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, 98 Water Street, MS 11, Bigelow 211,
Woods Hole, MA 02536
Preston S. Wilson, Cochair
Mech. Eng., Univ. of Texas at Austin, 1 University Station, C2200, Austin, TX 78712-0292
Arthur B. Baggeroer, Cochair
Mechanical and Electrical Engineering, Massachusetts Inst. of Technology, Room 5-206, MIT, Cambridge, MA 02139
Chair’s Introduction—8:00
Invited Papers
8:05
3aAO1. Ocean acoustics education—A perspective from 1970 to the present. Arthur B. Baggeroer (Mech. and Elec. Eng., Massachusetts Inst. of Technol., Rm. 5-206, MIT, Cambridge, MA 02139, abb@boreas.mit.edu)
A very senior ocean acoustician is credited with a quote to the effect that "one does not start in ocean acoustics, but rather ends up in
it." This may well summarize the issues confronting education in ocean acoustics. Acoustics was once part of the curriculum in physics
departments, whereas now it is spread across many departments. Acoustics, and perhaps ocean acoustics, is most often found in mechanical or ocean engineering departments, but seldom in physics. Almost all our pioneers from the WWII era were educated in physics, and
some more recently in engineering departments. Yet only a few places have maintained in-depth curricula in ocean acoustics; most education
was done by one-on-one mentoring. Now the number of students is diminishing; whether this is because of perceptions of employment opportunities or because of the number of available assistantships is uncertain. ONR is the major driver in ocean acoustics for supporting graduate students.
The concern about this is hardly new. Twenty-plus years ago it was codified in the so-called "Lackie Report," which established ocean
acoustics as "Navy unique" and gave it priority as a National Naval Responsibility (NNR). With fewer students enrolled in ocean acoustics,
university administrators are balking at sponsoring faculty slots, so very significant issues are arising for education in
ocean acoustics. Perhaps reverting to the original model of fundamental training in a related discipline followed by on-the-job training
may be the only option for the future.
8:25
3aAO2. Joint graduate education program: Massachusetts Institute of Technology and Woods Hole Oceanographic Institution.
Timothy K. Stanton (Dept. Appl. Ocean. Phys. & Eng., Woods Hole Oceanographic Inst., Woods Hole, MA 02543, tstanton@whoi.edu)
The 40 + year history of this program will be presented, with a focus on the underwater acoustics and signal processing component.
Trends in enrollment will be summarized.
8:35
3aAO3. Graduate studies in underwater acoustics at the University of Washington. Peter H. Dahl, Robert I. Odom, and Jeffrey A.
Simmen (Appl. Phys. Lab. and Mech. Eng. Dept., Univ. of Washington, Mech. Eng., 1013 NE 40th St., Seattle, WA 98105, dahl@apl.
washington.edu)
The University of Washington, through its Departments of Mechanical and Electrical Engineering (College of Engineering), its Department of Earth and Space Sciences and School of Oceanography (College of the Environment), and by way of its Applied Physics Laboratory, which links all four of these academic units, offers a diverse graduate education experience in underwater acoustics. A summary
is provided of the research infrastructure, made available primarily through the Applied Physics Laboratory, which allows for ocean-going
and arctic field opportunities, and of the course options offered through the four units that provide the multi-disciplinary background essential
for graduate training in the field of underwater acoustics. Students in underwater acoustics can also mingle in, or extend their interests
into, medical acoustics research. Degrees granted include both the M.S. and the Ph.D.
8:45
3aAO4. Acoustical Oceanography and Underwater Acoustics; their role in the Pennsylvania State University Graduate Program
in Acoustics. David Bradley (Penn State Univ., PO Box 30, State College, PA 16870, dlb25@psu.edu) and Victor Sparrow (Penn State
Univ., University Park, PA)
The Pennsylvania State University Graduate Program in Acoustics has a long and successful history in Acoustics Education. A brief
history together with the current status of the program will be discussed. An important aspect of the program has been the strong role of
the Applied Research Laboratory, both in support of the program as well as for the graduate students enrolled. Presentation includes
details of course content, variability to fit student career goals and program structure, including resident and distance education opportunities. The future of the program at Penn State will also be addressed.
8:55
3aAO5. Ocean acoustics at the University of Victoria. Ross Chapman (School of Earth and Ocean Sci., Univ. of Victoria, 3800 Finnerty Rd., Victoria, BC V8P5C2, Canada, chapman@uvic.ca)
This paper describes the academic program in Ocean Acoustics and Acoustical Oceanography at the University of Victoria in Canada. The program was established when a Research Chair in Ocean Acoustics consisting of two faculty members was funded in 1995 by
the Canadian Natural Sciences and Engineering Research Council (NSERC). The Research Chair graduate program offered two courses
in Ocean Acoustics, and courses in Time Series Analysis and Inverse Methods. Funding for students was obtained entirely through partnership research programs with Canadian marine industry, the Department of National Defence in Canada and the Office of Naval
Research. The program has graduated around 30 M.Sc. and Ph.D. students to date, about half of whom were Canadians.
Notably, all the students obtained positions in marine industry, government, or academia after their degrees. The undergraduate program
consisted of one course in Acoustical Oceanography at the senior level (3rd year) that was designed to appeal to students in physics,
biology, and geology. The course attracted about 30 students each time, primarily from biology. The paper concludes with perspectives
on difficulties in operating an academic program with a low critical mass of faculty and in isolation from colleagues in the research
field.
9:05
3aAO6. Ocean acoustics away from the ocean. David R. Dowling (Mech. Eng., Univ. of Michigan, 1231 Beal Ave., Ann Arbor, MI
48109-2133, drd@umich.edu)
Acoustics represents a small portion of the overall educational effort in engineering and science, and ocean acoustics is one of many
topic areas in the overall realm of acoustics. Thus, maintaining teaching and research efforts involving ocean acoustics is challenging
but not impossible, even at a university that is more than 500 miles from the ocean. This presentation describes the author’s two decades
of experience in ocean acoustics education and research. Success is possible by first attracting students to acoustics, and then helping
them wade into a research topic in ocean acoustics that overlaps with their curiosity, ambition, or both. The first step occurs almost naturally since college students’ experience with their ears and voice provides intuition and motivation that allows them to readily grasp
acoustic concepts and to persevere through mathematical courses. The second step is typically no more challenging since ocean acoustics is a leading and fascinating research area that provides stable careers. Plus, there are even some advantages to studying ocean acoustics away from the ocean. For example, matched-field processing, a common ocean acoustic remote sensing technique, appears almost
magical to manufacturing or automotive engineers when applied to assembly line and safety problems involving airborne sound.
9:15
3aAO7. Office of Naval Research special research awards in ocean acoustics. Robert H. Headrick (Code 32, Office of Naval Res.,
875 North Randolph St., Arlington, VA 22203, bob.headrick@navy.mil)
The Ocean Acoustics Team of the Office of Naval Research manages the Special Research Awards that support graduate traineeship,
postdoctoral fellowship, and entry-level faculty awards in ocean acoustics. The graduate traineeship awards provide for study and
research leading to a doctoral degree and are given to individuals who have demonstrated a special aptitude and desire for advanced
training in ocean acoustics or the related disciplines of undersea signal processing, marine structural acoustics and transducer materials
science. The postdoctoral fellowship and entry-level faculty awards are similarly targeted. These programs were started as a component
of the National Naval Responsibility in Ocean Acoustics to help ensure a stable pipeline of talented individuals would be available to
support the needs of the Navy in the future. They represent only a fraction of the students, postdocs, and early-career faculty researchers that
are actively involved in the basic research supported by the Ocean Acoustics Program. A better understanding of the true size of the pipeline, and of the capacity of the broader acoustics-related research and development community to absorb the output, is needed to maintain
a balance in priorities for the overall Ocean Acoustics Program.
9:25
3aAO8. Underwater acoustics education at the University of Texas at Austin. Marcia J. Isakson (Appl. Res. Labs., The Univ. of
Texas at Austin, 10000 Burnet Rd., Austin, TX 78713, misakson@arlut.utexas.edu), Mark F. Hamilton (Mech. Eng. Dept. and Appl.
Res. Labs., The Univ. of Texas at Austin, Austin, TX), Clark S. Penrod, Frederick M. Pestorius (Appl. Res. Labs., The Univ. of Texas at
Austin, Austin, TX), and Preston S. Wilson (Mech. Eng. Dept. and Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX)
The University of Texas at Austin has supported education and research in acoustics since the 1930s. The Cockrell School of Engineering currently offers a wide range of graduate courses and two undergraduate courses in acoustics, not counting the many courses in
hearing, speech, seismology, and other areas of acoustics at the university. An important adjunct to the academic program in acoustics
has been the Applied Research Laboratories (ARL). Spun off in 1945 from the WW II Harvard Underwater Sound Laboratory (1941–
1949) and founded as the Defense Research Laboratory, ARL is one of five University Affiliated Research Centers formally recognized
by the US Navy for their prominence in underwater acoustics research and development. ARL is an integral part of UT Austin, and this
symbiotic combination of graduate and undergraduate courses, and laboratory and field work, provides one of the leading underwater
acoustics education programs in the nation. In this talk, the underwater acoustics education program will be described with special emphasis on the underwater acoustics course and its place in the larger acoustics program. Statistics on education, funding, and placement
of graduate students in the program will also be presented.
9:35
3aAO9. Acoustical Oceanography and Underwater Acoustics Graduate Programs at the Scripps Institution of Oceanography of
the University of California, San Diego. William A. Kuperman (Scripps Inst. of Oceanogr., Univ. of California, San Diego, Marine
Physical Lab., La Jolla, CA 92093-0238, wkuperman@ucsd.edu)
The Scripps Institution of Oceanography (SIO) of the University of California, San Diego (UCSD), has graduate programs in all
areas of acoustics that intersect oceanography. These programs are associated mostly with internal SIO divisions that include the Marine
Physical Laboratory, Physical Oceanography, Geophysics, and Biological Oceanography as well as SIO opportunities for other UCSD
graduate students in the science and engineering departments. Course work includes basic wave physics, graduate mathematics, acoustics and signal processing, oceanography and biology, digital signal processing, and geophysics/seismology. Much of the emphasis at
SIO includes at-sea experience. Recent examples of thesis research have been in marine mammal acoustics, ocean tomography and seismic/acoustic inversion methodology, acoustical signal processing, ocean ambient noise inversion, ocean/acoustic exploration, and acoustic sensing of air-sea interaction. An overview of the SIO/UCSD graduate program is presented.
9:45
3aAO10. Underwater acoustics education at Portland State University. Martin Siderius and Lisa M. Zurk (Elec. and Comput. Eng.,
Portland State Univ., 1900 SW 4th Ave., Portland, OR 97201, siderius@pdx.edu)
The Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab) is in the Electrical and Computer Engineering
Department at Portland State University (PSU) in Portland, Oregon. The NEAR-Lab was founded in 2005 and is co-directed by Lisa M.
Zurk and Martin Siderius. A primary interest is underwater acoustics, and students at undergraduate and graduate levels (occasionally
also high school students) regularly participate in research. This is synergistic with underwater acoustics education at PSU, which
includes a course curriculum that provides opportunities for theoretical and experimental research and multiple course offerings at both
the undergraduate and graduate level. The research generally involves modeling and analysis of acoustic propagation and scattering,
acoustic signal processing, algorithm development, environmental acoustics, and bioacoustics. The lab maintains a suite of equipment
for experimentation including hydrophone arrays, sound projectors, a Webb Slocum glider, an electronics lab, and an acoustic tank.
Large-scale experiments that include student participation have been routinely conducted through successful collaborations, such as with
APL-University of Washington, the NATO Centre for Maritime Research and Experimentation, and the University of Hawaii. In this talk,
the state of the PSU underwater acoustics program will be described along with the courses offered, research activities, experimental
program, collaborations, and student success.
9:55
3aAO11. Underwater acoustics education in Harbin Engineering University. Desen Yang, Xiukun Li, and Yang Li (Acoust. Sci.
and Technol. Lab., Harbin Eng. Univ., Harbin, Heilongjiang Province, China, dsyang@hrbeu.edu.cn)
The College of Underwater Acoustic Engineering at Harbin Engineering University is the earliest institution among Chinese universities to engage in underwater acoustics education, and it offers a complete range of degree levels and subject directions. The college has 124 teachers engaged in underwater acoustics research, including 30 professors and 36 associate professors. Its developments in underwater acoustic transducer technology, underwater positioning and navigation, underwater target detection, underwater acoustic communication, multi-beam echo sounding, and high-resolution imaging sonar, as well as new theory and technology of underwater acoustics, are at the leading level in China. Every year, the college attracts more than 200 excellent students whose entrance examination scores are 80 points above the key admission cutoff. There are three education program levels in this specialty (undergraduate, graduate, and Ph.D.), and students may study underwater acoustics within any of the three programs; in addition, the college has special education programs for foreign students. Graduates find employment at underwater acoustics institutes, electronics institutes, communication companies, and IT enterprises. In this paper, the underwater acoustics education programs, curriculum systems, and teaching content of the acoustics courses will be
introduced.
10:05–10:20 Break
Contributed Papers
10:20
3aAO12. Graduate education in underwater acoustics, transduction,
and signal processing at UMass Dartmouth. David A. Brown (Elec. and
Comput. Eng., Univ. of Massachusetts Dartmouth, 151 Martine St., Fall
River, MA 02723, dbAcoustics@cox.net), John Buck, Karen Payton, and
Paul Gendron (Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth,
Dartmouth, MA)
The University of Massachusetts Dartmouth established a Ph.D. degree
in Electrical Engineering with a specialization in Marine Acoustics in 1996,
building on the strength of the existing M.S. program. Current enrollment in
these programs includes 26 M.S. students and 16 Ph.D. students. The program offers courses and research opportunities in the areas of underwater
acoustics, transduction, and signal processing. Courses include the Fundamentals of Acoustics, Random Signals, Underwater Acoustics, Introduction
to Transducers, Electroacoustic Transduction, Digital Signal Processing,
Detection Theory, and Estimation Theory. The university’s indoor underwater acoustic test and calibration facility is one of the largest in academia
and supports undergraduate and graduate thesis and sponsored research. The
university also owns three Iver-2 fully autonomous underwater vehicles.
The graduate program capitalizes on collaborations with many marine technology companies resident at the university’s Advanced Technology and
Manufacturing Center (ATMC) and the nearby Naval Undersea Warfare
Center in Newport, RI. The presentation will highlight recent theses and dissertations, course offerings, and industry and government collaborations
that support underwater acoustics research.
10:30
3aAO13. Ocean acoustics at the University of Rhode Island. Gopu R.
Potty and James H. Miller (Dept. of Ocean Eng., Univ. of Rhode Island, 115
Middleton Bldg., Narragansett, RI 02882, potty@egr.uri.edu)
The undergraduate and graduate program in Ocean Engineering at the
University of Rhode Island is one of the oldest such programs in the United
States. This program offers Bachelor's, Master's (thesis and non-thesis
options), and Ph.D. degrees. At the undergraduate level, students are
exposed to ocean acoustics through a number of required and elective
courses, laboratory and field work, and capstone projects. Examples of student projects will be presented. At the graduate level, students can specialize
in several areas including geoacoustic inversion, propagation modeling, marine mammal acoustics, ocean acoustic instrumentation, transducers, etc. A
historical review of the evolution of ocean acoustics education in the department will be presented. This will include examples of some of the research
carried out by different faculty and students over the years, enrollment
trends, collaborations, external funding, etc. Many graduates from the program hold faculty positions at a number of universities in the US and
abroad. In addition, graduates from the ocean acoustics program at URI are
key staff at many companies and organizations. A number of companies
have spun off from the program in the areas of forward-looking sonar, sub-bottom
profiling, and other applications. The opportunities and challenges facing
the program will be summarized.
10:40
3aAO14. An underwater acoustics program far from the ocean: The Georgia Tech case. Karim G. Sabra (Mech. Eng., Georgia Inst. of Technol., 771
Ferst Dr., NW, Atlanta, GA 30332-0405, karim.sabra@me.gatech.edu)
The underwater acoustics education program at the Georgia Institute of
Technology (Georgia Tech) is run by members of the Acoustics and Dynamics research area group from the School of Mechanical Engineering.
We will briefly review the scope of this program in terms of education and
research activities as well as discuss current challenges related to the future
of underwater acoustics education.
10:50
3aAO15. Graduate education in ocean acoustics at Rensselaer Polytechnic Institute. William L. Siegmann (Dept. of Mathematical Sci., Rensselaer
Polytechnic Inst., 110 Eighth St., Troy, NY 12180-3590, siegmw@rpi.
edu)
Doctoral and master's students in Rensselaer's Department of Mathematical Sciences have had opportunities for research in Ocean Acoustics
since 1957. Since then, only one or two faculty members at any time have been
directly involved with OA education. Consequently, collaboration with colleagues at other centers of OA research has been essential. The history will
be briefly reviewed, focusing on the education of a small group of OA doctoral students in an environment with relatively limited institutional resources. Graduate education in OA at RPI has persisted because of sustained
support by the Office of Naval Research.
11:00–11:45 Panel Discussion
11:45
3aAO16. Summary of panel discussion on education in Acoustical
Oceanography and Underwater Acoustics. Andone C. Lavery (Appl.
Ocean Phys. and Eng., Woods Hole Oceanographic Inst., 98 Water St., MS
11, Bigelow 211, Woods Hole, MA 02536, alavery@whoi.edu)
Following the presentations by the speakers in the session, a panel discussion will offer a platform for those in the audience, particularly those from
institutions and universities that did not formally participate in the session
but have active education programs in Acoustical Oceanography and/or
Underwater Acoustics, to ask relevant questions and contribute to the
assessment of the national health of education in the fields of Acoustical
Oceanography and Underwater Acoustics. A summary of the key points presented in the special sessions and panel discussion is provided.
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA A/B, 8:00 A.M. TO 11:30 A.M.
Session 3aBA
Biomedical Acoustics: Kidney Stone Lithotripsy
Tim Colonius, Cochair
Mechanical Engineering, Caltech, 1200 E. California Blvd., Pasadena, CA 91125
Wayne Kreider, Cochair
CIMU, Applied Physics Laboratory, University of Washington, 1013 NE 40th Street, Seattle, WA 98105
Contributed Papers
8:00
3aBA1. Comparable clinical outcomes with two lithotripters having
substantially different acoustic characteristics. James E. Lingeman,
Naeem Bhojani (Urology, Indiana Univ. School of Medicine, 1801 N. Senate Blvd., Indianapolis, IN 46202, jlingeman@iuhealth.org), James C. Williams, Andrew P. Evan, and James A. McAteer (Anatomy and Cell Biology,
Indiana Univ. School of Medicine, Indianapolis, IN)
A consecutive case study was conducted to assess the clinical performance of the Lithogold, an electrohydraulic lithotripter having a relatively
low P+ and broad focal width (FW) (~20 MPa, ~20 mm), and the electromagnetic Storz-SLX having higher P+ and narrower FW (~50 MPa, 3–4
mm). Treatment was at 60 SW/min with follow-up at ~2 weeks. Stone free
rate (SFR) was defined as no residual fragments remaining after single-session SWL. SFR was similar for the two lithotripters (Lithogold 29/76 =
38.2%; SLX 69/142 = 48.6%, p = 0.15), with no difference in outcome for renal stones (Lithogold 20/45 = 44.4%; SLX 33/66 = 50%, p = 0.70) or stones
in the ureter (Lithogold 9/31 = 29%; SLX 36/76 = 47.4%, p = 0.08). Stone
size did not differ between the two lithotripters for patients who were not
stone free (9.1 ± 3.7 mm for Lithogold vs. 8.5 ± 3.5 mm for SLX, p = 0.42),
but the stone-free patients in the Lithogold group had larger stones on average than the stone-free patients treated with the SLX (7.6 ± 2.5 mm vs.
6.2 ± 3.2 mm, p = 0.005). The percentage of stones that did not break was
similar (Lithogold 10/76 = 13.2%; SLX 23/142 = 16.2%). These data present a realistic picture of clinical outcomes using modern lithotripters, and
although the acoustic characteristics of the Lithogold and SLX differ considerably, outcomes were similar. [NIH-DK43881.]
8:15
3aBA2. Characterization of an electromagnetic lithotripter using transient acoustic holography. Oleg A. Sapozhnikov, Sergey A. Tsysar (Phys.
Faculty, Moscow State Univ., Leninskie Gory, Moscow 119991, Russian
Federation, oa.sapozhnikov@gmail.com), Wayne Kreider (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Guangyan Li (Dept. of Anatomy and Cell Biology, Indiana Univ.
School of Medicine, Indianapolis, IN), Vera A. Khokhlova (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), and Michael R. Bailey (Dept. of Urology, Univ. of Washington
Medical Ctr., Seattle, WA)
Shock wave lithotripters radiate high intensity pulses that are focused on
a kidney stone. High pressure, short rise time, and path-dependent nonlinearity make characterization in water and extrapolation to tissue difficult.
Here acoustic holography is applied for the first time to characterize a lithotripter. Acoustic holography is a method to determine the distribution of
acoustic pressure on the surface of the source (source hologram). The electromagnetic lithotripter characterized in this effort is a commercial model
(Dornier Compact S, Dornier MedTech GmbH, Wessling, Germany) with
6.5 mm focal width. A broadband hydrophone (HGL-0200, sensitive diameter 200 μm, Onda Corp., Sunnyvale, CA) was used to sequentially measure
the field over a set of points in a plane in front of the source. Following the
previously developed transient holography approach, the recorded pressure
field was numerically back-propagated to the source surface and then used
for nonlinear forward propagation to predict waveforms at different points
in the focal region. Pressure signals predicted from the source hologram
coincide well with the waveforms measured by a fiber optic hydrophone.
Moreover, the method provides an accurate boundary condition from which
the field in tissue can be simulated. [Work supported by RSF 14-15-00665
and NIH R21EB016118, R01EB007643, and DK043881.]
8:30
3aBA3. Multiscale model of comminution in shock wave lithotripsy.
Sorin M. Mitran (Mathematics, Univ. of North Carolina, CB 3250, Chapel
Hill, NC 27599-3250, mitran@amath.unc.edu), Georgy Sankin, Ying
Zhang, and Pei Zhong (Mech. Eng. and Mater. Sci., Duke Univ., Durham,
NC)
A previously introduced model for stone comminution in shock wave
lithotripsy is extended to include damage produced by cavitation. At the
macroscopic, continuum level a 3D elasticity model with time-varying material constants capturing localized damage provides the overall stress field
within kidney stone simulants. Regions of high stress are identified, and a
mesoscopic crack propagation model is used to dynamically update localized damage. The crack propagation model in turn is linked with a microscopic grain
dynamics model. Continuum stresses and surface pitting are provided by a
multiscale cavitation model (see related talk). The overall procedure is capable of tracking stone fragments and surface cavitation of the fragments
through several levels of breakdown. Computed stone fragment distributions
are compared to experimental results. [Work supported by NIH through
5R37DK052985-18.]
8:45
3aBA4. Exploring the limits of treatment used to invoke protection
from extracorporeal shock wave lithotripsy induced injury. Bret A. Connors, Andrew P. Evan, Rajash K. Handa, Philip M. Blomgren, Cynthia D.
Johnson, James A. McAteer (Anatomy and Cell Biology, IU School of Medicine, Medical Sci. Bldg., Rm. 5055, 635 Barnhill Dr., Indianapolis, IN
46202, bconnors@iupui.edu), and James E. Lingeman (Urology, IU School
of Medicine, Indianapolis, IN)
Previous studies with our juvenile pig model have shown that a clinical
dose of 2000 shock waves (SWs) (Dornier HM-3, 24 kV, 120 SWs/min)
produces a lesion ~3–5% of the functional renal volume (FRV) of the SW-treated kidney. This injury was significantly reduced (to ~0.4% FRV) when
a priming dose of 500 low-energy SWs immediately preceded this clinical
dose, but not when using a priming dose of 100 SWs [BJU Int. 110, E1041
(2012)]. The present study examined whether using only 300 priming dose
SWs would initiate protection against injury. METHODS: Juvenile pigs
were treated with 300 SWs (12 kV) delivered to a lower pole calyx using an
HM-3 lithotripter. After a pause of 10 s, 2000 SWs (24 kV) were delivered
to that same kidney. The kidneys were then perfusion-fixed and processed
to quantitate the size of the parenchymal lesion. RESULTS: Pigs (n = 9)
treated using a protocol with 300 low-energy priming dose SWs had a lesion
measuring 0.84 ± 0.43% FRV (mean ± SE). This lesion was smaller than
that seen with a clinical dose of 2000 SWs at 24 kV. CONCLUSIONS: A
treatment protocol including 300 low-energy priming dose SWs can provide
protection from injury during shock wave lithotripsy. [Research supported
by NIH grant P01 DK43881.]
9:00
3aBA5. Shockwave lithotripsy with renoprotective pause is associated
with vasoconstriction in humans. Franklin Lee, Ryan Hsi, Jonathan D.
Harper (Dept. of Urology, Univ. of Washington School of Medicine, Seattle,
WA), Barbrina Dunmire, Michael Bailey (Ctr. for Industrial and Medical
Ultrasound, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, bailey@apl.washington.edu), Ziyue Liu (Dept. of Biostatistics, Indiana Univ. School of Medicine, Indianapolis, IN), and
Mathew D. Sorensen (Dept. of Urology, Dept. of Veteran Affairs Medical
Ctr., Seattle, WA)
A pause early in shock wave lithotripsy (SWL) increased vasoconstriction as measured by resistive index (RI) during treatment and mitigated renal injury in an animal model. The purpose of our study was to investigate
whether RI rose during SWL in humans. Prospectively recruited patients
underwent SWL of renal stones with a Dornier Compact S lithotripter. The
renal protective protocol consisted of treatment at 1 Hz and slow power
ramping for the initial 250 shocks followed by a 2 min pause. RI was measured using ultrasound prior to treatment, after 250 shocks, after 750 shocks,
after 1500 shocks, and after SWL. A linear mixed-effects model was used to
compare RI at the different time points and to account for additional covariates in fifteen patients. RI was significantly higher than baseline at all time
points from 250 shocks onward. Age, gender, body mass index, and treatment
side were not significantly associated with RI. Monitoring for a rise in RI
during SWL is possible and may provide real-time feedback as to when the
kidney is protected. [Work supported by NIH DK043881, NSBRI through
NASA NCC 9-58, and resources from the VA Puget Sound Health Care
System.]
9:15
3aBA6. Renal shock wave lithotripsy may be a risk factor for early-onset hypertension in metabolic syndrome: A pilot study in a porcine
model. Rajash Handa (Anatomy & Cell Biology, Indiana Univ. School of
Medicine, 635 Barnhill Dr., MS 5035, Indianapolis, IN 46202-5120,
rhanda@iupui.edu), Ziyue Liu (Biostatistics, Indiana Univ. School of Medicine, Indianapolis, IN), Bret Connors, Cynthia Johnson, Andrew Evan
(Anatomy & Cell Biology, Indiana Univ. School of Medicine, Indianapolis,
IN), James Lingeman (Kidney Stone Inst., Indiana Univ. Health Methodist
Hospital, Indianapolis, IN), David Basile, and Johnathan Tune (Cellular &
Integrative Physiol., Indiana Univ. School of Medicine, Indianapolis,
IN)
A pilot study was conducted to assess whether extracorporeal shock
wave lithotripsy (SWL) treatment of the kidney influences the onset and severity of metabolic syndrome (MetS)—a cluster of conditions that includes
central obesity, insulin resistance, impaired glucose tolerance, dyslipidemia,
and hypertension. Methods: Three-month-old juvenile female Ossabaw miniature pigs were treated with either SWL (2000 SWs, 24 kV, 120 SWs/min
using the HM3 lithotripter; n = 2) or sham-SWL (no SWs; n = 2). SWs were
targeted to the upper pole of the left kidney so as to model treatment that
would also expose the pancreas—an organ involved in blood glucose homeostasis—to SWs. The pigs were then instrumented for direct measurement
of arterial blood pressure via implanted radiotelemetry devices, and later fed
a hypercaloric atherogenic diet for ~7 months to induce MetS. The development of MetS was assessed from intravenous glucose tolerance tests.
Results: The progression and severity of MetS were similar in the shamtreated and SWL-treated groups. The only exception was arterial blood pressure, which remained relatively constant in the sham-treated pigs and rose
toward hypertensive levels in SW-treated pigs. Conclusions: These preliminary results suggest that renal SWL may be a risk factor for early-onset hypertension in MetS.
9:30–9:45 Break
9:45
3aBA7. Modeling vascular injury due to shock-induced bubble collapse
in lithotripsy. Vedran Coralic and Tim Colonius (Mech. Eng., Caltech,
1200 E. California Blvd., Pasadena, CA 91125, colonius@caltech.edu)
Shock-induced collapse (SIC) of preexisting bubbles is investigated as a
potential mechanism for vascular injury in shockwave lithotripsy (SWL).
Preexisting bubbles exist under normal physiological conditions and grow
larger and more numerous with ongoing treatment. We compute the three-dimensional SIC of a bubble using the multi-component Euler equations,
and determine the resulting three-dimensional finite-strain deformation field
in the material surrounding the collapsing bubble. We propose a criterion
for vessel rupture and estimate the minimum bubble size, across clinical
SWL pressures, which could result in rupture of microvasculature. Postprocessing of the results and comparison to viscoelastic models for spherical
bubble dynamics demonstrate that our results are insensitive to a wide range
of estimated viscoelastic tissue properties during the collapse phase. During
the jetting phase, however, viscoelastic effects are non-negligible. The minimum bubble size required to rupture a vessel is then estimated by adapting a
previous model for the jet’s penetration depth as a function of tissue
viscosity.
10:00
3aBA8. Multiscale model of cavitation bubble formation and breakdown. Isaac Nault, Sorin M. Mitran (Mathematics, Univ. of North Carolina,
CB3250, Chapel Hill, NC, naulti@live.unc.edu), Georgy Sankin, and Pei
Zhong (Mech. Eng. and Mater. Sci., Duke Univ., Durham, NC)
Cavitation damage is responsible for initial pitting of kidney stone surfaces, damage that is thought to play an important role in shock wave lithotripsy. We introduce a multiscale model of the formation of cavitation
bubbles in water and their subsequent breakdown. At a macroscopic, continuum
scale cavitation is modeled by the 3D Euler equations with a Tait equation
of state. Adaptive mesh refinement is used to provide increased resolution at
the liquid/vapor boundary. Cells with both liquid and vapor phases are
flagged by the continuum solver for mesoscale, kinetic modeling by a lattice
Boltzmann description capable of capturing non-equilibrium behavior (e.g.,
phase change, energetic jet impingement). Isolated and interacting two-bubble configurations are studied. Computational simulation results are compared with high-speed experimental imaging of individual bubble dynamics
and bubble–bubble interaction. The model is used to build a statistical
description of multiple-bubble interaction, with input from cavitation cloud
imaging. [Work supported by NIH through 5R37DK052985-18.]
10:15
3aBA9. Preliminary results of the feasibility to reposition kidney stones
with ultrasound in humans. Jonathan D. Harper, Franklin Lee, Susan
Ross, Hunter Wessells (Dept. of Urology, Univ. of Washington School of
Medicine, Seattle, WA), Bryan W. Cunitz, Barbrina Dunmire, Michael Bailey (Ctr.Industrial and Medical Ultrasound, Appl. Phys. Lab, Univ. of
Washington, 1013 NE 40th St., Seattle, WA 98105, bailey@apl.washington.
edu), Jeff Thiel (Dept. of Radiology, Univ. of Washington School of Medicine, Seattle), Michael Coburn (Dept. of Urology, Baylor College of Medicine, Houston, TX), James E. Lingeman (Dept. of Urology, Indiana Univ.
School of Medicine, Indianapolis, IN), and Mathew Sorensen (Dept. of
Urology, Dept. of Veteran Affairs Medical Ctr., Seattle)
Preliminary investigational use of ultrasound to reposition human kidney stones is reported. The three study arms include de novo stones, post-lithotripsy fragments, and large stones within the preoperative setting. A
pain questionnaire is completed immediately prior to and following propulsion. A maximum of 40 push attempts are administered. Movement is classified as no motion, movement with rollback or jiggle, or movement to a new
location. Seven subjects have been enrolled and undergone ultrasonic propulsion to date. Stones were identified, targeted, and moved in all subjects.
Subjects who did not have significant movement were in the de novo arm.
None of the subjects reported pain associated with the treatment. One subject in the post-lithotripsy arm passed two small stones immediately following treatment corresponding to the two stones displaced from the interpolar
region. Three post-lithotripsy subjects reported passage of multiple small
fragments within two weeks of treatment. In four subjects, ultrasonic
168th Meeting: Acoustical Society of America
2192
width resulted in an underestimation of 0.5 6 1.7 mm (p < 0.001). A posterior acoustic shadow was seen in the majority of stones and was a more
accurate measure of stone size. This would provide valuable information for
stone management. [Work supported by NIH DK43881 and DK092197, and
NSBRI through NASA NCC 9-58.]
10:30
11:00
3aBA10. Nonlinear saturation effects in ultrasound fields of diagnostictype transducers used for kidney stone propulsion. Maria M. Karzova
(Phys. Faculty, Dept. of Acoust., M.V. Lomonosov Moscow State Univ.,
Leninskie Gory 1/2, Moscow 119991, Russian Federation, masha@acs366.
phys.msu.ru), Bryan W. Cunitz (Ctr. for Industrial and Medical Ultrasound,
Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Petr V. Yuldashev
(Phys. Faculty, M.V. Lomonosov Moscow State Univ., Moscow, Russian
Federation), Vera A. Khokhlova, Wayne Kreider (Ctr. for Industrial and
Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA),
Oleg A. Sapozhnikov (Phys. Faculty, M.V. Lomonosov Moscow State
Univ., Moscow, Russian Federation), and Michael R. Bailey (Dept. of Urology, Univ. of Washington Medical Ctr., Seattle, WA)
3aBA12. Development and testing of an image-guided prototype system
for the comminution of kidney stones using burst wave lithotripsy.
Bryan Cunitz (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St.,
Seattle, WA 98105, bwc@apl.washington.edu), Adam Maxwell (Dept. of
Urology, Univ. of Washington Medical Ctr., Seattle, WA), Wayne Kreider,
Oleg Sapozhnikov (Appl. Phys. Lab, Univ. of Washington, Seattle, WA),
Franklin Lee, Jonathan Harper, Matthew Sorenson (Dept. of Urology, Univ.
of Washington Medical Ctr., Seattle, WA), and Michael Bailey (Appl. Phys.
Lab, Univ. of Washington, Seattle, WA)
A novel therapeutic application of ultrasound for repositioning kidney
stones is being developed. The method uses acoustic radiation force to expel
mm-sized stones or to dislodge even larger obstructing stones. A standard
diagnostic 2.3 MHz C5-2 array probe has been used to generate pushing
acoustic pulses. The probe comprises 128 elements equally spaced at the 55
mm long convex cylindrical surface with 41.2 mm radius of curvature. The
efficacy of the treatment can be increased by using higher transducer output
to provide stronger pushing force; however, nonlinear acoustic saturation
effect can be a limiting factor. In this work, nonlinear propagation effects
were analyzed for the C5-2 transducer using a combined measurement and
modeling approach. Simulations were based on the 3D Westervelt equation;
the boundary condition was set to match low power measurements. Focal
waveforms simulated for several output power levels were compared with
the fiber-optic hydrophone measurements and were found in good agreement. It was shown that saturation effects do limit the acoustic pressure in
the focal region of the transducer. This work has application to standard
diagnostic probes and imaging. [Work supported by RSF 14-12-00974, NIH
EB007643, DK43881 and DK092197, and NSBRI through NASA NCC 958.]
10:45
3aBA11. Evaluating kidney stone size in children using the posterior
acoustic shadow. Franklin C. Lee, Jonathan D. Harper, Thomas S. Lendvay
(Urology, Univ. of Washington, Seattle, WA), Ziyue Liu (Biostatistics, Indiana Univ. School of Medicine , Indianapolis, IN), Barbrina Dunmire (Appl.
Phys. Lab, Univ. of Washington, 1013 NE 40th St, Seattle, WA 98105,
mrbean@uw.edu), Manjiri Dighe (Radiology, Univ. of Washington, Seattle,
WA), Michael R. Bailey (Appl. Phys. Lab, Univ. of Washington, Seattle,
WA), and Mathew D. Sorensen (Urology, Dept. of Veteran Affairs Medical
Ctr., Seattle, WA)
Ultrasound, not x-ray, is preferred for imaging kidney stones in children;
however, stone size determination is less accurate with ultrasound. In vitro
we found stone sizing was improved by measuring the width of the acoustic
shadow behind the stone. We sought to determine the prevalence and accuracy of the acoustic shadow in pediatric patients. A retrospective analysis
was performed of all initial stone events at a children’s hospital over the last
10 years. Included subjects had a computed tomography (CT) scan and renal
ultrasound within 3 months of each other. The width of the stone and acoustic shadow were measured on ultrasound and compared to the stone size as
determined by CT. Thirty-seven patients with 49 kidney stones were
included. An acoustic shadow was seen in 85% of stones evaluated. Stone
width resulted in an average overestimation of 1.2 6 2.2 mm while shadow
2193
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Burst wave lithotripsy is a novel technology that uses focused, sinusoidal
bursts of ultrasound to fragment kidney stones. Prior research laid the
groundwork to design an extracorporeal, image-guided probe for in-vivo
testing and potentially human clinical testing. Toward this end, a 12-element
330 kHz array transducer was designed and built. The probe frequency, geometry, and shape were designed to break stones up to 1 cm in diameter into
fragments <2mm. A custom amplifier capable of generating output bursts
up to 3 kV was built to drive the array. To facilitate image guidance, the
transducer array was designed with a central hole to accommodate co-axial
attachment of an HDI P4-2 probe. Custom B-mode and Doppler imaging
sequences were developed and synchronized on a Verasonics ultrasound
engine to enable real-time stone targeting and cavitation detection, Preliminary data suggest that natural stones will exhibit Doppler “twinkling” artifact in the BWL focus and that the Doppler power increases as the stone
begins to fragment. This feedback allows accurate stone targeting while
both types of imaging sequences can also detect cavitation in bulk tissue that
may lead to injury. [Work supported by NIH grants DK043881, EB007643,
EB016118, T32 DK007779, and NSBRI through NASA NCC 9-58.]
11:15
3aBA13. Removal of residual bubble nuclei to enhance histotripsy kidney stone erosion at high rate. Alexander P. Duryea (Biomedical Eng.,
Univ. of Michigan, 2131 Gerstacker Bldg., 2200 Bonisteel Blvd., Ann
Arbor, MI 48109, duryalex@umich.edu), William W. Roberts (Urology,
Univ. of Michigan, Ann Arbor, MI), Charles A. Cain, and Timothy L. Hall
(Biomedical Eng., Univ. of Michigan, Ann Arbor, MI)
Previous work has shown that histotripsy can effectively erode model
kidney stones to tiny, sub-millimeter debris via a cavitational bubble cloud
localized on the stone surface. Similar to shock wave lithotripsy, histotripsy
stone treatment displays a rate-dependent efficacy, with pulses applied at
low repetition frequency producing more efficient erosion compared to
those applied at high repetition frequency. This is attributed to microscopic
residual cavitation bubble nuclei that can persist for hundreds of milliseconds following bubble cloud collapse. To mitigate this effect, we have
developed low amplitude (MI<1) acoustic pulses to actively remove residual nuclei from the field. These bubble removal pulses utilize the Bjerknes
forces to stimulate the aggregation and subsequent coalescence of remnant
nuclei, consolidating the population from a very large number to a countably
small number of remnant bubbles within several milliseconds. Incorporation
of this bubble removal scheme in histotripsy model stone treatments performed at high rate (100 pulses/second) produced drastic improvement in
treatment efficiency, with an average erosion rate increase of 12-fold in
comparison to treatment without bubble removal. High speed imaging indicates that the influence of remnant nuclei on the location of bubble cloud
collapse is the dominant contributor to this disparity in treatment efficacy.
168th Meeting: Acoustical Society of America
2193
3a WED. AM
propulsion identified a collection of stones previously characterized as a single stone on KUB and ultrasound. There have been no treatment-related adverse events reported, with a mean follow-up of 3 months. [Trial supported by NSBRI through NASA NCC 9-58. Development supported by NIH DK043881 and DK092197.]
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 9/10, 8:00 A.M. TO 11:50 A.M.
Session 3aEA
Engineering Acoustics and Structural Acoustics and Vibration: Mechanics of Continuous Media
Andrew J. Hull, Cochair
Naval Undersea Warfare Center, 1176 Howell St, Newport, RI 02841
J. Gregory McDaniel, Cochair
Mechanical Engineering, Boston Univ., 110 Cummington St., Boston, MA 02215
Invited Papers
8:00
3aEA1. Fundamental studies of zero Poisson ratio metamaterials. Elizabeth A. Magliula (Div. Newport, Naval Undersea Warfare
Ctr., 1176 Howell St., Bldg. 1302, Newport, RI 02841, elizabeth.magliula@navy.mil), J. Gregory McDaniel, and Andrew Wixom
(Mech. Eng. Dept., Boston Univ., Boston, MA)
As material fabrication advances, new materials with special properties will become possible, accommodating new design boundaries. An emerging and promising field of investigation is the study of basic phenomena in materials with a negative Poisson ratio (NPR). This work seeks to develop zero Poisson ratio (ZPR) metamaterials for use in reducing acoustic radiation from compressional waves. Such a material would neither contract nor expand laterally when compressed or stretched, and would therefore not radiate sound. Previous work has provided procedures for creating NPR copper foam through transformation of the foam cell structure from a convex polyhedral shape to a concave “re-entrant” shape. A ZPR composite will be developed and analyzed in an effort to achieve the desired wave propagation characteristics. Dynamic investigations have been conducted using ABAQUS, in which a ZPR structure is placed under load to observe its displacement behavior. Inspection of the results at 1 kHz and 5 kHz shows that the top and bottom surfaces experience much less displacement compared to respective conventional reference layer build-ups. However, at 11 kHz, small lateral displacements were experienced at the outer surfaces. The results indicate that the net zero Poisson effect was successfully achieved at frequencies where half the wavelength is greater than the thickness.
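The closing criterion lends itself to a back-of-the-envelope check. The wave speed and thickness below are assumed, illustrative values only, not taken from the study:

```python
# Half-wavelength criterion from the abstract: the net zero Poisson effect
# holds where half the compressional wavelength exceeds the layer thickness,
# i.e., below f_c = c / (2 * h). Both numbers here are assumed for illustration.
c = 1000.0   # compressional wave speed in the build-up, m/s (assumed)
h = 0.05     # build-up thickness, m (assumed)
f_c = c / (2.0 * h)          # 10 kHz cutoff under these assumptions
for f in (1.0e3, 5.0e3, 11.0e3):
    half_wavelength = c / f / 2.0
    print(f, half_wavelength > h)
```

With these assumed numbers the criterion holds at 1 and 5 kHz and fails at 11 kHz, mirroring the trend reported in the abstract.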
8:20
3aEA2. Scattering by targets buried in elastic sediment. Angie Sarkissian, Saikat Dey, Brian H. Houston (Code 7130, Naval Res.
Lab., Code 7132, 4555 Overlook Ave. S.W., Washington, DC 20375, angie.sarkissian@nrl.navy.mil), and Joseph A. Bucaro (Excet,
Inc., Springfield, VA)
Scattering results are presented for targets of various shapes buried in elastic sediment, with a plane wave incident from the air above. The STARS3D finite element program, recently extended to layered elastic sediments, is used to compute the displacement field just below the interface. Evidence of Rayleigh waves is observed in the elastic sediment, and an algorithm subtracts their contribution to simplify the resultant scattering pattern. Results are presented for scatterers buried in uniform elastic media as well as layered media. [This work was supported by ONR.]
8:40
3aEA3. Response shaping and scale transition in dynamic systems with arrays of attachments. Joseph F. Vignola, Aldo A. Glean
(Mech. Eng., The Catholic Univ. of America, 620 Michigan Ave., NE, Washington, DC 20064, vignola@cua.edu), John Sterling (Carderock Div., Naval Surface Warfare Ctr., West Bethesda, MD), and John A. Judge (Mech. Eng., The Catholic Univ. of America, Washington, DC)
Arrays of elastic attachments can be designed to act as energy sinks in dynamic systems. This presentation describes design strategies for drawing off mechanical energy to achieve specific objectives such as mode suppression and response tailoring in both extended and discrete systems. The design parameters are established using numerical simulations for both propagating and standing compressional waves in a one-dimensional system. The attachments were chosen to be cantilevers so that their higher modes would have limited interaction with the higher modes of the primary structure. The two cases considered here are concentrated groups of cantilevers and spatial distributions of similar cantilevers. Relationships between the number and placement of the attachments and their masses and frequency distributions are of particular interest, along with the distribution of energy density between the primary structure and the attachments. The simulations are also used to show how fabrication error degrades performance and how the energy scale transition can be managed to maintain linear behavior.
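The energy-sink idea can be sketched with a minimal lumped-parameter model: a single-degree-of-freedom primary structure carrying a band of spring-mass attachments standing in for the cantilevers. All parameter values below are assumptions for illustration, not the authors' design:

```python
import numpy as np

# Minimal sketch (not the authors' model): a 100 Hz primary resonator with
# N spring-mass attachments whose natural frequencies are spread over a
# narrow band around the primary resonance, acting as an energy sink.
m0, k0 = 1.0, (2 * np.pi * 100.0) ** 2   # primary mass and stiffness
N = 20
mu = 0.05                                # total attachment mass / primary mass (assumed)
mi = mu * m0 / N * np.ones(N)
fi = np.linspace(90.0, 110.0, N)         # attachment frequencies, Hz (assumed spread)
ki = mi * (2 * np.pi * fi) ** 2
eta = 0.01                               # loss factor for light damping (assumed)

def receptance(f, with_attachments=True):
    # Dynamic stiffness of the primary, with each attachment contributing
    # -w^2*m_i*k_i / (k_i - w^2*m_i) as seen at the attachment point.
    w2 = (2 * np.pi * f) ** 2
    D = k0 * (1 + 1j * eta) - w2 * m0
    if with_attachments:
        kc = ki * (1 + 1j * eta)
        D = D - np.sum(w2 * mi * kc / (kc - w2 * mi))
    return 1.0 / D

f = np.linspace(50.0, 150.0, 1001)
H_bare = np.abs([receptance(x, False) for x in f])
H_arr = np.abs([receptance(x, True) for x in f])
# The attachment band strongly suppresses the response at the bare resonance.
```

The attachments trade a single tall resonance peak for a suppressed band, the qualitative behavior the abstract tailors by choosing number, placement, mass, and frequency distribution.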
9:00
3aEA4. Accelerated general method for computing noise effects in arrays. Heather Reed, Jeffrey Cipolla, Mahesh Bailakanavar, and
Patrick Murray (Weidlinger Assoc., 40 Wall St 18th Fl., New York, NY 10005, heather.reed@wai.com)
Noise in an acoustic array can be defined as any unwanted signal, and understanding how noise interacts with a structural system is paramount for optimal design. For example, in an underwater vehicle we may want to understand how structural vibrations radiate through a surrounding fluid, or an engineer may want to evaluate the level of sound inside a car resulting from the turbulent boundary layer (TBL) induced by a moving vehicle. This talk will discuss a means of modeling noise at a point of interest (e.g., at a sensor location) stemming from a known source by utilizing a power transfer function between the source and the point of interest, a generalization of the work presented in [1]. The power transfer function can be readily computed from the acoustic response to an incident wave field, requiring virtually no additional computation. The acoustic solution may be determined via analytic frequency-domain approaches or through a finite element analysis, enabling the noise solution to be a fast post-processing exercise. This method is demonstrated by modeling the effects of a TBL pressure and noise induced by structural vibrations on a sensor array embedded in an elastic, multi-layer solid. Additionally, uncertainties in the noise model can be easily quantified through Monte Carlo techniques due to the fast evaluation of the noise spectrum. [1] Ko, S. H., and Schloemer, H. H., “Flow noise reduction techniques for a planar array of hydrophones,” J. Acoust. Soc. Am. 92, 3409 (1992).
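The fast post-processing loop described above can be sketched as follows. The single-resonance transfer function, the TBL-like source spectrum, and the uncertain damping parameter are all illustrative stand-ins, not the method of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.linspace(10.0, 1000.0, 200)            # frequency grid, Hz

# Assumed TBL-like source pressure spectrum that rolls off with frequency.
S_source = 1.0 / (1.0 + (f / 100.0) ** 2)

def transfer_power(f, fn, zeta):
    # Single-resonance |H(f)|^2 standing in for the precomputed
    # power transfer function between source and sensor location.
    r = f / fn
    return 1.0 / ((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)

# Noise PSD at the sensor = source PSD weighted by the power transfer
# function. Because each evaluation is a cheap vector multiply, model
# uncertainty (here an uncertain damping ratio) is propagated by plain
# Monte Carlo over the noise spectrum.
samples = np.array([S_source * transfer_power(f, 250.0, z)
                    for z in rng.uniform(0.01, 0.05, size=500)])
psd_mean = samples.mean(axis=0)
psd_p95 = np.percentile(samples, 95, axis=0)
```

Each Monte Carlo draw reuses the same precomputed spectra, which is the point of separating the transfer function from the source model.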
9:20
3aEA5. Response of distributed fiber optic sensor cables to spherical wave incidence. Jeffrey Boisvert (NAVSEA Div. Newport,
1176 Howell St., Newport, RI 02841, cboisvertj@cox.net)
A generalized multi-layered infinite-length fiber optic cable is modeled using the exact theory of three-dimensional elasticity in cylindrical coordinates. A cable is typically composed of a fiber optic (glass) core surrounded by various layered materials such as plastics,
metals, and elastomers. The cable is excited by an acoustic spherical wave radiated by a monopole source at an arbitrary location in the
acoustic field. For a given source location and frequency, the radial and axial strains within the cable are integrated over a desired sensor
zone length to determine the optical phase sensitivity using an equation that relates the strain distribution in an optical fiber to changes
in the phase of an optical signal. Directivity results for the cable in a free-field water environment are presented at several frequencies
for various monopole source locations. Some comparisons of the sensor directional response resulting from nearfield (spherical wave)
incidence and farfield (plane wave) incidence are made. [Work supported by NAVSEA Division Newport ILIR Program.]
9:40
3aEA6. Testing facility concepts for the material characterization of porous media consisting of relatively limp foam and stiff
fluid. Michael Woodworth and Jeffrey Cipolla (ASI, Weidlinger Assoc., Inc., 1825 K St NW, #350, Washington, DC 20006, michael.
woodworth@wai.com)
Fluid-filled foams are important components of acoustical systems. Most are made up of a skeleton medium that is stiff relative to the fluid considered, usually air. Biot’s theory of poroelasticity is appropriate for characterizing and modeling these foams. The use of a relatively stiff fluid (such as water) and a limp foam medium poses a greater challenge. Recently, modifications to Biot’s theory have generated the mechanical relationships required to model these systems. The necessary static material properties for the model can be obtained through in vacuo measurement. Frequency-dependent properties are more difficult to obtain. Traditional impedance tube methods suffer from fluid-structure interaction when the bulk modulus of the fluid medium approaches that of the waveguide. The current investigation derives the theory for, and investigates the feasibility of, several rigid impedance tube alternatives for characterizing limp foams in stiff fluid media. The alternatives considered include a sufficiently rigid impedance tube, a pressure-relief impedance tube, and, the most promising, a piston-excited oscillating chamber of small aspect ratio. The chamber concept can recover the descriptive properties of a porous medium described by Biot’s theory or by complex-impedance equivalent-fluid models. The advantages of this facility are small facility size, low cost, and small sample size.
10:00–10:20 Break
10:20
3aEA7. Adomian decomposition identifies an approximate analytical solution for a set of coupled strings. David Segala (Naval
Undersea Warfare Ctr., 1176 Howell St., Newport, RI 02841, david.segala@navy.mil)
The Adomian decomposition method (ADM) has been applied successfully in various applications across the applied mechanics and mathematics communities. Originally, Adomian developed the method to derive approximate analytical solutions to nonlinear functional equations. It was shown that the solution to a given nonlinear functional equation can be approximated by an infinite series of the linear and nonlinear terms, provided the nonlinear terms are represented by a series of Adomian polynomials. Here, ADM is used to derive an approximate analytical solution to a set of partial differential equations (PDEs) describing the motion of two coupled strings that lie orthogonal to each other. The PDEs are derived using the Euler-Lagrange equations of motion. The ends of the strings are pinned, and the strings are coupled with a nonlinear spring. A finite element model of the system is developed to provide a comparative baseline. Both the finite element model and the analytical solution were driven by an initial displacement condition. The results from the FEA and the analytical solution were compared at six equally spaced time points over the course of a 1.2-second simulation.
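As a toy illustration of how ADM builds a series solution, the classic scalar example (a simple ODE, not the coupled-string PDEs of the talk) can be worked in a few lines:

```python
# Sketch of the Adomian decomposition method (ADM) on a toy problem:
# solve u' = -u^2, u(0) = 1, whose exact solution is 1/(1+t). Each series
# term u_n is stored as polynomial coefficients in t (index = power of t).

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_integrate(a):
    # definite integral from 0 to t: shifts each power up by one
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

terms = [[1.0]]                      # u_0 = u(0) = 1
for n in range(5):
    # For the nonlinearity N(u) = u^2 the n-th Adomian polynomial reduces
    # to the Cauchy-product term A_n = sum_{k=0}^{n} u_k * u_{n-k}.
    A_n = [0.0]
    for k in range(n + 1):
        prod = poly_mul(terms[k], terms[n - k])
        while len(A_n) < len(prod):
            A_n.append(0.0)
        for p, cp in enumerate(prod):
            A_n[p] += cp
    terms.append([-c for c in poly_integrate(A_n)])  # u_{n+1} = -int_0^t A_n

# Summing the terms reproduces 1 - t + t^2 - t^3 + ..., the Taylor
# series of the exact solution 1/(1+t).
deg = max(len(u) for u in terms)
series = [sum(u[p] for u in terms if p < len(u)) for p in range(deg)]
print(series)
```

The linear part of the operator is inverted exactly (here, integration from the initial condition) while the nonlinearity enters only through the Adomian polynomials, which is the structure the abstract applies to the coupled-string system.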
10:40
3aEA8. Comprehensive and practical explorations of nonlinear energy harvesting from stochastic vibrations. Ryan L. Harne and
Kon-Well Wang (Mech. Eng., Univ. of Michigan, 2350 Hayward St., 2250 GG Brown Bldg., Ann Arbor, MI 48109-2125, rharne@
umich.edu)
Conversion of ambient vibrational energy to electrical power is a recent, popular motivation for research that seeks to realize self-sustaining electronic systems, including biomedical implants and remote wireless structural sensors. Many vibration resources are stochastic, with spectra concentrated at extremely low frequencies, a challenging bandwidth to target in the design of compact, resonant electromechanical harvesters. Exploitation of design-based nonlinearities has uncovered means to reduce and broaden a harvester’s frequency range of greatest sensitivity to be more compatible with ambient spectra, thus dramatically improving energy conversion performance. However, studies to date draw differing conclusions regarding the viability of the most promising nonlinear harvesters, namely, those designed around the elastic stability limit, and the investigations present findings having limited verification. To help resolve the outstanding questions about energy harvesting from stochastic vibrations using systems designed near the elastic stability limit, this research integrates rigorous analytical, numerical, and experimental explorations. The harvester architecture considered is a cantilever beam, the common focus of contemporary studies, and the critical, practical factors involved in its effective implementation are evaluated. From the investigations, the most favorable incorporations of nonlinearity are identified and useful design guidelines are proposed.
11:00
3aEA9. Response of infinite length bars and beams with periodically varying area. Andrew J. Hull and Benjamin A. Cray (Naval
Undersea Warfare Ctr., 1176 Howell St., Newport, RI 02841, andrew.hull@navy.mil)
This talk develops a solution method for the longitudinal motion of a rod, or the flexural motion of a beam, of infinite length whose area varies periodically. The conventional rod or beam equation of motion is used, with the area and moment of inertia expressed as analytical functions of the longitudinal (horizontal) spatial variable. The displacement field is written as a series expansion using a periodic form for the horizontal wavenumber. The area and moment of inertia expressions are each expanded into a Fourier series. These are inserted into the differential equations of motion, and the resulting algebraic equations are orthogonalized to produce a matrix equation whose solution provides the unknown wave propagation coefficients, thus yielding the displacement of the system. Example problems for both a rod and a beam are analyzed for three different geometrical shapes. The solutions to both problems are compared to results from finite element analysis for validation. Dispersion curves of the systems are shown graphically. Convergence of the series solutions is illustrated and discussed.
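The expansion-and-orthogonalization procedure can be sketched for the rod case. The harmonic-area profile, material constants, and truncation below are illustrative assumptions; the talk's own geometries and the beam case are not reproduced:

```python
import numpy as np

# Bloch/Fourier sketch for longitudinal waves in an infinite rod with
# periodically varying area A(x) = A0 * (1 + eps*cos(K*x)). Writing
#   u(x) = sum_n c_n exp(i*(k + n*K)*x),  k_n = k + n*K,
# inserting u and the Fourier series of A into E*(A u')' = -rho*A*omega^2*u
# and orthogonalizing against each harmonic p gives
#   sum_n A_{p-n} E k_p k_n c_n = omega^2 rho sum_n A_{p-n} c_n,
# a generalized eigenvalue problem for omega^2 at each Bloch wavenumber k.

E, rho = 200e9, 7800.0        # steel-like constants (assumed)
A0, eps = 1.0e-4, 0.3         # mean area and modulation depth (assumed)
L = 0.1                       # spatial period, m (assumed)
K = 2.0 * np.pi / L
N = 15                        # retained harmonics n = -N..N

def area_coeff(m):
    # Fourier coefficients of A(x): only m = 0 and m = +/-1 are nonzero here.
    if m == 0:
        return A0
    if abs(m) == 1:
        return 0.5 * A0 * eps
    return 0.0

def omegas(k):
    n = np.arange(-N, N + 1)
    kn = k + n * K
    Amat = np.array([[area_coeff(p - q) for q in n] for p in n])
    Kmat = E * np.outer(kn, kn) * Amat       # stiffness side
    Mmat = rho * Amat                        # inertia side
    w2 = np.linalg.eigvals(np.linalg.solve(Mmat, Kmat)).real
    return np.sort(np.sqrt(np.clip(w2, 0.0, None)))

# Long-wave check: the lowest branch speed sits slightly below c = sqrt(E/rho),
# consistent with harmonic-mean stiffness over mean inertia of the modulation.
c = np.sqrt(E / rho)
k = 0.01 * K
ratio = omegas(k)[0] / (c * k)
print(ratio)
```

Truncating the harmonic sum and solving the matrix eigenvalue problem at each wavenumber traces out the dispersion curves; convergence is checked by increasing N, mirroring the convergence study mentioned in the abstract.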
Contributed Papers
11:20
3aEA10. On the exact analytical solutions to equations of nonlinear acoustics. Alexander I. Kozlov (Medical and Biological Phys., Vitebsk State Medical Univ., 27, Frunze Ave., Vitebsk 210023, Belarus, albapasserby@yahoo.com)
Several equations derived as second-order approximations to the complete system of equations of nonlinear acoustics of Newtonian media (such as the Lighthill-Westervelt equation, the Kuznetsov equation, etc.) are usually solved numerically or at best approximately. A general exact analytical method of solution of these problems, based on a short chain of changes of variables, is presented in this work. It is shown that neither traveling-wave solutions nor classical soliton-like solutions obey these equations. There are three possible forms of the acoustical pressure, depending on the parameters of the initial equation: a so-called continuous shock (or diffusive soliton), a monotonically decaying solution, and a sectionally continuous periodic one. The obtained results are in good qualitative agreement with numerical calculations previously published by different authors.
11:35
3aEA11. A longitudinal shear wave and transverse compressional wave in solids. Ali Zorgani (LabTAU, INSERM, Univ. of Lyon, Bron, France), Stefan Catheline (LabTAU, INSERM, Univ. of Lyon, 151 Cours Albert Thomas, Lyon, France, stefan.catheline@inserm.fr), and Nicolas Benech (Instituto de Fisica, Facultad de Ciencia, Montevideo, Uruguay)
What general definition can one give to elastic P- and S-waves, especially when they are transversely and longitudinally polarized, respectively? This question is the main motivation for the analysis of the Green’s function reported in this letter. By separating the Green’s function into a divergence-free and a rotation-free term, not only a longitudinal S-wave but also a transverse P-wave can be described. These waves are shown to be parts of the solution of the wave equation known as coupling terms. Similarly to surface water waves, they are divergence- and rotation-free. Their special motion is carefully described and illustrated.
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 6, 9:00 A.M. TO 11:00 A.M.
Session 3aID
Student Council, Education in Acoustics and Acoustical Oceanography: Graduate Studies in Acoustics
(Poster Session)
Zhao Peng, Cochair
Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, 1110 S. 67th Street, Omaha,
NE 68182
Preston S. Wilson, Cochair
Mech. Eng., The University of Texas at Austin, 1 University Station, C2200, Austin, TX 78712
Whitney L. Coyle, Cochair
The Pennsylvania State University, 201 Applied Science Building, University Park, PA 16802
All posters will be on display from 9:00 a.m. to 11:00 a.m. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 9:00 a.m. to 10:00 a.m. and contributors of even-numbered papers will be at their
posters from 10:00 a.m. to 11:00 a.m.
Invited Papers
3aID1. The Graduate Program in Acoustics at The Pennsylvania State University. Victor Sparrow and Daniel A. Russell (Grad.
Prog. Acoust., Penn State, 201 Appl. Sci. Bldg., University Park, PA 16802, vws1@psu.edu)
In 2015, the Graduate Program in Acoustics at Penn State will be celebrating 50 years as the only program in the United States offering the Ph.D. in Acoustics as well as M.S. and M.Eng. degrees in Acoustics. An interdisciplinary program with faculty from a variety of
academic disciplines, the Acoustics Program is administratively aligned with the College of Engineering and closely affiliated with the
Applied Research Laboratory. The research areas include: ocean acoustics, structural acoustics, signal processing, aeroacoustics, thermoacoustics, architectural acoustics, transducers, computational acoustics, nonlinear acoustics, marine bioacoustics, noise and vibration
control, and psychoacoustics. The course offerings include fundamentals of acoustics and vibration, electroacoustic transducers, signal
processing, acoustics in fluid media, sound-structure interaction, digital signal processing, experimental techniques, acoustic measurements and data analysis, ocean acoustics, architectural acoustics, noise control engineering, nonlinear acoustics, ultrasonic NDE, outdoor
sound propagation, computational acoustics, flow induced noise, spatial sound and 3D audio, marine bioacoustics, and the acoustics of
musical instruments. Penn State Acoustics graduates serve widely throughout military and government labs, academic institutions, consulting firms, and industry. This poster will summarize faculty, research areas, facilities, student demographics, successful graduates,
and recent enrollment and employment trends.
3aID2. Graduate studies in acoustics and noise control in the School of Mechanical Engineering at Purdue University. Patricia
Davies, J. Stuart Bolton, and Kai Ming Li (Ray W. Herrick Labs., School of Mech. Eng., Purdue Univ., 177 South Russell St., West Lafayette, IN 47907-2099, daviesp@purdue.edu)
The acoustics community at Purdue University will be described with special emphasis on the graduate program in Mechanical Engineering (ME). Purdue is home to around 30 faculty who study various aspects of acoustics and related disciplines, and so there are
many classes to choose from as graduate students structure their plans of study to complement their research activities and to broaden
their understanding of the various aspects of acoustics. In Mechanical Engineering, the primary emphasis is on understanding noise generation, noise propagation, and the impact of noise on people, as well as development of noise control strategies, experimental techniques, and noise and noise impact prediction tools. The ME acoustics research is conducted at the Ray W. Herrick Laboratories, which
houses several large acoustics chambers designed to facilitate testing of a wide array of mechanical systems, reflecting the Laboratories’ long history of industry-relevant research. Complementing the acoustics research, Purdue has vibrations, dynamics, and electromechanical systems research programs and is home to a collaborative group of engineering and psychology professors who study human
perception and its integration into engineering design. There are also very strong ties between ME acoustics faculty and faculty in Biomedical Engineering and Speech Language and Hearing Sciences.
3aID3. Acoustics program at the University of Rhode Island. Gopu R. Potty, James H. Miller, Brenton Wallin (Dept. of Ocean Eng.,
Univ. of Rhode Island, 115 Middleton Bldg., Narragansett, RI 02882, potty@egr.uri.edu), Charles E. White (Naval Undersea Warfare
Ctr., Newport, RI), and Jennifer Giard (Marine Acoust., Inc., Middletown, RI)
The undergraduate and graduate program in Ocean Engineering at the University of Rhode Island is one of the oldest such programs
in the United States. This program offers Bachelors, Masters (thesis and non-thesis options), and Ph.D. degrees in Ocean Engineering.
The Ocean Engineering program has a strong acoustic component both at the undergraduate and graduate level. At the graduate level,
students can specialize in several areas including geoacoustic inversion, propagation modeling, marine mammal acoustics, ocean
acoustic instrumentation, transducers, etc. Current acoustics related research activities of various groups will be presented. Information
regarding the requirements of entry into the program will be provided. Many graduates from the program hold faculty positions at a
number of universities in the United States and abroad. In addition, graduates from the ocean acoustics program at URI are key staff at
many companies and organizations. The opportunities and challenges facing the program will be summarized.
3aID4. Graduate education and research in architectural acoustics at Rensselaer Polytechnic Institute. Ning Xiang, Jonas
Braasch, and Todd Brooks (Graduate Program in Architectural Acoust., School of Architecture, Rensselaer Polytechnic Inst., Troy, NY
12180, xiangn@rpi.edu)
The rapid pace of change in the fields of architectural-, physical-, and psycho-acoustics has constantly advanced the Graduate Program in Architectural Acoustics since its inception in 1998, with an ambitious mission of educating future experts and leaders in architectural acoustics. In recent years we have reshaped its pedagogy using “STEM” (science, technology, engineering, and mathematics) methods, including intensive, integrative hands-on experimental components that fuse theory and practice in a collaborative environment. Our pedagogy enables graduate students from a broad range of fields to succeed in this rapidly changing field. The graduate program has attracted students from a variety of disciplines, including individuals with B.S., B.A., or B.Arch. degrees in Engineering, Physics, Mathematics, Computer Science, Electronic Media, Sound Recording, Music, Architecture, and related fields. RPI’s Graduate Program in Architectural Acoustics has since graduated more than 100 students with M.S. and Ph.D. degrees. Along with faculty members, they have actively contributed to the program’s research in architectural acoustics, psychoacoustics, communication acoustics, and signal processing in acoustics, as well as our scientific exploration at the intersection of cutting-edge research and traditional architecture/music culture. This paper shares the growth and evolution of the graduate program.
3aID5. Graduate training opportunities in the hearing sciences at the University of Louisville. Pavel Zahorik, Jill E. Preminger
(Div. of Communicative Disord., Dept. of Surgery, Univ. of Louisville School of Medicine, Psychol. and Brain Sci., Life Sci. Bldg. 317,
Louisville, KY 40292, pavel.zahorik@louisville.edu), and Christian E. Stilp (Dept. of Psychol. and Brain Sci., Univ. of Louisville,
Louisville, KY)
The University of Louisville currently offers two branches of training opportunities for students interested in pursuing graduate training in the hearing sciences: A Ph.D. degree in experimental psychology with concentration in hearing science, and a clinical doctorate in
audiology (Au.D.). The Ph.D. degree program offers mentored research training in areas such as psychoacoustics, speech perception,
spatial hearing, and multisensory perception, and guarantees students four years of funding (tuition plus stipend). The Au.D. program is
a 4-year program designed to provide students with the academic and clinical background necessary to enter audiologic practice. Both
programs are affiliated with the Heuser Hearing Institute, which, along with the University of Louisville, provides laboratory facilities
and clinical populations for both research and training. An accelerated Au.D./Ph.D. training program that integrates key components of
both programs for training of students interested in clinically based research is under development. Additional information is available
at http://louisville.edu/medicine/degrees/audiology and http://louisville.edu/psychology/graduate/vision-hearing.
3aID6. Graduate studies in acoustics, Speech and Hearing at the University of South Florida, Department of Communication Sciences and Disorders. Catherine L. Rogers (Dept. of Commun. Sci. and Disord., Univ. of South Florida, USF, 4202 E. Fowler Ave.,
PCD1017, Tampa, FL 33620, crogers2@usf.edu)
This poster will provide an overview of programs and opportunities for students who are interested in learning more about graduate
studies in the Department of Communication Sciences and Disorders at the University of South Florida. Ours is a large and active
department, offering students the opportunity to pursue either basic or applied research in a variety of areas. Current strengths of the
research faculty in the technical areas of Speech Communication and Psychological and Physiological Acoustics include the following:
second-language speech perception and production, aging, hearing loss and speech perception, auditory physiology, and voice acoustics
and voice quality. Entrance requirements and opportunities for involvement in student research and professional organizations will also
be described.
3aID7. Graduate programs in Hearing and Speech Sciences at Vanderbilt University. G. Christopher Stecker and Anna C. Diedesch
(Hearing and Speech Sci., Vanderbilt Univ. Medical Ctr., 1215 21st Ave. South, Rm. 8310, Nashville, TN 37232-8242, g.christopher.
stecker@vanderbilt.edu)
The Department of Hearing and Speech Sciences at Vanderbilt University is home to several graduate programs in the areas of Psychological and Physiological Acoustics and Speech Communication. Programs include the Ph.D. in Audiology, Speech-Language Pathology, and Hearing or Speech Science, Doctor of Audiology (Au.D.), and Master’s programs in Speech-Language Pathology and
Education of the Deaf. The department is closely affiliated with Vanderbilt University’s Graduate Program in Neurobiology. Several
unique aspects of the research and training environment in the department provide exceptional opportunities for students interested in
studying the basic science as well as clinical-translational aspects of auditory function and speech communication in complex environments. These include anechoic and reverberation chambers capable of multichannel presentation, the Dan Maddox Hearing Aid Laboratory, and close connections to active Audiology, Speech-Pathology, Voice, and Otolaryngology clinics. Students interested in the
neuroscience of communication utilize laboratories for auditory and multisensory neurophysiology and neuroanatomy, human electrophysiology and neuroimaging housed within the department and at the neighboring Vanderbilt University Institute for Imaging Science.
Finally, department faculty and students engage in numerous engineering and industrial collaborations, which benefit from our home
within Vanderbilt University and setting in Music City, Nashville, Tennessee.
3aID8. Underwater acoustics graduate study at the Applied Physics Laboratory, University of Washington. Robert I. Odom
(Appl. Phys. Lab, Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, odom@apl.washington.edu)
With faculty representation in the Departments of Electrical Engineering and Mechanical Engineering within the College of Engineering, the School of Oceanography, and the Department of Earth and Space Sciences within the College of the Environment, underwater acoustics at APL-UW touches on topics as diverse as long-range controlled-source acoustics, very low frequency seismics,
sediment acoustics, marine mammal vocalizations, and noise generated by industrial activities such as pile driving, among other things.
Graduate studies leading to both M.S. and Ph.D. degrees are available. Examples of projects currently being pursued and student opportunities are highlighted in this poster.
3aID9. Graduate acoustics at Brigham Young University. Timothy W. Leishman, Kent L. Gee, Tracianne B. Neilsen, Scott D. Sommerfeldt, Jonathan D. Blotter, and William J. Strong (Brigham Young Univ., N311 ESC, Provo, UT 84602, tbn@byu.edu)
Graduate studies in acoustics at Brigham Young University prepare students for jobs in industry, research, and academia by complementing in-depth coursework with publishable research. In the classroom, a series of five graduate-level core courses provides students
with a solid foundation in core acoustics principles and practices. The associated lab work is substantial and provides hands-on experience in diverse areas of acoustics: calibration, directivity, scattering, absorption, Doppler vibrometry, lumped-element mechanical systems, equivalent circuit modeling, arrays, filters, room acoustics measurements, active noise control, and near-field acoustical
holography. In addition to coursework, graduate students complete independent research projects with faculty members. Recent thesis
and dissertation topics have included active noise control, directivity of acoustic sources, room acoustics, radiation and directivity of
musical instruments, energy-based acoustics, aeroacoustics, propagation modeling, nonlinear propagation, and high-amplitude noise
analysis. In addition to their individual projects, graduate students often serve as peer mentors to undergraduate students on related projects and often participate in field experiments to gain additional experience. Students are expected to develop their communication
skills, present their research at multiple professional meetings, and publish it in peer-reviewed acoustics journals. In the past five years,
nearly all graduate students have published at least one refereed paper.
3aID10. Acoustics-related research in the Department of Speech and Hearing Sciences at Indiana University. Tessa Bent, Steven
Lulich, Robert Withnell, and William Shofner (Dept. of Speech and Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN
47405, tbent@indiana.edu)
In the Department of Speech and Hearing Sciences at Indiana University, there are many highly active laboratories that conduct
research on a wide range of areas in acoustics. Four of these laboratories are described below. The Biophysics Lab (PI: Robert Withnell)
focuses on the mechanics of hearing. Acoustically based signal processing and data acquisition provide experimental data for model-based analysis of peripheral sound processing. The Comparative Perception Lab (PI: William Shofner) focuses on how the physical features of complex sounds are related to their perceptual attributes, particularly pitch and speech. Understanding behavior and perception
in animals, particularly in chinchillas, is an essential component of the research. The Speech Production Laboratory (PI: Steven Lulich)
conducts research on imaging of the tongue and oral cavity, speech breathing, and acoustic modeling of the whole vocal/respiratory
tract. Laboratory equipment includes 3D/4D ultrasound, digitized palate impressions, whole-body and inductive plethysmography, electroglottography, oral and nasal pressure and flow recordings, and accelerometers. The Speech Perception Lab (PI: Tessa Bent) focuses
on the perceptual consequences of phonetic variability in speech, particularly foreign-accented speech. The main topics under investigation are perceptual adaptation, individual differences in word recognition, and developmental speech perception.
3aID11. Biomedical research at the image-guided ultrasound therapeutics laboratories. Christy K. Holland (Internal Medicine,
Univ. of Cincinnati, 231 Albert Sabin Way, CVC 3935, Cincinnati, OH 45267-0586, Christy.Holland@uc.edu), T. Douglas Mast (Biomedical Eng., Univ. of Cincinnati, Cincinnati, OH), Kevin J. Haworth, Kenneth B. Bader, Himanshu Shekhar, and Kirthi Radhakrishnan
(Internal Medicine, Univ. of Cincinnati, Cincinnati, OH)
The Image-guided Ultrasound Therapeutic Laboratories (IgUTL) are located at the University of Cincinnati in the Heart, Lung, and
Vascular Institute, a key component of efforts to align the UC College of Medicine and UC Health research, education, and clinical programs. These extramurally funded laboratories, directed by Prof. Christy K. Holland, comprise graduate and undergraduate students, postdoctoral fellows, principal investigators, and physician-scientists with backgrounds in physics and biomedical engineering,
and clinical and scientific collaborators in fields including cardiology, neurosurgery, neurology, and emergency medicine. Prof. Holland’s research focuses on biomedical ultrasound including sonothrombolysis, ultrasound-mediated drug and bioactive gas delivery, development of echogenic liposomes, early detection of cardiovascular diseases, and ultrasound-image guided tissue ablation. The
Biomedical Ultrasonics and Cavitation Laboratory within IgUTL, directed by Prof. Kevin J. Haworth, employs ultrasound-triggered
phase-shift emulsions (UPEs) for image-guided treatment of cardiovascular disease, especially thrombotic disease. Imaging algorithms
incorporate both passive and active cavitation detection. The Biomedical Acoustics Laboratory within IgUTL, directed by Prof. T.
Douglas Mast, employs ultrasound for monitoring thermal therapy, ablation of cancer and vascular targets, transdermal drug delivery,
and noninvasive measurement of tissue deformation.
2199
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2199
3aID12. Graduate acoustics education in the Cockrell School of Engineering at The University of Texas at Austin. Michael R.
Haberman (Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX), Neal A. Hall (Elec. and Comp. Eng. Dept., The Univ. of Texas
at Austin, Austin, TX), Mark F. Hamilton (Mech. Eng. Dept., The Univ. of Texas at Austin, 1 University Station, C2200, Austin, TX
78712), Marcia J. Isakson (Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX), and Preston S. Wilson (Mech. Eng. Dept., The
Univ. of Texas at Austin, Austin, TX, pswilson@mail.utexas.edu)
While graduate study in acoustics takes place in several colleges and schools at The University of Texas at Austin (UT Austin),
including Communication, Fine Arts, Geosciences, and Natural Sciences, this poster focuses on the acoustics program in Engineering.
The core of this program resides in the Departments of Mechanical Engineering (ME) and Electrical and Computer Engineering (ECE).
Acoustics faculty in each department supervise graduate students in both departments. One undergraduate and seven graduate acoustics
courses are cross-listed in ME and ECE. Instructors for these courses include staff at Applied Research Laboratories at UT Austin, where
many of the graduate students have research assistantships. The undergraduate course, taught every fall, begins with basic physical
acoustics and proceeds to draw examples from different areas of engineering acoustics. Three of the graduate courses are taught every
year: a two-course sequence on physical acoustics, and a transducers course. The remaining four graduate acoustics courses, taught in
alternate years, are on nonlinear acoustics, underwater acoustics, ultrasonics, and architectural acoustics. An acoustics seminar is held
most Fridays during the long semesters, averaging over ten per semester since 1984. The ME and ECE departments both offer Ph.D.
qualifying exams in acoustics.
3aID13. Graduate studies in Ocean Acoustics in the Massachusetts Institute of Technology and Woods Hole Oceanographic Institution Joint Program. Andone C. Lavery (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., 98 Water St., MS 11, Bigelow 211, Woods Hole, MA 02536, alavery@whoi.edu)
An overview of graduate studies in Ocean Acoustics within the framework of the Massachusetts Institute of Technology (MIT) and
Woods Hole Oceanographic Institution (WHOI) Joint Program is presented, including a brief history of the program, facilities, details of
the courses offered, alumni placement, funding opportunities, current program status, faculty members, and research. Emphasis is given
to the key role of the joint strengths provided by MIT and WHOI, the strong sea-going history of the program, and the potential for
highly interdisciplinary research.
3aID14. Graduate studies in acoustics at the University of Notre Dame. Christopher Jasinski and Thomas C. Corke (Aerosp. and
Mech. Eng., Univ. of Notre Dame, 54162 Ironwood Rd., South Bend, IN 46635, chrismjasinski@gmail.com)
The Department of Aerospace and Mechanical Engineering at the University of Notre Dame is conducting cutting-edge research in aeroacoustics, structural vibration, and wind turbine noise. Expanding facilities are housed in two buildings of the Hessert Laboratory for Aerospace Engineering and include two 25 kW wind turbines, a Mach 0.6 wind tunnel, and an anechoic wind tunnel. Several faculty
members conduct research related to acoustics and multiple graduate level courses are offered in general acoustics and aeroacoustics.
This poster presentation will give an overview of the current research activities, laboratory facilities, and graduate students and faculty
involved at Notre Dame’s Hessert Laboratory for Aerospace Engineering.
3aID15. Graduate study in Architectural Acoustics within the Durham School at the University of Nebraska—Lincoln. Lily M.
Wang, Matthew G. Blevins, Zhao Peng, Hyun Hong, and Joonhee Lee (Durham School of Architectural Eng. and Construction, Univ. of
Nebraska-Lincoln, 1110 South 67th St., Omaha, NE 68182-0816, lwang4@unl.edu)
Persons interested in pursuing graduate study in architectural acoustics are encouraged to consider joining the Architectural Engineering Program within the Durham School of Architectural Engineering and Construction at the University of Nebraska—Lincoln
(UNL). Among the 21 ABET-accredited Architectural Engineering (AE) programs across the United States, the Durham School’s program is one of the few that offers graduate engineering degree programs (MAE, MS, and PhD) and one of only two that offers an area of
concentration in architectural acoustics. Acoustics students in the Durham School benefit both from the multidisciplinary environment
in an AE program and from our particularly strong ties to the building industry, since three of the largest architectural engineering companies in the United States are headquartered in Omaha, Nebraska. Descriptions will be given on the graduate-level acoustics courses,
newly renovated acoustic lab facilities, the research interests and achievements of our acoustics faculty and students, and where our graduates have gone to date. Our group is also active in extracurricular activities, particularly through the University of Nebraska Acoustical
Society of America Student Chapter. More information on the “Nebraska Acoustics Group” at the Durham School may be found online
at http://nebraskaacousticsgroup.org/.
3aID16. Pursuing the M.Eng. in acoustics through distance education from Penn State. Daniel A. Russell and Victor W. Sparrow
(Graduate Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg, University Park, PA 16802, drussell@engr.psu.edu)
Since 1987, the Graduate Program in Acoustics at Penn State has been providing remote access to graduate level education leading
to the M.Eng. degree in Acoustics. Course lecture content is currently broadcast as a live-stream via Adobe Connect to distance students
scattered throughout North America and around the world, while archived recordings allow distance students to access lecture material
at their convenience. Distance Education students earn the M.Eng. in Acoustics degree by completing 30 credits of coursework (six
required core courses and four electives) and writing a capstone paper. Courses offered for distance education students include: fundamentals of acoustics and vibration, electroacoustic transducers, signal processing, acoustics in fluid media, sound and structure interaction, digital signal processing, aerodynamic noise, acoustic measurements and data analysis, ocean acoustics, architectural acoustics,
noise control engineering, nonlinear acoustics, outdoor sound propagation, computational acoustics, flow induced noise, spatial sound
and 3D audio, marine bioacoustics, and acoustics of musical instruments. This poster will summarize the distance education experience
leading to the M.Eng. degree in Acoustics from Penn State showcasing student demographics, capstone paper topics, enrollment statistics and trends, and the success of our graduates.
3aID17. Graduate studies in acoustics at Northwestern University. Ann Bradlow (Linguist, Northwestern Univ., 2016 Sheridan Rd.,
Evanston, IL, abradlow@northwestern.edu)
Northwestern University has a vibrant and highly interdisciplinary community of acousticians. Of the 13 ASA technical areas, three
have strong representation at Northwestern: Speech Communication, Psychological and Physiological Acoustics, and Musical Acoustics.
Sound-related work is conducted across a wide range of departments including Linguistics (in the Weinberg College of Arts and Sciences), Communication Sciences & Disorders, and Radio/Television/Film (both in the School of Communication), Electrical Engineering
& Computer Science (in the McCormick School of Engineering), Music Theory & Cognition (in the Bienen School of Music), and Otolaryngology (in the Feinberg School of Medicine). In addition, The Knowles Hearing Center involves researchers and labs across the
university dedicated to the prevention, diagnosis, and treatment of hearing disorders. Specific acoustics research topics across the university range from speech perception and production across the lifespan and across languages, dialect and socio-indexical properties of speech, sound design, machine perception of music and audio, musical communication, the impact of long-term musical experience on auditory encoding and representation, and auditory perceptual learning, to the cellular, molecular, and genetic bases of hearing function.
We invite you to visit our poster to learn more about the “sonic boom” at Northwestern University!
WEDNESDAY MORNING, 29 OCTOBER 2014
SANTA FE, 9:00 A.M. TO 11:45 A.M.
Session 3aMU
Musical Acoustics: Topics in Musical Acoustics
Jack Dostal, Chair
Physics, Wake Forest University, P.O. Box 7507, Winston-Salem, NC 27109
Contributed Papers
9:00
3aMU1. Study of free reed attack transients using high speed video. Spencer Henessee (Phys., Coe College, GMU #447, 1220 First Ave. NE, Cedar Rapids, IA 52402, sahenessee@coe.edu), Daniel M. Wolff (Univ. of North Carolina at Greensboro, Greensboro, NC), and James P. Cottingham (Phys., Coe College, Cedar Rapids, IA)
Earlier methods of studying the motion of free reeds have been augmented with the use of high-speed video, resulting in a more detailed picture of reed oscillation, especially the initial transients. Displacement waveforms of selected points on the reed tongue image can be obtained using appropriate tracking software. The waveforms can be analyzed for the presence of higher modes of vibration and other features of interest in reed oscillation, and they can be used in conjunction with displacement or velocity waveforms obtained by other means, along with finite element simulations, to obtain detailed information about reed oscillation. The high-speed video data has a number of advantages. It can provide a two-dimensional image of the motion of any point tracked on the reed tongue, and the freedom to change the points selected for tracking provides flexibility in data acquisition. In addition, the high-speed camera is capable of simultaneous triggering of other motion sensors as well as oscilloscopes and spectrum analyzers. Some examples of the use of high-speed video are presented and some difficulties in the use of this technique are discussed. [Work partially supported by US National Science Foundation REU Grant PHY-1004860.]
9:15
3aMU2. Detailed analysis of free reed initial transients. Daniel M. Wolff (Univ. of North Carolina at Greensboro, 211 McIver St. Apt. D, Greensboro, NC 27403, dmwolff@uncg.edu), Spencer Henessee, and James P. Cottingham (Phys., Coe College, Cedar Rapids, IA)
The motion of the reed tongue in early stages of the attack transient has been studied in some detail for reeds from a reed organ. Oscillation waveforms were obtained using a laser vibrometer system, variable impedance transducer proximity sensors, and high-speed video with tracking software. Typically, the motion of the reed tongue begins with an initial displacement of the equilibrium position, often accompanied by a few cycles of irregular oscillation. This is followed by a short transitional period in which the amplitude of oscillation gradually increases and the frequency stabilizes at the steady-state oscillation frequency. In the next stage, the amplitude of oscillation continues to increase to the steady-state value. The spectra derived from the waveforms in each stage have been analyzed, showing that the second transverse mode and the first torsional mode are both observed in the transient, with the amplitude of the torsional mode apparently especially significant in the earlier stages of oscillation. Comparisons of reed tongues of different design have been made to explore the role of the torsional mode in the initial excitation. Finite element simulations have been used to aid in the verification and interpretation of some of the results. [Work supported by US National Science Foundation REU Grant PHY-1004860.]
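The stage-by-stage spectral analysis described above can be illustrated with a short sketch: a synthetic displacement waveform in which a higher mode is strong at onset and decays toward steady state, analyzed by comparing spectra of an early and a late window. All numerical values (sampling rate, mode frequencies, decay time) are illustrative assumptions, not measurements from these papers.

```python
import numpy as np

fs = 20000                       # sample rate (Hz); all values here are illustrative
f1 = 440.0                       # assumed fundamental (first transverse mode)
f2 = 6.27 * f1                   # second transverse mode, cantilever-beam ratio ~6.27
t = np.arange(0, 0.5, 1 / fs)

# Synthetic tracked-point displacement: the higher mode is strong at onset and
# decays, qualitatively like the reported attack transients.
x = np.sin(2 * np.pi * f1 * t) + 0.8 * np.exp(-t / 0.04) * np.sin(2 * np.pi * f2 * t)

def peak_in_band(seg, lo, hi):
    """Frequency and magnitude of the largest spectral peak in [lo, hi] Hz."""
    spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    freqs = np.fft.rfftfreq(len(seg), 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    i = np.argmax(np.where(band, spec, 0.0))
    return freqs[i], spec[i]

early = x[: int(0.05 * fs)]                 # first 50 ms of the transient
late = x[int(0.40 * fs): int(0.45 * fs)]    # near steady state

f_early, a_early = peak_in_band(early, 1000, 5000)
f_late, a_late = peak_in_band(late, 1000, 5000)
print(f_early, a_late / a_early)            # higher mode near 2759 Hz, far weaker late
```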
9:30
3aMU3. Comparison of traditional and matched grips: Rhythmic sequences played in jazz drumming. E. K. Ellington Scott (Oberlin College, OCMR2639, Oberlin College, Oberlin, OH 44074, escott@oberlin.edu) and James P. Cottingham (Phys., Coe College, Cedar Rapids, IA)
Traditional and matched grips have been compared using a series of measurements involving rhythmic sequences played by experienced jazz drummers using each of the two grips. Rhythmic sequences played on the snare drum were analyzed using high-speed video as well as other measurement techniques including laser vibrometry and spectral analysis of the sound waveforms. The high-speed video images, used with tracking software, allow observation of several aspects of stick-drum head interaction. These include two-dimensional trajectories of the drum stick tip, a detailed picture of the stick-drum head interaction, and velocities of both the stick and the drum head during the contact phase of the stroke. Differences between the two grips in timing during the rhythmic sequences were investigated, and differences in sound spectrum were also analyzed. Some factors that may be player dependent have been explored, such as the effect of tightness of the grip, but an effort has been made to concentrate on factors that are independent of the player. [Work supported by US National Science Foundation REU Grant PHY-1004860.]
9:45
3aMU4. A harmonic analysis of oboe reeds. Julia Gjebic, Karen Gipson (Phys., Grand Valley State Univ., 10255 42nd Ave., Apt. 3212, Allendale, MI 49401, gjebicj@mail.gvsu.edu), and Marlen Vavrikova (Music and Dance, Grand Valley State Univ., Allendale, MI)
Because oboists make their own reeds to satisfy personal and physiological preferences, no two reed-makers construct their reeds in the same manner, just as no two oboe players have the same sound. The basic structure of an oboe reed consists of two curved blades of the grass Arundo donax bound to a conical metal tube (a staple) such that the edges of the blades meet and vibrate against one another when stimulated by a change in the surrounding pressure. While this basic structure is constant across reed-makers, the physical measurements of the various portions of the reed (tip, spine, and heart) resulting from the final stage of reed-making (scraping) can vary significantly between individual oboists. In this study, we investigated how the physical structure of individual reeds relates to the acoustic spectrum. We performed statistical analyses to discern which areas of the finished reed influence the harmonic series most strongly. This information is of great interest to oboists, as it allows them quantitative insight into how their individual scrape affects their overall tone quality and timbre.
10:00
3aMU5. Modeling and numerical simulation of a harpsichord. Rossitza Piperkova, Sebastian Reiter, Martin Rupp, and Gabriel Wittum (Goethe Ctr. for Sci. Computing, Goethe Univ. Frankfurt, Kettenhofweg 139, Frankfurt am Main 60325, Germany, Wittum@gcsc.uni-frankfurt.de)
This research studies the influence that various properties of a soundboard may have on its acoustic response, to gain a better understanding of how different properties shape the sound characteristics; it may also help to improve the quality of simulations. We performed a modal analysis of a real harpsichord soundboard using a laser Doppler vibrometer and also simulated several models of the same soundboard in three space dimensions using the simulation software UG4. The models of the soundboard differed from one another in which properties and components were changed or omitted. We then compared the simulated vibration patterns with those measured on the real soundboard to better understand their influence on the vibrations. In particular, we used models with and without soundboard bars and bridge, and with different thicknesses for the soundboard itself.
10:15–10:30 Break
10:30
3aMU6. Temporal analysis, manipulation, and resynthesis of musical vibrato. Mingfeng Zhang, Gang Ren, Mark Bocko (Dept. Elec. and Comput. Eng., Univ. of Rochester, Rochester, NY 14627, mzhang43@hse.rochester.edu), and James Beauchamp (Dept. Elec. and Comput. Eng., Univ. of Illinois at Urbana–Champaign, Urbana, IL)
Vibrato is an important performance technique for both the voice and a variety of musical instruments. In this paper, a signal processing framework for vibrato analysis, manipulation, and resynthesis is presented. In the analysis part, musical vibrato is treated as a generalized descriptor of musical timbre, and the signal magnitude and instantaneous frequency are implemented as temporal features. Specifically, the magnitude track shows the dynamic variations of audio loudness, and the frequency track shows the frequency deviations varying with time. In the manipulation part, several manipulation methods for the magnitude track and the frequency track are implemented. The tracking results are manipulated in both the time and frequency domains. These manipulation methods are implemented as an interactive process to allow musicians to manually adjust the processing parameters. In the resynthesis part, the simulated vibrato audio is created using a sinusoidal resynthesis process. The resynthesis part serves three purposes: to imitate human music performance, to migrate sonic features across music performances, and to serve as a creative audio design tool, e.g., to create vibrato characteristics that do not occur naturally. The source audio from the human performance and the resynthesized audio are compared using subjective listening tests to validate our proposed framework.
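A minimal sketch of the two temporal features described in the analysis part, a magnitude track and an instantaneous-frequency track, computed from the analytic signal of a synthetic vibrato tone. The vibrato rate, extent, and loudness pulsation are illustrative values, not parameters from the paper.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (standard Hilbert-transform construction)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
f0, rate, extent = 440.0, 5.0, 8.0   # carrier, vibrato rate, extent (Hz): illustrative
amp = 1.0 + 0.2 * np.sin(2 * np.pi * rate * t)   # loudness pulsation
x = amp * np.sin(2 * np.pi * f0 * t + (extent / rate) * np.sin(2 * np.pi * rate * t))

z = analytic(x)
mag_track = np.abs(z)                                # magnitude (loudness) track
freq_track = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)  # frequency track

mid = slice(fs // 4, 3 * fs // 4)                    # avoid transform edge effects
print(freq_track[mid].min(), freq_track[mid].max())  # swings roughly f0 - extent to f0 + extent
```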
10:45
3aMU7. Shaping musical vibratos using multi-modal pedagogical interactions. Mingfeng Zhang, Fangyu Ke (Dept. Elec. and Comput. Eng., Univ.
of Rochester, Rochester, NY 14627, mzhang43@hse.rochester.edu), James
Beauchamp (Dept. Elec. and Comput. Eng., Univ. of Illinois at Urbana–
Champaign, Urbana, IL), and Mark Bocko (Dept. Elec. and Comput. Eng.,
Univ. of Rochester, Rochester, NY)
Musical vibrato is termed a “pulsation in pitch, intensity, and timbre” because of its effectiveness in artistic rendering. However, this sonic trick largely remains a challenge in music pedagogy across music conservatories. In classroom practice, music teachers use demonstration, body gestures, and metaphors to convey their artistic intentions, and modern computer tools are seldom employed. In our proposed framework, we use musical vibrato visualization and sonification tools as a multi-modal computer interface for pedagogical purposes. Specifically, we compare master performance audio with student performance audio using signal analysis tools. Then, we obtain various similarity measures based on these signal analysis results. Based on these similarity measures we implement multi-modal interactions for music students to shape their music learning process. The visualization interface is based on audio features including dynamics, pitch, and timbre. The sonification interface is based on recorded audio and synthesized audio. To enhance the musical relevance of our proposed framework, both visualization and sonification tools are targeted to serve musical communication, conveying musical concepts in an intuitive manner. The proposed framework is evaluated using subjective ratings from music students and objective assessment of measurable training goals.
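The abstract does not specify which similarity measures are used; one plausible measure for comparing master and student pitch-deviation tracks is a peak normalized cross-correlation, sketched here with hypothetical vibrato tracks:

```python
import numpy as np

# Hypothetical pitch-deviation tracks (cents, sampled at 100 Hz) for a master and
# a student rendition of the same note; both the tracks and the score below are
# illustrative assumptions, not the authors' actual measures.
fs_track = 100
t = np.arange(0, 2.0, 1 / fs_track)
master = 30 * np.sin(2 * np.pi * 5.5 * t)          # regular 5.5 Hz vibrato
student = 22 * np.sin(2 * np.pi * 5.0 * t + 0.3)   # slower, shallower, phase-offset

def vibrato_similarity(a, b):
    """Peak normalized cross-correlation: 1.0 for identical shape (up to gain/lag)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full").max() / len(a)

print(vibrato_similarity(master, master))   # 1.0 by construction
print(vibrato_similarity(master, student))  # below 1.0: rate and phase differ
```

A measure of this form is invariant to overall vibrato depth and time offset, so it isolates the shape of the pitch modulation, which is one reasonable target for pedagogical feedback.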
11:00
3aMU8. Absolute memory for popular songs is predicted by auditory working memory ability. Stephen C. Van Hedger, Shannon L. Heald, Rachelle Koch, and Howard C. Nusbaum (Psych., The Univ. of Chicago, 5848 S. University Ave., Beecher 406, Chicago, IL 60637, stephen.c.hedger@gmail.com)
While most individuals do not possess absolute pitch (AP)—the ability to name an isolated musical note in the absence of a reference note—they do show some limited memory for the absolute pitch of melodies. For example, most individuals are able to recognize when a well-known song has been subtly pitch shifted. Presumably, individuals are able to select the correct absolute pitch at above-chance levels because well-known songs are frequently heard at a consistent pitch. In the current studies, we ask whether individual differences in absolute pitch judgments for people without AP can be explained by general differences in auditory working memory. Working memory capacity has been shown to predict the perceptual fidelity of long-term category representations in vision; thus, it is possible that auditory working memory capacity explains individual differences in recognizing the tuning of familiar songs. We found that participants were reliably above chance in classifying popular songs as belonging to the correct or incorrect key. Moreover, individual differences in this recognition performance were predicted by auditory working memory capacity, even after controlling for overall music experience and stimulus familiarity. Implications for the interaction between working memory and AP are discussed.
11:15
3aMU9. Constructing alto saxophone multiphonic space. Keith A. Moore (Music, Columbia Univ., 805 W Church St., Savoy, Illinois 10033, kam101@columbia.edu)
Multiphonics are sonorities with two or more independent tones arising from instruments, or portions of instruments, associated with the production of single pitches. Since the 1960s, multiphonics have been probed in two ways. Acousticians have explored the role of nonlinearity in multiphonic sound production (Benade 1976; Backus 1978; Keefe & Laden 1991) and musicians have created instrumental catalogs of multiphonic sounds (Bartolozzi 1967; Rehfeldt 1977; Kientzy 1982; Levine 2002). These lines of inquiry have at times been combined (Veale & Mankopf 1994). However, a meta-level analysis has not yet emerged from this work that answers basic questions such as how many kinds of multiphonics are found on one particular instrument and which physical conditions underlie such variety. The present paper suggests a database-driven approach to the problem, producing a “quantitative resonant frequency curve” that shows every audible appearance of each frequency in a large—if not permutationally exhaustive—set of alto saxophone multiphonics. Compelling data emerges, including sonority prototypes, prototype transposition levels, and register-specific distortions. Notably, true difference tones—audible difference tones unsustainable apart from a sounding multiphonic—are found to be register specific, not sonority specific, suggesting that physical locations (rather than harmonic contexts) underpin these sounds.
11:30
3aMU10. Linear-response reflection coefficient of the recorder air-jet amplifier. John C. Price (Phys., Univ. of Colorado, 390 UCB, Boulder, CO 80309, john.price@colorado.edu), William Johnston (Phys., Colorado State Univ., Fort Collins, CO), and Daniel McKinnon (Chemical Eng., Univ. of Colorado, Boulder, CO)
Steady-state oscillations in a duct flute, such as the recorder, are controlled by (1) closing tone holes and (2) adjusting the blowing pressure or air-jet velocity. The acoustic amplitude in steady state cannot be controlled independently of the jet velocity, because it is determined by the gain saturation properties of the air-jet amplifier. Consequently, the linear-response gain of the air-jet amplifier has only very rarely been studied [Thwaites and Fletcher, J. Acoust. Soc. Am. 74, 400–408 (1983)]. Efforts have focused instead on the more complex gain-saturated behavior, which is controlled by vortex shedding at the labium. We replace the body of a Yamaha YRT-304B tenor recorder with a multi-microphone reflectometer and measure the complex reflection coefficient of the head at small acoustic amplitudes as a function of air-jet velocity and acoustic frequency. We find that the gain (reflection coefficient magnitude) has a maximum value of 2.5 at a Strouhal number of 0.3 (jet transit time divided by acoustic period), independent of jet velocity. Surprisingly, the frequency where the gain peaks for a given blowing pressure is not close to the in-tune pitch of a note that is played at the same blowing pressure.
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 3/4, 8:45 A.M. TO 12:00 NOON
Session 3aNS
Noise and ASA Committee on Standards: Wind Turbine Noise
Nancy S. Timmerman, Cochair
Nancy S. Timmerman, P.E., 25 Upton Street, Boston, MA 02118
Robert D. Hellweg, Cochair
Hellweg Acoustics, 13 Pine Tree Rd., Wellesley, MA 02482
Paul D. Schomer, Cochair
Schomer and Associates Inc., 2117 Robert Drive, Champaign, IL 61821
Kenneth Kaliski, Cochair
RSG Inc., 55 Railroad Row, White River Junction, VT 05001
Invited Papers
8:45
3aNS1. Massachusetts Wind Turbine Acoustics Research Project—Goals and preliminary results. Kenneth Kaliski, David Lozupone (RSG Inc., 55 Railroad Row, White River Junction, VT 05001, ken.kaliski@rsginc.com), Peter McPhee (Massachusetts Clean Energy Ctr., Boston, MA), Robert O’Neal (Epsilon Assoc., Maynard, MA), John Zimmerman (Northeast Wind, Waterbury, VT), Keith Wilson (Keith Wilson, Hanover, NH), and Carol Rowan-West (Massachusetts Dept. of Environ. Protection, Boston, MA)
The Commonwealth of Massachusetts (USA) has 43 operating wind turbine projects of 100 kW or more. At several of these projects, noise complaints have been made to state authorities. The Massachusetts Clean Energy Center, which provides funding for early stage analysis and development of wind power projects, and the Massachusetts Department of Environmental Protection, which regulates noise, launched the project to increase understanding of (1) wind turbine acoustic impacts, taking into account variables such as wind turbine size, technology, wind speed, topography, and distance, and (2) the generation, propagation, and measurement of sound around wind turbine projects, to inform policy-makers on how pre- and post-construction wind turbine noise studies should be conducted. This study involved the collection of detailed sound and meteorological data at five locations. The resulting database and interim reports contain information on infrasound and audible frequencies, including amplitude modulation, tonality, and level. Analyses will include how wind shear and other variables may affect these parameters. Preliminary findings reflect the effects of meteorological conditions on wind turbine sound generation and propagation.
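One common way to quantify the amplitude modulation mentioned above is to extract a short-window RMS envelope of the recording and look for a spectral peak at the blade-pass frequency. The sketch below uses a synthetic noise signal; the blade-pass rate and modulation depth are illustrative assumptions, not project data.

```python
import numpy as np

fs = 2000
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(0)

# Hypothetical turbine-like recording: broadband noise whose level swells at the
# blade-pass frequency (e.g., 3 blades at ~0.27 rev/s, i.e., 0.8 Hz).
bpf = 0.8
x = (1.0 + 0.4 * np.sin(2 * np.pi * bpf * t)) * rng.standard_normal(t.size)

# Short-window RMS envelope; a peak in the envelope spectrum at the blade-pass
# frequency indicates amplitude modulation.
win = int(0.05 * fs)                        # 50 ms RMS windows
n_win = t.size // win
env = np.sqrt((x[: n_win * win] ** 2).reshape(n_win, win).mean(axis=1))

spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(n_win, win / fs)
f_mod = freqs[np.argmax(spec)]
print(f_mod)                                # modulation detected near 0.8 Hz
```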
9:05
3aNS2. Wind turbine annoyance—A clue from acoustic room modes. William K. Palmer (TRI-LEA-EM, 76 SideRd. 33-34 Saugeen,
RR 5, Paisley, ON N0G2N0, Canada, trileaem@bmts.com)
When one admits that they do not know all the answers and sets out to listen to the stories of people annoyed by wind turbines, the
clues can seem confusing. Why would some people report that they could get a better night’s sleep in an outdoor tent, rather than their
bedroom? Others reported that they could sleep better in the basement recreation room of their home, than in bedrooms. That made little
sense either. A third mysterious clue came from acoustic measurements at homes near wind turbines. Analysis of the sound signature
revealed low frequency spikes, but at amplitudes well below those expected to cause annoyance. The clues merged while studying the
acoustic room modes in a home, to reveal a remarkable hypothesis as to the cause of annoyance from wind turbines. In rooms where
annoyance was felt, the frequencies flagged by room mode calculations and the low frequency spikes observed from wind turbine measurements coincided. This paper will discuss the research and the results, which revealed a finding that provides a clue to the annoyance,
and potentially even a manner of providing limited relief.
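The room-mode calculations referred to above are typically based on the rectangular-room formula f(nx, ny, nz) = (c/2)*sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2); a short sketch with hypothetical room dimensions (not the homes in the study):

```python
import math
from itertools import product

C = 343.0   # speed of sound in air (m/s)

def room_modes(lx, ly, lz, n_max=2):
    """Rectangular-room mode frequencies f = (C/2)*sqrt((nx/lx)^2 + (ny/ly)^2 + (nz/lz)^2)."""
    modes = []
    for nx, ny, nz in product(range(n_max + 1), repeat=3):
        if nx == ny == nz == 0:
            continue  # skip the trivial (0, 0, 0) case
        f = (C / 2) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes.append((round(f, 1), (nx, ny, nz)))
    return sorted(modes)

# Hypothetical 4 m x 3 m x 2.5 m bedroom: the lowest axial modes fall in the same
# low-frequency range as the spikes reported in turbine measurements.
for f, mode in room_modes(4.0, 3.0, 2.5)[:5]:
    print(f, mode)
```

When a low-frequency spike in the measured turbine spectrum coincides with one of these mode frequencies, the room can reinforce that component in exactly the rooms where annoyance is reported, which is the coincidence the paper describes.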
9:25
3aNS3. A perspective on wind farm complaints and the Acoustical Society of America’s public policy. Paul D. Schomer (Schomer
and Assoc., Inc., 2117 Robert Dr., Champaign, IL 61821, schomer@SchomerAndAssociates.com) and George Hessler (Hessler Assoc.,
Haymarket, VA)
Worldwide, hundreds of wind farms have been built and commissioned. A sizeable fraction of these have had some complaints about
wind farm noise, perhaps 10 to 50%. A smaller percentage of wind farms have engendered more widespread complaints and claims of
adverse health effects, perhaps 1 to 10%. And in the limit (0 to 1%), there have been very widespread, vociferous complaints and in
some cases people have abandoned their houses. Some advocates for potentially affected communities have opined that many will be made ill even while living miles from the nearest turbine, and some wind power advocates have opined that there is no possibility that anyone can be made ill by wind turbine acoustic emissions. In an attempt to ameliorate this frequently polarized situation,
the ASA has established a public policy statement that calls for the development of a balanced research agenda to establish facts, where
“balanced” means the research should resolve issues for all parties with a material interest, and all parties should have a seat at the table
where the research plans are developed. This paper presents some thoughts and suggestions as to how this ASA public policy statement
can be nurtured and brought to fruition.
9:45
3aNS4. Balancing the research approach on wind turbine effects through improving psychological factors that affect community
response. Brigitte Schulte-Fortkamp (Inst. of Fluid Mech. and Eng. Acoust., TU Berlin, Einsteinufer 25, Berlin 101789, Germany, b.
schulte-fortkamp@tu-berlin.de)
There is a substantial need to find a balanced approach to dealing with people's concerns about wind turbine effects. Indeed, the psychological factors that affect community response will be an important facet in this agenda development. Many of the relevant issues are related to the soundscape concept, which was adopted as an approach to provide a more holistic evaluation of "noise" and its effects on the quality of life. Moreover, the soundscape approach uses a variety of investigation techniques, taxonomies, and measurement methods. Such a protocol is necessary to approach a subject or phenomenon, to improve the validity of the research or design outcome, and to reduce the uncertainty of relying on only one approach. This presentation will use recent data that improve the understanding of the role of psychoacoustic parameters beyond the equivalent continuous sound level in wind turbine effects, in order to discuss relevant psychological factors based on soundscape techniques.
10:05–10:25 Break
Contributed Papers
10:25
3aNS5. Measurement and synthesis of wind turbine infrasound. Bruce
E. Walker (Channel Islands Acoust., 676 W Highland Dr., Camarillo, CA
93010, noiseybw@aol.com) and Joseph W. Celano (Newson-Brown Acoust.
LLC, Santa Monica, CA)
As part of an ongoing investigation into the putative subjective effects of sub-20 Hz acoustical emissions from large industrial wind turbines, measurement techniques for faithful capture of emission waveforms have been developed and reported. To evaluate perception thresholds, Fourier synthesis and high fidelity low-frequency playback equipment have been used to duplicate, in a residential-like listening environment, the amplitudes and wave slopes of the actual emissions, with pulsation rates in the range of 0.5–1.0 per second. Further, the amplitudes and slopes of the synthesized waves can be parametrically varied and the harmonic phases "scrambled" to assess the relative effects on auditory and other subjective responses. Measurement and synthesis system details and initial subjective response results will be shown.
2204
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
10:40
3aNS6. Propagation of wind turbine noise through the turbulent atmosphere. Yuan Peng, Nina Zhou, Jun Chen, and Kai Ming Li (Mech. Eng., Purdue Univ., 177 South Russel St., West Lafayette, IN 47907-2099, peng45@purdue.edu)
It is well known that turbulence can cause fluctuations in the resulting sound fields. For wind turbine noise, this effect is non-negligible, since either the inflow turbulence from nearby turbine wakes or the atmospheric turbulence generated by rotating turbine blades can increase the sound output of individual turbines. In this study, a combined approach of the Finite Element Method (FEM) and the Parabolic Equation (PE) method is employed to predict the sound levels from a wind turbine. In the prediction procedure, the near-field acoustic data are obtained by means of a computational fluid dynamics program, which serves as a good starting field for sound propagation. It is then possible to advance the wind turbine noise in range by using the FEM/PE marching algorithm. By incorporating simulated turbulence profiles near the wind turbine, more accurate predictions of the sound field in realistic atmospheric conditions are obtained.
10:55–12:00 Panel Discussion
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA C/D, 8:20 A.M. TO 11:30 A.M.
Session 3aPA
Physical Acoustics, Underwater Acoustics, Structural Acoustics and Vibration, and Noise: Acoustics of Pile
Driving: Models, Measurements, and Mitigation
Kevin M. Lee, Cochair
Applied Research Laboratories, The University of Texas at Austin, 10000 Burnet Road, Austin, TX 78758
Mark S. Wochner, Cochair
AdBmTechnologies, 1605 McKinley Ave., Austin, TX 78702
Invited Papers
8:20
3aPA1. Understanding effects of man-made sound on fishes and turtles: Gaps and guidelines. Arthur N. Popper (Biology, Univ. of
Maryland, Biology/Psych. Bldg., College Park, MD 20742, apopper@umd.edu) and Anthony D. Hawkins (Loughine Ltd, Aberdeen,
United Kingdom)
Mitigating measures may be needed to protect animals and humans that are exposed to sound from man-made sources. In this context, the levels of man-made sound that will disrupt behavior or physically harm the receiver should drive the degree of mitigation that
is needed. If a particular sound does not affect an animal adversely, then there is no need for mitigation! The problem then is to know
the sound levels that can affect the receiving animal. For most marine animals, there are relatively few data to develop guidelines that
can help formulate the levels at which mitigation is needed. In this talk, we will review recent guidelines for fishes and turtles. Since so
much remains to be determined in order to make guidelines more useful, it is important that priorities be set for future research. The
most critical data, with broadest implications for marine life, should be obtained first. This paper will also consider the most critical gaps
and present recommendations for future research.
8:40
3aPA2. The relationship between underwater sounds generated by pile driving and fish physiological responses. Michele B. Halvorsen (CSA Ocean Sci. Inc., 8502 SW Kanner Hwy, Stuart, FL 34997, mhalvorsen@conshelf.com)
Assessment of fish physiology after exposure to impulsive sound has relied on quantifying physiological injuries, which range from mortal to recoverable. A complex panel of injuries was reduced to a single metric by a model called the Fish Index of Trauma. Over several years, six species of fishes from different morphological groupings (e.g., physoclistous, physostomous, and lacking a swim bladder) were studied. The onset of physiological tissue effects was determined across a range of cumulative sound exposure levels with varying numbers of pile strikes. Follow-up studies included investigation of healing from incurred injuries. The level of injury that animals expressed was influenced by their morphological grouping. Finally, investigation of the inner ear sensory hair cells showed that damage occurred at higher sound exposure levels than those at which the onset of tissue injury occurred.
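The cumulative sound exposure level used in studies like this is conventionally accumulated from the single-strike SEL; for N identical strikes it reduces to SEL_cum = SEL_ss + 10·log10(N). A minimal sketch of that standard relation (the levels below are illustrative, not values from the study):

```python
import math

def cumulative_sel(single_strike_sel_db, n_strikes):
    """Cumulative SEL for n identical pile strikes, in dB (re 1 uPa^2*s)."""
    return single_strike_sel_db + 10.0 * math.log10(n_strikes)

# Illustrative only: a 180 dB single-strike SEL accumulated over 1000 strikes
print(cumulative_sel(180.0, 1000))  # -> 210.0
```

In practice strikes are not identical, so the cumulative level is computed by summing the per-strike exposures on an energy basis rather than using the log10(N) shortcut.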
9:00
3aPA3. A model to predict tissue damage in fishes from vibratory and impact pile driving. Mardi C. Hastings (George W. Woodruff School of Mech. Eng., Georgia Inst. of Technol., Atlanta, GA 30332-0405, mardi.hastings@gatech.edu)
Predicting effects of underwater pile driving on marine life requires coupling of pile source models with biological receiver models.
Fishes in particular are very vulnerable to tissue damage and hearing loss from pile driving activities, especially since they are often restricted to specific habitat sites and migratory routes. Cumulative sound exposure level is the metric used by government agencies for
sound exposure criteria to protect marine animals. In recent laboratory studies, physical injury and hearing loss in fish from simulated
impact pile driving signals have even been correlated with this metric. Mechanisms for injury and hearing loss in fishes, however,
depend on relative acoustic particle motion within the body of the animal, which can be disproportionately large in the vicinity of a pile.
Modeling results will be presented showing correlation of auditory tissue damage in three species of fish with relative particle motion
that can be generated 10–20 m from driving a 24-in diameter steel pile with an impact hammer. Comparative results with vibratory piling based on measured waveforms indicate that particle motion mechanisms may provide an explanation of why the very large cumulative sound exposure levels associated with vibratory pile driving do not produce tissue damage.
9:20
3aPA4. Pile driving pressure and particle velocity at the seabed: Quantifying effects on crustaceans and groundfish. James H.
Miller, Gopu R. Potty, and Hui-Kwan Kim (Ocean Eng., Univ. of Rhode Island, URI Bay Campus, 215 South Ferry Rd., Narragansett,
RI 02882, miller@egr.uri.edu)
In the United States, offshore wind farms are being planned and construction could begin in the near future along the East Coast of
the US. Some of the sites being considered are known to be habitat for crustaceans such as the American lobster, Homarus americanus,
which has a range from New Jersey to Labrador along the coast of North America. Groundfish such as summer flounder, Paralichthys
dentatus, and winter flounder, Pseudopleuronectes americanus, also are common along the East Coast of the US. Besides sharing the
seafloor in locations where wind farms are planned, all three of these species are valuable commercially. We model the effects on crustaceans, groundfish, and other animals near the seafloor due to pile driving. Three different waves are investigated including the compressional wave, shear wave and interface wave. A Finite Element (FE) technique is employed in and around the pile while a Parabolic
Equation (PE) code is used to predict propagation at long ranges from the pile. Pressure, particle displacement, and particle velocity are
presented as a function of range at the seafloor for a shallow water environment near Rhode Island. We will discuss the potential effects
on animals near the seafloor.
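Far from the pile, a rough first-order check on the particle motion quantities discussed above can be made from pressure alone via the plane-wave impedance rho·c, with displacement obtained by dividing velocity by angular frequency; near-field and interface waves require the full FE/PE treatment the abstract describes. A sketch under those plane-wave assumptions (values illustrative):

```python
import math

def particle_velocity(p_pa, rho=1000.0, c=1500.0):
    """Plane-wave particle velocity (m/s) from acoustic pressure (Pa) in water."""
    return p_pa / (rho * c)

def particle_displacement(p_pa, f_hz, rho=1000.0, c=1500.0):
    """Plane-wave particle displacement (m) at frequency f_hz."""
    return particle_velocity(p_pa, rho, c) / (2.0 * math.pi * f_hz)

# Illustrative: 1 kPa peak pressure at 100 Hz in a seawater-like medium
v = particle_velocity(1000.0)             # ~6.7e-4 m/s
d = particle_displacement(1000.0, 100.0)  # ~1.1e-6 m
print(v, d)
```

Near the pile, where the abstract notes particle motion can be disproportionately large, this plane-wave estimate is a lower bound rather than a prediction.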
9:40
3aPA5. Finite difference computational modeling of marine impact pile driving. Alexander O. MacGillivray (JASCO Appl. Sci.,
2305–4464 Markham St., Victoria, BC V8Z7X8, Canada, alex@jasco.com)
Computational models based on the finite difference (FD) method can be successfully used to predict underwater pressure waves
generated by marine impact pile driving. FD-based models typically discretize the equations of motion for a cylindrical shell to model
the vibrations of a submerged pile in the time-domain. However, because the dynamics of a driven pile are complex, realistic models
must also incorporate physics of the driving hammer and surrounding acousto-elastic media into the FD formulation. This paper discusses several of the different physical phenomena involved, and shows some approaches to simulating them using the FD method.
Topics include dynamics of the hammer and its coupling to the pile head, transmission of axial pile vibrations into the soil, energy dissipation at the pile wall due to friction, acousto-elastic coupling to the surrounding media, and near-field versus far-field propagation modeling. Furthermore, this paper considers the physical parameters required for predictive modeling of pile driving noise in conjunction
with some practical considerations about how to determine these parameters for real-world scenarios.
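The paper's FD formulation couples shell dynamics, the hammer, and the surrounding media; as a much-reduced illustration of the time-domain marching idea, the axial vibration of a pile can be stepped with an explicit finite difference scheme for the 1-D wave equation, with a short hammer force pulse applied at the pile head. All parameters below are hypothetical, chosen only to keep the scheme stable; this is not the author's model.

```python
import math

# 1-D axial wave equation u_tt = c^2 u_xx on a pile, explicit FD time march.
L, c = 30.0, 5000.0            # pile length (m), bar wave speed in steel (m/s)
nx = 300
dx = L / nx
dt = 0.9 * dx / c              # satisfies the CFL stability condition
rho_a = 7850.0 * 0.05          # mass per unit length (kg/m), hypothetical section

u_prev = [0.0] * (nx + 1)      # displacement at step n-1
u = [0.0] * (nx + 1)           # displacement at step n
t = 0.0
for step in range(2000):
    u_next = [0.0] * (nx + 1)
    for i in range(1, nx):     # interior update: standard 3-point stencil
        u_next[i] = (2 * u[i] - u_prev[i]
                     + (c * dt / dx) ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    t += dt
    # Hammer: half-sine force pulse at the pile head for the first 2 ms,
    # imposed via the stress boundary condition E*A*du/dx = -F (E*A = rho_a*c^2).
    F = math.sin(math.pi * t / 2e-3) * 1e6 if t < 2e-3 else 0.0
    u_next[0] = u_next[1] + F * dx / (rho_a * c ** 2)
    u_next[nx] = 0.0           # fixed toe (a crude stand-in for soil resistance)
    u_prev, u = u, u_next

print(max(abs(x) for x in u))  # peak axial displacement along the pile
```

A predictive model would replace the fixed-toe condition with soil friction and acousto-elastic coupling terms, which is exactly the physics the paper discusses.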
10:00–10:20 Break
10:20
3aPA6. On the challenges of validating a profound pile driving noise model. Marcel Ruhnau, Tristan Lippert, Kristof Heitmann, Stephan Lippert, and Otto von Estorff (Inst. of Modelling and Computation, Hamburg Univ. of Technol., Denickestraße 17, Hamburg,
Hamburg 21073, Germany, mub@tuhh.de)
When predicting underwater sound levels for offshore pile driving by means of numerical simulation models, appropriate model validation becomes of major importance. In fact, the different parallel transmission paths for sound emission into the water column, i.e., pile-to-water, pile-to-soil, and soil-to-water, make validation at each of the involved interfaces necessary. As the offshore environment comes with difficult and often unpredictable conditions, measurement campaigns are very time consuming and cost intensive. Model developers have to keep in mind that even thorough planning cannot overcome practical restrictions and technical limits, which call for reasonable model balancing. The current work presents the validation approach chosen for a comprehensive pile driving noise model, consisting of a near field finite element model as well as a far field propagation model, that is used for the prediction of noise levels at offshore wind farms.
10:40
3aPA7. Underwater noise and transmission loss from vibratory pile driving. Peter H. Dahl and Dara M. Farrell (Appl. Phys. Lab.
and Mech. Eng. Dept., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, dahl@apl.washington.edu)
High levels of underwater sound, with regulatory implications, can be produced by vibratory pile driving. In this presentation,
observations of underwater noise from vibratory pile driving made with a vertical line array placed at range 17 m from the source (water
depth 7.5 m) are discussed, along with simultaneous measurements made at ranges of order 100 m. It is shown that the dominant spectral
features are related to the frequency of the vibratory pile driving hammer (typically 15–35 Hz), producing spectral lines at intervals of
this frequency. Homomorphic analysis removes these lines to reveal the underlying variance spectrum. The mean square pressure versus
depth is subsequently studied in octave bands in view of the aforementioned spectral line property, with depth variation well modeled by
an incoherent sum of sources distributed over the water column. Adiabatic mode theory is used to model the range dependent local bathymetry, including the effect of elastic seabed, and comparisons are made with simultaneous measurements of the mean-square acoustic
pressure at ranges 200 and 400 m. This approach makes clear headway into the problem of predicting transmission loss versus range for
this method of pile driving.
Contributed Papers
11:00
3aPA8. Using arrays of air-filled resonators to attenuate low frequency underwater sound. Kevin M. Lee, Andrew R. McNeese (Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78758, klee@arlut.utexas.edu), Preston S. Wilson (Mech. Eng. Dept. and Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX), and Mark S. Wochner (AdBm Technologies, Austin, TX)
This paper investigates the acoustic behavior of underwater air-filled resonators that could potentially be used in an underwater noise abatement system. The resonators are similar to Helmholtz resonators without a neck, consisting of underwater inverted air-filled cavities with combinations of rigid and elastic wall members. They are intended to be fastened to a framework to form a stationary array surrounding a noise source, such as a marine pile driving operation, a natural resource production platform, or an air gun array, or to protect a receiving area from outside noise. Previous work has demonstrated the potential of surrounding low frequency sound sources with arrays of large stationary encapsulated bubbles that can be designed to attenuate sound levels over any desired frequency band, with reductions of up to 50 dB [Lee and Wilson, Proceedings of Meetings on Acoustics 19, 075048 (2013)]. Open water measurements of underwater sound attenuation using resonators were obtained during a set of lake experiments, where a low-frequency electromechanical sound source was surrounded by different arrays of resonators. The results indicate that air-filled resonators are a potential alternative to encapsulated bubbles for low frequency underwater noise mitigation. [Work supported by AdBm Technologies.]
11:15
3aPA9. Axial impact driven buckling dynamics of slender beams. Josh R. Gladden (Phys. & NCPA, Univ. of MS, 108 Lewis Hall, University, MS 38677, jgladden@olemiss.edu), Nestor Handzy, Andrew Belmonte (Dept. of Mathematics, The Penn State Univ., University Park, PA), and E. Villermaux (Institut de Recherche sur les Phenomenes Hors Equilibre, Universite de Provence, Marseille, France)
We present experiments on the dynamic buckling of slender rods axially impacted by a projectile. By combining the results of Saint-Venant and elastic beam theory, we derive a preferred wavelength for the buckling instability, and experimentally verify the resulting scaling law for a range of materials using high speed video analysis. The scaling law for the preferred excitation mode depends on the ratio of the longitudinal speed of sound in the beam to the impact speed of the projectile. We will briefly present the imprint of this deterministic mechanism on the fragmentation statistics for brittle beams.
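The scaling quoted in the abstract can be made concrete under textbook assumptions: Saint-Venant theory gives an axial stress sigma = rho·c·V behind the impact front, and the fastest-growing Euler wavelength under load P = sigma·A is lambda = 2*pi*sqrt(2EI/P); for a circular rod of radius r with E = rho·c^2 this reduces to lambda = pi·r·sqrt(2c/V). The sketch below is that classical estimate, not the authors' derivation:

```python
import math

def buckling_wavelength(radius_m, c_m_s, v_impact_m_s):
    """Classical preferred buckling wavelength for an axially impacted
    circular rod: lambda = pi * r * sqrt(2c / V) (textbook estimate)."""
    return math.pi * radius_m * math.sqrt(2.0 * c_m_s / v_impact_m_s)

# Illustrative: a 1 mm radius steel-like rod (c ~ 5000 m/s) struck at 10 m/s
lam = buckling_wavelength(1e-3, 5000.0, 10.0)
print(lam)  # on the order of 0.1 m
```

The key feature, consistent with the abstract, is that the wavelength scales with the rod radius times the square root of the sound-speed-to-impact-speed ratio.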
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 1/2, 8:00 A.M. TO 9:20 A.M.
Session 3aSAa
Structural Acoustics and Vibration, Architectural Acoustics, and Noise: Vibration Reduction
in Air-Handling Systems
Benjamin M. Shafer, Chair
Technical Services, PABCO Gypsum, 3905 N 10th St, Tacoma, WA 98406
Chair’s Introduction—8:00
Invited Papers
8:05
3aSAa1. Vibration reduction in air handling systems. Angelo J. Campanella (Acculab, Campanella Assoc., 3201 Ridgewood Dr., Hilliard, OH 43026, a.campanella@att.net)
Air handling units (AHU) mounted on elevated floors in old and new buildings can create floor vibrations that transmit through the building structure to perturb nearby occupants and sensitive equipment such as electron microscopes. Vibration sources include rotating fan imbalance and air turbulence. The isolation springs and the deflecting floor together form a two degree of freedom system. The analysis discussed here was originally published in Sound and Vibration, October 1987, pp. 26–30. Analysis parameters will be discussed along with inertia block effects and spring design strategy for floors of finite mass.
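The two degree of freedom picture can be sketched as follows: with equipment mass m1 on isolation springs of stiffness k1 over a floor of effective mass m2 and stiffness k2, the undamped natural frequencies come from det(K − w^2·M) = 0, which expands to a quadratic in w^2. A minimal sketch with hypothetical parameters (not from the cited analysis):

```python
import math

def two_dof_natural_frequencies(m1, k1, m2, k2):
    """Undamped natural frequencies (Hz) of equipment (m1 on springs k1)
    sitting on a flexible floor (effective mass m2, stiffness k2)."""
    # det(K - w^2 M) = 0 expands to a*w^4 + b*w^2 + c = 0
    a = m1 * m2
    b = -(m1 * (k1 + k2) + m2 * k1)
    c = k1 * k2
    disc = math.sqrt(b * b - 4 * a * c)
    w2 = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(w) / (2.0 * math.pi) for w in w2]

# Hypothetical: 500 kg AHU on 2e5 N/m springs, floor 5000 kg at 2e7 N/m
f1, f2 = two_dof_natural_frequencies(500.0, 2e5, 5000.0, 2e7)
print(f1, f2)
```

With a very stiff floor (k2 large) the lower root approaches the familiar rigid-base isolator frequency (1/2*pi)*sqrt(k1/m1), which is why floor flexibility matters for spring selection.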
8:25
3aSAa2. Determining fan generated dynamic forces for use in predicting and controlling vibration and structure-borne noise
from air handling equipment. James E. Phillips (Wilson, Ihrig & Assoc., Inc., 6001 Shellmound St., Ste. 400, Emeryville, CA 94608,
jphillips@wiai.com)
Vibration measurements were conducted to determine the dynamic forces imparted by an operating fan to the floor of an existing rooftop mechanical room. The calculated forces were then used as inputs to a Finite Element Analysis (FEA) computer model to predict the vibration and structure-borne noise in a future building with a similar fan. This paper summarizes the vibration measurements, the analysis of the measured data, the subsequent FEA analysis of the future building, and the recommendations developed to control fan generated noise and vibration in the future building.
8:45
3aSAa3. Vibration isolation of mechanical equipment: Case studies from light weight offices to casinos. Steve Pettyjohn (The Acoust. & Vib. Group, Inc., 5765 9th Ave., Sacramento, CA, spettyjohn@acousticsandvibration.com)
Whether or not to vibration isolate HVAC equipment is often left to the discretion of the mechanical engineer or the equipment supplier. Leaving the isolators out saves money in materials and installation, while the value of putting them in is not so clear. The cost of not installing the isolators is seldom understood, nor is the cost of installing them later and the loss of trust by the client. Vibration is generated by all rotating and reciprocating equipment. The resulting unbalanced forces are seldom known with certainty, nor are they quantified. This paper explores the isolation of HVAC equipment on roofs and in penthouses without consideration for the stiffness of the structures or resonances of other building elements. The influence of horizontal forces, and the installation of the equipment to account for these forces, is seldom considered. The application of restraining forces must consider where the force is applied and what the moment arm is. A quick review of the basic formulas will be given for one-degree and multi-degree systems. Examples of problems that arose when vibration isolation was not considered will be presented for a variety of conditions. The corrective actions will also be given.
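Among the basic formulas the abstract refers to, the simplest is the undamped single degree of freedom force transmissibility, T = 1/|1 − (f/fn)^2| with fn = (1/2*pi)*sqrt(k/m); isolation only begins above f = sqrt(2)*fn. A sketch of that standard result (the equipment numbers are illustrative):

```python
import math

def natural_frequency_hz(k_n_per_m, m_kg):
    """Undamped natural frequency of a single degree of freedom isolator."""
    return math.sqrt(k_n_per_m / m_kg) / (2.0 * math.pi)

def transmissibility(f_hz, fn_hz):
    """Undamped force transmissibility; below 1 only above sqrt(2)*fn."""
    r = f_hz / fn_hz
    return 1.0 / abs(1.0 - r * r)

fn = natural_frequency_hz(2e5, 500.0)  # ~3.2 Hz for a hypothetical unit
print(transmissibility(29.0, fn))      # fan at ~1740 rpm: strong isolation
print(transmissibility(4.0, fn))       # forcing near fn: amplification
```

The amplification branch near fn is precisely the failure mode when isolators are selected without regard to the supporting structure's own resonances.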
Contributed Paper
9:05
3aSAa4. Transition of steady air flow into an anharmonic acoustic pulsed flow in a prototype reactor column: Experimental results and mathematical modeling. Hasson M. Tavossi (Phys., Astronomy, & GeoSci., Valdosta State University, 2402 Spring Valley Cir, Valdosta, GA 31602, htavossi@valdosta.edu)
A prototype experimental setup is designed to convert steady air flow into an oscillatory anharmonic acoustic pulsed flow under special experimental conditions. The steady flow in a cylindrical reactor column, 3 m in height and 15 cm in diameter with a porous layer, transforms itself abruptly into an oscillatory acoustic pulsed flow. Experimental results show the existence of a flow-rate threshold beyond which this transformation into anharmonic oscillatory flow takes place. This change in flow regime is analogous to the phenomenon of bifurcation in a chaotic system, with an abrupt change from one energy state to another. Experimental results show that the amplitude of the acoustic oscillations depends on system size. A preliminary mathematical model will be presented that includes relaxation oscillations, non-equilibrium thermodynamics, and the Joule-Thomson effect. The frequencies at peak amplitude for the acoustic vibrations in the reactor column are expressed in terms of flow-rate, pressure-drop, viscosity, and dimensionless characteristic numbers of the air flow in the system.
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 1/2, 10:00 A.M. TO 12:00 NOON
Session 3aSAb
Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration
Benjamin Shafer, Chair
Technical Services, PABCO Gypsum, 3905 N 10th St., Tacoma, WA 98406
Contributed Papers
10:00
3aSAb1. Design of an experiment to measure unsteady shear stress and
wall pressure transmitted through an elastomer in a turbulent boundary layer. Cory J. Smith (Appl. Res. Lab., The Penn State Univ., 1109
Houserville Rd., State College, PA 16801, coryjonsmith@gmail.com), Dean
E. Capone, and Timothy A. Brungart (Graduate Program in Acoust., Appl.
Res. Lab., The Penn State Univ., State College, PA)
A flat plate that is exposed to a turbulent boundary layer (TBL) experiences unsteady velocity fluctuations which result in fluctuating wall
pressures and shear stresses on the surface of the plate. There is an interest
in understanding how fluctuating shear stresses and normal pressures generated on the surface of an elastomer layer exposed to a TBL in water are
transmitted through the layer onto a rigid backing plate. Analytic models
exist which predict these shear stress and normal pressure spectra on the surface of the elastomer as well as those transmitted through the elastomer.
The design of a novel experiment is proposed which will utilize Surface
Stress Sensitive Films (S3F) to measure the fluctuating shear stress and
hydrophones to measure fluctuating normal pressure at the elastomer-plate
interface. These experimental measurements would then be compared to models of unsteady shear and unsteady pressure spectra within a TBL for purposes of model validation. This work will present the design of an experiment to measure the unsteady pressure and unsteady shear at the elastomer-plate interface and the methodology for comparing the measured results to the analytic model predictions.
10:15
3aSAb2. Exploration into the sources of error in the two-microphone
transfer function impedance tube method. Hubert S. Hall (Naval Surface
Warfare Ctr. Carderock Div., 9500 MacArthur Blvd., West Bethesda, MD
20817, hubert.hall@navy.mil), Joseph Vignola, John Judge (Dept. of Mech.
Eng., The Catholic Univ. of America, Washington, DC), and Diego Turo
(Dept. of Biomedical Eng., George Mason Univ., Fairfax, VA)
solutions for mitigating the noise and vibration in adjoining spaces due to
floor impact problems. Also discussed in this paper are the qualitative
results of some preliminary tests performed in order to better understand the
mechanics of impacts on floating floor assemblies.
11:00
3aSAb5. Stethoscope-based detection of detorqued bolts using impactinduced acoustic emissions. Joe Guarino (Mech. and Biomedical Eng.,
Boise State Univ., Boise, ID) and Robert Hamilton (civil Eng., Boise State
Univ., 1910 University Dr., Boise, ID 83725, rhamilton@boisestate.edu)
The two-microphone transfer function method has become the most
widely used method of impedance tube testing. Due to its measurement
speed and ease of implementation, it has surpassed the standing-wave ratio
method in popularity despite inherent frequency limitations due to tube geometry. Currently, the two-microphone technique is described in test standards ASTM E1050 and ISO 10534-2 to ensure accurate measurement.
However, while detailed for correct test execution, the standards contain
vague recommendations for a variety of measurement parameters. For
instance, it is only stated in ASTM E1050 that “tube construction shall be
massive so sound transmission through the tube wall is negligible.” To
quantify this requirement, damping of the tube was varied to determine how
different loss factor values effect measured absorption coefficient values.
Additional sources of error explored are the amount of required absorbing
material within the tube for reflective material measurements, additional calibration methods needed for test of excessive reflective materials, and alternate methods of combating microphone phase error and tube attenuation.
Non-invasive impact analysis can be used to detect loosened bolts in a
steel structure composed of construction-grade I beams. An electronically
enhanced stethoscope was used to acquire signals from a moderate to light
impact of a hammer on a horizontal steel I beam. Signals were recorded by
placing the diaphragm of the stethoscope on the flange of either the horizontal beam or the vertical column proximal to a bolted connection connecting
the two members. Data were taken using a simple open-loop method; the
input signal was not recorded, nor was it used to reference the output signal.
The bolted connection had eight bolts arranged in a standard configuration.
Using the “turn of the nut” standard outlined by the Research Council on
Structural Connections (RCSC, TDS-012 2-18-08), the bolted joint was
tested in three conditions: turn of the nut tight, finger tight, and loose. We
acquired time-based data from each of 52 patterns of the eight bolts in three
conditions of tightness. Results of both time and frequency-based analyses
show that open-loop responses associated with detorqued bolts vary in both
amplitude decay and frequency content. We conclude that a basic mechanism can be developed to assess the structural health of bolted joints.
Results from this project will provide a framework for further research,
including the analysis of welded joints using the same approach.
10:30
11:15
3aSAb3. Analysis of the forced response and radiation of a singledimpled beam with different boundary conditions. Kyle R. Myers and
Koorosh Naghshineh (Mech. & Aerosp. Eng., Western Michigan Univ.,
College of Eng. & Appl. Sci., 4601 Campus Dr., Kalamazoo, MI 49008,
kyle.r.myers@wmich.edu)
3aSAb6. Creep behavior of composite interlayer and its influence on
impact sound of floating floor. Tongjun Cho, Byung Kwan Oh, Yousok
Kim, and Hyo Seon Park (Architectural Eng., Yonsei Univ., Yonseino 50
Seodaemun-gu, Seoul 120749, South Korea, tjcho@yonsei.ac.kr)
Beading and dimpling via the stamping process has been used for decades to stiffen structures (e.g., beams, plates, and shells) against static loads
and buckling. Recently, this structural modification technique has been used
as a means to shift a structure’s natural frequencies and to reduce its radiated sound power. Most studies to date have modeled dimpled beams and
dimpled/beaded plates using the finite element method. In this research, an
analytical model is developed for a beam with any number of dimples using
Hamilton’s Principle. First, the natural frequencies and mode shapes are predicted for a dimpled beam in free transverse vibration. A comparison with
those obtained using the finite element method shows excellent agreement.
Second, the forced response of a dimpled beam is calculated for a given
input force. Mode shapes properly scaled from the forced response are used
in order to calculate the beam strain energy, thus demonstrating the effect of
dimpling on beam natural frequencies. Finally, some preliminary results are
presented on the changes in the radiation properties of dimpled beams.
10:45
3aSAb4. The impact of CrossFit training—Weight drops on floating
floors. Richard S. Sherren (Kinetics Noise Control, 6300 Irelan Pl., Dublin,
OH 43062, rsherren@kineticsnoise.com)
CrossFit training is a popular fitness training method. Some facilities
install lightweight plywood floating floor systems as a quick, inexpensive
method to mitigate impact generated noise and vibration into adjoining
spaces. Part of the CrossFit training regimen involves lifting significant
weight overhead, and then dropping the weight on the floor. The energy
transferred to the floor system can cause severe damage to floor surfaces
and structures; and, when using a lightweight floating floor system, even the
isolators can be damaged. This paper describes a spreadsheet based analytical model being used to study the effects of such impacts on various floor
systems. This study is a prelude to experiments that will be performed on a
full scale model test floor. The results of those experiments will be used to
verify the model so that it can be used as a design tool for recommending
2209
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Creep-induced changes in dynamic stiffness of resilient interlayer used
for floating floor is an important parameter of vibration isolator in long-term
use. Compressive creep behavior of a composite layer made from closedcell foam and fibrous material is investigated using a Findley equation-based
method recommended by International Organization for Standardization
(ISO). Quasi-static mechanical analysis is used to evaluate the dynamic
stiffness influenced by the creep-deformation of the composite layer. It is
shown in the present work that the long-term creep strain of the interlayer
under nominal load of the floor and furniture is within the zone where
dynamic stiffness increases. The changes in low frequency impact sound by
the long-term creep deformation are estimated through real scale laboratory
experiments and numerical vibro-acoustic analysis.
11:30
3aSAb7. Investigation of damping in the polymer concrete sleeper for
use in reduction of rolling noise from railway. SangKeun Ahn, Eunbeom
Jeon, Junhong Park, Hak-sung Kim (Mech. Eng., Hanyang Univ., 222,
Wangsimni-ro, Seongdong-gu, Appendix of Eng. Ctr., 211, Seoul 133-791,
South Korea, ask9156@hanyang.ac.kr), and Hyo-in Kho (Korea RailRd.
Res. Inst., Uiwang, South Korea)
The purpose of this study was to measure the damping of various polymer concretes to be used as railway sleepers. The polymer concretes consisted of epoxy monomer, hardener, and aggregates. Various polymer concrete specimens were made by changing the epoxy resin weight ratio and curing temperature. The dynamic properties of the polymer concrete specimens were measured using the beam transfer function method. To predict the noise-reduction performance of the polymer concrete sleepers, an infinite Timoshenko beam model was investigated after applying the measured concrete properties. The moving loads from wheels rolling on rails of different roughness were utilized in the railway vibration analysis. The vibration response was predicted, from which the effects of the supporting stiffness and loss factor of the sleeper were investigated. The radiated sound power was predicted using the calculated rail vibration response. Consequently, the sound power levels
168th Meeting: Acoustical Society of America
were compared for rails supported by different polymer concrete sleepers. The result of this study will assist in constructing low-noise railways.
11:45
3aSAb8. Study on impulsive noise radiation from a gasoline direct injector. Yunsang Kwak and Junhong Park (Mech. Eng., Hanyang Univ., 515 FTC Hanyang Univ. 222, Wangsimni-ro, Seongdong-gu, Seoul, South Korea, toy0511@hanmail.net)
A gasoline direct injection (GDI) engine uses its own injectors for the high-pressure fuel supply to the combustion chamber. High-frequency impact sound during the injection process is one of the main contributors to engine combustion noise. This impact noise is generated during the opening and closing of an injector rod operated by a solenoid. To design an injector with reduced noise generation, it is necessary to analyze its sound radiation mechanism and propose a consequent evaluation method. The spectral and modal characteristics of the injectors were measured through vibration induced by external hammer excitation. The injector modal characteristics were analyzed using a simple beam model after representing its boundaries by complex transverse and rotational springs. To evaluate impulsive sounds more effectively, Prony analysis of the sounds was used to verify the influence of the injector modal characteristics.
models of unsteady shear and unsteady pressure spectra within a TBL for purposes of model validation. This work will present the design of an experiment to measure the unsteady pressure and unsteady shear at the elastomer-plate interface and the methodology for comparing the measured results to the analytic model predictions.
WEDNESDAY MORNING, 29 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 12:00 NOON
Session 3aSC
Speech Communication: Vowels = Space + Time, and Beyond: A Session in Honor of Diane Kewley-Port
Catherine L. Rogers, Cochair
Dept. of Communication Sciences and Disorders, University of South Florida, USF, 4202 E. Fowler Ave., PCD1017, Tampa,
FL 33620
Amy T. Neel, Cochair
Dept. of Speech and Hearing Sci., Univ. of New Mexico, MSC01 1195, University of New Mexico, Albuquerque, NM 87131
Chair’s Introduction—8:00
Invited Papers
8:05
3aSC1. Vowels and intelligibility in dysarthric speech. Amy T. Neel (Speech and Hearing Sci., Univ. of New Mexico, MSC01 1195,
University of New Mexico, Albuquerque, NM 87131, atneel@unm.edu)
Diane Kewley-Port’s work in vowel perception under challenging listening conditions and in the relation between vowel perception
and production in second language learners has important implications for disordered speech. Vowel space area has been widely used as
an index of articulatory working space in speakers with hypokinetic dysarthria related to Parkinson disease (PD), with the assumption
that a larger vowel space is associated with higher speech intelligibility. Although many studies have reported acoustic measures of vowels in Parkinson disease, vowel identification and transcription tasks designed to relate changes in production with changes in perception
are rarely performed. This study explores how changes in vowel production by six talkers with PD, speaking at habitual and loud levels of effort, affect listener perception. The relations among vowel acoustic measures (including vowel space area and measures of temporal and spectral distinctiveness), vowel identification scores, speech intelligibility ratings, and sentence transcription accuracy for speakers with dysarthria will be discussed.
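The vowel space area mentioned above is typically computed as the area of the polygon spanned by corner-vowel (F1, F2) means. A minimal sketch using the shoelace formula, with invented formant values (not data from this study):

```python
def polygon_area(points):
    """Shoelace formula: area of a polygon from its ordered vertices."""
    s = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Invented (F1, F2) means in Hz for corner vowels /i/, /ae/, /a/, /u/,
# ordered around the perimeter of the vowel quadrilateral.
habitual = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
loud = [(340, 2400), (780, 1850), (820, 1050), (330, 850)]

print(polygon_area(habitual), polygon_area(loud))   # 412500.0 543750.0
```

In this toy example the loud condition yields the larger area, the pattern usually associated with higher intelligibility.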
8:25
3aSC2. Vowels in clear and conversational speech: Within-talker variability in acoustic characteristics. Sarah H. Ferguson and
Lydia R. Rogers (Commun. Sci. and Disord., Univ. of Utah, 390 South 1530 East, Rm. 1201, Salt Lake City, UT 84112, sarah.ferguson@hsc.utah.edu)
The Ferguson Clear Speech Database was developed for the first author’s doctoral dissertation, which was directed by Diane Kewley-Port at Indiana University. While most studies using the Ferguson Database have examined variability among the 41 talkers, the
present investigation considered within-talker differences. Specifically, this study examined the amount of variability each talker showed
among the 7 tokens of each of 10 vowels produced in clear versus conversational speech. Steady-state formant frequencies have been
measured for 5740 vowels in /bVd/ context using Praat, and a variety of measures of spread will be used to determine variability for each vowel in each speaking style for each talker. Results will be compared to those of the only known previous study that included a sufficiently large number of tokens for this type of analysis, an unpublished thesis from 1980. Based on that study, we predict that token-to-token variability will be smaller in clear speech than in conversational speech.
8:45
3aSC3. Understanding speech from partial information: The contributions of consonants and vowels. Daniel Fogerty (Commun.
Sci. and Disord., Univ. of South Carolina, 1621 Greene St., Columbia, SC 29208, fogerty@sc.edu)
In natural listening environments, speech is commonly interrupted by background noise. These environments require the listener to
extract meaningful speech cues from the partially preserved acoustic signal. A number of studies have now investigated the relative contribution of preserved consonant and vowel segments to speech intelligibility using an interrupted speech paradigm that selectively preserves these segments. Results have demonstrated that preservation of vowel segments results in greater intelligibility for sentences
compared to consonant segments, especially after controlling for preserved duration. This important contribution from vowels is specific
to sentence contexts and appears to result from suprasegmental acoustic cues. Converging evidence from acoustic and behavioral investigations suggests that these cues are primarily conveyed through temporal amplitude modulation of vocalic energy. Additional empirical evidence suggests that these temporal cues of vowels, conveying the rhythm and stress of speech, are important for interpreting
global linguistic cues about the sentence, such as those involved in syntactic processing. In contrast, consonant contributions appear to be specific to lexical access regardless of the linguistic context. Work testing older adults with normal and impaired hearing demonstrates their
preserved sensitivity to contextual cues conveyed by vowels, but not consonants. [Work supported by NIH.]
9:05
3aSC4. Vowel intelligibility and the second-language learner. Catherine L. Rogers (Dept. of Commun. Sci. and Disord., Univ. of
South Florida, USF, 4202 E. Fowler Ave., PCD1017, Tampa, FL 33620, crogers2@usf.edu)
Diane Kewley-Port’s work has contributed to our understanding of vowel perception and production in a wide variety of ways, from
mapping the discriminability of vowel formants in conditions of minimal uncertainty to vowel processing in challenging conditions,
such as increased presentation rate and noise. From the results of these studies, we have learned much about the limits of vowel perception for normal-hearing listeners and the robustness of vowels in speech perception. Continuously intertwined with this basic research
has been its application to our understanding of vowel perception and vowel acoustics across various challenges, such as hearing impairment and second-language learning. Diane’s work on vowel perception and production by second-language learners and ongoing
research stemming from her influence will be considered in light of several factors affecting communicative success and challenge for
second-language learners. In particular, we will compare the influence of speaking style, noise, and syllable disruption on the intelligibility of vowels perceived and produced by native and non-native English-speaking listeners.
9:25
3aSC5. Vowel formant discrimination: Effects of listeners’ hearing status and language background. Chang Liu (Commun. Sci.
and Disord., The Univ. of Texas at Austin, 1 University Station A1100, Austin, TX 78712, changliu@utexas.edu)
The goal of this study was to examine effects of listeners’ hearing status (e.g., normal and impaired hearing) and language background (e.g., native and non-native) on vowel formant discrimination. Thresholds of formant discrimination were measured for F1 and
F2 of English vowels at 70 dB SPL for normal- (NH) and impaired-hearing (HI) listeners using a three-interval, two-alternative forcedchoice procedure with a two-down, one-up tracking algorithm. Formant thresholds of HI listeners were comparable to those of NH listeners for F1, but significantly higher than NH listeners for F2. Results of a further experiment indicated that an amplification of the F2
peak could markedly improve formant discrimination for HI listeners, but a simple amplification of the sound level did not provide any
benefit to them. On the other hand, another experiment showed that the vowel density of listeners' native language appeared to affect vowel formant discrimination, i.e., the more crowded the vowel space of listeners' native language, the better their vowel formant discrimination. For
example, English-native listeners showed significantly lower thresholds of formant discrimination for both English and Chinese vowels
than Chinese-native listeners. However, the two groups of listeners had similar psychophysical capacity to discriminate formant frequency changes in non-speech sounds.
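The two-down, one-up tracking rule used above converges on the 70.7%-correct point of the psychometric function (the classic Levitt transformed up-down result). A minimal simulation with a hypothetical listener, all values illustrative:

```python
import math
import random

def two_down_one_up(p_correct, start=60.0, step=4.0, n_reversals=8, seed=1):
    """Simulate a two-down, one-up adaptive track.  p_correct(x) is the
    probability of a correct response at stimulus difference x (e.g., a
    formant shift in Hz).  The track converges near the 70.7%-correct
    point of the psychometric function."""
    rng = random.Random(seed)
    level, in_a_row, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(level):
            in_a_row += 1
            if in_a_row == 2:                 # two correct: make it harder
                in_a_row = 0
                if direction == +1:
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.5)
        else:                                 # one wrong: make it easier
            in_a_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals[-6:]) / 6.0          # mean of the last reversals

# Hypothetical listener: guessing rate 0.5, mid-point near a 20-Hz shift
listener = lambda hz: 0.5 + 0.5 / (1.0 + math.exp(-(hz - 20.0) / 4.0))
threshold = two_down_one_up(listener)
print(round(threshold, 1))
```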
9:45
3aSC6. Consonant recognition in noise for bilingual children with simulated hearing loss. Kanae Nishi, Andrea C. Trevino (Boys
Town National Res. Hospital, 555 N. 30th St., Omaha, NE 68131, kanae.nishi@boystown.org), Lydia Rosado Rogers (Commun. Sci.
and Disord., Univ. of Utah, Omaha, Nebraska), Paula B. Garcia, and Stephen T. Neely (Boys Town National Res. Hospital, Omaha, NE)
The negative impacts of noisy listening environments and hearing loss on speech communication are known to be greater for children
and non-native speakers than adult native speakers. Naturally, the synergistic influence of listening environment and hearing loss is
expected to be greater for bilingual children than their monolingual or normal-hearing peers, but limited studies have explored this issue.
The present study compared the consonant recognition performance of highly fluent school-age Spanish-English bilingual children to that
of monolingual English-speaking peers. Stimulus materials were 13 English consonants embedded in three symmetrical vowel-consonantvowel (VCV) syllables. To control for variability in hearing loss profiles, mild-to-moderate sloping sensorineural hearing loss modeled after
Pittman & Stelmachowicz [Ear Hear 24, 198–205 (2003)] was simulated following the method used by Desloge et al. [Trends Amplification 16(1), 19–39 (2012)]. Listeners heard VCVs in quiet and in the background of speech-shaped noise with and without simulated hearing
loss. Overall performance and the recognition of individual consonants will be discussed in terms of the influence of language background
(bilingual vs. monolingual), listening condition, simulated hearing loss, and vowel context. [Work supported by NIH.]
10:05–10:20 Break
10:20
3aSC7. Distributions of confusions for the 109 syllable constituents that make up the majority of spoken English. James D. Miller,
Charles S. Watson, and Roy Sillings (Res., Commun. Disord. Technol., Inc., 3100 John Hinkle Pl, Ste. 107, Bloomington, IN 47408,
jamdmill@indiana.edu)
Among the interests of Kewley-Port have been the perception and production of English speech sounds by native speakers of other languages. ESL students from four language backgrounds (Arabic, Chinese, Korean, and Spanish) were enrolled in a speech perception training program. Similarities and differences between these L1 groups in their primary confusions were determined for the onsets, nuclei, and codas utilized in spoken English. An analysis in terms of syllable constituents is more meaningful than analyses in terms of phonemes, as individual phonemes have differing articulatory and acoustic structures depending on their roles in the syllable and their phonetic environments. An important observation is that only a few of all the possible confusions that might occur actually do occur. Another interesting characteristic of confusions among syllable constituents is that many more confusions are observed than those popularly cited, e.g., /r/ vs /l/ for Japanese speakers. As noted by many, the perceptual problems encountered by learners of English are conditioned on the relations between the sound structures of English and each talker's L1. These data suggest that the intrinsic similarities within the sounds of English also play an important role.
10:40
3aSC8. Identification and response latencies for Mandarin-accented isolated words in quiet and in noise. Jonathan Dalby (Commun. Sci. and Disord., Indiana-Purdue, Fort Wayne, 2101 East Coliseum Blvd., Fort Wayne, IN 46805, dalbyj@ipfw.edu), Teresa Barcenas (Speech and Hearing Sci., Portland State Univ., Portland, OR), and Tanya August (Speech-Lang. Pathol., G-K-B Community
School District, Garrett, IN)
This study compared the intelligibility of native and foreign-accented American English speech presented in quiet and mixed with
two different levels of background noise. Two native American English speakers and two native Mandarin Chinese speakers for whom
English is a second language read three 50-word lists of phonetically balanced words (Stuart, 2004). The words were mixed with noise
at three different signal-to-noise levels—no noise (quiet), SNR + 10 dB (signal 10 dB louder than noise) and SNR 0 (signal and noise at
equal loudness). These stimuli were presented to ten native American English listeners who were simply asked to repeat the words they
heard the speakers say. Listener response latencies were measured. The results showed that for both native and accented speech,
response latencies increased as the noise level increased. For words identified correctly, response times to accented speech were longer
than for native speech but the noise conditions affected both types equally. For words judged incorrectly, however, the noise conditions
increased latencies for accented speech more than for native speech. Overall, these results support the notion that processing accented
speech requires more cognitive effort than processing native speech.
11:00
3aSC9. The contribution of vowels to auditory-visual speech recognition and the contributions of Diane Kewley-Port to the field
of speech communication. Carolyn Richie (Commun. Sci. & Disord., Butler Univ., 4600 Sunset Ave, Indianapolis, IN 46208, crichie@
butler.edu)
Throughout her career, Diane Kewley-Port has made enduring contributions to the field of Speech Communication in two ways—
through her research on vowels and through her mentoring. Diane has contributed greatly to current knowledge about vowel acoustics,
vowel discrimination and identification, and the role of vowels in speech recognition. Within that line of research, Richie & Kewley-Port (2008) investigated the effects of visual cues to vowels on speech recognition. Specifically, we demonstrated that an auditory-visual vowel-identification training program benefited sentence recognition under difficult listening conditions more than either consonant-identification training or no training. In this presentation, I will describe my continuing research on the relationship between auditory-visual
vowel-identification training and listening effort, for adults with normal hearing. In this study, listening effort was measured in terms of
response time and participants were tested on auditory-visual sentence recognition in noise. I will discuss the ways that my current work
has been inspired by past research with Diane, and how her mentoring legacy lives on.
11:20
3aSC10. Individual differences in the perception of nonnative speech. Tessa Bent (Dept. of Speech and Hearing Sci., Indiana Univ.,
200 S. Jordan Ave., Bloomington, IN 47405, tbent@indiana.edu)
As a mentor, Diane Kewley-Port was attentive to each student’s needs and took a highly hands-on, individualized approach. In many
of her collaborative research endeavors, she has also taken a fine-grained approach toward both discovering individual differences in
speech perception and production as well as explaining the causes and consequences of this range of variation. I will present research
investigating several cognitive-linguistic factors that may contribute to individual differences in the perception of nonnative speech.
Recognizing words from nonnative talkers can be particularly difficult when combined with environmental degradation (e.g., background noise) or listener limitations (e.g., child listener). Under these conditions, the range of performance across listeners is substantially wider than observed under more optimal conditions. My work has investigated these issues in monolingual and bilingual adults
and children. Results have indicated that age, receptive vocabulary, and phonological awareness are predictive of nonnative word recognition. Factors supporting native word recognition, such as phonological memory, were less strongly associated with nonnative word
recognition. Together, these results suggest that the ability to accurately perceive nonnative speech may rely, at least partially, on different underlying cognitive-linguistic abilities than those recruited for native word recognition. [Work supported by NIH-R21DC010027.]
11:40
3aSC11. Individual differences in sensory and cognitive processing across the adult lifespan. Larry E. Humes (Indiana Univ., Dept.
Speech & Hearing Sci., Bloomington, IN 47405-7002, humes@indiana.edu)
A recent large-scale (N = 245) cross-sectional study of threshold sensitivity and temporal processing in hearing, vision and touch for
adults ranging in age from 18 through 82 years of age questioned the long-presumed link between aging and declines in cognitive-processing [Humes, L.E., Busey, T.A., Craig, J. & Kewley-Port, D. (2013). Attention, Perception and Psychophysics, 75, 508–524]. The
results of this extensive psychophysical investigation suggested that individual differences in sensory processing across multiple tasks
and senses drive individual differences in cognitive processing in adults regardless of age. My long-time colleague at IU, Diane Kewley-Port, was instrumental in the design, execution, and interpretation of results for this large study, especially for the measures of auditory
temporal processing. The methods used and the results obtained in this study will be reviewed, with a special emphasis on the auditory
stimuli and tasks involved. The potential implications of these findings, including possible interventions, will also be discussed. Finally,
future research designed to better evaluate the direction of the association between sensory-processing and cognitive-processing deficits
will be described. [Work supported, in part, by research grant R01 AG008293 from the NIA.]
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA G, 8:30 A.M. TO 10:00 A.M.
Session 3aSPa
Signal Processing in Acoustics: Beamforming and Source Tracking
Contributed Papers
8:30
3aSPa1. An intuitive look at the unscented Kalman filter. Edmund Sullivan (Res., prometheus, 46 Lawton Brook Ln., Portsmouth, RI 02871, bewegungslos@fastmail.fm)
The unscented Kalman filter (UKF) is a powerful and easily used modification of the Kalman filter that permits its use with a nonlinear process or measurement model. Its power lies in its ability to allow the mean and covariance of the data to be correctly passed through a nonlinearity, regardless of the form of the nonlinearity. There is a great deal of literature on the UKF that describes the method and gives instruction on its use, but there are no clear descriptions of why it works. In this paper, we show
that by computing the mean and covariance as the expectations of a Gaussian process, passing the results through a nonlinearity and solving the
resulting integrals using Gauss-Hermite quadrature, the reason for the ability
of the UKF to maintain the correct mean and covariance is explained by the
fact that the Gauss-Hermite quadrature uses the same abscissas and weights
regardless of the form of the integrand.
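The claim above can be checked in one dimension: with sigma points at the mean and at plus/minus sqrt((n + kappa)P), the unscented transform reproduces the exact mean and covariance of a Gaussian pushed through a quadratic nonlinearity. A minimal scalar sketch with standard weights (an illustration, not the authors' derivation):

```python
import math

def unscented_transform_1d(mu, var, f, kappa=2.0):
    """Propagate N(mu, var) through nonlinearity f with sigma points
    mu and mu +/- sqrt((n + kappa) var); n = 1, kappa = 2 matches the
    fourth moment (kurtosis) of a Gaussian."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mu, mu + spread, mu - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(s) for s in sigmas]
    mean = sum(w * y for w, y in zip(weights, ys))
    cov = sum(w * (y - mean) ** 2 for w, y in zip(weights, ys))
    return mean, cov

# For f(x) = x**2 the exact moments of y = f(x) are known in closed form:
#   E[y] = mu**2 + var      Var[y] = 4 mu**2 var + 2 var**2
mu, var = 3.0, 0.5
mean, cov = unscented_transform_1d(mu, var, lambda x: x * x)
print(round(mean, 6), round(cov, 6))   # 9.5 18.5, matching the exact values
```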
8:45
3aSPa2. Tracking unmanned aerial vehicles using a tetrahedral microphone array. Geoffrey H. Goldman (U.S. Army Res. Lab., 2800 Powder
Mill Rd., Adelphi, MD 20783-1197, geoffrey.h.goldman.civ@mail.mil) and
R. L. Culver (Appl. Res. Lab., Penn State Univ., State College, PA)
Unmanned Aerial Vehicles (UAVs) present a difficult localization problem for traditional radar systems due to their small radar cross section and
relatively slow speeds. To help address this problem, the U.S. Army
Research Laboratory (ARL) is developing and testing acoustic-based detection and tracking algorithms for UAVs. The focus has been on detection,
bearing and elevation angle estimation using either minimum mean square
error or adaptive beamforming methods. A model-based method has been
implemented which includes multipath returns, and a Kalman filter has been
implemented for tracking. The acoustic data were acquired using ARL’s
tetrahedral microphone array against several UAVs. While the detection and tracking algorithms perform reasonably well, several challenges remain. For example, interference from other sources resulted in a lower signal-to-interference ratio (SIR), which can significantly degrade performance. The
presence of multipath nearly always results in greater variance in elevation
angle estimates than in bearing angle estimates.
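The Kalman tracking step mentioned above can be illustrated with a toy constant-velocity filter on noisy bearing estimates (a generic sketch with invented numbers, not ARL's implementation):

```python
import random

dt, q, r = 0.5, 0.01, 9.0   # update interval (s), process noise, bearing variance

def kf_track(meas):
    """Constant-velocity Kalman filter over noisy bearing estimates.
    State: [bearing (deg), bearing rate (deg/s)]; measurement: bearing."""
    x = [meas[0], 0.0]
    P = [[r, 0.0], [0.0, 1.0]]
    est = []
    for z in meas:
        # Predict: x <- F x with F = [[1, dt], [0, 1]]; P <- F P F' + q I
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with scalar measurement z = [1, 0] x + v, var(v) = r
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        est.append(x[0])
    return est

rng = random.Random(4)
truth = [30.0 + 2.0 * dt * k for k in range(60)]   # UAV crossing at 2 deg/s
meas = [b + rng.gauss(0.0, 3.0) for b in truth]
est = kf_track(meas)
raw_err = sum(abs(m - b) for m, b in zip(meas, truth)) / len(truth)
kf_err = sum(abs(e - b) for e, b in zip(est, truth)) / len(truth)
print(kf_err < raw_err)   # filtering should beat the raw bearing estimates
```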
9:00
3aSPa3. An ultrasonic echo characterization approach based on particle
swarm optimization. Adam Pedrycz (Sonic/LWD, Schlumberger, 2-2-1
Fuchinobe, Sagamihara, Kanagawa 229-0006, Japan, APedrycz@slb.com),
Henri-Pierre Valero, Hiroshi Hori, Kojiro Nishimiya, Hitoshi Sugiyama,
and Yoshino Sakata (Sonic/LWD, Schlumberger, Sagamihara, Kanagawaken, Japan)
Presented is a hands-free approach for the extraction and characterization
of ultrasonic echoes embedded in noise. By means of model-based nondestructive evaluation approaches, echoes can be represented parametrically by arrival
time, amplitude, frequency, etc. Inverting for such parameters is a non-linear task, usually employing gradient-based least-squares minimization such as Gauss-Newton (GN). To improve inversion stability, suitable initial echo-parameter guesses are required, which may not be possible in the presence of
noise. To mitigate this requirement, particle swarm optimization (PSO) is
employed in lieu of GN. PSO is a population-based optimization technique
wherein a swarm of particles explores a multidimensional search space of candidate solutions. Particles seek out the global optimum by iteratively moving to
improve their position by evaluating their individual performance as well as
that of the collective. Since the inversion problem is non-linear, multiple suboptimal solutions exist, and in this regard PSO has a much lower propensity for becoming trapped in a local minimum than gradient-based approaches. Because of this, it is possible to omit initial guesses and instead use a broad search range, which is a far less demanding requirement. Real pulse-echoes were used to evaluate the efficacy of the PSO approach under varying noise severity. In all cases,
PSO characterized the echo correctly while GN required an initial guess within
30% of the true value to converge.
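A minimal global-best PSO of the kind described can be sketched as follows — here fitting only the amplitude and arrival time of a synthetic noisy echo, with the echo model, coefficients, and noise level invented for illustration (not the authors' implementation):

```python
import math
import random

def echo(t, amp, tau, f=5.0, alpha=40.0):
    """Gaussian-enveloped echo model; only amplitude and arrival time vary
    here (center frequency and bandwidth are held fixed for simplicity)."""
    return amp * math.exp(-alpha * (t - tau) ** 2) * math.cos(2 * math.pi * f * (t - tau))

def pso(cost, bounds, n_particles=30, iters=60, seed=3):
    """Minimal global-best particle swarm optimizer over a box search range."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=pcost.__getitem__)][:]
    gcost = min(pcost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest

# Synthetic noisy echo: true amplitude 1.0, true arrival time 0.50
rng = random.Random(0)
ts = [k / 200.0 for k in range(200)]
observed = [echo(t, 1.0, 0.50) + rng.gauss(0.0, 0.05) for t in ts]
sse = lambda p: sum((o - echo(t, p[0], p[1])) ** 2 for t, o in zip(ts, observed))

amp_hat, tau_hat = pso(sse, bounds=[(0.1, 2.0), (0.0, 1.0)])
print(round(amp_hat, 2), round(tau_hat, 2))
```

Note that no initial guess is supplied: the swarm searches the whole box, which is the practical advantage claimed in the abstract.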
R. Lee Culver, Chair
ARL, Penn State University, PO Box 30, State College, PA 16804
9:15
3aSPa4. Beamspace compressive spatial spectrum estimation on large
aperture acoustic arrays. Geoffrey F. Edelmann, Jeffrey S. Rogers, and
Steve L. Means (Acoust., Code 7160, U. S. Naval Res. Lab., 4555 Overlook
Ave SW, Code 7162, Washington, DC 20375, edelmann@nrl.navy.mil)
For large aperture sonar arrays, the number of acoustic elements can be
quite sizable and thus increase the dimensionality of the l1 minimization
required for compressive beamforming. This leads to high computational
complexity that scales by the cube of the number of array elements. Furthermore, in many applications, raw sensor outputs are often not available since
computation of the beamformer power is a common initial processing step
performed to reduce subsequent computational and storage requirements. In
this paper, a beamspace algorithm is presented that computes the compressive spatial spectrum from conventional beamformer output power. Results
from the CALOPS-07 experiment will be presented and shown to significantly reduce the computational load as well as increase robustness when detecting low-SNR targets. [This work was supported by ONR.]
9:30
3aSPa5. Experimental investigations on coprime microphone arrays for
direction-of-arrival estimation. Dane R. Bush, Ning Xiang (Architectural
Acoust., Rensselaer Polytechnic Inst., 2609 15th St., Troy, NY 12180, danebush@gmail.com), and Jason E. Summers (Appl. Res. in Acoust. LLC
(ARiA), Washington, DC)
Linear microphone arrays are powerful tools for determining the direction of a sound source. Traditionally, uniform linear arrays (ULA) have
inter-element spacing of half of the wavelength in question. This produces
the narrowest possible beam without introducing grating lobes—a form of
aliasing governed by the spatial Nyquist theorem. Grating lobes are often
undesirable because they make the direction of arrival indistinguishable among their passband angles. Exploiting coprime number theory, however, an array can be arranged sparsely, with fewer total elements and inter-element separations exceeding the aforementioned spatial-sampling limit. Two sparse ULA sub-arrays with
coprime number of elements, when nested properly, each produce narrow
grating lobes that overlap with one another exactly in just one direction. By
combining the sub-array outputs it is possible to retain the shared beam
while mostly canceling the other superfluous grating lobes. This work
implements two coprime microphone arrays with different lengths and subarray spacings. Experimental beam patterns are shown to correspond with
simulated results even at frequencies above and below the array’s design
frequency. Side lobes in the directional pattern are inversely correlated with the bandwidth of the analyzed signals.
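The cancellation described above is easy to reproduce numerically: each sparse sub-array alone shows grating lobes, while the product of the two sub-array patterns peaks only at the shared broadside lobe. A sketch for hypothetical coprime counts M = 4 and N = 5 (not the element counts of the arrays built in this work):

```python
import cmath
import math

def beampattern(positions, u):
    """Normalized array-factor magnitude at direction u = sin(theta) for
    elements at the given positions (in units of half a wavelength)."""
    return abs(sum(cmath.exp(1j * math.pi * p * u) for p in positions)) / len(positions)

M, N = 4, 5                               # coprime element counts
sub1 = [N * k for k in range(M)]          # M elements, spacing N half-wavelengths
sub2 = [M * k for k in range(N)]          # N elements, spacing M half-wavelengths

us = [k / 1000.0 for k in range(-1000, 1001)]
sub1_lobes = [u for u in us if beampattern(sub1, u) > 0.99]            # grating lobes
product_lobes = [u for u in us
                 if beampattern(sub1, u) * beampattern(sub2, u) > 0.99]

print(len(sub1_lobes) > len(product_lobes))   # the product cancels the extras
```

The surviving near-unity peaks of the product pattern all cluster at broadside (u = 0), which is the "shared beam" the abstract refers to.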
9:45
3aSPa6. Shallow-water waveguide invariant parameter estimation and
source ranging using narrowband signals. Andrew Harms (Elec. and
Comput. Eng., Duke Univ., 129 Hudson Hall, Durham, NC 27708, andrew.
harms@duke.edu), Jonathan Odom (Georgia Tech Res. Inst., Durham,
North Carolina), and Jeffrey Krolik (Elec. and Comput. Eng., Duke Univ.,
Durham, NC)
This paper concerns waveguide invariant parameter estimation using narrowband underwater acoustic signals from multiple sources at known range, or
alternatively, the ranges of multiple sources assuming known waveguide invariant parameters. Previously, the waveguide invariant has been applied to estimate the range or bottom properties from intensity striations observed from a
single broadband signal. The difficulty in separating striations from multiple
broadband sources, however, motivates the use of narrowband components,
which generally have higher signal-to-noise ratios and are non-overlapping in
frequency. In this paper, intensity fluctuations of narrowband components are
shown to be related across frequency by a time-warping (i.e., stretching or contracting) of the intensity profile, assuming constant radial source velocity and
the waveguide invariant β. A maximum likelihood estimator for the range with β known, or for the invariant parameter β with known source range, is derived, as well as Cramér-Rao bounds on estimation accuracy assuming a Gaussian noise model. Simulations demonstrate algorithm performance for constant-radial-velocity sources in a representative shallow-water ocean waveguide.
[Work supported by ONR.]
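The time-warping relation invoked above follows from the standard striation condition; a brief sketch of the algebra (notation generic, not necessarily the authors'):

```latex
% Intensity striations I(r,\omega) obey the waveguide-invariant relation
\frac{\delta\omega}{\omega} \;=\; \beta\,\frac{\delta r}{r}
\quad\Longrightarrow\quad
I(r_1,\omega_1) \approx I(r_2,\omega_2)
\quad\text{when}\quad
\frac{\omega_2}{\omega_1} = \left(\frac{r_2}{r_1}\right)^{\beta}.
% With constant radial velocity, r(t) = r_0 + v t, the narrowband intensity
% at \omega_2 is a time-warped (stretched or contracted) copy of that at \omega_1:
I_{\omega_2}(t) \approx I_{\omega_1}\big(w(t)\big),
\qquad
r_0 + v\,w(t) \;=\; \left(r_0 + v t\right)\left(\frac{\omega_1}{\omega_2}\right)^{1/\beta}.
```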
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA G, 10:15 A.M. TO 12:00 NOON
Session 3aSPb
Signal Processing in Acoustics: Spectral Analysis, Source Tracking, and System Identification
(Poster Session)
R. Lee Culver, Chair
ARL, Penn State University, PO Box 30, State College, PA 16804
All posters will be on display from 10:15 a.m. to 12:00 noon. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 10:15 a.m. to 11:00 a.m. and contributors of even-numbered papers will be at their
posters from 11:00 a.m. to 12:00 noon.
Contributed Papers
3aSPb1. Improvement of the histogram in the degenerate unmixing estimation technique algorithm. Junpei Mukae, Yoshihisa Ishida, and Takahiro
Murakami (Dept. of Electronics and Bioinformatics, Meiji Univ., 1-1-1 Higashi-mita, Tama-ku, Kawasaki-shi 214-8571, Japan, ce41094@meiji.ac.jp)
A method of improving the histogram in the degenerate unmixing estimation technique (DUET) algorithm is proposed. The DUET algorithm is
one of the methods of blind signal separation (BSS). The BSS framework is
to retrieve source signals from mixtures of them without a priori information about the source signals and the mixing process. In the DUET algorithm, a histogram of both the directions of arrival (DOAs) and the distances is formed from the mixtures, which are observed using two sensors. Signal separation is then achieved using time-frequency masking based on the histogram. Consequently, the capability of the DUET algorithm strongly depends on the quality of the histogram. In general, the histogram is degraded by the reverberation or reflection of the source signals when the DUET algorithm is applied in a real environment. Our approach is to
3aSPb2. Start point estimation of a signal in a frame. Anri Ota (Dept. of
Electronics and Bioinformatics, Meiji Univ., 1-1-1 Higashi-mita, Tama-ku,
Kawasaki-shi 214-8571, Japan, ce41017@meiji.ac.jp), Yoshihisa Ishida,
and Takahiro Murakami (Dept. of Electronics and Bioinformatics, Meiji
Univ., Kawasaki-shi, Japan)
An algorithm for start-point estimation of a signal from a frame is presented. In many applications of speech signal processing, the signal to be
processed is often segmented into several frames, and then the frames are
categorized into speech and non-speech frames. Instead, we focus on only
the frame in which the speech starts. To simplify the problem, we assume
that the speech is modeled by a number of complex sinusoidal signals.
When a complex sinusoidal signal that starts within a frame is observed, it can
be modeled as the product of an infinite-length complex sinusoid and a window
function of finite duration in the time domain. In the frequency domain, the
spectrum of the framed signal is
given by the shifted spectrum of the window function. Sharpness of the
spectrum of the window function depends on the start point of the signal.
Hence, the start point of the signal is estimated by the sharpness of the
observed spectrum. This approach can be extended to a signal that consists
of a number of complex sinusoidal signals. Simulation results using artificially
generated signals show the validity of our method.
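The sharpness-based idea above can be illustrated with a toy numpy sketch (the function names and the particular sharpness measure are illustrative assumptions): a sinusoid that starts later in the frame is effectively multiplied by a shorter rectangular window, so its DFT peak is broader and less sharp.

```python
import numpy as np

def framed_sinusoid(n_frame=256, k=10, start=0):
    """Complex sinusoid at DFT bin k that only 'switches on' at sample `start`."""
    n = np.arange(n_frame)
    x = np.exp(2j * np.pi * k * n / n_frame)
    x[:start] = 0.0   # implicit rectangular window of length n_frame - start
    return x

def spectral_sharpness(x):
    """Peak-to-total magnitude ratio of the DFT (1.0 for a full-frame on-bin tone)."""
    mag = np.abs(np.fft.fft(x))
    return mag.max() / mag.sum()
```

The later the start point, the shorter the effective window and the lower the sharpness, so the measure decreases monotonically with the start point in this idealized case.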
3aSPb3. Examination and development of numerical methods and algorithms designed for the determination of an enclosure’s acoustical characteristics via the Schroeder Function. Miles Possing (Acoust., Columbia
College Chicago, 1260 N Dearborn, 904, Chicago, IL 60610, miles@possing.com)
A case study was conducted to measure the acoustical properties of a
church auditorium. While modeling the project using EASE 2.1, some problems arose when attempting to determine the reverberation time using the
Schroeder backward-integrated impulse response function within EASE 2.1. An auxiliary
investigation was launched aiming to better understand the Schroeder algorithm in order to produce a potentially improved version in MATLAB. It was
then theorized that the use of a single linear regression is not sufficient to
understand the nature of the decay, due to the non-linearity of the curve,
particularly during the initial decay. Rather, it is hypothesized that the use
of numerical methods to find instantaneous rates of change over the entire
initial decay along with a Savitzky-Golay filter could possibly yield much
more robust, accurate results when attempting to derive the local reverberation time from reflectogram data.
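For reference, the Schroeder method discussed above can be sketched in a few numpy lines: backward integration of the squared impulse response, followed by the single linear-regression fit that the abstract argues is insufficient for strongly non-linear decays. Function names and the T20-style fit limits are illustrative assumptions.

```python
import numpy as np

def schroeder_decay_db(ir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]    # integrate squared IR from t to end
    energy /= energy[0]                        # normalize to 0 dB at t = 0
    return 10.0 * np.log10(np.maximum(energy, 1e-12))

def rt60_from_decay(decay_db, fs, lo=-5.0, hi=-25.0):
    """T20-style estimate: fit a line between lo and hi dB, extrapolate to -60 dB."""
    t = np.arange(len(decay_db)) / fs
    mask = (decay_db <= lo) & (decay_db >= hi)
    slope, _ = np.polyfit(t[mask], decay_db[mask], 1)   # dB per second (negative)
    return -60.0 / slope
```

For an ideal single-exponential impulse response the fit recovers the true reverberation time; it is precisely for multi-slope (non-exponential) decays that a single regression of this kind breaks down.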
3aSPb4. A modified direction-of-arrival estimation algorithm for acoustic vector sensors based on Unitary Root-MUSIC. Junyuan Shen, Wei Li,
Yuanming Guo, and Yongjue Chen (Electronics and Commun. Eng., Harbin
Inst. of Technol., XiLi University Town HIT C#101, Shenzhen, GuangDong
GD 755, China, Juny_Shen@hotmail.com)
A novel method for direction-of-arrival (DOA) estimation using acoustic
vector sensors (AVS), based on the Unitary Root-MUSIC (URM) algorithm, is
proposed in this paper. An AVS array exhibits coherence between sound
pressure and particle velocity, which can significantly improve
the detection performance of DOA estimation by reducing the influence of white
Gaussian noise. We apply this characteristic and the extra velocity information
of the AVS to construct a modified covariance matrix. In particular, the
modified covariance matrix does not require extending the dimensions in the
calculation of the AVS covariance matrix, which saves computing time. In
addition, we combine the characteristics of the modified matrix with the URM
algorithm to design a new algorithm, which can minimize the impact of
environmental noise and further reduce the computational complexity by an
order of magnitude. The proposed method can thus not only improve the accuracy
of DOA estimation but also reduce the computational complexity, compared to
classic DOA algorithms. Theoretical analysis and simulation experiments show
that the proposed algorithm for AVS based on URM can significantly
improve the DOA resolution at low signal-to-noise ratios and with few snapshots.
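The unitary variant described in the abstract gains its speed by working with real-valued transforms of the covariance matrix. As a reference point only, a minimal numpy sketch of the classic (complex-valued) Root-MUSIC estimator for a uniform linear array is shown below; all names, the half-wavelength spacing, and the test scenario are illustrative assumptions, not the authors' AVS algorithm.

```python
import numpy as np

def root_music_doa(X, n_sources, d=0.5):
    """Classic Root-MUSIC DOA estimation for a uniform linear array.

    X: (n_sensors, n_snapshots) complex snapshot matrix.
    d: element spacing in wavelengths. Returns DOAs in degrees.
    """
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    _, vecs = np.linalg.eigh(R)                # eigenvalues ascending
    En = vecs[:, : m - n_sources]              # noise subspace
    C = En @ En.conj().T
    # Polynomial coefficients: sums of the diagonals of C (degrees high -> low)
    coeffs = np.array([np.trace(C, k) for k in range(m - 1, -m, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1]           # keep roots inside the unit circle
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1))][:n_sources]
    # Signal roots sit at exp(j*2*pi*d*sin(theta))
    return np.degrees(np.arcsin(np.angle(roots) / (2 * np.pi * d)))
```

Rooting the noise-subspace polynomial replaces the spectral grid search of standard MUSIC, which is the source of the complexity savings that the unitary formulation then pushes further.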
3aSPb5. Multiple pitch estimation using comb filters considering overlap of frequency components. Kyohei Tabata, Ryo Tanaka, Hiroki Tanji,
Takahiro Murakami, and Yoshihisa Ishida (Dept. of Electronics and Bioinformatics, Meiji Univ., 1-1-1 Higashimita, Tama-ku, Kawasaki-shi, Kanagawa 214-8571, Japan, ce31063@meiji.ac.jp)
We propose a method of multiple pitch estimation using comb filters for
music transcription. The pitches of a musical sound can be identified by
detecting the larger outputs among comb filters connected in parallel. Each
comb filter has peaks corresponding to a candidate pitch and its harmonic
frequencies. The comb filters corresponding to the input pitch frequencies
pass more energy and therefore show larger outputs than the other comb
filters. However, when the fundamental frequency of a higher tone lies near
a harmonic of a lower tone, the pitch estimation often fails: the estimate
is assigned to a wrong note when frequency components are shared. The
proposed method estimates the correct pitch by correcting the outputs using
a matrix defined by the power ratios of the harmonic frequencies to the
fundamental frequency. The effectiveness of the proposed method is confirmed
by simulations, which show more accurate pitch estimation than conventional
methods.
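As an illustration of the comb-filter bank described above (a sketch under assumed parameters, not the authors' system), each candidate pitch gets a feedback comb filter whose delay matches its period; the filter matching the input pitch resonates and shows the largest output power.

```python
import numpy as np

def comb_power(x, L, alpha=0.99):
    """Output power of the feedback comb filter y[n] = x[n] + alpha * y[n - L]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (alpha * y[n - L] if n >= L else 0.0)
    return np.mean(y ** 2)

def estimate_pitch(x, fs, candidates, alpha=0.99):
    """Pick the candidate whose comb filter (delay = fs / f0) resonates most."""
    powers = [comb_power(x, int(round(fs / f0)), alpha) for f0 in candidates]
    return candidates[int(np.argmax(powers))]
```

The failure mode the abstract targets appears when a higher tone's fundamental coincides with a lower tone's harmonic: both filters then pass the shared component, which is why the proposed power-ratio correction matrix is needed.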
3aSPb6. Evaluating microphones and microphone placement for signal
processing and automatic speech recognition of teacher-student dialog.
Michael C. Brady, Sydney D’Mello, Nathan Blanchard (Comput. Sci., Univ. of
Notre Dame, Fitzpatrick Hall, South Bend, IN 46616, mbrady8@nd.edu),
Andrew Olney (Psych., Univ. of Memphis, Memphis, TN), and Martin
Nystrand (Education, English, Univ. of Wisconsin, Madison, WI)
We evaluate a variety of audio recording techniques for a project on the
automatic analysis of speech dialog in middle school and high school classrooms. In our scenario, the teacher wears a headset microphone or a lapel
microphone. A second microphone is then used to collect speech and related
sounds from students in the classroom. Various boundary microphones,
omni-directional microphones, and cardioid microphones are tested as this
second classroom microphone. A commercial microphone array [Microsoft
Xbox Kinect] is also tested. We report on how well digital source-separation
techniques work for segregating the teacher and student speech signals from
one another based on these various microphones and placements. We also
test the recordings using various automatic speech recognition engines for
word recognition error rates under different levels of background noise. Preliminary results indicate one boundary microphone, the Crown PZM-30, to
be superior for the classroom recordings. This is based on its performance at
capturing near and distant student signals for ASR in noisy conditions, as
measured by ASR error rates across different ASR engines.
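Since the study above compares ASR engines by word recognition error rate, a compact sketch of the standard WER computation (word-level edit distance divided by reference length) is included for reference; this is the textbook metric, not the authors' specific tooling.

```python
def word_error_rate(ref, hyp):
    """WER = (substitutions + insertions + deletions) / number of reference words,
    computed via word-level Levenshtein distance."""
    r, h = ref.split(), hyp.split()
    # d[i][j]: edit distance between first i reference and first j hypothesis words
    d = [[i + j if i * j == 0 else 0 for j in range(len(h) + 1)]
         for i in range(len(r) + 1)]
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + (r[i - 1] != h[j - 1]))  # substitution
    return d[len(r)][len(h)] / len(r)
```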
WEDNESDAY MORNING, 29 OCTOBER 2014
INDIANA F, 9:00 A.M. TO 11:30 A.M.
Session 3aUW
Underwater Acoustics, Acoustical Oceanography, Animal Bioacoustics, and ASA Committee on Standards:
Standardization of Measurement, Modeling, and Terminology of Underwater Sound
Susan B. Blaeser, Cochair
Acoustical Society of America Standards Secretariat, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747
Michael A. Ainslie, Cochair
Underwater Tech. Dept., TNO, P.O. Box 96864, The Hague 2509JG, Netherlands
George V. Frisk, Cochair
Dept. of Ocean Eng., Florida Atlantic Univ., Dania Beach, FL 33004-3023
Chair’s Introduction—9:00
Invited Papers
9:05
3aUW1. Strawman outline for a standard on the use of passive acoustic towed arrays for marine mammal monitoring and mitigation. Aaron Thode (SIO, UCSD, 9500 Gilman Dr., MC 0238, La Jolla, CA 92093-0238, athode@ucsd.edu)
There is a perceived need from several U.S. federal agencies and departments to develop consistent standards for how passive acoustic monitoring (PAM) for marine mammals is implemented for mitigation and regulatory monitoring purposes. The use of towed array
technology is already being required for geophysical exploration activities in the Atlantic Ocean and the Gulf of Mexico. However, to
date no specific standards have been developed or implemented for towed arrays. Here, a strawman outline for an ANSI standard is presented (http://wp.me/P4j34t-a) to cover requirements and recommendations for the following aspects of towed array operations: initial
planning (including guidelines for when PAM is not appropriate), hardware, software, and operator training requirements, real-time mitigation and monitoring procedures, and required steps for performance validation. The outline scope, at present, does not cover operational shutdown decision criteria, sound source verification, or defining the required detection range of the system. Instead of specifying
details of towed array systems, the current strategy is to focus on the process of defining the required system performance for a given
application, and then stepping through how the system hardware, software, and operations should be selected and validated to meet or
exceed these requirements. [Work supported by BSEE.]
9:30
3aUW2. Towards a standard for the measurement of underwater noise from impact pile driving in shallow water. Peter H. Dahl
(Appl. Phys. Lab. and Mech. Eng. Dept., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, dahl@apl.washington.edu), Pete D. Theobald, and Stephen P. Robinson (National Physical Lab.,
Middlesex, United Kingdom)
Measurements of the underwater noise field from impact pile driving are essential to address environmental regulations in effect
in both Europe and North America to protect marine life. For impact pile driving in shallow water there exists a range scale R* =
H/tan(θ) that delineates important features in the propagation of underwater sound from impact pile driving, where θ is the Mach angle
of the wavefront radiated into the water from the pile and H is the water depth. This angle is about 17° for many of the steel piles typically used,
and thus R* is approximately 3H. For ranges R such that R/R* ~ 0.5, depth variation in the noise field is highest, more so for peak pressure than for sound exposure level (SEL); for R/R* > 1 the field becomes more uniform with depth. This effect of measurement range
can thus have implications for environmental monitoring designed to obtain a close-range datum, which is often used with a transmission
loss model to infer the noise level at farther ranges. More consistent results are likely obtained if the measurement range is at least 3H.
Ongoing standardization activities for the measurement and reporting of sound levels radiated from impact pile driving will also be
discussed.
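The range scale in the abstract above is a one-line computation; a small sketch, taking the quoted 17° Mach angle as the default assumption:

```python
import math

def range_scale(depth_m, mach_angle_deg=17.0):
    """R* = H / tan(theta): the range scale for impact-pile-driving noise,
    where H is the water depth and theta the Mach angle of the radiated
    wavefront (about 17 degrees for typical steel piles)."""
    return depth_m / math.tan(math.radians(mach_angle_deg))
```

With theta = 17°, R* works out to roughly 3.3 times the water depth, consistent with the abstract's "approximately 3H" and its 3H guideline for measurement range.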
9:55
3aUW3. Importance of metrics standardization involving the effects of sound on fish. Michele B. Halvorsen (CSA Ocean Sci. Inc,
8502 SW Kansas Hwy, Stuart, FL 34997, mhalvorsen@conshelf.com)
Reporting accurate metrics while employing good measurement practice is a topic that is gaining awareness. Although this seems a simple
and expected task, current and past literature often fails to report the sound metrics utilized. It is clear that
increased awareness and development of standardized acoustic metrics are necessary. When reviewing previously published literature on the effects of sound on fish, it is often difficult to fully understand how metrics were calculated, leaving the reader to make
assumptions. Furthermore, the lack of standardization and definition decreases the amount of data and research studies that can be
directly compared. In a field with a paucity of data on the effects of sound on fish, this situation underscores the importance of and need for
standardization.
10:20
3aUW4. Developments in standards and calibration methods for hydrophones and electroacoustic transducers for underwater
acoustics. Stephen P. Robinson (National Physical Lab., Hampton Rd., Teddington TW11 0LW, United Kingdom, stephen.robinson@npl.co.uk), Kenneth G. Foote (Woods Hole Oceanographic Inst., Woods Hole, MA), and Pete D. Theobald
(National Physical Lab., Teddington, United Kingdom)
If they are to be meaningful, underwater acoustic measurements must be related to common standards of measurement. In this paper,
a description is given of the existing standards for the calibration of hydrophones and electroacoustic transducers for underwater acoustics. The description covers how primary standards are currently realized and disseminated, and how they are validated by international
comparisons. A report is also given of the status of recent developments in specification standards, for example within the International
Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The discussion focuses on the revision
of standards for transducer calibration, the inclusion of extended guidance on uncertainty assessment, and the criteria for determining the locations of the acoustic near field and far field. A description is then provided of recent developments using non-traditional
techniques, such as optical sensing, which may lead to the next generation of standards. A report is also given of a number of current
initiatives to promote best measurement practice.
Contributed Papers

10:45
3aUW5. All clicks are not created equally: Variations in high-frequency
acoustic signal parameters of the Amazon river dolphin (Inia geoffrensis). Marie Trone (Math and Sci., Valencia College, 1800 Denn John Ln.,
Kissimmee, FL 34744, mtronedolphin@yahoo.com), Randall Balestriero
(Universite de Toulon, La Garde, France), Herve Glotin (Universite de Toulon, Toulon, France), and David E. Bonnett (Silverdale, WA)
The quality and quantity of acoustical data available to researchers are
rapidly increasing with advances in technology. Recording cetaceans with a
500 kHz sampling rate provides a more complete signal representation than
traditional sampling at 96 kHz and lower. Such sampling provides a profusion of data concerning various parameters, such as click duration, interclick intervals, frequency, amplitude, and phase. However, there is disagreement in the literature in the use and definitions of these acoustic terms and
parameters. In this study, Amazon River dolphins (Inia geoffrensis) were
recorded using a 500 kHz sampling rate in the Peruvian Amazon River
watershed. Subsequent spectral analyses, including time waveforms, fast
Fourier transforms and wavelet scalograms, demonstrate acoustic signals
with differing characteristics. These high frequency, broadband signals are
compared, and differences are highlighted, despite the fact that currently an
unambiguous way to describe these acoustic signals is lacking. The need for
standards in cetacean bioacoustics with regard to terminology and collection
techniques is emphasized.
11:00
3aUW6. Acoustical terminology in the Sonar Modelling Handbook.
Andrew Holden (Dstl, Dstl Portsdown West, Fareham PO17 6AD, United
Kingdom, apholden@dstl.gov.uk)
The UK Sonar Modelling Handbook (SMH) defines the passive and
active Sonar Equations, and their individual terms and units, which are
extensively used for sonar performance modelling. The new Underwater
Acoustical Terminology ISO standard, which is currently being developed
by the ISO working group TC43/SC3/WG2 to standardize terminology, will
have an impact on the SMH definitions. Work will be presented comparing
the current SMH terminology with both the future ISO standard and other
well-known definitions to highlight the similarities and differences between
each of these.
11:15
3aUW7. The definitions of “level,” “sound pressure,” and “sound pressure level” in the International System of Quantities, and their implications for international standardization in underwater acoustics. Michael
A. Ainslie (Acoust. and Sonar, TNO, P.O. Box 96864, The Hague 2509JG,
Netherlands, michael.ainslie@tno.nl)
The International System of Quantities (ISQ), incorporating definitions
of physical quantities and their units, was completed in 2009 following an
extensive collaboration between two major international standards organizations, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The ISQ encompasses all SI
units as well as selected units outside the SI such as the byte (including both
decimal and binary multiples), bel, neper, and decibel. The ISQ, which
includes definitions of the terms “level,” “sound pressure,” and “sound pressure level,” is presently being used to underpin an underwater acoustics terminology standard under development by ISO. For this purpose, pertinent
ISQ definitions are analyzed and compared with alternative standard definitions, and with conventional use of the same terms. The benefits of combining IEC and ISO definitions into a single standard, solving some longstanding problems, are described. The comparison also reveals some teething problems, such as internal inconsistencies within the ISQ, and discrepancies with everyday use of some of the terms, demonstrating the need for
continued collaboration between the major standards bodies. As of 2014,
the ISQ is undergoing a major revision, leading to a unique opportunity to
resolve these discrepancies.
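As background to the terminology discussion above, the ISQ-style definition of sound pressure level can be sketched in a few lines; the differing reference pressures conventionally used under water (1 µPa) and in air (20 µPa) are one reason unambiguous level definitions matter. The constant and function names are illustrative.

```python
import math

REF_PRESSURE_UNDERWATER_PA = 1e-6   # 1 micropascal, the underwater reference
REF_PRESSURE_AIR_PA = 20e-6         # 20 micropascals, the in-air reference

def spl_db(p_rms_pa, p_ref_pa):
    """Sound pressure level: L_p = 20 * log10(p / p_ref) dB re p_ref."""
    return 20.0 * math.log10(p_rms_pa / p_ref_pa)
```

The same physical pressure of 1 Pa rms is 120 dB re 1 µPa but only about 94 dB re 20 µPa: a fixed offset of about 26 dB that is meaningless unless the reference is stated.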
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
MARRIOTT 7/8, 1:00 P.M. TO 3:00 P.M.
Session 3pAA
Architectural Acoustics: Architectural Acoustics Medley
Norman H. Philipp, Chair
Geiler & Associates, 1840 E. 153rd Circle, Olathe, KS 66062
Contributed Papers
1:00
3pAA1. From the sound up: Reverse-engineering room shapes from
sound signatures. Willem Boning and Alban Bassuet (Acoust., ARUP, 77
Water St., New York, NY 10005, willem.boning@arup.com)
Typically, architects and acousticians design rooms for music starting
from a model room shape known from past experience to perform well
acoustically. We reverse the typical design process by using a model sound
signature to generate room shapes. Our method builds on previous research
on reconstructing room shapes from recorded impulse responses, but takes
an instrumental, design-oriented approach. We demonstrate how an abstract
sound signature constructed in a hybrid image source-statistical acoustical
simulator can be translated into a room shape with the aid of a parametric
design interface. As a proof of concept, we present a study in which we generated a series of room shapes from the same sound signature, analyzed
them with commercially available room acoustic software, and found objective parameters for comparable receiver positions between shapes to be
within just-noticeable-difference ranges of each other.
1:15
3pAA2. Achieving acoustical comfort in restaurants. Paul Battaglia
(Architecture, Univ. at Buffalo, 31 Rose Ct Apt. 4, Snyder, NY 14226,
plb@buffalo.edu)
The achievement of a proper acoustical ambiance for restaurants has
long been described as a problem of controlling noise to allow for speech
intelligibility among patrons at the same table. This simplification of the
acoustical design problem for restaurants does not entirely result in achieving either a sensation of acoustical comfort or a preferred condition for
social activity sought by architects. In order to more fully study the subjective impression of acoustical comfort, a large database from 11 restaurants
with 75 patron surveys for each (825 total) was assembled for analysis. The
results indicate that a specific narrow range of reverberation time can produce acoustical comfort for restaurant patrons of all ages. Other physical
and acoustical conditions of the dining space are shown to have little to no
consistent effect on the registration of comfort. The results also indicate that
different subjective components of acoustical comfort—quietude, communication, privacy—vary significantly by age group with specific consequences
for the acoustical design of restaurants for different clienteles.
1:30
3pAA3. 500-seat theater in the city of Qom; Computer simulation vs.
acoustics measurements. Hassan Azad (Architecture, Univ. of Florida,
3527 SW, 20th Ave., 1132B, Gainesville, FL 32607, h.azad@ufl.edu)
A 500-seat theater is under construction in the city of Qom, Iran, for
which the author was part of the acoustics design team. The acoustic design
proceeded through several steps using the Odeon software package, which
enabled us to go back and forth in the design process and make proper
improvements despite limitations on the choice of materials. As the theater
is being built, it will soon be feasible to carry out acoustic measurements
with the help of the Building and Housing Research Center (BHRC) in Iran,
as well as subjective evaluations during the very first performances. This
paper aims to juxtapose the results of computer simulation and acoustic
measurement and compare them to see if there are any discrepancies.
1:45
3pAA4. Acoustical materials and sustainability analyses. Hassan Azad
(Architecture, Univ. of Florida, 3527 SW, 20th Ave., 1132B, Gainesville,
FL 32607, h.azad@ufl.edu)
Acoustical materials can perform a variety of functions, from absorption
and diffusion to insulation and noise control. They may have similar
acoustical performance but very different characteristics in terms of
sustainability. It is important to evaluate the environmental effects of
materials which exhibit the same acoustical performance in order to wisely
choose the best alternative available. This study introduces and compares
the different tools and methods commonly used in the environmental
sustainability analysis of materials, including Eco-profile, Eco-indicator,
Ecoinvent, and software packages like IMPACT. In addition, a computer model
is proposed in which one can calculate both the acoustic properties and the
sustainability assessment of a given material through computer-aided
techniques. The model consists of a simple cubic room with a given set of
materials for its elements, such as walls, floor, ceiling, and windows or
doors (if any). The acoustic properties that can be calculated are the
reverberation time, with the help of either Odeon or CATT-Acoustic software,
and airborne/impact sound insulation, with the help of the recently
developed software SonArchitect. For the sustainability assessment, the LCA
method and software packages like IMPACT are the main tools.
2:00
3pAA5. Influence of the workmanship on the airborne sound insulation
properties of light weight building plasterboard steel frame wall systems. Herbert Muellner (Acoust. and Bldg. Phys., Federal Inst. of Technol.
TGM Vienna, Wexstrasse 19-23, Vienna A-1200, Austria, herbert.muellner@tgm.ac.at) and Thomas Jakits (Appl. Res. and Development, Saint-Gobain Rigips Austria GesmbH, Vienna, Austria)
Building elements built according to the lightweight mode of construction,
e.g., plasterboard steel-frame wall systems, show a large variation in
airborne sound insulation properties although the elements appear identical.
According to several studies conducted in recent years, certain aspects of
workmanship have a significant influence on the airborne sound insulation
characteristics of lightweight building elements. The method used to fasten
the planking (e.g., gypsum boards, gypsum fiber boards) as well as the
number and position of the screws can lead to considerable variations in
the sound insulation properties. Above 200 Hz, the sound reduction index R
can differ by more than 10 dB with variation of the position of the screws.
Applying prefabricated composite panels of adhesively bonded plasterboards
not only considerably reduces the depth of the dip at the critical frequency,
owing to the higher damping provided by the interlayer, but can also
significantly decrease the negative influence of workmanship on the airborne
sound insulation properties of these kinds of lightweight walls in comparison
to the standard planking of double-layer plasterboard systems. The influence
of secondary construction details and workmanship will be discussed in the
paper.
2:15
3pAA6. Contribution of floor treatment characteristics to background
noise levels in health care facilities, Part 1. Adam L. Paul, David A.
Arena, Eoin A. King, Robert Celmer (Acoust. Prog. & Lab, Univ. of Hartford, 200 Bloomfield Ave., West Hartford, CT 06117, celmer@hartford.edu), and John J. LoVerde (Paul S. Veneklasen Res. Foundation, Santa Monica, CA)
Acoustical tests were conducted on five types of commercial-grade
flooring to assess their potential contribution to noise generated within
health care facilities outside of patient rooms. The floor types include sheet
vinyl (with and without a 5 mm rubber backing), virgin rubber (with and
without a 5 mm rubber backing), and a rubber-backed commercial-grade
carpet for comparison. The acoustical tests conducted were ISO 3741-compliant
sound power level testing (using two source types: a tapping machine to
simulate footfalls, and a rolling hospital cart) and sound absorption testing
per ASTM C423. Among the non-carpet samples, the material type that produced
the least sound power was determined to be the rubber-backed sheet vinyl.
While both 5 mm-backed samples showed a significant difference compared to
their un-backed counterparts with both source types, the rubber-backed sheet
vinyl performed slightly better than the rubber-backed virgin rubber in the
higher frequency bands in both tests. The performance and suitability of
these flooring materials in a health care facility compared to commercial
carpeting will be discussed. [Work supported by Paul S. Veneklasen Research
Foundation.]
2:30
3pAA7. Visualization of auditory masking for firefighter alarm detection. Casey Farmer (Dept. of Mech. Eng., Univ. of Texas at Austin, 1208
Enfield Rd., Apt. 203, Austin, TX 78703, caseymfarmer@utexas.edu), Mustafa Z. Abbasi, Preston S. Wilson (Appl. Res. Labs, Dept. of Mech. Eng.,
Univ. of Texas at Austin, Austin, TX), and Ofodike A. Ezekoye (Dept. of
Mech. Eng., Univ. of Texas at Austin, Austin, TX)
An essential piece of firefighter equipment is the Personal Alert Safety
System (PASS), which emits an alarm when a firefighter has been inactive
for a specified period of time and is used to find and rescue downed
firefighters. The National Institute for Occupational Safety and Health
(NIOSH) firefighter fatality reports suggest that there have been instances
when the PASS alarm was not audible to other firefighters on the scene. This
paper seeks to use acoustic models to estimate the sound pressure level of
various signals throughout a structure. With this information, a visual
representation will be created to map where a PASS alarm is audible and
where it is masked by noise sources. This paper presents an initial
audibility study, including temporal masking and frequency analysis. The
results will be compared to auralizations and experimental data. Some other
potential applications will be briefly explored.
2:45
3pAA8. Investigations on acoustical coupling within single-space monumental structures using a diffusion equation model. Zühre Sü Gül (R&D
/ Architecture, MEZZO Studyo / METU, METU Technopolis KOSGEB-TEKMER No112, ODTU Cankaya, Ankara 06800, Turkey, zuhre@mezzostudyo.com), Ning Xiang (Graduate Program in Architectural Acoust.,
School of Architecture, Rensselaer Polytechnic Inst., Troy, NY), and Mehmet Çalışkan (Dept. of Mech. Eng., Middle East Tech. Univ. / MEZZO
Studyo, Ankara, Turkey)
Sound energy distributions and flows within single-space rooms can be
exploited to understand the occurrence of multi-slope decays. In this work,
a real-size monumental worship space is selected for investigation of
non-exponential sound energy decays. Previous field tests in this
single-space venue indicate multi-slope decay formation within such a large
volume with a multiple-dome upper structure. In order to reveal the probable
causes of non-exponential sound energy decays within such an architectural
venue, sound energy distributions and energy flows are investigated. Due to
its computational efficiency and its advantages in spatial energy density
and flow vector analysis, a diffusion equation model (DEM) is applied for
modeling the sound field of the monumental worship space. Preliminary
studies indicate good agreement in overall energy decay time estimates
between the experimental field data and the DEM results. The energy flow
vector and energy distribution analyses indicate the upper central dome
structure to be the potential energy accumulation/concentration zone,
contributing to the later energy decays.
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
INDIANA A/B, 1:00 P.M. TO 3:20 P.M.
Session 3pBA
Biomedical Acoustics: History of High Intensity Focused Ultrasound
Lawrence A. Crum, Cochair
Applied Physics Laboratory, University of Washington, Center for Industrial and Medical Ultrasound, Seattle, WA 98105
Narendra T. Sanghvi, Cochair
R & D, SonaCare Medical, 4000 Pendleton way, Indianapolis, IN 46226
Invited Papers
1:00
3pBA1. History of high intensity focused ultrasound, Bill and Frank Fry and the Bioacoustics Research Laboratory. William
O'Brien and Floyd Dunn (Elec. Eng., Univ. of Illinois, 405 N. Mathews, Urbana, IL 61801, wdo@uiuc.edu)
1946 is a key year in the history of HIFU. That year, sixty-eight years ago, the Bioacoustics Research Laboratory was established at
the University of Illinois. Trained in theoretical physics, William J. (Bill) Fry (1918–1968) left his graduate studies at Penn State University to work at the Naval Research Laboratory in Washington, DC, on underwater sound during World War II. Bill was hired by the
University of Illinois in 1946, wanting to continue to conduct research activities of his own choosing in the freer university atmosphere.
Like Bill, Francis J. (Frank) Fry (1920–2005) went to Penn State as well as the University of Pittsburgh where he studied electrical engineering. Frank joined Bill at the University of Illinois, also in 1946, having worked at Westinghouse Electric Corporation where his division was a prime contractor on the Manhattan Project. Floyd Dunn also arrived at the University of Illinois in 1946 as an undergraduate
student, having served in the European Theater during World War II. The talk will recount some of the significant HIFU contributions
that emerged from BRL faculty, staff, and students. [NIH Grant R37EB002641.]
1:20
3pBA2. Transforming ultrasound basic research in to clinical systems. Narendra T. Sanghvi and Thomas D. Franklin (R & D, SonaCare Medical, 4000 Pendleton way, Indianapolis, IN 46226, narensanghvi@sonacaremedical.com)
In late 1960s, Robert F. Heimburger, MD, Chief of Neurosurgery at Indiana University School of Medicine, started collaborating
with William J. Fry and Francis J. Fry at Interscience Research Institute (IRI) in Champaign, IL. and treated brain cancer patients with
HIFU. In 1970, Dr. Heimburger and Indiana University School of Medicine (IUMS) invited IRI to join IUMS and Indianapolis Center
For Advanced Research, Inc. (ICFAR). In 1972, a dedicated Fortune Fry Research Laboratory (FFRL) was inaugurated to advance ultrasound research relevant for clinical use. In the ‘70s, an automated computer controlled, integrated B-mode, image-guided HIFU system
(“the candy machine”) was developed that successfully treated brain cancer patients at IUMS. HIFU was found to be safe for the
destruction of brain tumors. Later a second-generation brain HIFU device was developed to work with CAT or MR images. In 1974, the
FFRL developed the first real-time 2-D cardiac ultrasound scanner. Prof. H. Feigenbaum pioneered this imaging technique and formed the
“Echocardiography Society.” In 1978, an automated breast ultrasound system was successfully developed, leading to the formation of Labsonics, Inc., which
produced 300 scanners in 4 years. In 1986, the Sonablate system to treat prostate cancer was developed. The Sonablate has been used
worldwide.
1:40
3pBA3. The development of high intensity focused ultrasound in Europe, what could we have done better? Gail ter Haar (Phys.,
Inst. of Cancer Res., Phys. Dept., Royal Marsden Hospital, Sutton, Surrey SM2 5PT, United Kingdom, gail.terhaar@icr.ac.uk)
The clinical uptake of HIFU has been disappointingly slow, despite its promise as a minimally invasive, ultimately conformal
technique. It may be instructive to look at the way in which this technique has evolved from its early days, with an eye to whether a
different approach might have resulted in its more rapid acceptance. Examples will be drawn from HIFU’s development in the United
Kingdom.
2:00
3pBA4. LabTau’s experience in therapeutic ultrasound: From lithotripsy to high intensity focused ultrasound. Jean-Yves
Chapelon, Michael Canney, David Melodelima, and Cyril Lafon (U1032, INSERM, 151 Cours Albert Thomas, Lyon 69424, France,
jean-yves.chapelon@inserm.fr)
Research on therapeutic ultrasound at LabTau (INSERM Lyon, France) began in the early 1980s with work on shock waves that
led to the development of the first ultrasound-guided lithotripter. In 1989, this research shifted towards new developments in the field
of HIFU with applications in urology and oncology. The most significant developments have been obtained in urology with the AblathermTM project, a transrectal HIFU device for the thermal ablation of the prostate. This technology has since become an effective therapeutic alternative for patients with localized prostate cancer. Since 2000, three generations of the AblathermTM have been CE marked
and commercialized by EDAP-TMS. The latest version, the FocalOneTM, allows for the focal treatment of prostate cancer and combines
dynamic focusing and fusion of MR images to ultrasound images acquired in real time by the imaging probe integrated in the HIFU
transducer. Using toroidal ultrasound transducers, a HIFU device was also recently validated clinically for the treatment of liver metastases. Another novel application that has reached the clinic is for the treatment of glaucoma using a miniature, disposable HIFU device.
Today, new approaches are also being investigated for treating cerebral and cardiac diseases.
2:20
3pBA5. High intensity therapeutic ultrasound research in the former USSR in the 1950s–1970s. Vera Khokhlova (Dept. of Acoust.,
Phys. Faculty, Moscow State Univ., 1013 NE 40th St., Seattle, Washington 98105, va.khokhlova@gmail.com), Valentin Burov (Dept.
of Acoust., Phys. Faculty, Moscow State Univ., Moscow, Russian Federation), and Leonid Gavrilov (Andreev Acoust. Inst., Moscow,
Russian Federation)
A historical overview of therapeutic ultrasound research performed in the former USSR in the 1950s–1970s is presented. In the
1950s, the team of A. K. Burov in Moscow proposed the use of non-thermal, non-cavitational mechanisms of high intensity unfocused
ultrasound to induce specific immune responses in treating Brown Pearce tumors in an animal model and melanoma tumors in a number
of patients. Later, in the early 1970s, new studies began at the Acoustics Institute in Moscow jointly with several medical institutions.
Significant results included the first measurements of cavitation thresholds in animal brain tissues in vivo and demonstration of the feasibility of applying high intensity focused ultrasound (HIFU) for local ablation of brain structures through the intact skull. Another direction
was ultrasound stimulation of superficial and deep receptors in humans and animals using short HIFU pulses; these studies became the
basis for ultrasound stimulation of different neural structures and have found useful clinical applications for diagnostics of skin, neurological, and hearing disorders. Initial studies on the synergism between ultrasound in therapeutic doses and the consecutive application of ionizing radiation were carried out. Later, hyperthermia research was also performed for brain tissues and for ophthalmology.
[Work supported by the grant RSF 14-12-00974.]
2220
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2220
2:40
3pBA6. The development of MRI-guided focused ultrasound at Brigham & Women’s Hospital. Nathan McDannold (Radiology,
Brigham and Women’s Hospital, 75 Francis St., Boston, MA, njm@bwh.harvard.edu)
The Focused Ultrasound Laboratory was created in the Department of Radiology at Brigham & Women’s Hospital in the early
1990s, when Ferenc Jolesz invited Kullervo Hynynen to join him to collaborate with GE Medical Systems to develop MRI-guided
Focused Ultrasound surgery. This collaboration between Dr. Hynynen, an experienced researcher of therapeutic ultrasound, Dr. Jolesz,
who developed MRI-guided laser ablation, and the engineers at GE and later InSightec, with their decades of experience developing
MRI and ultrasound systems, established a program that over two decades produced important contributions to HIFU. In this talk,
Nathan McDannold, the current director of the laboratory, will review the achievements made by the team of researchers, which include
the development of the first MRI-guided FUS system, the creation of the first MRI-compatible phased arrays, important contributions to
the validation and implementation of MR temperature mapping and thermal dosimetry, the development of an MRI-guided transcranial system, and the discovery that ultrasound and microbubbles can temporarily disrupt the blood–brain barrier. The output of this team, which
led to clinical systems that have treated tens of thousands of patients at sites around the world, is an excellent example of how academic
research can be translated to the clinic.
3:00
3pBA7. What have we learned about shock wave lithotripsy in the past thirty years? Pei Zhong (Mech. Eng. and Mater. Sci., Duke
Univ., 101 Sci. Dr., Durham, NC 27708, pzhong@duke.edu)
Shock wave lithotripsy (SWL) has revolutionized the treatment of kidney stone disease since its introduction in the early 1980s.
Considering the paucity of knowledge about the bioeffects of shock waves in various tissues and renal concretions 30 years ago, the success of SWL is a truly remarkable feat on its own. We have learned a lot since then. New technologies have been introduced for shock
wave generation, focusing, and measurement, among others. In parallel, new knowledge has been acquired progressively about the
mechanisms of stone comminution and tissue injury. Yet there are still outstanding issues that are constantly debated, waiting for resolution. In this talk, the quest for a better understanding of the shockwave interaction with stones and renal tissue in the field of SWL will
be reviewed in chronological order. Focus will be on stress waves and cavitation for their distinctly different (in their origin), yet often
synergistically combined (in their action), roles in the critical processes of SWL. This historical review will be followed by a discussion
of the recent development and future prospects of SWL technologies that may ultimately help to improve the clinical performance and
safety of contemporary shock wave lithotripters. [Work supported by NIH through 5R37DK052985-18.]
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
INDIANA C/D, 2:00 P.M. TO 3:05 P.M.
Session 3pED
Education in Acoustics: Acoustics Education Prize Lecture
Uwe J. Hansen, Chair
Chemistry & Physics, Indiana State University, 64 Heritage Dr, Terre Haute, IN 47803-2374
Chair’s Introduction—2:00
Invited Paper
2:05
3pED1. Educating mechanical engineers in the art of noise control. Colin Hansen (Mech. Eng., Univ. of Adelaide, 33 Parsons St.,
Marion, SA 5043, Australia, chansen@bigpond.net.au)
Acoustics and noise control is one of the disciplines in which the material that students learn during a well-structured undergraduate
course can be immediately applied to many problems that they may encounter during their employment. However, in order to find optimal solutions to noise control problems, it is vitally important that students have a good fundamental understanding of the physical principles underlying the subject as well as a good understanding of how these principles may be applied in practice. Ideally, they should
have access to affordable software and be confident in their ability to interpret and apply the results of any computer-based modelling
that they may undertake. Students must fully understand any ethical issues that may arise, such as their obligation to ensure their actions
do not contribute to any negative impact on the health and welfare of any communities. How do we ensure that our mechanical engineering graduates develop the understanding and knowledge required to tackle noise control problems that they may encounter after graduation? This presentation attempts to answer this question by discussing the process of educating undergraduate and postgraduate
mechanical engineering students at the University of Adelaide, including details of lab classes, example problems, text books and software developed for the dual purpose of educating students and being useful in assisting graduates solve practical noise control problems.
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
INDIANA E, 1:00 P.M. TO 2:35 P.M.
Session 3pID
Interdisciplinary: Hot Topics in Acoustics
Paul E. Barbone, Chair
Mechanical Engineering, Boston University, 110 Cummington St, Boston, MA 02215
Chair’s Introduction—1:00
Invited Papers
1:05
3pID1. Online education: From classrooms to outreach, the internet is changing the way we teach and learn. Michael B. Wilson
(Phys., North Carolina State Univ., 1649 Highlandon Ct, State College, PA 16801, wilsomb@gmail.com)
The internet is changing the face of education in the world today. More people have access to more information than ever before,
and new programs are organizing and providing educational content for free to millions of internet users worldwide. This content ranges
from interesting facts and demonstrations that introduce a topic to entire university courses. Some of these programs look familiar and
draw from the media and education of the past, building off the groundwork laid by television programs like Watch Mr. Wizard, Bill
Nye the Science Guy, and Reading Rainbow, with others more reminiscent of traditional classroom lectures. Some programs, on the
other hand, are truly a product of modern internet culture and fan communities. While styles and target audiences vary greatly, the focus
is education, clarifying misconceptions, and sparking an interest in learning. Presented will be a survey of current online education,
resources, and outreach, as well as the state of acoustics in online education.
1:35
3pID2. Advanced methods of signal processing in acoustics. R. Lee Culver (School of Architecture, Rensselaer Polytechnic Inst.,
State College, Pennsylvania) and Ning Xiang (School of Architecture, Rensselaer Polytechnic Inst., Greene Bldg., 110 8th St., Troy, NY
12180, xiangn@rpi.edu)
Signal processing is applied in virtually all areas of modern acoustics to extract, classify, and/or quantify relevant information from
acoustic measurements. Methods range from classical approaches based on Fourier and time-frequency analysis, to array signal processing, feature extraction, computational auditory scene analysis, and Bayesian inference, which incorporates physical models of the acoustic system under investigation together with advanced sampling techniques. This talk highlights new approaches to signal processing
recently applied in a broad variety of acoustical problems.
2:05
3pID3. Hot topics in fish acoustics (active). Timothy K. Stanton (Dept. Appl. Ocean. Phys. & Eng., Woods Hole Oceanographic Inst.,
Woods Hole, MA 02543, tstanton@whoi.edu)
It is important to quantify the spatial distribution of fish in their natural environment (ocean, lake, and river) and how the distribution
evolves in time for a variety of applications including (1) management of fish stocks to maintain a sustainable source of food and (2) to
improve our understanding of the ecosystem (such as how climate change impacts fish) through quantifying predator–prey relationships
and other behavior. Active fish acoustics provides an attractive complement to nets given the great distances sound travels in the water
and its ability to rapidly survey a large region at high resolution. This method involves studying distributions of fish in the water by
analyzing their echoes through various means. While this field has enjoyed development for decades, there remain a number of “hot topics” receiving attention from researchers today. These include: (1) broadband acoustics as an emerging tool for advanced classification
of, and discrimination between, species, (2) multi-beam imaging systems used to classify fish schools by size and shape, (3) long-range
(km to 10s of km) detection of fish, and (4) using transmission loss to classify fish on one-way propagation paths. Recent advances in these
and other topics will be presented.
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
MARRIOTT 3/4, 1:00 P.M. TO 3:15 P.M.
Session 3pNS
Noise: Sonic Boom and Numerical Methods
Jonathan Rathsam, Cochair
NASA Langley Research Center, MS 463, Hampton, VA 23681
Alexandra Loubeau, Cochair
NASA Langley Research Center, MS 463, Hampton, VA 23681
Contributed Papers
1:00
3pNS1. Source parameters for the numerical simulation of lightning as
a nonlinear acoustic source. Andrew Marshall, Neal Evans, Chris Hackert,
and Karl Oelschlaeger (Southwest Res. Inst., 6220 Culebra Rd., San Antonio, TX 78238-5166, andrew.marshall@swri.org)
Researchers have proposed using acoustic data to obtain additional
insight into aspects of lightning physics. However, it is unclear how much
information is retained in the nonlinear acoustic waveform as it propagates
and evolves away from the lightning channel. Prior research in tortuous
lightning has used simple N-waves as the initial acoustic emission. It is not
clear if more complex properties of the lightning channel physics are also
transmitted in the far-field acoustic signal, or if simple N-waves are a sufficient source term to predict far-field propagation. To investigate this, the
authors have conducted a numerical study of acoustic emissions from a linear lightning channel. Using a hybrid strong-shock/weak-shock code, the
authors compare the propagation of a simple N-wave and emissions from a
source derived from simulated strong shock waves from the lightning channel. The implications of these results on the measurement of sound from
nearby lightning sources will be discussed.
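[As background for the N-wave source term discussed in the abstract above, an idealized N-wave is a linear pressure ramp from +p0 to −p0 over the pulse duration; the following sketch uses illustrative amplitude and duration values, not parameters from the study:]

```python
import numpy as np

def n_wave(t, duration=0.1, peak=100.0):
    """Idealized N-wave: pressure ramps linearly from +peak to -peak
    over [0, duration] seconds and is zero elsewhere. Both parameters
    are hypothetical illustration values (Pa and s)."""
    t = np.asarray(t, dtype=float)
    inside = (t >= 0.0) & (t <= duration)
    return np.where(inside, peak * (1.0 - 2.0 * t / duration), 0.0)
```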
1:15
3pNS2. Nearfield acoustic measurements of triggered lightning using a
one-dimensional microphone array. Maher A. Dayeh and Neal Evans
(Southwest Res. Inst., Div 18, B77, 6220 Culebra Rd., San Antonio, TX
78238, neal.evans@swri.org)
For the first time, acoustic signatures from rocket-triggered lightning are
measured by a 15 m long, one-dimensional microphone array consisting of
16 receivers, situated 79 m from the lightning channel. Measurements were
taken at the International Center for Lightning Research and Testing
(ICLRT) in Camp Blanding, FL, during the summer of 2014. We describe
the experimental setup and report on the first observations obtained to date.
We also discuss the implications of these novel measurements for the thunder initiation process and its energy budget during lightning discharges.
Challenges of obtaining measurements in these harsh ambient conditions
and their countermeasures will also be discussed.
1:30
3pNS3. The significance of edge diffraction in sonic boom propagation
within urban environments. Jerry W. Rouse (Structural Acoust. Branch,
NASA Langley Res. Ctr., 2 North Dryden St., MS 463, Hampton, VA
23681, jerry.w.rouse@nasa.gov)
Advances in aircraft design, computational fluid dynamics, and sonic
boom propagation modeling suggest that commercial supersonic aircraft can
be designed to produce quiet sonic booms. Driven by these advances the
decades-long government ban on overland supersonic commercial air transportation may be lifted. The ban would be replaced with a noise-based certification standard, the development of which requires knowledge of
community response to quiet sonic booms. For inner city environments the
estimation of community exposure to sonic booms is challenging due to the
complex topography created by buildings, the large spatial extent and the
required frequency range. Such analyses are currently intractable for traditional wave-based numerical methods such as the Boundary Element
Method. Numerical methods based upon geometrical acoustics show promise; however, edge diffraction is not inherent in geometrical acoustics and
may be significant. This presentation shall discuss an initial investigation
into the relative importance of edge diffraction in inner city sound fields
caused by sonic booms. Results will provide insight on the degree to which
edge diffraction effects are necessary for accurate predictions of inner city
community exposure.
1:45
3pNS4. Sonic boom noise exposure inside homes. Jacob Klos (Structural
Acoust. Branch, NASA Langley Res. Ctr., 2 N. Dryden St., MS 463, Hampton, VA 23681, j.klos@nasa.gov)
Commercial supersonic overland flight is presently banned both nationally and internationally due to the sonic boom noise that is produced in
overflown communities. However, within the next decade, NASA and
industry may develop and demonstrate advanced supersonic aircraft that significantly mitigate the noise perceived at ground level. To allow commercial
operation of such vehicles, bans on commercial supersonic flight must be
replaced with a noise-based certification standard. In the development of
this standard, variability in the dose-response model needs to be identified.
Some of this variability is due to differing sound transmission characteristics
of homes both within the same community and among different communities. A tool to predict the outdoor-to-indoor low-frequency noise transmission into homes has been developed at Virginia Polytechnic Institute and
State University, which was used in the present study to assess the indoor
exposure in two communities representative of the northern and southern
United States climate zones. Sensitivity of the indoor noise level to house
geometry and material properties will be discussed. Future plans to model
the noise exposure variation among communities within the United States
will also be discussed.
2:00
3pNS5. Evaluation of the effect of aircraft size on indoor annoyance
caused by sonic booms. Alexandra Loubeau (Structural Acoust. Branch,
NASA Langley Res. Ctr., MS 463, Hampton, VA 23681, a.loubeau@nasa.gov)
Sonic booms from recently proposed supersonic aircraft designs developed with advanced tools are predicted to be quieter than those from previous designs. The possibility of developing a low-boom flight demonstration
vehicle for conducting community response studies has attracted international interest. These studies would provide data to guide development of a
preliminary noise certification standard for commercial supersonic aircraft.
An affordable approach to conducting these studies suggests the use of a
sub-scale experimental aircraft. Due to the smaller size and weight of the
sub-scale vehicle, the resulting sonic boom is expected to contain spectral
characteristics that differ from those of a full-scale vehicle. To determine
the relevance of using a sub-scale aircraft for community annoyance studies, a laboratory study was conducted to verify that these spectral differences do not significantly affect human response. Indoor annoyance was
evaluated for a variety of sonic booms predicted for several different sizes
of vehicles. Previously reported results compared indoor annoyance for the
different sizes using the metric Perceived Level (PL) at the exterior of the
structure. Updated results include analyses with other candidate noise metrics, nonlinear regression, and specific boom duration effects.
2:15
3pNS6. Effects of secondary rattle noises and vibration on indoor
annoyance caused by sonic booms. Jonathan Rathsam (NASA Langley
Res. Ctr., MS 463, Hampton, VA 23681, jonathan.rathsam@nasa.gov)
For the past 40 years, commercial aircraft have been banned from overland supersonic flight due to the annoyance caused by sonic booms. However,
advanced aircraft designs and sonic boom prediction tools suggest that significantly quieter sonic booms may be achievable. Additionally, aircraft noise
regulators have indicated a willingness to consider replacing the ban with a
noise-based certification standard. The outdoor noise metric used in the certification standard must be strongly correlated with indoor annoyance. However, predicting indoor annoyance is complicated by many factors including
variations in outdoor-to-indoor sound transmission and secondary indoor rattle noises. Furthermore, direct contact with vibrating indoor surfaces may also
affect annoyance. A laboratory study was recently conducted to investigate
candidate noise metrics for the certification standard. Regression analyses
were conducted for metrics based on the outdoor and transmitted indoor sonic
boom waveforms both with and without rattle noise, and included measured
floor vibration. Results indicate that effects of vibration are significant and independent of sound level. Also, the presence or absence of rattle sounds in a
transmitted sonic boom signal generally changes the regression coefficients
for annoyance models calculated from the outdoor sound field, but may not
for models calculated from the indoor sound field.
2:30
3pNS7. Artificial viscosity in smoothed particle hydrodynamics simulation of sound interference. Xu Li, Tao Zhang, YongOu Zhang (School of
Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol.,
Wuhan, Hubei Province 430074, China, lixu199123@gmail.com), Huajiang
Ouyang (School of Eng., Univ. of Liverpool, Liverpool, United Kingdom),
and GuoQing Liu (School of Naval Architecture and Ocean Eng., Huazhong
Univ. of Sci. and Technol., Wuhan, Hubei Province, China)
Artificial viscosity has been widely used to reduce unphysical
oscillations in Smoothed Particle Hydrodynamics (SPH) simulations.
However, the effects of artificial viscosity on the SPH simulation of sound
interference have not been discussed in the existing literature. This paper analyzes the effects and gives some suggestions on the choice of computational
parameters of the artificial viscosity in the sound interference simulation.
First, a standard SPH code for simulating sound interference in the time domain is built by solving the linearized acoustic wave equations. Second, the
Monaghan type artificial viscosity is used to optimize the SPH simulation.
Then the SPH codes with and without the artificial viscosity are both used to
simulate the sound interference and the numerical solutions are compared
with the theoretical results. Finally, different values of computational parameters of the artificial viscosity are used in the simulation in order to
determine the appropriate values. It turns out that the numerical solutions of
SPH simulation of sound interference agree well with the theoretical results.
The artificial viscosity can improve the accuracy of the sound interference
simulation. The appropriate values of computational parameters of the artificial viscosity are recommended in this paper.
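[For readers unfamiliar with the Monaghan-type term discussed above, a minimal sketch of the pairwise artificial-viscosity formula follows; the default alpha, beta, and epsilon values are conventional textbook choices, not the parameter recommendations of this paper:]

```python
import numpy as np

def monaghan_viscosity(r_ab, v_ab, h, c_bar, rho_bar,
                       alpha=1.0, beta=2.0, eps=0.01):
    """Monaghan artificial-viscosity term Pi_ab for one particle pair.
    r_ab, v_ab: relative position and velocity vectors; h: smoothing
    length; c_bar, rho_bar: pair-averaged sound speed and density.
    Dissipation is applied only when the particles approach each other."""
    vr = float(np.dot(v_ab, r_ab))
    if vr >= 0.0:          # receding pair: no artificial dissipation
        return 0.0
    mu = h * vr / (float(np.dot(r_ab, r_ab)) + eps * h * h)
    return (-alpha * c_bar * mu + beta * mu * mu) / rho_bar
```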
2:45
3pNS8. Smoothed particle hydrodynamics simulation of sound reflection and transmission. YongOu Zhang (School of Naval Architecture and
Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan 430074, China,
zhangyo1989@gmail.com), Tao Zhang (School of Naval Architecture and
Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, Hubei Province,
China), Huajiang Ouyang (School of Eng., Univ. of Liverpool, Liverpool,
United Kingdom), and TianYun Li (School of Naval Architecture and
Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, China)
Mesh-based methods are widely used in acoustic simulations nowadays. However, acoustic problems with complicated domain topologies
and multiphase systems are difficult to describe with these methods.
In contrast, Smoothed Particle Hydrodynamics (SPH), as a Lagrangian method, does not have much trouble solving these problems. The
present paper aims to simulate the reflection and transmission of sound
waves with the SPH method in the time domain. First, the linearized acoustic equations are represented in SPH form using the particle approximation. Then, one-dimensional sound reflection and transmission are
simulated with the SPH method and the solutions are compared with the
theoretical results. Finally, the effects of smoothing length and neighboring
particle numbers on the computation are discussed. The errors of sound
pressure, particle velocity, and change of density show that the SPH
method is feasible in simulating the reflection and transmission of sound
waves. Meanwhile, the relationship between the characteristic impedance
and the reflected waves obtained by the SPH simulation is consistent with
the theoretical result.
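[As a point of reference for the impedance relationship mentioned above, the normal-incidence pressure reflection and transmission coefficients at a fluid–fluid interface can be sketched as follows; this is the standard textbook result, not the authors' SPH formulation:]

```python
def interface_coefficients(z1, z2):
    """Normal-incidence pressure reflection (R) and transmission (T)
    coefficients for a plane wave crossing from characteristic
    impedance z1 into z2. Satisfies the continuity relation 1 + R = T."""
    r = (z2 - z1) / (z2 + z1)
    t = 2.0 * z2 / (z2 + z1)
    return r, t
```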
3:00
3pNS9. A high-order Cartesian-grid finite-volume method for aeroacoustics simulations. Mehrdad H. Farahani (Head and Neck Surgery,
UCLA, 31-24 Rehab Ctr., UCLA School of Medicine, 1000 Veteran Ave.,
Los Angeles, CA 90095, mh.farahani@gmail.com), John Mousel (Mech.
and Industrial Eng., The Univ. of Iowa, Iowa City, IA), and Sarah Vigmostad (Biomedical Eng., The Univ. of Iowa, Iowa City, IA)
A moving-least-square based finite-volume method is developed to simulate acoustic wave propagation and scattering from complicated solid geometries. This hybrid method solves the linearized perturbed compressible
equations as the governing equations of the acoustic field. The solid boundaries are embedded in a uniform Cartesian grid and represented using level
set fields. Thus, the current approach avoids unstructured grid generation for
the irregular geometries. The desired boundary conditions are imposed
sharply on the immersed boundaries using a ghost fluid method. The scope
of the implementation of the moving-least-square approach in the
current solver is threefold: reconstruction of the field variables on cell faces
for high-order flux construction, population of the ghost cells based on the
desired boundary condition, and filtering the high wave number modes near
the immersed boundaries. The computational stencils away from the boundaries are identical; hence, only one moving-least-square shape-function is
computed and stored with its underlying grid pattern for all the interior
cells. This feature significantly reduces the memory requirement of the
acoustic solver compared to a similar finite-volume method on an irregular
unstructured mesh. The acoustic solver is validated against several benchmark problems.
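[To illustrate the moving-least-squares reconstruction step described above, here is a one-dimensional toy version; the Gaussian weight function, polynomial degree, and parameter names are illustrative assumptions, not the authors' implementation:]

```python
import numpy as np

def mls_value(x_nodes, f_nodes, x_eval, h, degree=2):
    """Moving-least-squares reconstruction in 1-D: fit a local polynomial
    of the given degree to (x_nodes, f_nodes) with Gaussian weights of
    width h centered at x_eval, and return the fitted value at x_eval."""
    d = np.asarray(x_nodes, dtype=float) - x_eval
    w = np.exp(-(d / h) ** 2)                       # Gaussian weights
    V = np.vander(d, degree + 1, increasing=True)   # local monomial basis
    sw = np.sqrt(w)                                 # weighted least squares
    coeffs, *_ = np.linalg.lstsq(V * sw[:, None],
                                 sw * np.asarray(f_nodes, dtype=float),
                                 rcond=None)
    return coeffs[0]                                # polynomial value at d = 0
```

Because the local fit reproduces polynomials up to the chosen degree exactly, reconstructing a quadratic field returns it to round-off accuracy, which is a convenient sanity check for such schemes.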
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
MARRIOTT 9/10, 1:00 P.M. TO 3:25 P.M.
Session 3pUW
Underwater Acoustics: Shallow Water Reverberation I
Dajun Tang, Chair
Applied Physics Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105
Chair’s Introduction—1:00
Invited Papers
1:05
3pUW1. Overview of reverberation measurements in Target and Reverberation Experiment 2013. Jie Yang, Dajun Tang, Brian T.
Hefner, Kevin L. Williams (Appl. Phys. Lab, Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, jieyang@apl.washington.
edu), and John R. Preston (Appl. Res. Lab., Penn State Univ., State College, PA)
The Target and REverberation EXperiment 2013 (TREX13) was carried out off the coast of Panama City, Florida, from 22 April to
16 May, 2013. Two fixed-source/fixed-receiver acoustic systems were used to measure reverberation over time under diverse environmental conditions, allowing study of reverberation level (RL) dependence on bottom composition, sea surface conditions, and water column properties. Beamformed RL data are categorized to facilitate studies emphasizing (1) bottom reverberation; (2) sea surface impact;
(3) biological impact; and (4) target echo. This presentation is an overview of RL over the entire experiment, summarizing major observations and providing a road map and suitable data sets for follow-up efforts on model/data comparisons. Emphasis will be placed on
the dependence of RL on local geoacoustic properties and sea surface conditions. [Work supported by ONR.]
1:25
3pUW2. Non-stationary reverberation observations from the shallow water TREX13 reverberation experiments using the
FORA triplet array. John R. Preston (ARL, Pennsylvania State Univ., P. O. Box 30, MS3510, State College, PA 16804, jrp7@arl.psu.
edu), Douglas A. Abraham (CausaSci LLC, Ellicott City, MD), and Jie Yang (APL, Univ. of Washington, Seattle, WA)
A large experimental effort called TREX13 was conducted in April–May 2013 off Panama City, Florida. As part of this effort, reverberation and clutter measurements were taken in a fixed-fixed configuration in very shallow water (~20 m) over a 22-day period. Results
are presented characterizing reverberation, clutter, and noise in the 1800–5000 Hz band. The received data are taken from the triplet subaperture of the Five Octave Research Array (FORA). The array was fixed 2 m off the sea floor and data were passed to a nearby moored
ship (the R/V Sharp). An ITC 2015 source transducer was fixed 1.1 m off the seafloor nearby. Pulses comprising gated CWs and
LFMs were used in this study. Matched filtered polar plots of the reverberation and clutter are presented using the FORA triplet beamformer. There are clear indications of biologic scattering. Some of the nearby shipwrecks are clearly visible in the clutter, as are reflections from a DRDC air-filled hose. The noise data show a surprising amount of time-dependent anisotropy. Some statistical
characterization of these various components of the reverberation is presented using K-distribution-based algorithms to note differences
in the estimated shape parameter. Help from the Applied Physics Laboratory at the University of Washington was crucial to this effort.
[Work supported by ONR code 322OA.]
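[As background on the K-distribution shape parameter mentioned above, a simple method-of-moments estimator from intensity samples uses the relation E[I^2]/E[I]^2 = 2(1 + 1/alpha); this is a textbook estimator, not necessarily the algorithm used in the talk:]

```python
import numpy as np

def k_shape_mom(intensity):
    """Method-of-moments estimate of the K-distribution shape parameter
    alpha from matched-filter intensity samples, via
    E[I^2]/E[I]^2 = 2(1 + 1/alpha). Small alpha indicates heavy-tailed,
    clutter-like data; alpha -> infinity recovers Rayleigh statistics."""
    i = np.asarray(intensity, dtype=float)
    m2 = np.mean(i ** 2) / np.mean(i) ** 2
    if m2 <= 2.0:                  # at or under the Rayleigh limit
        return np.inf
    return 2.0 / (m2 - 2.0)
```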
Contributed Paper
1:45
3pUW3. Propagation measurement using source tow and moored vertical line arrays during TREX13. William S. Hodgkiss, David Ensberg
(Marine Physical Lab, Scripps Inst. of Oceanogr., La Jolla, CA), and Dajun
Tang (Appl. Phys. Lab, Univ of Washington, 1013 NE 40th St., Seattle, WA
98105, djtang@apl.washington.edu)
The objective of TREX13 (Target and Reverberation EXperiment 2013)
is to investigate shallow water reverberation by concurrently measuring
propagation, local backscatter, and reverberation, as well as sufficient environmental parameters needed to achieve unambiguous model/data
comparison. During TREX13 the Marine Physical Laboratory (MPL) conducted propagation and forward scatter measurements. The MPL effort during TREX13 included deploying three, 32-element (0.2 m element spacing),
vertical line arrays along the Main Reverberation Track at a bearing of
~128° and ranges of ~2.4 km, ~4.2 km, and ~0.5 km from the R/V Sharp,
where reverberation measurements were being made. In addition, MPL carried out repeated source tows in the band of 2–9 kHz along the Main Reverberation Track, using tonal and LFM waveforms. The experimental
procedure is described and the resulting source-tow data are examined in the context of transmission loss and its implications for reverberation.
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
Invited Papers
2:00
3pUW4. Comparison of signal coherence for continuous active and pulsed active sonar measurements in littoral waters. Paul C.
Hines (Dept. of Elec. and Comput. Eng., Dalhousie Univ., PO Box 15000, Halifax, NS B3H 4R2, Canada, phines50@gmail.com), Stefan
M. Murphy (Defence R&D Canada, Dartmouth, NS, Canada), and Keaton T. Hicks (Dept. of Mech. Eng., Dalhousie Univ., Halifax, NS,
Canada)
Military sonars must detect, localize, classify, and track submarine threats from distances safely outside their circle of attack. However, conventional pulsed active sonars (PAS) have duty cycles on the order of 1% which means that 99% of the time, the track is out of
date. In contrast, continuous active sonars (CAS) have a 100% duty cycle, which enables continuous updates to the track. This should
significantly improve tracking performance. However, one would typically want to maintain the same bandwidth for a CAS system as
for the PAS system it might replace. This will provide a significant increase in the time-bandwidth product, but may not produce the
increase in gain anticipated if there are coherence limitations associated with the acoustic channel. To examine the impact of the acoustic
channel on the gain for the two pulse types, an experiment was conducted as part of the Target and Reverberation Experiment (TREX)
in May 2013 using a moored active sonar and three passive acoustic targets, moored at ranges from 2 to 6 km away from the sonar. In
this paper, preliminary results from the experiment will be presented. [Work supported by the U.S. Office of Naval Research.]
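The time-bandwidth argument above can be made concrete. Assuming, purely for illustration, a 1 kHz bandwidth, a 0.5 s PAS pulse, and a 50 s ping interval over which a CAS waveform transmits continuously, the ideal coherent processing gain scales as 10 log10 of the time-bandwidth product:

```python
import math

def coherent_gain_db(duration_s, bandwidth_hz):
    """Ideal matched-filter processing gain: 10*log10(time-bandwidth product)."""
    return 10.0 * math.log10(duration_s * bandwidth_hz)

bandwidth = 1000.0                              # Hz (hypothetical)
pas_gain = coherent_gain_db(0.5, bandwidth)     # 0.5 s pulse, ~1% duty cycle
cas_gain = coherent_gain_db(50.0, bandwidth)    # continuous over a 50 s interval
extra_gain = cas_gain - pas_gain                # 10*log10(1/duty cycle) = 20 dB
```

The 20 dB increase is realized only if the channel remains coherent over the full integration time, which is precisely the limitation the experiment probes.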
2:20
3pUW5. Reverberation and biological clutter in continental shelf waveguides. Ankita D. Jain, Anamaria Ignisca (Mech. Eng.,
Massachusetts Inst. of Technol., Rm. 5-229, 77 Massachusetts Ave., Cambridge, MA 02139, ankitadj@mit.edu), Mark Andrews, Zheng
Gong (Elec. & Comput. Eng., Northeastern Univ., Boston, MA), Dong Hoon Yi (Mech. Eng., Massachusetts Inst. of Technol.,
Cambridge, MA), Purnima Ratilal (Elec. & Comput. Eng., Northeastern Univ., Boston, MA), and Nicholas C. Makris (Mech. Eng.,
Massachusetts Inst. of Technol., Cambridge, MA)
Seafloor reverberation in continental shelf waveguides is the primary limiting factor in active sensing of biological clutter in the
ocean for noise unlimited scenarios. The detection range of clutter is determined by the ratio of the intensity of scattered returns from
clutter versus the seafloor in a resolution cell of an active sensing system. We have developed a Rayleigh-Born volume scattering model
for seafloor scattering in an ocean waveguide. The model has been tested with data collected from a number of Ocean Acoustic Waveguide Remote Sensing (OAWRS) experiments in distinct US Northeast coast continental shelf environments, and has been shown to provide
accurate estimates of seafloor reverberation over wide areas for various source frequencies. We estimate scattered returns from fish clutter by combining ocean-acoustic waveguide propagation modeling that has been calibrated in a variety of continental shelf environments
for OAWRS applications with a model for fish target strength. Our modeling of seafloor reverberation and scattered returns from fish
clutter is able to explain and elucidate OAWRS measurements along the US Northeast coast.
Contributed Papers
2:40
3pUW6. Transmission loss and reverberation variability during TREX13. Sean Pecknold (DRDC Atlantic Res. Ctr., PO Box 1012, Dartmouth, NS B2Y 3Z7, Canada, sean.pecknold@drdc-rddc.gc.ca), Diana McCammon (McCammon Acoust. Consulting, Waterville, NS, Canada), and Dajun Tang (Ocean Acoust., Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
The ONR-funded Target and Reverberation Experiment 2013 (TREX13) took place in the Northeastern Gulf of Mexico near Panama City, Florida, during April and May of 2013. During this trial, which took place in a shallow-water (20 m deep) environment, several sets of one-way and two-way acoustic transmission loss and reverberation data were acquired. Closed-form expressions are derived to trace the uncertainty in the inputs to a Gaussian beam propagation model through the model to obtain an estimate of the uncertainty in the output, both for transmission loss and for reverberation. The measured variability of the TREX environment is used to compute an estimate of the expected transmission loss and reverberation variability. These estimates are then compared to the measured acoustic data from the trial.
2:55
3pUW7. Transmission loss and direction of arrival observations from a source in shallow water. David R. Dall’Osto (Appl. Phys. Lab., Univ. of Washington, 1013 N 40th St., Seattle, WA 98105, dallosto@apl.washington.edu) and Peter H. Dahl (Appl. Phys. Lab. and Mech. Eng. Dept., Univ. of Washington, Seattle, WA)
Signals generated by the source used in the reverberation studies of the Target and Reverberation Experiment (TREX) were recorded by a receiving array located 4.7 km downrange. The bathymetry over this range is relatively flat, with a water depth of 20 m. The receiving system consists of a 7-channel vertical line array, a 4-channel horizontal line array oriented perpendicular to the propagation direction, and a 4-channel vector sensor (3-component vector and one pressure), with all channels recorded coherently. Transmissions were made once every 30 seconds, and over a two-hour recording period, changes in the frequency content, amplitude, and direction were observed. As both the source and receiving array are at fixed positions in the water column, these observations are assumed to be due to changes in the environment. Interpretation of the data is given in terms of the evolving sea-surface conditions, the presence of nearby scatterers such as fish, and reflection/refraction due to the sloping shoreline.
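The closed-form uncertainty propagation described in 3pUW6 can be illustrated with first-order (delta-method) error propagation through a deliberately simple transmission-loss stand-in (spherical spreading plus linear absorption) rather than the Gaussian beam model used in the paper; all numerical values below are hypothetical.

```python
import math

def tl_db(range_m, alpha_db_per_m):
    """Toy transmission loss: spherical spreading plus linear absorption."""
    return 20.0 * math.log10(range_m) + alpha_db_per_m * range_m

def tl_sigma(range_m, alpha, sigma_r, sigma_alpha, h=1e-6):
    """First-order propagated standard deviation of TL, with sensitivities
    obtained by central finite differences (inputs assumed independent)."""
    dtl_dr = (tl_db(range_m + h, alpha) - tl_db(range_m - h, alpha)) / (2 * h)
    dtl_da = (tl_db(range_m, alpha + h) - tl_db(range_m, alpha - h)) / (2 * h)
    return math.hypot(dtl_dr * sigma_r, dtl_da * sigma_alpha)

# Hypothetical numbers: 4 km range known to +/-50 m, absorption
# 0.001 dB/m known to +/-0.0005 dB/m.
sigma = tl_sigma(4000.0, 0.001, sigma_r=50.0, sigma_alpha=5e-4)
```

Here the absorption uncertainty dominates (its sensitivity is simply the range), a pattern the closed-form approach makes explicit for each model input.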
3:10
3pUW8. Effect of a roughened sea surface on shallow water propagation
with emphasis on reactive intensity obtained with a vector sensor. David
R. Dall’Osto (Appl. Phys. Lab., Univ. Washington, 1013 N 40th St., Seattle,
WA 98105, dallosto@apl.washington.edu) and Peter H. Dahl (Appl. Phys.
Lab. and Mechanical Eng. Dept., Univ. of Washington, Seattle, WA)
In this study, sea-surface conditions during the Target and Reverberation Experiment (TREX) are analyzed. The sea-surface directional spectrum
was experimentally measured up to 0.6 Hz with two wave buoys separated
by 5 km. The analysis presented here focuses on propagation relating to
three canonical sea-surfaces observed during the experiment: calm conditions, and rough conditions with waves either perpendicular or parallel to
the primary propagation direction. Acoustic data collected during calm and
rough conditions show a significant difference in the amount of out-of-plane
scattering. Interference due to this out-of-plane scattering is observed in the
component of reactive intensity perpendicular to the propagation direction.
These observations are compared with those generated using a model of the
sea-surface scattering based on a combination of buoy-measured and modeled directional spectrum. Simulated sea-surfaces are also constructed for
this numerical study. A model for wind waves is used to obtain surface
wavenumbers greater than those measured by the wave buoys (~1.5 rad/m).
Importantly, the spectral peak and its direction are well measured by the
buoys and no assumptions on fetch are required, resulting in a more realistic
wave spectrum and description of sea-surface conditions for acoustic
modeling.
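The reactive intensity at the center of 3pUW8 can be computed from collocated pressure and particle-velocity channels of a vector sensor: under one common convention, the complex intensity at each frequency is I(f) = (1/2) P(f) V*(f), whose real part is the active (propagating) intensity and whose imaginary part is the reactive intensity. A minimal single-tone sketch, with unit amplitudes and a 60-degree phase offset standing in for measured data:

```python
import numpy as np

fs = 1000.0                  # sample rate, Hz
f0 = 50.0                    # tone frequency, Hz (integer cycles in the record)
n = 1000
t = np.arange(n) / fs
phi = np.pi / 3              # pressure/velocity phase offset (hypothetical)

p = np.cos(2 * np.pi * f0 * t)          # pressure channel
v = np.cos(2 * np.pi * f0 * t - phi)    # one particle-velocity component

k = int(f0 * n / fs)                    # FFT bin of the tone
P = 2.0 * np.fft.fft(p)[k] / n          # complex amplitude of p at f0
V = 2.0 * np.fft.fft(v)[k] / n          # complex amplitude of v at f0

I = 0.5 * P * np.conj(V)                # complex intensity at f0
active, reactive = I.real, I.imag
```

With unit amplitudes the result is 0.5 cos(phi) active and 0.5 sin(phi) reactive; a purely propagating plane wave (phi = 0) carries no reactive component, which is why out-of-plane scattering shows up in the reactive part.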
WEDNESDAY AFTERNOON, 29 OCTOBER 2014
MARRIOTT 5, 3:30 P.M. TO 4:30 P.M.
Plenary Session, Annual Meeting, and Awards Ceremony
Judy R. Dubno, President
Acoustical Society of America
Annual Meeting of the Acoustical Society of America
Presentation of Certificates to New Fellows
Mingsian Bai – for contributions to nearfield acoustic holography
David S. Burnett – for contributions to computational acoustics
James E. Phillips – for contributions to vibration and noise control and for service to the Society
Bonnie Schnitta – for the invention and application of noise mitigation systems
David R. Schwind – for contributions to the acoustical design of theaters, concert halls, and film studios
Neil T. Shade – for contributions to education and to the integration of electroacoustics in architectural acoustics
Joseph A. Turner – for contributions to theoretical and experimental ultrasonics
Announcements and Presentation of Awards
Presentation to Leo L. Beranek on the occasion of his 100th Birthday
Rossing Prize in Acoustics Education to Colin H. Hansen
Pioneers of Underwater Acoustics Medal to Michael B. Porter
Silver Medal in Speech Communication to Sheila E. Blumstein
Wallace Clement Sabine Medal to Ning Xiang
WEDNESDAY EVENING, 29 OCTOBER 2014
7:30 P.M. TO 9:00 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Tuesday, Wednesday, and Thursday
evenings. On Tuesday the meetings will begin at 8:00 p.m., except for Engineering Acoustics which will hold its meeting starting at
4:30 p.m. On Wednesday evening, the Technical Committee on Biomedical Acoustics will meet starting at 7:30 p.m. On Thursday evening, the meetings will begin at 7:30 p.m.
Biomedical Acoustics Indiana A/B
These are working, collegial meetings. Much of the work of the Society is accomplished through actions that originate and are taken in these meetings, including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially invited to attend these meetings and to participate actively in the discussion.
ACOUSTICAL SOCIETY OF AMERICA
PIONEERS OF UNDERWATER ACOUSTICS MEDAL
Michael B. Porter
2014
The Pioneers of Underwater Acoustics Medal is presented to an individual, irrespective of nationality, age, or society affiliation, who has made an outstanding contribution to the science of underwater acoustics, as evidenced by publication of research in professional journals or by other accomplishments in the field. The award was named in honor of five pioneers in the field: H. J. W. Fay, R. A. Fessenden, H. C. Hayes, G. W. Pierce, and P. Langevin.
PREVIOUS RECIPIENTS
Harvey C. Hayes  1959
Albert B. Wood  1961
J. Warren Horton  1963
Frederick V. Hunt  1965
Harold L. Saxton  1970
Carl Eckart  1973
Claude W. Horton, Sr.  1980
Arthur O. Williams  1982
Fred N. Spiess  1985
Robert J. Urick  1988
Ivan Tolstoy  1990
Homer P. Bucker  1993
William A. Kuperman  1995
Darrell R. Jackson  2000
Frederick D. Tappert  2002
Henrik Schmidt  2005
William M. Carey  2007
George V. Frisk  2010
CITATION FOR MICHAEL B. PORTER
. . . for contributions to underwater acoustic modeling
INDIANAPOLIS, INDIANA • 29 OCTOBER 2014
Michael B. Porter describes his college days as “ending up” at Caltech, living in an
eleven-student communal environment, sleeping on a floor mattress, working various odd jobs, and mastering the culinary skill of baking beans, all in anticipation of an immediate future that would be dominated by his student loan. Aside from what to him were his
major undergraduate accomplishments, meeting his significant other, Laurel Henderson,
and developing crowd pleasing culinary talents, he also apparently learned some math and
physics. At Northwestern, he received his Ph.D. in Applied Math working for Ed Reiss; among other things, he developed numerical algorithms that were to become standard methods in the Underwater Acoustics (UW) community. Of the four most often used models in all of the UW community in the last quarter century or so, Michael B. Porter is the originator of two: the KRAKEN normal mode models and BELLHOP, a ray-Gaussian beam model.
However, these major contributions were only a few along the diverse research trail
that Michael pioneered. His research venues were equally diverse: his 35 years of research were conducted in researcher, professor, and management positions at more or less every type of research organization (government, academic, and private sector). His
research community impact is pervasive in that he is a coauthor of Computational Ocean
Acoustics, recently revised in a second edition, and he has also created and maintains the
Ocean Acoustics Library (OALIB), a site where anyone can download his MATLAB versions of all the major underwater acoustic propagation models. These latter activities alone
probably make Mike Porter a “household name” for the whole international community in
UW—but these are only part of the story.
Mike’s first pioneering acoustic contribution, made not too long after he was born in Quebec City in 1958, was an innovative glue-based repair procedure for woofer speakers that produced flying bits of speaker cone when plugged directly into a wall outlet. While this experience probably motivated him to explore numerical methods,
he did spend some time later working on transducers with George Benthien at the Naval
Ocean Systems Center (NOSC).
His groundbreaking Ph.D. thesis and seminal paper in 1984 in the Journal of the Acoustical Society of America (JASA) on an unconditionally stable approach to normal mode
computation laid the foundation for KRAKEN/KRAKENC and the SACLANTCEN
SNAP normal mode (NM) models, probably the two most used NM models in the world.
After his Ph.D. he was fortunate enough to collaborate with Homer Bucker at NOSC on
Gaussian beams from which BELLHOP would be an outgrowth. So, almost immediately
after his Ph.D. he was a leader in the UW modeling community.
I first met Mike when he came to the Naval Research Laboratory (NRL) in 1985 to work
in Orest Diachok’s Branch on Arctic (and other) acoustics. There he further developed his models into community-usable tools while also significantly contributing to the new area of
Matched Field Processing (MFP). We also established a close working relationship in that
area and in particular worked on a rapid method to do three-dimensional modal propagation, which he later enhanced to include more oceanographic as well as global propagation
phenomena. We have remained close friends and colleagues since those NRL days, including coauthoring Computational Ocean Acoustics with Finn Jensen and Henrik Schmidt.
In 1987 Mike joined Finn Jensen’s modeling group at SACLANTCEN, and it was
there that he worked with the rest of the coauthors (all from or at SACLANTCEN) on
Computational Ocean Acoustics. There he also developed ongoing research partnerships
with U. S. oceanographer Steve Piacsek as well as other European scientists. Much of
his SACLANTCEN research concerned range-dependent modeling, including a seminal
contribution to energy conservation of one-way equations, coupled mode modeling, and
chaotic effects in multipath environments.
He returned to the U. S. in 1991 to a faculty position at the New Jersey Institute of
Technology’s (NJIT) math department with David Stickler, Daljit Ahluwahlia, and Greg
Kriegsmann, and was rather quickly elevated to being one of its youngest full professors.
There he worked in the area of MFP, extending it to some complicated broadband scenarios (with Zoi-Heleni Michalopoulou) as well as further optimizing his models. While at
NJIT, he also did a sabbatical at the University of Algarve with his former SACLANTCEN
colleague Sergio Jesus and with the Portuguese and French hydrographers Yann Stephan
and Emanuel Coelho to study the acoustic effects of internal tides.
In 1999 he accepted a position at Science Applications International Corporation
(SAIC) as Assistant Vice President/Chief Scientist in its Ocean Science Division headed
by Peter Mikhalevsky. It was there that he began close collaborations with Paul Hursky,
Ahmad Abawi, Martin Siderius, and Keyko McDonald (from SPAWAR). At SAIC he also
completed his transition to heavy-duty experimental activity, which probably originated in his being misled at SACLANTCEN into thinking that at-sea experiments were associated with Michelin-rated dining. His subsequent growth as an at-sea scientist is evidenced by his role
as chief scientist on a series of multi-institutional acoustic communications (Acomms) sea
trials. Mike was uniquely qualified for this Acomms role in that the only practical model
to describe the Acomms channel was BELLHOP. So, as the experiment chief scientist he
was also the expert on the theoretical aspects of the project. He had progressed to a level
that made him one of a very few scientists in our community capable of leading the theory,
simulation, and experimental aspects of a large UW project. This was all happening while
he was also working in MFP and inverse methods at SAIC.
Ever restless and seeking new experiences, he founded a new company in 2004, Heat
Light and Sound, Inc. (HLS), taking with him Abawi, Hursky, and Siderius. At HLS he
has continued his research, lately being involved in ocean soundscapes, marine mammal
acoustics, and other environmentally-related areas as well as continuing on in his established fields of research. During this latter period he was also a coauthor of the seminal
paper in JASA (2006) on the passive fathometer with Martin Siderius and Chris Harrison.
Most important to me is that Mike has been my very good friend over these many years,
and it has been a pleasure to watch him share his friendship and his knowledge with a very
broad segment of the UW community. He is an author of acoustic models and a book that
are central to the acoustic community and has established the Ocean Acoustics Library, the
latter probably being the most important instrument in disseminating models to students as
well as seasoned researchers. Recognized early in his career with the A. B. Wood Medal,
Michael B. Porter’s career trajectory in Underwater Acoustics has truly been a pioneering
adventure. The ASA Pioneers of Underwater Acoustics Medal is a fitting recognition of
his many achievements.
WILLIAM A. KUPERMAN
ACOUSTICAL SOCIETY OF AMERICA
Silver Medal in
Speech Communication
Sheila E. Blumstein
2014
The Silver Medal is presented to individuals, without age limitation, for contributions to the advancement of science,
engineering, or human welfare through the application of acoustic principles, or through research accomplishment in
acoustics.
PREVIOUS RECIPIENTS
Franklin S. Cooper  1975
Gunnar Fant  1980
Kenneth N. Stevens  1983
Dennis H. Klatt  1987
Arthur S. House  1991
Peter Ladefoged  1994
Patricia K. Kuhl  1997
Katherine S. Harris  2005
Ingo R. Titze  2007
Winifred Strange  2008
David B. Pisoni  2010
CITATION FOR SHEILA E. BLUMSTEIN
. . . for contributions to understanding how acoustic signals are transformed into linguistic
representations
INDIANAPOLIS, INDIANA • 29 OCTOBER 2014
Sheila Blumstein was born in New York City, obtained a B.A. in Linguistics from
the University of Rochester, and a Ph.D. in Linguistics from Harvard University, under
the guidance of the legendary Roman Jakobson. Sheila’s dissertation, A Phonological
Investigation of Aphasic Speech, published as a book by Mouton in 1973, already clearly
indicated the focus of her research: the representation of speech and language in the brain.
Today, as the Albert D. Mead Professor of Cognitive and Linguistic Sciences at Brown
University, Sheila pursues this research agenda as vigorously as when she started there on
the faculty in 1970.
Sheila Blumstein has contributed immeasurably to our knowledge of the acoustics and
perception of speech. Specifically, her research addresses how the continuous acoustic
signal is transformed by perceptual and neural mechanisms into linguistically relevant
representations. Among her many significant contributions to the field of Speech Communication, the following two have had a profound impact on our field. First, through detailed
analysis of speech sounds, Sheila showed that the mapping between acoustic properties
and perceived phonetic categories is richer, and more consistent and invariant, than previously thought, a finding which necessitated a new conception of the relation between the
production and perception of speech. Second, Sheila’s finding that subtle yet systematic
acoustic differences can affect activation of word candidates in the mental lexicon indicated that acoustic information not directly relevant for phoneme identification is not discarded but is retained and plays a critical role in word comprehension, providing a crucial
piece of evidence in the ongoing debate about the structure of the mental lexicon.
At the time that Sheila started investigating the speech signal in the 1970s, the prevalent scientific opinion was that there was no simple mapping between acoustic signal and
perceived phonemes because the speech signal was too variable. Acoustic properties were
strongly affected by contextual factors such as variation in speaker, speaking rate, and
phonetic environment. Careful consideration of Gunnar Fant’s acoustic theory of speech
production led Sheila to the hypothesis that invariant acoustic properties could be found
in the speech signal. In contrast to previous research that was dependent on the speech
spectrograph, Sheila focused more on global acoustic properties such as the overall shape
of the spectrum at the release of the stop consonant. Through careful and detailed acoustic
analysis and subsequent perceptual verification, Sheila uncovered stable invariant acoustic
properties that consistently signaled important linguistic features such as place and manner
of articulation. Sheila supported these claims by investigating a variety of speech sound
classes (including stop consonants, fricatives, and approximants) in a variety of languages
because she fully appreciated that conclusions drawn on the basis of one language can
be misleading and universal generalizations can only be made after crosslinguistic comparisons. Sheila’s work on acoustic features resulted in a series of seminal publications
(1978-1987) in the Journal of the Acoustical Society of America, co-authored with Kenneth
Stevens and others.
By the late 1980s, research on speech perception had moved beyond the identification
of individual consonants and vowels to the comprehension of words and to the new field of
“auditory word recognition.” While there was a general consensus that word recognition
involves a process whereby information extracted from the speech signal is matched with
a stored representation in the mental lexicon, it was not clear whether all available acoustic information in the signal played a role in this matching process. In her seminal paper
“The effect of subphonetic differences on lexical access” (Cognition, 1994), Sheila and her
students showed that subtle acoustic variations which do not affect the categorization of a
phoneme nevertheless do affect word recognition. This was a very elegant demonstration
that subtle subphonemic acoustic information is not discarded before the lexicon is accessed
but instead plays a role in the comprehension of words. This was a very important finding
and necessitated reconsideration of the then dominant view that lexical access proceeds
on the basis of categorical phonemes rather than more fine-grained continuous acoustic
information.
Sheila is co-founder of Brown University’s Barus Speech Lab where she has taught,
supervised, and mentored hundreds of undergraduates, graduates, and postdocs. This lab is
one of the world’s leading research centers for the study of speech at all levels: acoustics,
psycholinguistics, and neurolinguistics. In addition to her speech research, Sheila is equally
known for her research on aphasia, focusing again on speech production and perception.
Just as Sheila was able to make use of technological advances to view the speech signal from a different perspective, she also capitalized on new brain imaging techniques to
augment her understanding of the brain that was based on behavioral data collected from
aphasic patients. Sheila’s most recent acoustic research also uses fMRI to investigate cortical regions involved in the perception of phonetic category invariance as well as neural
systems underlying lexical competition.
A quick glance at Sheila’s resume shows that she has garnered just about every honor
possible. She has been a Guggenheim Fellow, and a recipient of the Claude Pepper (Javits
Neuroscience) Investigator Award. She is a Fellow of the Acoustical Society of America,
the American Association for the Advancement of Science, the American Academy of Arts
and Sciences, the Linguistic Society of America, and the American Philosophical Society.
In addition, Sheila has served Brown University in many capacities, including Dean of the
College, Interim Provost, and Interim President. In all of the positions she has held, Sheila
has earned the admiration and respect of all constituencies. Her warm, supportive, patient
style renders an incisive critique into a constructive suggestion, reflecting her enviable
supervisory and administrative skills.
It is simply not possible to undertake work in acoustic phonetics, phonology, neuroimaging, or aphasia without referring to Sheila’s work. Sheila’s research has been continuously funded through federal research grants since the 1970s. Her research is not only
influential and pivotal, it is also incredibly inspiring. Her students have secured prestigious
positions and continue to conduct innovative research. The field would not be what it is
today without Sheila’s many seminal contributions spanning five decades.
ALLARD JONGMAN
JOAN SERENO
SHARI BAUM
ADITI LAHIRI
WALLACE CLEMENT SABINE AWARD
OF THE
ACOUSTICAL SOCIETY OF AMERICA
Ning Xiang
2014
The Wallace Clement Sabine Award is presented to an individual of any nationality who has furthered the knowledge of
architectural acoustics, as evidenced by contributions to professional journals and periodicals or by other accomplishments
in the field of architectural acoustics.
PREVIOUS RECIPIENTS
Vern O. Knudsen  1957
Floyd R. Watson  1959
Leo L. Beranek  1961
Erwin Meyer  1964
Hale J. Sabine  1968
Lothar W. Cremer  1974
Cyril M. Harris  1979
Thomas D. Northwood  1982
Richard V. Waterhouse  1990
A. Harold Marshall  1995
Russell Johnson  1997
Alfred C. C. Warnock  2002
William J. Cavanaugh  2006
John S. Bradley  2008
J. Christopher Jaffe  2011
SILVER MEDAL IN
ARCHITECTURAL ACOUSTICS
The Silver Medal is presented to individuals, without age limitation, for contributions to the advancement of science, engineering,
or human welfare through the application of acoustic principles, or through research accomplishment in acoustics.
PREVIOUS RECIPIENT
Theodore J. Schultz  1976
CITATION FOR NING XIANG
. . . for contributions to measurements and analysis techniques, and numerical simulation
of sound fields in coupled rooms
INDIANAPOLIS, INDIANA • 29 OCTOBER 2014
Ning Xiang, 16th recipient of the Society’s Wallace Clement Sabine Medal, is well
known to members of the Society and the worldwide acoustics community for his work
in binaural scale-model measurement, theory and practice of maximum-length sequences,
and Bayesian signal processing. A consummate theoretician and experimentalist, he produces work that reflects the growing importance of computational modeling and model-based signal processing across the broader field of acoustics, yet is unique for making significant general contributions while maintaining a strong and specific focus on architectural acoustics.
Ning formally began his career in acoustics in 1984, arriving as a young student from
China at the office of his doctoral supervisor Jens Blauert. Though he was more or less inexperienced in the field and hardly able to communicate in German, his mentors and colleagues of that time well remember his fierce determination and commitment. This earnest
enthusiasm for the work would serve him well over his professional career, becoming one
of the key attributes he sought to instill in the many graduate students he would come to
supervise.
Earning a Master’s degree (Diplom-Ingenieur) in 1986 from Ruhr-University Bochum,
Ning went on to earn a Ph.D. in 1990 for his development of a binaural acoustical modeling system. This work, which involved design and fabrication of novel scale-model transducers and a miniature (1/10 scale) binaural artificial head, set early the high standard his
future experimental work would demonstrate. At the same time, his doctoral work firmly
established him as a theorist and signal processor for his research and development of
measurement algorithms and software based on maximum-length sequences. This included
a new and effective factorization method required for application of Fast Hadamard Transforms and development of fast test methods for long maximum-length sequences through
identification of the similarity with Morse-Thue sequences [Signal Processing (1992)].
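The maximum-length sequences at the heart of this measurement work are generated by a binary linear-feedback shift register with a primitive degree-n feedback polynomial. A small degree-4 sketch (one period of length 2^4 - 1 = 15, taps at stages 4 and 3) illustrates the two-valued circular autocorrelation that makes MLS measurement possible; this is a textbook construction, not Ning's factorization method.

```python
def mls(degree, taps):
    """Generate one period of a maximum-length sequence (bits 0/1) from a
    Fibonacci LFSR with the given feedback taps (1-indexed stage positions)."""
    state = [1] * degree                  # any nonzero seed works
    seq = []
    for _ in range(2 ** degree - 1):
        seq.append(state[-1])             # output bit
        fb = 0
        for tap in taps:
            fb ^= state[tap - 1]          # XOR of tapped stages
        state = [fb] + state[:-1]         # shift the feedback bit in
    return seq

bits = mls(4, (4, 3))                     # primitive feedback for degree 4
s = [1 if b else -1 for b in bits]        # map to +/-1 for correlation

def circ_autocorr(x, lag):
    return sum(x[i] * x[(i + lag) % len(x)] for i in range(len(x)))
```

The circular autocorrelation equals N at zero lag and -1 at every other lag; it is this near-ideal property, combined with the Fast Hadamard Transform, that lets an impulse response be recovered from an MLS measurement in O(N log N) operations.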
It was through these important findings in maximum-length sequences that Ning began
a long and fruitful collaborative relationship with Manfred Schroeder [Journal of the
Acoustical Society of America (JASA) (2003)].
After completing his doctoral degree, Ning joined the technical staff of HEAD acoustics in Herzogenrath, Germany, as a research scientist/engineer. Here too he continued to bring together theory and practice, pairing experimental work with signal processing,
forming an on-going and fruitful professional relationship with founder Klaus Genuit that
led to a number of important papers [JASA (1995), ACUSTICA - acta acustica (1996)].
This was followed by an appointment in 1997 as a research scientist at the Fraunhofer
Institute for Building Physics in Stuttgart, Germany. Here the application of binaural
measurement technology to performance spaces remained his focus. While this was to be
Ning’s last appointment in Germany, his many professional relationships remain strong
and he is well remembered by his colleagues for his ability to appealingly distill in his lectures and talks the rigor of his analytical thinking into well-organized, clearly articulated
concepts without sacrificing substance or detail.
In 1998 Ning accepted a position as a Research Scientist and Research Associate Professor with the National Center for Physical Acoustics and the Department of Electrical
Engineering of the University of Mississippi. His work on acoustic/seismic coupling for
buried mine detection, conducted in collaboration with James Sabatier and Paul Goggans,
was a departure in domain from his prior work in room and building acoustics. But, characteristically, it became for Ning an opportunity for fertile cross-pollination between subdisciplines. Advances he had made in maximum-length-sequence measurement carried over into acoustic/seismic measurement, while advances in Bayesian signal processing
for mine detection gave him an important new approach to parameter estimation from single- and multiple-slope Schroeder decay curves of noisy impulse responses
[JASA (2001, 2003)].
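The Schroeder decay curves mentioned here come from backward integration of the squared impulse response; a minimal sketch (Python, with a synthetic noiseless single-slope example and invented parameters):

```python
import numpy as np

def schroeder_decay_db(h):
    """Backward-integrated energy decay curve (Schroeder integration):
    EDC(t) = integral of h^2 from t to the end, in dB re total energy."""
    tail_energy = np.cumsum(h[::-1] ** 2)[::-1]   # tail sum at each sample
    return 10.0 * np.log10(tail_energy / tail_energy[0])

# Synthetic single-slope response: exponential envelope, no noise
fs = 1000                        # sample rate in Hz (invented)
t = np.arange(fs) / fs           # one second of time axis
h = np.exp(-3.0 * t)             # decay constant chosen arbitrarily
edc = schroeder_decay_db(h)      # linear in dB, roughly -26 dB/s here
```

A single-slope decay like this is fit with one line; acoustically coupled rooms produce two or more slopes in the EDC, which is where Bayesian model and parameter estimation enters.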
In 2003, Ning was appointed Associate Professor at the Rensselaer Polytechnic Institute
(RPI). Returning his full attention to the field of architectural acoustics, Ning expanded
his on-going work in maximum-length sequences and Bayesian estimation. His work in
2239
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting Acoustical Society of America
2239
parameter and model estimation for systems of acoustically coupled rooms led him naturally
into development of new computational diffusion-equation models for simulation of acoustically coupled rooms and detailed scale-model measurements to validate these models.
This work—much of it carried out with his Master’s and Ph.D. students and conducted
with a worldwide group of collaborators—has grown prodigiously and now encompasses
modeling, measurement, and simulation of scattering, material impedance, and mode distribution, in addition to binaural measurement and multiple-slope decay curve analysis.
It is especially fitting that Ning should receive this award directly after J. Christopher
Jaffe, as Ning has been instrumental in bringing to full fruition the work begun by Dr.
Jaffe in founding the Graduate Program in Architectural Acoustics at RPI in 1999 and the
research in coupled rooms that was the initial focus of that program [JASA (2005, 2006,
2008, 2009, 2011, 2013)]. The flourishing and growth of the RPI program directed by Ning
since 2005 is, along with his other scholarly and professional accomplishments, one of his
enduring contributions to the field of architectural acoustics.
This educational and mentoring role cannot be overemphasized; in the close community of architectural and room acoustics Ning’s direct role in training a new generation of
acousticians has been felt across academia, government, and industry in both the U.S. and
abroad. Whether in consulting practice in Turkey, on Fulbright fellowships in Finland, or as
professors in the United States, a community of young acousticians daily reaps the benefits
of having been educated to pursue the field of architectural acoustics with scientific and
engineering rigor, coupled with a bold willingness to investigate new ideas and an openness
to the worldwide acoustics community. Doubtless Ning recalls his own enthusiasm as a
young graduate student in Bochum and seeks to cultivate that in his own students.
For the reasons cited here and those which space does not allow us to mention but are
well known to his colleagues, students, and the Society as a whole, we are pleased and
privileged to present Dr. Ning Xiang with the Wallace Clement Sabine Medal.
JASON E. SUMMERS
JENS BLAUERT
THURSDAY MORNING, 30 OCTOBER 2014
MARRIOTT 7/8, 8:40 A.M. TO 11:40 A.M.
Session 4aAAa
Architectural Acoustics, Speech Communication, and Noise: Room Acoustics Effects on Speech
Comprehension and Recall I
Lily M. Wang, Cochair
Durham School of Architectural Engineering and Construction, University of Nebraska - Lincoln, PKI 101A, 1110 S. 67th St.,
Omaha, NE 68182-0816
David H. Griesinger, Cochair
Research, David Griesinger Acoustics, 221 Mt Auburn St #504, Cambridge, MA 02138
Chair’s Introduction—8:40
Invited Papers
8:45
4aAAa1. Speech recognition in adverse conditions. Ann Bradlow (Linguist, Northwestern Univ., 2016 Sheridan Rd., Evanston, IL,
abradlow@northwestern.edu)
Speech recognition is highly sensitive to adverse conditions at all stages of the speech chain, i.e., the sequence of events that transmits a message from the mind/brain of a speaker through the acoustic medium to the mind/brain of a listener. Adverse conditions can
originate from source degradations (e.g., disordered or foreign-accented speech), environmental disturbances (e.g., background sounds
with or without energetic masking), and/or receiver (i.e., listener) limitations (e.g., impaired or incomplete language models, peripheral
deficiencies, or tasks with high cognitive load). (For more on this classification system, see Mattys, Davis, Bradlow, & Scott, 2012, Language and Cognitive Processes, 27). This talk will present a series of studies focused on linguistic aspects of these various possible sources of adverse conditions for speech recognition. In particular, we will demonstrate separate and combined influences of the talker’s
language background (a possible source degradation), the presence of a background speech masker in either the same or a different language from that of the target speech (a possible environmental degradation), and the listener’s experience with the language of the target
and/or masking speech (a possible receiver limitation). Together, these studies demonstrate strong influences of language and linguistic
experience on speech recognition in adverse conditions.
9:05
4aAAa2. Speech intelligibility and sentence recognition memory in noise. Rajka Smiljanic (Linguist, Univ. of Texas at Austin,
Calhoun Hall 407, 1 University Station B5100, Austin, TX 78712-0198, rajka@mail.utexas.edu)
Much of daily communication occurs in adverse conditions that negatively impact various levels of speech processing. These adverse
conditions may originate in talker- (fast, reduced speech), signal- (noise or degraded target signal), and listener- (impeded access or
decoding of the target speech signal) oriented limitations, and may have consequences for perceptual processes, representations, attention, and memory functions (see Mattys et al., 2012 for a review). In this talk, I first discuss a set of experiments that explore the extent
to which listener-oriented clear speech and speech produced in response to noise (noise-adapted speech) by children, young adults and
older adults contribute to enhanced word recognition in challenging listening conditions. Next, I discuss whether intelligibility-enhancing speaking style modifications impact speech processing beyond word recognition, namely recognition memory for sentences. The
results show that effortful speech processing in challenging listening environments can be improved by speaking style adaptations on
the part of the talker. In addition to enhanced intelligibility, a substantial improvement in sentence recognition memory can be achieved
through speaker adaptations to the environment and to the listener when in adverse conditions. These results have implications for the
quality of speech communication in a variety of environments, such as classrooms and hospitals.
9:25
4aAAa3. Reducing cognitive demands on listeners by speaking clearly in noisy places. Kristin Van Engen (Psych., Washington
Univ. in St. Louis, One Brookings Dr., Campus Box 1125, Saint Louis, MO 63130-4899, kvanengen@wustl.edu)
Listeners have more difficulty identifying spoken words in noisy environments when those words have many phonological neighbors
(i.e., similar-sounding words in the lexicon) than when they have few phonological neighbors. This difficulty appears to be exacerbated
in old age, where reductions in inhibitory control presumably make it more difficult to cope with competition from similar-sounding
words. Fortunately, word recognition in noise can generally be improved for a wide range of listeners (e.g., younger and older adults,
individuals with and without hearing impairment) when speakers adopt a clear speaking style. This study investigated whether clear
speech, in addition to generally increasing speech intelligibility, also reduces the inhibitory demands associated with identifying lexically difficult words in noise for younger and older adults. The results show that, indeed, the difference between rates of identification
for words with many versus few neighbors was eliminated when those words were produced in clear speech. Data on the roles of individual differences (e.g., hearing, working memory, and inhibitory control) that may contribute to word identification in noise will also be
presented.
9:45
4aAAa4. Improved speech understanding and amplitude modulation sensitivity in rooms: Wait a second! Pavel Zahorik (Div. of
Communicative Disord., Dept. of Surgery, Univ. of Louisville School of Medicine, Psychol. and Brain Sci., Life Sci. Bldg. 317, Louisville, KY 40292, pavel.zahorik@louisville.edu), Paul W. Anderson (Dept. of Psychol. and Brain Sci., Univ. of Louisville, Louisville,
KY), Eugene Brandewie (Dept. of Psych., Univ. of Minnesota, Minneapolis, MN), and Nirmal K. Srinivasan (National Ctr. for Rehabilitative Auditory Res., Portland VA Medical Ctr., Portland, OR)
Sound transmission between source and receiver can be profoundly affected by room acoustics, yet under many circumstances, these
acoustical effects have relatively minor perceptual consequences. This may be explained, in part, by listener adaptation to the acoustics
of the listening environment. Here, evidence that room adaptation improves speech understanding is summarized. The adaptation is
rapid (around 1 s), and observable for a variety of speech materials. It also appears to depend critically on the amplitude modulation
characteristic of the signal reaching the ear, and as a result, similar room adaptation effects have been observed for measurements of amplitude modulation sensitivity. A better understanding of room adaptation effects will hopefully contribute to improved methods for
speech transmission in rooms for both normally hearing and hearing-impaired listeners. [Work supported by NIDCD.]
10:05–10:20 Break
10:20
4aAAa5. The importance of attention, localization, and source separation to speech cognition and recall. David H. Griesinger
(Res., David Griesinger Acoust., 221 Mt Auburn St #504, Cambridge, MA 02138, dgriesinger@verizon.net)
Acoustic standards for speech are based on word recognition. But for successful communication, sound must be detected and separated
from noise and other streams; phones, syllables, and words must be recognized and parsed into sentences; meaning must be found by
relating the sentences to previous knowledge; and finally, information must be stored in long-term memory. All of these tasks require
time and working memory. Acoustical conditions that increase the difficulty of any part of the task reduce recall. But attention is possibly the most important factor in successful communication. There is compelling anecdotal evidence that sound profoundly and involuntarily influences attention. Humans detect in fractions of a second whether a sound source is close, independent of its loudness and
frequency content. When sound is perceived as close it demands a degree of attention that distant sound does not. The mechanism of
detection relies on the phase relationships between harmonics of complex tones in the vocal formant range, properties of sound that also
ease word recognition and source separation. We will present the physics of this process and the acoustic properties that enable it. Our
goal is to increase attention and recall in venues of all types.
10:40
4aAAa6. Release from masking in simulated reverberant environments. Nirmal Kumar Srinivasan, Frederick J. Gallun, Sean D.
Kampel, Kasey M. Jakien, Samuel Gordon, and Megan Stansell (National Ctr. for Rehabilitative Auditory Res., 3710 SW US Veterans
Hospital Rd., Portland, OR 97239, nirmal.srinivasan@va.gov)
It is well documented that older listeners have more difficulty in understanding speech in complex listening environments. In two
separate experiments, speech intelligibility enhancement due to prior exposure to listening environment and spatial release from masking
(SRM) for small spatial separations were measured in simulated reverberant listening environments. Release from masking was measured by comparing threshold target-to-masker ratios (TMR) obtained with a speech target presented directly ahead of the listener and
two speech maskers presented from the same location or in symmetrically displaced spatial configurations in an anechoic chamber. The
results indicated that older listeners required much higher TMR at threshold and obtained decreased benefit from prior exposure to listening environments compared to younger listeners. For the small separation experiment, speech stimuli were presented over headphones and virtual acoustic techniques were used to simulate very small spatial separations (approx. 2 degrees) between target and
maskers. Results reveal, for the first time, the minimum separation required between target and masker to achieve release from speech-on-speech masking in anechoic and reverberant conditions. The advantages of including small separations for understanding the functions relating spatial separation to release from masking will be discussed, as well as the value of including older listeners. [Work
supported by NIH R01 DC011828.]
11:00
4aAAa7. Speech-on-speech masking for children and adults. Lauren Calandruccio, Lori J. Leibold (Allied Health Sci., Univ. of North
Carolina, 301 S. Columbia St., Chapel Hill, NC 27599, Lauren_Calandruccio@med.unc.edu), and Emily Buss (Otolaryngology/Head
and Neck Surgery, Univ. of North Carolina at Chapel Hill, Chapel Hill, NC)
Children experience greater difficulty understanding speech in noise compared to adults. This age effect is pronounced when the
noise causes both energetic and informational masking, for example, when listening to speech while other people are talking. As children acquire speech and language, they are faced with multi-speech environments all the time, for example, in the classroom. For adults,
speech perception tends to be worse when the target and masker are matched in terms of talker sex and language, with mismatches
improving performance. It is unknown, however, whether children are able to benefit from these (sex or language) target/masker mismatches. The goal of this project is to further our understanding of the speech-on-speech masking deficit children demonstrate throughout childhood, while specifically investigating whether children’s speech recognition improves when the target and masker are spoken
by talkers of the opposite sex, or when the target and masker speech are spoken in different languages. Normal-hearing children and
adults were tested on word identification and sentence recognition tasks. Differences in SNR needed to equate performance between the
two groups will be reported, as well as data reporting whether children are able to benefit from these target/masker mismatch cues.
11:20
4aAAa8. The neural basis of informational and energetic masking effects in the perception and production of speech. Samuel
Evans (Inst. of Cognit. Neurosci., Univ. College London, 17 Queen Square, London, London WC1N 3AR, United Kingdom, samuel.
evans@ucl.ac.uk), Carolyn McGettigan (Dept. of Psych., Royal Holloway, Egham, United Kingdom), Zarinah Agnew (Dept. of Otolaryngol., Univ. of California, San Francisco, San Francisco, CA), Stuart Rosen (Dept. of Speech, Hearing and Phonetic Sci., Univ. College
London, London, United Kingdom), Lima Cesar (Ctr. for Psych., Univ. of Porto, Porto, Portugal), Dana Boebinger, Markus Ostarek,
Sinead H. Chen, Angela Richards, Sophie Meekings, and Sophie K. Scott (Inst. of Cognit. Neurosci., Univ. College London, London,
United Kingdom)
When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked
by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing
demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies
in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will
demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these
effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed
from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
THURSDAY MORNING, 30 OCTOBER 2014
SANTA FE, 10:35 A.M. TO 12:05 P.M.
Session 4aAAb
Architectural Acoustics: Uses, Measurements, and Advancements in the Use of Diffusion and Scattering
Devices
David T. Bradley, Chair
Physics Astronomy, Vassar College, Poughkeepsie, NY 12604
Chair’s Introduction—10:35
Invited Papers
10:40
4a THU. AM
4aAAb1. Effect of installed diffusers on sound field diffusivity in a real-world classroom. Ariana Sharma, David T. Bradley, and
Mohammed Abdelaziz (Phys. + Astronomy, Vassar College, 124 Raymond Ave, Poughkeepsie, NY 12604, arsharma@vassar.edu)
An ideal diffuse sound field is both homogeneous (acoustic quantities are independent of position) and isotropic (acoustic quantities
are invariant with respect to direction). Predicting and characterizing sound field diffusivity is essential to acousticians when designing
and using acoustically sensitive spaces. Surfaces with a non-planar geometry, referred to as diffusers, can be installed in these spaces as
a means of increasing and/or controlling the field diffusivity. Although some theoretical and computational modeling work has been carried out to better understand the relationship between these installed diffusers and the resulting field diffusivity, the current state of the art does not include a systematic understanding of this relationship. Furthermore, very little work has been done to characterize this relationship in full scale and in the real world. In the current project, the effect of diffusers on field diffusivity has been studied in a full
scale, real-world classroom. Field diffusivity has been measured for various configurations of the diffusers using two measurement techniques. The first technique uses a three-dimensional grid of receivers to characterize the field homogeneity. To characterize field isotropy, a spherical microphone array has also been used. Results and analysis will be presented and discussed.
11:00
4aAAb2. Effect of measurement conditions on sound scattered from a pyramid diffuser in a free field. Kimberly A. Riegel, David
T. Bradley, Mallory Morgan, and Ian Kowalok (Phys. + Astronomy, Vassar College, 124 Raymond Ave., Poughkeepsie, NY 12604,
kiriegel@vassar.edu)
A surface with a non-planar geometry, referred to as a diffuser, can be used in acoustically sensitive spaces to help control or eliminate unwanted effects from strong reflections by scattering the reflected sound. The scattering behavior of a diffuser can be measured in
a free field, according to the standard ISO 17497-2. Many of the measurement conditions discussed in this standard can have an effect
on the measured data; however, these conditions are often not well-specified and/or have not been substantiated. In the current study, a
simple pyramid diffuser has been measured while varying several measurement conditions: surface material, orientation of the surface
geometry, perimeter shape of the surface, and mounting depth of the surface. Reflected polar response and diffusion coefficient data
have been collected and compared for each condition. Data have also been contrasted with those obtained by numerical simulation using
boundary element method (BEM) techniques for an idealized pyramid diffuser. Results and analysis will be presented and discussed.
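For context, the diffusion coefficient named here is computed from the reflected polar response by an autocorrelation-style formula; a minimal sketch (Python), assuming equal weighting of the receiver positions:

```python
def diffusion_coefficient(levels_db):
    """Directional diffusion coefficient from a reflected polar response
    (sound pressure levels in dB), using the autocorrelation formula of
    ISO 17497-2 / AES-4id: 1 for uniform scattering, near 0 for a
    single specular lobe."""
    energies = [10.0 ** (L / 10.0) for L in levels_db]
    n = len(energies)
    s1 = sum(energies)                    # sum of energies
    s2 = sum(e * e for e in energies)     # sum of squared energies
    return (s1 * s1 - s2) / ((n - 1) * s2)

# Perfectly uniform polar response (37 receivers) -> d = 1
print(diffusion_coefficient([60.0] * 37))      # prints 1.0
# One strong specular lobe above a low background -> d near 0
print(diffusion_coefficient([60.0] + [10.0] * 36))
```

The normalization makes the coefficient independent of overall level, so only the shape of the polar response matters.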
Contributed Papers
11:20
4aAAb3. Sound field diffusion by number of peak by continuous wavelet
transform. Yongwon Cha, Muhammad Imran, and Jin Yong Jeon (Dept. of
Architectural Eng., Hanyang Univ., Seoul 133-791,
South Korea, chadyongwoncha@gmail.com)
The number of peaks (Np) in impulse responses (IRs) captured
in a real hall has been investigated and measured using the continuous
wavelet transform (CWT). Np is related to perceptual diffusion
as an objective characteristic influenced by wall scattering elements.
In addition, when measuring diffuse sound fields, the CWT coefficients are
used to detect diffusive sound. Based on the absolute coefficient values calculated from the CWT analysis, a practical method of counting reflections is considered. These reflections are classified as diffusive or specular
based on their similarity to the mother wavelet. Temporal and spatial representations of the absolute CWT values are presented. Auditory experiments
using a paired-comparison method were conducted to gauge the relationship
between Np and perceptual sound-field diffusion. It is revealed that a
dominant factor influencing subjective preference in the hall was the Np,
which varied with different wall surface treatments.
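A single-scale toy version of such reflection counting (Python; the Ricker wavelet, scale, and threshold are invented stand-ins for the authors' actual CWT analysis, and the toy reflections are assumed positive-going) might look like:

```python
import numpy as np

def ricker(width, a):
    """Sampled Ricker (Mexican-hat) wavelet, peak value 1 at the center."""
    t = np.arange(width) - (width - 1) / 2.0
    x = t / a
    return (1.0 - x**2) * np.exp(-(x**2) / 2.0)

def peak_count(ir, a=4.0, width=33, rel_thresh=0.2):
    """Np: local maxima of the single-scale wavelet coefficients that
    exceed rel_thresh times the strongest coefficient."""
    c = np.convolve(ir, ricker(width, a), mode="same")
    thr = rel_thresh * c.max()
    peaks = (c[1:-1] > c[:-2]) & (c[1:-1] >= c[2:]) & (c[1:-1] > thr)
    return int(np.count_nonzero(peaks))

# Toy impulse response: direct sound plus two discrete reflections
ir = np.zeros(400)
for pos, amp in [(50, 1.0), (180, 0.6), (320, 0.45)]:
    ir[pos] = amp
print(peak_count(ir))  # -> 3
```

A full analysis would repeat this over many scales and, as the abstract describes, classify each detected reflection as specular or diffusive by its similarity to the mother wavelet.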
11:35
4aAAb4. In praise of smooth surfaces: Promoting a balance between
specular and diffuse surfaces in performance space design. Gregory A.
Miller and Scott D. Pfeiffer (Threshold Acoust., LLC, 53 W. Jackson Boulevard, Ste. 815, Chicago, IL 60604, gmiller@thresholdacoustics.com)
Diffusive surfaces are often presented as a panacea for achieving desirable listening conditions in performance spaces. While diffusive surfaces are
a valuable and necessary part of the finish palette in any theater or concert
hall, a significant number of specular surfaces are crucial to the success of
many such spaces. Case studies will be presented in which excessive use of
diffusion has resulted in losses of clarity and loudness, including comparisons to the results following the introduction of specular surfaces, either flat
or gently curved. Aural examples will be presented to demonstrate the perceptual differences when specular surfaces are employed as compared to
highly diffusive surfaces at key locations in spaces for music and drama.
11:50
4aAAb5. Scattershot: A look at designing, integrating, and measuring
diffusion. Shane J. Kanter, John Strong, Carl Giegold, and Scott Pfeiffer
(Threshold Acoust., 53 W. Jackson Blvd., Ste. 815, Chicago, IL 60604,
skanter@thresholdacoustics.com)
A primary goal of the small-scale performance venue is to provide the
audience with supportive, well-timed reflections and to energize the space
adequately without overpowering the room volume. The judicious use of
sound-diffusive elements in such venues can lend a pleasing sense of body
and space while avoiding undesirable reflections that disrupt the listener experience. However, while working with architects to develop a space that is
pleasing to both the ear and the eye, it is often necessary to reconcile these
needs with each other. Diffusive elements must integrate seamlessly within
the space visually as well as architecturally. While developing interior room
acoustics for three small spaces for performance/worship, with audience
size ranging from 150 to 299, an exploration of diffusive elements was conducted. As each project required a different method and frequency range of
diffusion, scale models were constructed and tested under varied conditions,
using sometimes unorthodox methods to determine the acoustic effect.
These efforts were focused on limiting coloration caused by the “picket
fence effect,” reducing harsh reflections without rendering a space excessively sound-absorptive, and maintaining coherent reflections from discrete
sections of a prominent wall while leaving other sections diffusive. Methods, experiences, and results will be presented.
THURSDAY MORNING, 30 OCTOBER 2014
LINCOLN, 8:00 A.M. TO 12:00 NOON
Session 4aAB
Animal Bioacoustics and Acoustical Oceanography: Use of Passive Acoustics for Estimation of Animal
Population Density I
Tina M. Yack, Cochair
Bio-Waves, Inc., 364 2nd Street, Suite #3, Encinitas, CA 92024
Danielle Harris, Cochair
Centre for Research into Ecological and Environmental Modelling, University of St. Andrews, The Observatory, Buchanan
Gardens, St. Andrews KY16 9LZ, United Kingdom
Chair’s Introduction—8:00
Invited Papers
8:05
4aAB1. Estimating density from passive acoustics: Are we there yet? Tiago A. Marques, Danielle Harris, and Len Thomas (Ctr. for
Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, The Observatory, Buchannan Gardens, St. Andrews, Fife KY16 9
LZ, United Kingdom, tiago.marques@st-andrews.ac.uk)
In the last few years, there have been a considerable number of papers describing methods or case studies involving passive acoustic
density estimation. While this might be interpreted as evidence that density estimation might now be easily and routinely implemented,
the truth is that so far these methods and applications have been essentially proof-of-concept in nature, based on areas and/or species particularly suited to the methods, and have often involved assumptions that are hard to evaluate. We briefly review some of the existing work in
this area, concentrating on a few aspects we believe are key for implementing density estimation from passive acoustics in a
broader context. These are (1) fundamental research addressing the problem of sound production rate, fundamental
because it allows estimates of the density of sounds to be converted into the density of animals, and (2) the development of
cheap, deployable hardware units capable of ranging, allowing straightforward implementations of distance-sampling-based approaches. The perfect density estimate is out there waiting to happen, but we have not found it yet.
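Point (1) is the cue-counting identity used throughout this literature: estimate the density of sounds, then divide by the per-animal sound production rate. A sketch with entirely invented numbers (every value below is illustrative only):

```python
import math

# All survey values below are invented for illustration.
n_cues = 12_000   # sounds (cues) detected over the whole survey
c_hat = 0.05      # estimated false-positive proportion
K = 10            # number of fixed sensors
w = 20.0          # detection truncation radius per sensor (km)
p_hat = 0.32      # estimated mean detection probability within w
T = 240.0         # recording duration per sensor (hours)
r_hat = 450.0     # estimated cue rate (sounds per animal per hour)

# Density of cues per km^2 per hour, corrected for missed and
# false detections ...
cue_density = n_cues * (1.0 - c_hat) / (K * math.pi * w**2 * p_hat * T)
# ... converted to animals per km^2 by dividing by the cue rate
animal_density = cue_density / r_hat
print(f"{animal_density:.2e} animals per km^2")
```

The cue rate r_hat is exactly the "sound production rate" problem the abstract flags: it must come from auxiliary data such as tagged animals, and uncertainty in it propagates directly into the density estimate.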
8:25
4aAB2. Use of passive acoustics for estimation of cetacean population density: Realizing the potential. Jay Barlow and Shannon
Rankin (Marine Mammal and Turtle Div., NOAA-SWFSC, 8901 La Jolla Shores Dr., La Jolla, CA 92037, jay.barlow@noaa.gov)
The potential of passive acoustic methods to estimate cetacean population density has seldom been realized. They have been most successfully applied to species that consistently use echolocation during foraging, have very distinctive echolocation signals, and forage a
large fraction of the time, notably sperm whales, porpoises, and beaked whales. Research is needed to eliminate some of the impediments
to applying acoustics to estimate the density of other species. For baleen whales, one of the greatest uncertainties is the lack of information on call rates. For delphinids, the greatest uncertainties are in estimating group size and in species recognition. For all species, there
is a need to develop inexpensive recorders that can be distributed in large number at random locations in a study area. For towed hydrophone surveys, there is a need to better localize species in their 3-D environment and to instantaneously localize animals from a single
signal received on multiple hydrophones. While improvements can be made, we may need to recognize that some of the impediments cannot
be overcome with any reasonable research budget. In these cases, efforts should be concentrated on improving acoustic methods to aid
visual-based transect methods.
8:45
4aAB3. Acoustic capture-recapture methods for animal density estimation. David Borchers (Dept. of Mathematics & Statistics,
Univ. of St. Andrews, CREEM, Buchannan Gdns, St. Andrews, Fife KY16 9LZ, United Kingdom, dlb@st-andrews.ac.uk)
Capture-recapture methods are one of the two most widely-used methods of estimating wildlife density and abundance. They can be
used with passive acoustic detectors—in which case acoustic detection on a detector constitutes “capture” and detection on other detectors and/or at other times constitute “recaptures.” Unbiased estimation of animal density from any capture-recapture survey requires that
the effective area of the detectors be estimated, and information on detected animals’ locations is essential for this. While locations are
not observed, acoustic data contain information on location in a variety of guises, including time-difference-of arrival, signal strength,
and sometimes directional information. This talk gives an overview of the use of such data with spatially explicit capture-recapture
(SECR) methods, including consideration of some of the particular challenges that acoustic data present for SECR methods, ways of
dealing with these, and an outline of some unresolved issues.
9:05
4aAB4. U.S. Navy application and interest in passive acoustics for estimation of marine mammal population density. Anu Kumar
(Living Marine Resources, NAVFAC EXWC, 1000 23rd Ave., Code EV, Port Hueneme, CA 93043, anurag.kumar@navy.mil), Chip
Johnson (Environ. Readiness, Command Pacific Fleet, Coronado, CA), Julie Rivers (Environ. Readiness, Command Pacific Fleet, Pearl
Harbor, HI), Jene Nissen (Environ. Readiness, U.S. Fleet Forces, Norfolk, VA), and Joel Bell (Marine Resources, NAVFAC Atlantic,
Norfolk, VA)
Marine species population density estimation from passive acoustic monitoring is an emergent topic of interest to the U.S. Navy.
Density estimates are used by the Navy and other Federal partners in effects modeling for environmental compliance documentation.
Current traditional methods of marine mammal density estimation via visual line transect surveys require expensive ship time and long
days at sea for an experienced crew, yielding limited spatial and temporal coverage. While visual surveys remain an effective means of
deriving density estimates, passive acoustic-based density estimation methods have the unique ability to improve on visual density estimates for some key species by (a) expanding spatial and temporal density coverage, (b) providing coverage in areas too remote or difficult for traditional visual surveys, (c) reducing the statistical uncertainty of a given density estimate, and (d) providing estimates for
species that are difficult to survey visually (e.g., minke and beaked whales). The U.S. Navy has invested in research for the development,
refinement, and scientific validation of passive acoustic methods for cost effective density estimates in the future. The value, importance,
and current development in passive acoustic-based density estimation methods for Navy applications will be discussed.
9:25
4aAB5. Towing the line: Line-transect based density estimation of whales using towed hydrophone arrays. Thomas F. Norris and
Tina M. Yack (Bio-Waves Inc., 364 2nd St., Ste. #3, Encinitas, CA 92024, thomas.f.norris@bio-waves.net)
Towed hydrophone arrays have been used to monitor marine mammals from research vessels since the 1980s. Although towed
hydrophone arrays have now become a standard part of line-transect surveys of cetaceans, density estimation exclusively using passive
acoustics has only been attempted for a few species. We use examples from four acoustic line-transect surveys that we conducted in the
North Pacific Ocean to illustrate the steps involved, and issues inherent, in using data from towed hydrophone arrays to estimate densities of cetaceans. We will focus on two species of cetaceans, sperm whales and minke whales, with examples of beaked whales and
other species as needed. Issues related to survey design, data-collection, and data analysis and interpretation will be discussed using
examples from these studies. We provide recommendations to improve the survey design, data-collection methods, and analyses. We
also suggest areas where additional research and methodological development are required in order to produce robust density estimates
from acoustic-based data.
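The acoustic density estimates discussed in this session rest on the standard distance-sampling identity D = n / (2·L·ESW), where the effective strip half-width (ESW) is obtained by integrating a detection function fitted to perpendicular distances. A minimal sketch in Python; the half-normal form, the sigma value, and the truncation distance are illustrative assumptions, not quantities fitted in these surveys:

```python
import math

def half_normal_g(x, sigma):
    """Half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

def effective_strip_half_width(sigma, w, n=10000):
    """Effective strip half-width (ESW): integral of g(x) from 0 to the
    truncation distance w, here by a simple midpoint rule."""
    dx = w / n
    return sum(half_normal_g((i + 0.5) * dx, sigma) * dx for i in range(n))

def density_estimate(n_detections, effort_km, sigma_km, w_km):
    """Conventional line-transect estimator: D = n / (2 * L * ESW)."""
    return n_detections / (2.0 * effort_km *
                           effective_strip_half_width(sigma_km, w_km))

# Hypothetical values, loosely shaped like an acoustic sperm whale survey:
# 241 encounters over 6304 km of trackline, sigma = 4 km, truncation at 10 km.
D = density_estimate(241, 6304.0, 4.0, 10.0)
print(f"estimated density: {D:.5f} animals/km^2")
```

In practice the detection function is fitted to the observed perpendicular distances (e.g., by maximum likelihood) rather than assumed, and variance propagation gives the CVs quoted in the abstracts.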
Contributed Papers
9:45

4aAB6. From clicks to counts: Applying line-transect methods to passive acoustic monitoring of sperm whales in the Gulf of Alaska. Tina M. Yack, Thomas F. Norris, Elizabeth Ferguson (Bio-Waves Inc., 364 2nd St., Ste. #3, Encinitas, CA 92024, tina.yack@bio-waves.net), Brenda K. Rone (Cascadia Res. Collective, Seattle, WA), and Alexandre N. Zerbini (Alaska Fisheries Sci. Ctr., Seattle, WA)

A visual and acoustic line-transect survey of marine mammals was conducted in the central Gulf of Alaska (GoA) during the summer of 2013. The survey area was divided into four sub-strata to reflect four distinct habitats: "inshore," "slope," "offshore," and "seamount." Passive acoustic monitoring was conducted using a towed-hydrophone array system. One of the main objectives of the acoustic survey was to obtain an acoustic-based density estimate for sperm whales. A total of 241 acoustic encounters of sperm whales during 6,304 km of effort were obtained, compared to 19 visual encounters during 4,155 km of effort. Line-transect analytical methods were used to estimate the abundance of sperm whales. To estimate the detection function, target motion analysis was used to obtain perpendicular distances to individual sperm whales. An acoustic-based density and abundance estimate was obtained for each stratum (offshore: N = 78, CV = 0.36; seamount: N = 16, CV = 0.55; slope: N = 121, CV = 0.18) and for the entire survey area (N = 215; D = 0.0013; CV = 0.18). These results will be compared to visual-based estimates. The advantages and disadvantages of acoustic-based density estimates as well as application of these methods to other species (e.g., beaked whales) and areas will be discussed.

10:00–10:15 Break

10:15

4aAB7. Studying the biosonar activities of deep diving odontocetes in Hawaii and other western Pacific locations. Whitlow W. Au (Hawaii Inst. of Marine Biology, Univ. of Hawaii, 46-007 Lilipuna Rd., Kaneohe, HI 96744, wau@hawaii.edu) and Giacomo Giorli (Oceanogr. Dept., Univ. of Hawaii, Honolulu, HI)

Ecological acoustic recorders (EARs) have been deployed at several locations in Hawaii and in other western Pacific locations to study the foraging behavior of deep-diving odontocetes. EARs have been deployed at depths greater than 400 m at five locations around the island of Kauai, one at Ni'ihau, two around the island of Okinawa, and four in the Marianas (two close to the island of Guam, one close to the island of Saipan, and another close to the island of Tinian). The four groups of deep-diving odontocetes were blackfish (mainly pilot whales and false killer whales), sperm whales, beaked whales (Cuvier's and Blainville's beaked whales), and Risso's dolphins. In all locations, the biosonar signals of blackfish were detected most often, followed by either sperm or beaked whales depending on the specific location, with Risso's dolphins detected the least. There was a strong tendency for these animals to forage at night in all locations. Detection rates indicate populations of these four groups of odontocetes around Okinawa and in the Marianas that are lower than off Kauai in the main Hawaiian island chain by a factor of about 4–5.
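Because a towed linear array measures bearings rather than ranges, the perpendicular distances needed for a detection function are typically recovered by target motion analysis: bearings to the same (assumed stationary) animal, taken from different points along the trackline, are crossed. A minimal two-bearing sketch under idealized geometry (flat sea, stationary source, left/right ambiguity already resolved); the function and the numbers are illustrative, not any particular survey's implementation:

```python
import math

def perpendicular_distance(x1, b1_deg, x2, b2_deg):
    """Cross two bearings measured at along-track positions x1 and x2 (km)
    to localize a stationary source. Bearings are in degrees from the bow
    (0 = dead ahead, 90 = abeam). Returns the cross-track (perpendicular)
    distance of the source from the trackline, in km."""
    cot1 = 1.0 / math.tan(math.radians(b1_deg))
    cot2 = 1.0 / math.tan(math.radians(b2_deg))
    # Solve tan(b1) = y/(xs - x1) and tan(b2) = y/(xs - x2) for y:
    return (x2 - x1) / (cot1 - cot2)

# Example: a source 2 km abeam of the trackline gives bearings of about
# 21.80 deg at x = 0 km and 63.43 deg at x = 4 km.
d = perpendicular_distance(0.0, 21.801, 4.0, 63.435)
print(f"perpendicular distance ~ {d:.2f} km")
```

Real target motion analysis uses many bearings, a moving vessel track, and a least-squares or likelihood fit, but the crossing geometry above is the core idea.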
10:30
4aAB8. Fin whale vocalization classification and abundance estimation.
Wei Huang, Delin Wang (Elec. and Comput. Eng., Northeastern Univ., 006
Hayden Hall, 370 Huntington Ave., Boston, MA 02115, huang.wei1@
husky.neu.edu), Nicholas C. Makris (Mech. Eng., Massachusetts Inst. of
Technol., Cambridge, MA), and Purnima Ratilal (Elec. and Comput. Eng.,
Northeastern Univ., Boston, MA)
Several thousand fin whale vocalizations from multiple fin individuals
were passively recorded by a high-resolution coherent hydrophone array
system in the Gulf of Maine in Fall 2006. The recorded fin whale vocalizations have relatively short durations, roughly 0.4 s, and frequencies ranging
from 15 to 40 Hz. Here we classify the fin whale vocalizations and apply the results to estimate the minimum number of vocalizing fin individuals detected by our hydrophone array. The horizontal azimuth or bearing of each fin whale vocalization is first determined by beamforming. Each beamformed fin whale vocalization spectrogram is next characterized by several features such as center frequency, upper and lower frequency limits, as well as amplitude-weighted mean frequency. The vocalizations are then classified using k-means clustering into several distinct vocal types. The vocalization clustering result is then combined with the bearing-time trajectory information for a consecutive sequence of vocalizations to provide an estimate of the minimum number of vocalizing fin individuals detected.

J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America

10:45

4aAB9. Neglect of bandwidth of odontocete echolocation clicks biases propagation loss and single-hydrophone population estimates. Michael A. Ainslie, Alexander M. von Benda-Beckmann (Acoust. and Sonar, TNO, P.O. Box 96864, The Hague 2509JG, Netherlands, michael.ainslie@tno.nl), Len Thomas (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, St Andrews, United Kingdom), and Peter L. Tyack (Sea Mammal Res. Unit, Scottish Oceans Inst., Univ. of St. Andrews, St. Andrews, United Kingdom)

Passive acoustic monitoring with a single hydrophone has been suggested as a cost-effective method to monitor population density of echolocating marine mammals, by estimating the distance at which the hydrophone is able to distinguish the echolocation clicks from the background. To avoid a bias in the estimated population density, this method relies on an unbiased estimate of the propagation loss (PL). It is common practice to estimate PL at the center frequency of a broadband echolocation click and to assume this narrowband PL applies also to the broadband click. For a typical situation this narrowband approximation overestimates PL, underestimates the detection range, and consequently overestimates the population density by an amount that, for fixed center frequency, increases with increasing pulse bandwidth and sonar figure of merit. We investigate the detection process for different marine mammal species and assess the magnitude of error on the estimated density due to various simplifying assumptions. Our main purposes are to quantify and, where possible and needed, correct the bias in the population density estimate for selected species and detectors due to use of the narrowband approximation, and to understand the factors affecting the magnitude of this bias to enable extrapolation to other species and detectors.

11:00

4aAB10. Instantaneous acoustical response of marine mammals to abrupt changes in ambient noise. John E. Joseph, Tetyana Margolina (Oceanogr., Naval Postgrad. School, 833 Dyer Rd., Monterey, CA, jejoseph@nps.edu), and Ming-Jer Huang (National Kaohsiung Univ. of Appl. Sci., Kaohsiung, Taiwan)

Four months of passive acoustic data recorded at Thirtymile Bank in offshore southern California have been analyzed to describe the instantaneous vocal response of marine mammals to abrupt changes in ambient noise. The main contributors to the distinctive regional soundscape are heavy commercial shipping, military activities in the naval training range, diverse marine life, and natural sources including wind and tectonic activity. Many of these sources produce intense, irregular, and short-term events shaped by local oceanographic conditions, bathymetry, and bottom structure (the Thirtymile Bank blind thrust). We seek to attribute detected changes in cetacean vocal behavior (loudness, calling rate, and pattern) to these events and to differentiate the reaction by noise source, its intensity, frequency, and/or duration. The main target species are blue and fin whales. Initial hypotheses formulated after data scanning are tested statistically (2D histograms and PCA). To quantify the vocal behavior variations, an innovative detection approach based on pattern recognition is applied, which allows for extraction of individual calls with a low false alarm rate and a detection success comparable to that of a human analyst. The obtained results relate cetacean acoustic behavior to ambient noise variability and thus help refine existing cue-based formulae for estimation of whale population density from PAM data.

11:15

4aAB11. Measuring whale and dolphin call rates as a function of behavioral, social, and environmental context. Stacy L. DeRuiter, Catriona M. Harris (School of Mathematics & Statistics, Univ. of St. Andrews, CREEM, St. Andrews KY16 9LZ, United Kingdom, sldr@st-andrews.ac.uk), Nicola J. Quick (Duke University Marine Lab, Duke Univ., Beaufort, NC), Dina Sadykova, Lindesay A. Scott-Hayward (School of Mathematics & Statistics, Univ. of St. Andrews, St. Andrews, United Kingdom), Alison K. Stimpert (Moss Landing Marine Lab., California State Univ., Moss Landing, CA), Brandon L. Southall (Southall Environ. Assoc., Inc., Aptos, CA), Len Thomas (School of Mathematics & Statistics, Univ. of St. Andrews, St Andrews, United Kingdom), and Fleur Visser (Kelp Marine Res., Hoorn, Netherlands)

Cetacean sound-production rates are highly variable and patchy in time, depending upon individual behavior, social context, and environmental context. Better quantification of the drivers of this variability should allow more realistic estimates of expected call rates, improving our ability to convert between call counts and animal density, and also facilitating detection of sound-production changes due to acoustic disturbance. Here, we analyze digital acoustic tag (DTAG) records and visual observations collected during behavioral response studies (BRSs), which aim to assess normal cetacean behavior and measure changes in response to acoustic disturbance; data sources include SOCAL BRS, the 3S project, and Bahamas BRS, with statistical contributions from the MOCHA project (http://www.creem.st-and.ac.uk/mocha/links). We illustrate use of generalized linear models (and their extensions) as a flexible framework for sound-production-rate analysis. In the context of acoustic disturbance, we also detail use of two-dimensional spatially adaptive surfaces to jointly model the effects of sound-source proximity and sound intensity. Specifically, we quantify variability in pilot whale group sound-production rates in relation to behavior and environment, and in individual fin whale call rates in relation to social and environmental context and dive behavior, with and without acoustic disturbance.

11:30

4aAB12. Estimating relative abundance of singing humpback whales in Los Cabos, Mexico, using diffuse ambient noise. Kerri Seger, Aaron M. Thode (Scripps Inst. of Oceanogr., Univ. of California, San Diego, 8880 Biological Grade, MESOM 161, La Jolla, CA 92093-0206, kseger@ucsd.edu), Diana C. López Arzate, and Jorge Urban (Laboratorio de Mamíferos Marinos, Universidad Autónoma de Baja California Sur, La Paz, BCS, Mexico)

Previous research has speculated that diffuse ambient noise levels can be used to estimate relative cetacean abundance in certain locations when baleen whale vocal activity dominates the soundscape (Au et al., 2000; Mellinger et al., 2009). During the 2013 and 2014 humpback whale breeding seasons off Los Cabos, Mexico, visual point and line transects were conducted alongside two bottom-mounted acoustic deployments. As theorized, preliminary analysis shows that ambient noise between 100 and 1,000 Hz is dominated by humpback whale song. It also displays a diel cycle similar to that found in the West Indies, Australia, and Hawai'i, whereby peak levels occur near midnight and troughs occur soon after sunrise (Au et al., 2000; McCauley et al., 1996). Depending upon site and year, the median band-integrated levels fluctuated between 7 and 16 dB re 1 μPa when sampled in one-hour increments. This presentation uses analytical models of wind-generated noise in an ocean waveguide to analyze potential relationships between singing whale density and diffuse ambient noise levels. It explores whether various diel cycle strengths (peak-to-peak measurements and Fourier analysis) correspond with trends observed from concurrent visual censuses. [Work sponsored by the Ocean Foundation.]

11:45

4aAB13. Large-scale static acoustic survey of a low-density population—Estimating the abundance of the Baltic Sea harbor porpoise. Jens C. Koblitz (German Oceanogr. Museum, Katharinenberg 14-20, Stralsund 18439, Germany, Jens.Koblitz@meeresmuseum.de), Mats Amundin (Kolmården Wildlife Park, Kolmården, Sweden), Julia Carlström (AquaBiota Water Res., Stockholm, Sweden), Len Thomas (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, St. Andrews, United Kingdom), Ida Carlén (AquaBiota Water Res., Stockholm, Sweden), Jonas Teilmann (Dept. of BioSci., Aarhus Univ., Roskilde, Denmark), Nick Tregenza (Chelonia Ltd., Long Rock, United Kingdom), Daniel Wennerberg (Kolmården Wildlife Park, Kolmården, Sweden), Line Kyhn, Signe Svegaard (Dept. of BioSci., Aarhus Univ., Roskilde, Denmark), Radek Koza, Monika Kosecka, Iwona Pawliczka (Univ. of Gdansk, Gdansk, Poland), Cinthia Tiberi Ljungqvist (Kolmården Wildlife Park, Kolmården, Sweden), Katharina Brundiers (German Oceanogr. Museum, Stralsund, Germany), Andrew Wright (George Mason Univ., Fairfax, VA), Lonnie Mikkelsen, Jakob Tougaard (Dept. of BioSci., Aarhus Univ., Roskilde, Denmark), Olli Loisa (Turku Univ. of Appl. Sci., Turku, Finland), Anders Galatius (Dept. of BioSci., Aarhus Univ., Roskilde, Denmark), Ivar Jüssi (ProMare NPO, Harjumaa, Estonia), and Harald Benke (German Oceanogr. Museum, Stralsund, Germany)

SAMBAH (Static Acoustic Monitoring of the Baltic Sea Harbor Porpoise) is an EU LIFE+-funded project with the primary goal of estimating the abundance and distribution of the critically endangered Baltic Sea harbor porpoise. From May 2011 to April 2013, project members in all EU countries around the Baltic Sea undertook a static acoustic survey using 304 porpoise detectors distributed in a randomly positioned systematic grid in waters 5–80 m deep. In the recorded data, click trains originating from porpoises have been identified automatically using an algorithm developed specifically for Baltic conditions. To determine the C-POD click train detection function, a series of experiments has been carried out, including acoustic tracking of wild free-ranging porpoises using hydrophone arrays in an area with moored C-PODs and playbacks of porpoise-like signals at SAMBAH C-PODs during various hydrological conditions. Porpoise abundance has been estimated by counting the number of individuals detected in short time interval windows (snapshots), and then accounting for false positive detections, the probability of animals being silent, and the probability of detection of non-silent animals within a specified maximum range. We describe the method in detail, and how the auxiliary experiments have enabled us to estimate the required quantities.
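The snapshot logic of static acoustic surveys like the one described in 4aAB13 can be sketched as a point estimator: count detection-positive snapshots, deflate by the false-positive rate, then divide by the probabilities of an animal being vocal and being detected within the monitored area. Everything below, parameter names and numbers alike, is a hypothetical illustration of the general idea, not the SAMBAH estimator itself:

```python
def snapshot_abundance(n_det_snapshots, n_snapshots, false_pos_rate,
                       p_vocal, p_detect, area_total_km2, area_monitored_km2):
    """Toy snapshot-based abundance estimator (illustrative sketch only).
    - n_det_snapshots: snapshot windows containing detected click trains
    - n_snapshots: total snapshot windows across all sensors
    - false_pos_rate: expected fraction of detections that are false positives
    - p_vocal: probability an animal present is not silent during a snapshot
    - p_detect: detection probability of a non-silent animal within max range
    - area_monitored_km2: water area within the maximum detection range
    - area_total_km2: total survey area to scale the density up to
    """
    true_detections = n_det_snapshots * (1.0 - false_pos_rate)
    # Mean number of animals present per monitored snapshot area:
    mean_per_snapshot = true_detections / (n_snapshots * p_vocal * p_detect)
    density = mean_per_snapshot / area_monitored_km2  # animals per km^2
    return density * area_total_km2                   # scale to survey area

# Entirely made-up numbers, for shape only:
N = snapshot_abundance(500, 1_000_000, 0.1, 0.8, 0.3, 100_000.0, 0.1)
print(f"abundance estimate: {N:.0f} animals")
```

A real estimator of this kind also needs variance propagation across all the estimated probabilities, which is where the auxiliary tracking and playback experiments come in.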
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA A/B, 7:55 A.M. TO 12:00 NOON
Session 4aBA
Biomedical Acoustics: Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and
Clinical Applications I
Vera A. Khokhlova, Cochair
University of Washington, 1013 NE 40th Street, Seattle, WA 98105
Jeffrey B. Fowlkes, Cochair
Univ. of Michigan Health System, 3226C Medical Sciences Building I, 1301 Catherine Street, Ann Arbor, MI 48109-5667
Chair’s Introduction—7:55
Invited Papers
8:00
4aBA1. Histotripsy: An overview. Charles A. Cain (Biomedical Eng., Univ. of Michigan, 2200 Bonisteel Blvd., 2121 Gerstacker, Ann
Arbor, MI 48105, cain@umich.edu)
Histotripsy produces non-thermal lesions by generating dense, highly confined, energetic bubble clouds that mechanically fractionate
tissue. This nonlinear thresholding phenomenon has useful consequences. If only the tip of the waveform (P-) exceeds the intrinsic
threshold*, small lesions less than the diffraction limit can be generated. This is called microtripsy (other presentations in this session).
Moreover, side lobes from distorting aberrations can be "thresholded-out" wherein part of the main lobe exceeds the intrinsic threshold,
producing a clean bubble cloud (and lesion) and conferring significant immunity to aberrations. If a high-frequency probe (imaging) waveform intersects a low-frequency pump waveform, the compounded waveform can momentarily exceed the intrinsic threshold, producing
a lesion with an imaging transducer. Multi-beam histotripsy (other presentations in this session) allows flexible placement of both pump
and probe transducers. Very broadband P- "monopolar" pulses*, ideal for histotripsy, can be synthesized in a generalization of the
multi-beam histotripsy case wherein very short pulses from transducer elements of many different
frequencies are added at the focus of what is called a frequency-compounding transducer (other presentations in this session). Ultrasound
image guidance works well with histotripsy. Bubble clouds are easily seen, simplifying both lesion targeting and continuous validation
of the ongoing process. Hypoechoic homogenized tissue allows real-time quantification of lesion formation.
8:20
4aBA2. Boiling histotripsy: A noninvasive method for mechanical tissue disintegration. Adam D. Maxwell (Dept. of Urology,
Univ. of Washington School of Medicine, 1013 NE 40th St., Seattle, WA 98105, amax38@u.washington.edu), Tatiana D. Khokhlova
(Dept. of Gastroenterology, Univ. of Washington, Seattle, WA), George R. Schade (Dept. of Urology, Univ. of Washington School of
Medicine, Seattle, WA), Yak-Nam Wang, Wayne Kreider (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Petr Yuldashev (Phys. Faculty, Moscow State Univ., Moscow, Russian Federation), Julianna C. Simon (Ctr. for
Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Oleg A. Sapozhnikov (Phys. Faculty, Moscow
State Univ., Moscow, Russian Federation), Navid Farr (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Ari Partanen (Clinical Sci., Philips Healthcare, Cleveland, OH), Michael R. Bailey (Ctr. for Industrial and Medical
Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Joo Ha Hwang (Dept. of Gastroenterology, Univ. of Washington,
Seattle, WA), Lawrence A. Crum (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), and
Vera A. Khokhlova (Phys. Faculty, Moscow State Univ., Moscow, Russian Federation)
Boiling histotripsy is an experimental noninvasive focused ultrasound therapy that applies shocked ms-length pulses to achieve mechanical disintegration of a targeted tissue. Localized delivery of high-amplitude shocks causes rapid heating, resulting in boiling of the
tissue. The interaction of incident shocks with the boiling bubble results in tissue disruption and liquefaction without significant thermal
injury. Simulations are utilized to design and characterize therapy sources, predicting focal waveforms, shock amplitudes, and boiling
times. Transducers have been developed to generate focal shock amplitudes >70 MPa and achieve rapid boiling at depth in tissue. Therapy systems including ultrasound-guided single-element sources and clinical MRI-guided phased arrays have been successfully used to
create ex vivo and in vivo lesions at ultrasound frequencies in the 1–3 MHz range. Histological and biochemical analyses show mechanical disruption of tissue architecture with minimal thermal effect, similar to cavitation-based histotripsy. Atomization as observed with
acoustic fountains has been proposed as an underlying mechanism of tissue disintegration. This promising technology is being explored
for several applications in tissue ablation, as well as new areas such as tissue engineering and biomarker detection. [Work supported by
NIH 2T32DK007779-11A1, R01EB007643-05, 1K01EB015745, and NSBRI through NASA NCC 9-58.]
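The link between shock amplitude and millisecond boiling can be sketched with the textbook weak-shock heating rate, q = β·f·A_s³ / (6·ρ²·c⁴), for a periodic sawtooth of shock amplitude A_s at frequency f. The material constants below are generic soft-tissue values and the calculation neglects heat diffusion, so this is only a rough order-of-magnitude check, not the authors' simulation model:

```python
def shock_heating_rate(shock_amp_pa, freq_hz, beta=4.5, rho=1000.0, c=1500.0):
    """Heat deposition rate (W/m^3) at a fully developed periodic shock front,
    using the weak-shock result q = beta * f * A_s^3 / (6 * rho^2 * c^4).
    beta, rho, c are generic soft-tissue values (assumptions)."""
    return beta * freq_hz * shock_amp_pa ** 3 / (6.0 * rho ** 2 * c ** 4)

def time_to_boil(shock_amp_pa, freq_hz, dT=63.0, rho=1000.0, cp=4180.0):
    """Seconds to heat tissue from body temperature (~37 C) to 100 C,
    neglecting diffusion (reasonable for millisecond exposures)."""
    q = shock_heating_rate(shock_amp_pa, freq_hz)
    return dT * rho * cp / q

# With the >70 MPa focal shocks and MHz frequencies quoted above:
t = time_to_boil(70e6, 2e6)
print(f"time to boil ~ {t * 1e3:.1f} ms")  # on the order of milliseconds
```

The millisecond result is consistent with the abstract's description of shocked millisecond-length pulses producing rapid boiling at the focus.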
8:40
4aBA3. Bubbles in tissue: Yes or No? Charles C. Church (NCPA, Univ. of MS, 1 Coliseum Dr., University, MS 38677, cchurch@olemiss.edu)
The question of whether bubbles exist in most or all biological tissues rather than being restricted to only a few well-known examples remains a mystery. When Apfel and Holland developed the theoretical background for the mechanical index (MI), they first
assumed that such bubbles did exist and further assumed that some of those bubbles were of a size that would undergo inertial cavitation
at the lowest possible rarefactional pressure. Comparison of cavitation thresholds determined experimentally in various mammalian tissues in vivo with the results of computational studies seems to provide a definitive answer to that question. No, optimally sized bubbles
do not pre-exist in tissue, although very small bubbles, with radii on the order of nm, may be present. However, this answer is inextricably related to the accuracy of the theory used to study the question, in this case a form of the Keller-Miksis equation modified to include
the viscoelastic properties of tissue. Previous analysis has focused on elasticity, assuming that viscosity is constant, but is it? Blood is
known to be shear-thinning, and some soft tissues appear to be as well. The effect of shear rate on cavitation thresholds and implications
for bubble populations in tissue will be discussed.
9:00
4aBA4. Benefits and challenges of employing elevated acoustic output in diagnostic imaging. Kathryn Nightingale (Biomedical
Eng., Duke Univ., PO Box 90281, Durham, NC 27708-0281, kathy.nightingale@duke.edu) and Charles C. Church (National Ctr. for
Acoust., Univ. of MS, University, MS)
The acoustic output levels used in diagnostic ultrasonic imaging in the US have been subject to a de facto limitation by guidelines
established by the USFDA in 1976, for which no known bioeffects had been reported. These track-3 guidelines link the Mechanical
Index (MI) and the Thermal Index (TI) to the maximum outputs as of May 28, 1976, through a linear derating process. Subsequently,
new imaging technologies have been developed that employ unique beam sequences (e.g., harmonic imaging and ARFI imaging) which
were not well developed when the current regulatory scheme was put in place, so neither the MI nor the TI takes them into account in an
optimal manner. Additionally, there appears to be a large separation between the maxima in the track-3 guidelines and the acoustic output levels for which cavitation-based bioeffects are observed in tissues not known to contain gas bodies. In this presentation, we summarize the history of and the scientific basis for the MI, define an output regime and specify clinical applications under consideration for
conditionally increased output (CIO), review the potential risks of CIO in this regime based upon existing scientific evidence, and summarize the evidence for the potential clinical benefits of CIO.
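For reference, the MI discussed above is the derated peak rarefactional pressure (in MPa) divided by the square root of the center frequency (in MHz), with the FDA's linear derating of 0.3 dB/cm/MHz applied to the water measurement. A small sketch with illustrative numbers (the pressure, depth, and frequency are not values from this abstract):

```python
import math

def derated_pressure_mpa(p_surface_mpa, depth_cm, freq_mhz,
                         derating_db_per_cm_mhz=0.3):
    """Apply the FDA's linear derating (0.3 dB/cm/MHz) to a peak rarefactional
    pressure measured in water, approximating attenuation along a tissue path."""
    atten_db = derating_db_per_cm_mhz * depth_cm * freq_mhz
    return p_surface_mpa * 10.0 ** (-atten_db / 20.0)

def mechanical_index(p_r_derated_mpa, freq_mhz):
    """MI = derated peak rarefactional pressure (MPa) / sqrt(center freq (MHz))."""
    return p_r_derated_mpa / math.sqrt(freq_mhz)

# Illustrative numbers only: a 3 MPa rarefaction at 2 MHz, focused 5 cm deep.
pr3 = derated_pressure_mpa(3.0, 5.0, 2.0)
mi = mechanical_index(pr3, 2.0)
print(f"MI = {mi:.2f}  (track-3 guideline: MI <= 1.9)")
```

Note that this pressure-over-root-frequency form is exactly why the MI handles unusual beam sequences poorly: it says nothing about pulse length, bandwidth, or repetition scheme.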
9:20
4aBA5. Standards for characterizing highly nonlinear acoustic output from therapeutic ultrasound devices: Current methods
and future challenges. Thomas L. Szabo (Biomedical Dept., Boston Univ., 44 Cummington Mall, Boston, MA 02215, tlszabo@bu.edu)
One of the major challenges of characterizing the acoustic fields and power from diagnostic and high-intensity or high-pressure therapeutic devices is addressing the impact of amplitude-dependent nonlinear propagation effects. The destructive capabilities of high intensity therapeutic ultrasound (HITU) devices make acoustic output measurements with the conventional fragile sensors used for diagnostic ultrasound
difficult. Different approaches involving more robust measurement devices, scaling and simulation are described in two recent IEC
documents, IEC TS 62556 for the specification and measurement of HITU fields and IEC 62555 for the measurement of acoustic power
from HITU devices. Existing and proposed applications include even higher pressure levels and use of cavitation effects. Promising
hybrid approaches involve a combination of measurement and simulation. In order to meet the challenges of design, verification, and
measurement, standards and consensus are needed to couple the measurements to the prediction of acoustic output in realistic tissue
models as well as associated effects such as acoustic radiation force and temperature elevation.
9:40
4aBA6. Uncertainties in characterization of high-intensity, nonlinear pressure fields for therapeutic applications. Wayne Kreider
(CIMU, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, wkreider@uw.edu), Petr V. Yuldashev (Phys.
Faculty, Moscow State Univ., Moscow, Russian Federation), Adam D. Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA),
Tatiana D. Khokhlova (CIMU, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Sergey A. Tsysar (Phys. Faculty, Moscow State
Univ., Moscow, Russian Federation), Michael R. Bailey (CIMU, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Oleg A. Sapozhnikov, and Vera A. Khokhlova (Phys. Faculty, Moscow State Univ., Moscow, Russian Federation)
A fundamental aspect of developing therapeutic ultrasound applications is the need to quantitatively characterize the acoustic fields
delivered by transducers. A typical approach is to make direct pressure measurements in water. At very high intensities, with shocks potentially present, executing this approach is problematic because of the strict requirements imposed on hydrophone bandwidth, robustness, and
size. To overcome these issues, a method has been proposed that relies on acoustic holography and simulations of nonlinear propagation
based on the 3D Westervelt model. This approach has been applied to several therapy transducers including a multi-element phased
array. Uncertainties in the approach can be evaluated for both model boundary conditions determined from linear holography and the
nonlinear focusing gain achieved at high power levels. Neglecting hydrophone calibration uncertainties, errors associated with the holography technique remain less than about 10% in practice. To assess the accuracy of nonlinear simulations, results were compared to independent measurements of focal waveforms using a fiber optic probe hydrophone (FOPH). When relative calibration uncertainties
between the capsule hydrophone and FOPH are mitigated, simulations and FOPH measurements agree within about 15% for peak pressures at the focus. [Work supported by NIH grants EB016118, EB007643, T32 DK007779, DK43881, and NSBRI through NASA NCC
9-58.]
10:00–10:20 Break
10:20
4aBA7. Cavitation characteristics in High Intensity Focused Ultrasound lesions. Gail ter Haar and Ian Rivens (Phys., Inst. of Cancer
Res., Phys. Dept., Royal Marsden Hospital, Sutton, Surrey SM2 5PT, United Kingdom, gail.terhaar@icr.ac.uk)
The acoustic emissions recorded during HIFU lesion formation fall into three broad categories: those associated with non-inertial cavitation,
those associated with inertial cavitation, and those linked with tissue water boiling. These three mechanisms can be linked with different
lesion shapes, and with characteristic histological appearance. By careful choice of acoustic driving parameters, these effects may be
studied individually.
10:40
4aBA8. The role of tissue mechanical properties in histotripsy tissue fractionation. Eli Vlaisavljevich, Charles Cain, and Zhen Xu
(Univ. of Michigan, 1111 Nielsen Ct. Apt. 1, Ann Arbor, MI 48105, evlaisav@umich.edu)
Histotripsy is a therapeutic ultrasound technique that controls cavitation to fractionate tissue using short, high-pressure ultrasound
pulses. Histotripsy has been demonstrated to successfully fractionate many different tissues, though stiffer tissues such as cartilage or
tendon (Young’s moduli >1 MPa) are more resistant to histotripsy-induced damage than softer tissues such as liver (Young’s moduli ~9
kPa). In this work, we investigate the effects of tissue mechanical properties on various aspects of the histotripsy process including the
pressure threshold required to generate a cavitation cloud, the bubble dynamics, and the stress–strain applied to tissue structures. Ultrasound pulses of 1–2 acoustic cycles at varying frequencies (345 kHz, 500 kHz, 1.5 MHz, and 3 MHz) were applied to agarose tissue
phantoms and ex vivo bovine tissues with varying mechanical properties. Results demonstrate that the intrinsic threshold to initiate a
cavitation cloud is independent of tissue stiffness and frequency. The bubble expansion is suppressed in stiffer tissues, leading to a
decrease in strain to surrounding tissue and an increase in damage resistance. Finally, we investigate strategies to optimize histotripsy
therapy for the treatment of tissues with specific mechanical properties. Overall, this work improves our understanding of how tissue
properties affect histotripsy and will guide parameter optimization for histotripsy tissue fractionation.
11:00
4aBA9. Technical advances for histotripsy: Strategic ultrasound pulsing methods for precise histotripsy lesion formation. KuangWei Lin, Timothy L. Hall, Zhen Xu, and Charles A. Cain (Univ. of Michigan, 2200 Bonisteel Blvd., Gerstacker, Rm. 1107, Ann Arbor,
MI 48109, kwlin@umich.edu)
Conventional histotripsy uses ultrasound pulses longer than three cycles wherein the bubble cloud formation relies on the pressure-release scattering of the positive shock fronts from sparsely distributed single cavitation bubbles, making the cavitation event unpredictable and sometimes chaotic. Recently, we have developed three new strategic histotripsy pulsing techniques to further increase the
precision of cavitation cloud and lesion formation. (1) Microtripsy: When applying histotripsy pulses shorter than three cycles, the formation of a dense bubble cloud depends only on the applied peak negative pressure (P-) exceeding an intrinsic threshold of the medium.
With a P- not significantly higher than this threshold, very precise sub-focal-volume lesions can be generated. (2) Dual-beam histotripsy: A sub-threshold high-frequency pulse (perhaps from an imaging transducer) is enabled by a sub-threshold low-frequency pump pulse to exceed
the intrinsic threshold and produces very precise lesions. (3) Frequency compounding: a near monopolar pulse can be synthesized using
a frequency-compounding transducer (an array transducer consisting of elements with various resonant frequencies). By adjusting time
delays for individual frequency components and allowing their principal negative peaks to arrive at the focus concurrently, a near
monopolar pulse with a dominant negative phase can be generated (no complicating high peak positive shock fronts).
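The delay-and-sum idea in (3) can be illustrated numerically: if each element's short burst is delayed so that its principal negative peak arrives at the focus at the same instant, the negative peaks add coherently while the positive half-cycles, being at different frequencies, do not. A sketch with an assumed Gaussian-enveloped burst shape and made-up element frequencies (the real array's waveforms and resonances will differ):

```python
import math

def component(t, f, amp=1.0, cycles=1.0):
    """One element's contribution: an inverted cosine burst under a Gaussian
    envelope, so its principal negative peak arrives at the focus at t = 0."""
    tau = cycles / f
    return -amp * math.cos(2.0 * math.pi * f * t) * math.exp(-((t / tau) ** 2))

def compounded(t, freqs):
    """Focal waveform when every element's delay is chosen so the negative
    peaks of all frequency components coincide at t = 0."""
    return sum(component(t, f) for f in freqs)

# Hypothetical element resonances (Hz):
freqs = [0.5e6, 0.75e6, 1.0e6, 1.5e6, 2.0e6, 3.0e6]
wave = [compounded(i * 1e-9 - 5e-6, freqs) for i in range(10001)]  # -5..+5 us
print(f"peak negative: {min(wave):.2f}, peak positive: {max(wave):.2f}")
```

The summed pulse is near-monopolar in the sense the abstract describes: the negative peak is several times deeper than any positive excursion, because only the negative peaks are time-aligned.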
11:20
4aBA10. Histotripsy: Urologic applications and translational progress. William W. Roberts (Urology, Univ. of Michigan, 3879
Taubman Ctr., 1500 East Medical Ctr. Dr., Ann Arbor, MI 48109-5330, willrobe@umich.edu), Charles A. Cain (Biomedical Eng., Univ.
of Michigan, Ann Arbor, MI), J. B. Fowlkes (Radiology, Univ. of Michigan, Ann Arbor, MI), Zhen Xu, and Timothy L. Hall (Biomedical Eng., Univ. of Michigan, Ann Arbor, MI)
Histotripsy is an extracorporeal ablative technology based on initiation and control of acoustic cavitation within a target volume.
This mechanical form of tissue homogenization differs from the ablative processes employed by conventional thermoablative modalities
and exhibits a number of unique features (non-thermal, high precision, real-time monitoring/feedback, and tissue liquefaction), which
are potentially advantageous characteristics for ablative applications in a variety of organs and disease processes. Histotripsy has been
applied to the prostate in canine models for tissue debulking as a therapy for benign prostatic hyperplasia and for ablation of ACE-1
tumors, a canine prostate cancer model. Homogenization of normal renal tissue as well as implanted VX-2 renal tumors has been demonstrated with histotripsy. Initial studies assessing tumor metastases in this model did not reveal metastatic potentiation following mechanical homogenization by histotripsy. Methods for acoustic control of the target volume, informed by an enhanced understanding of cavitation, are being refined in tank studies for treatment of urinary calculi. Development of novel acoustic pulsing strategies, refinement of technology,
and enhanced understanding of cavitational bioeffects are driving pre-clinical translation of histotripsy for a number of applications. A
human pilot trial is underway to assess the safety of histotripsy as a treatment for benign prostatic hyperplasia.
11:40
4aBA11. Boiling histotripsy of the kidney: Preliminary studies and predictors of treatment effectiveness. George R. Schade, Adam
D. Maxwell (Dept. of Urology, Univ. of Washington, 5555 14th Ave. NW, Apt 342, Seattle, WA 98107, grschade@uw.edu), Tatiana
Khokhlova (Dept. of Gastroenterology, Univ. of Washington, Seattle, WA), Yak-Nam Wang, Oleg Sapozhnikov, Michael R. Bailey, and
Vera Khokhlova (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab, Univ. of Washington, Seattle, WA)
Boiling histotripsy (BH), an ultrasound technique to mechanically homogenize tissue, has been described in ex vivo liver and myocardium. As a noninvasive, non-thermal approach, BH may have advantages over clinically available thermal ablative technologies for renal masses. We aimed to characterize BH exposures in human and porcine ex vivo kidneys using a 7-element 1 MHz transducer (duty factor 1–3%, 5–10 ms pulses, 98 MPa in situ shock amplitude, 17 MPa peak negative). Lesions were successfully created in both species, demonstrating focally homogenized tissue above treatment thresholds (pulse number) with a stark transition between treated and untreated cells on histologic assessment. Human tissue generally required more pulses than porcine tissue to produce a similar effect. Similarly, kidneys displayed tissue-specific resistance to BH, with resistance increasing from cortex to medulla to the collecting system. Tissue properties that predict resistance to renal BH were evaluated, demonstrating a correlation between tissue collagen content and tissue resistance. Subsequently, the impact of intervening abdominal wall and ribs on lesion generation ex vivo was evaluated: “transabdominal” and “transcostal” treatment required approximately 5- and 20-fold greater acoustic power, respectively, to elicit boiling than with no intervening tissue. [Work supported by NIH T32DK007779, R01EB007643, K01EB015745, and NSBRI through NASA NCC 9-58.]
THURSDAY MORNING, 30 OCTOBER 2014
MARRIOTT 9/10, 8:30 A.M. TO 11:15 A.M.
Session 4aEA
Engineering Acoustics: Acoustic Transduction: Theory and Practice I
Richard D. Costley, Chair
Geotechnical and Structures Lab., U.S. Army Engineer Research & Development Center, 3909 Halls Ferry Rd,
Vicksburg, MS 39180
Contributed Papers
8:30
4aEA1. Historic transducers: Balanced armature receiver (BAR). Jont
B. Allen (ECE, Univ. of Illinois, Urbana-Champaign, Urbana, IL) and Noori
Kim (ECE, Univ. of Illinois, Urbana-Champaign, 1085 Baytowner df 11,
Champaign, IL 61822, nkim13@illinois.edu)
The oldest telephone receiver is the Balanced Armature Receiver (BAR) type, and it is still in use. The original technology goes back to the invention of the telephone receiver by A. G. Bell in 1876. Attraction and release of the armature are controlled by the current in the coils, which generates electromagnetic fields [Hunt (1954), Chapter 7; Beranek and Mellow (2014)].
As electrical current enters the terminals of the receiver, it generates an AC magnetic field whose direction is perpendicular to the current. Due to the interaction between the permanent (DC) magnetic field and the generated AC magnetic field, the armature (which sits within the core of the coil and the magnet) experiences a force. The basic principle for explaining this movement lies in the gyrator, a fifth circuit element introduced by Tellegen in 1948 alongside the inductor, capacitor, resistor, and transformer. This component represents the anti-reciprocal characteristic of the system. This study starts by comparing the BAR-type receiver to the moving-coil loudspeaker. We believe that this work will provide a fundamental and clear insight into the BAR system.
8:45
4aEA2. Radiation from wedges of a power law profile. Marcel C. Remillieux, Brian E. Anderson, Timothy J. Ulrich, and Pierre-Yves Le Bas (Geophys. Group (EES-17), Los Alamos National Lab., MS D446, Los Alamos,
NM 87545, mcr1@lanl.gov)
The large impedance contrast between bulk piezoelectric disks and air
does not allow for efficient coupling of sound radiation from the piezoelectric into air. Here, we present the idea of using wedges of power law profiles
to more efficiently radiate sound into air. Wedges of power law profiles
have been used to provide absorption of vibrational energy in plates, but
their efficient radiation of sound into air has not been demonstrated. We
present numerical modeling and experimental results to demonstrate the
concept. The wedge shape provides a gradual impedance contrast as the
wave travels down the tapering of the wedge, while the wave speed also
continually slows down. For an ideal wedge that tapers down to zero thickness, the waves become trapped at the tip and the vibrational energy can
only radiate into the surrounding air. [This work was supported by the Laboratory Directed Research and Development (LDRD) program at Los Alamos National Laboratory.]
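The trapping claim can be motivated by the standard acoustic-black-hole argument for flexural waves (phase speed scaling as the square root of local plate thickness), sketched numerically below. The wedge parameters `c0`, `h0`, `eps`, `m`, and `L` are hypothetical values, not the authors' geometry:

```python
import math

# Acoustic-black-hole sketch (assumed values, not the authors' model):
# flexural phase velocity in a thin plate scales as c(x) ~ c0*sqrt(h(x)/h0),
# so for a power-law wedge h(x) = eps * x**m the travel time from x = L
# down to x = a, T(a) = integral_a^L dx / c(x), grows without bound as
# a -> 0 whenever m >= 2: the wave never "reaches" an ideal zero-thickness
# tip, so its energy is trapped there.
c0, h0, eps, m, L = 100.0, 1e-3, 1e-3, 2.0, 0.1   # hypothetical SI values

def phase_speed(x):
    """Local flexural phase speed c(x) for thickness h(x) = eps * x**m."""
    return c0 * math.sqrt(eps * x**m / h0)

def travel_time(a, n=100000):
    """Numerical integral_a^L dx / c(x) by the midpoint rule."""
    dx = (L - a) / n
    return sum(dx / phase_speed(a + (i + 0.5) * dx) for i in range(n))

# Travel time keeps growing as the start point approaches the tip:
tip_times = [travel_time(10.0 ** (-k)) for k in (2, 3, 4)]
```

For m = 2 the integral is logarithmic in the lower limit, so each decade closer to the tip adds the same increment of travel time; a real wedge truncated at finite thickness instead needs damping at the tip, which is why these profiles are usually studied as absorbers.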
9:00
4aEA3. The self-sustained oscillator as an underwater low frequency
projector: Progress report. Andrew A. Acquaviva and Stephen C. Thompson (Graduate Program in Acoust., The Penn State Univ., c/o Steve Thompson, N-249 Millennium Sci. Complex, University Park, PA, acquavaa@
gmail.com)
Wind musical instruments are examples of pressure-operated self-sustained oscillators that act as acoustic projectors. Recent studies have shown
that this type of self-sustained oscillator can also be implemented underwater as a low frequency projector. However, the results of the early feasibility studies were complicated by the existence of cavitation in the high
pressure region of the resonator. A redesign has eliminated the cavitation
and allows better comparison with analytical calculations.
9:15
4aEA4. Design and testing of an underwater acoustic Fresnel zone plate
diffractive lens. David C. Calvo, Abel L. Thangawng, Michael Nicholas,
and Christopher N. Layman, Jr. (Acoust. Div., Naval Res. Lab., 4555 Overlook Ave., SW, Washington, DC 20375, david.calvo@nrl.navy.mil)
Fresnel zone plate (FZP) lenses offer a means of focusing sound based
on diffraction in cases where the thickness of conventional lenses may be
impractical. A binary-profile FZP for underwater use featuring a center
acoustically opaque disk with alternating transparent and opaque annular
regions was fabricated to operate nominally at 200 kHz. The lens had an overall diameter of 13 in. and consisted of 13 opaque annuli. The opaque regions were 3 mm thick and made from silicone rubber with a high concentration of gas voids. These regions were bonded to an acoustically transparent silicone rubber substrate film that was 1 mm thick. The FZP was situated in a frame and tested in a 5 × 4 × 4 ft ultrasonic tank using a piston source for insonification. The measured focal distance for normal incidence of 12.5 cm agreed with finite-element predictions taking into account the wavefront curvature of the incident field, which had to be included given the finite dimensions of the tank. The focal gain was measured to be 20 dB. The
radius to the first null at the focal plane was approximately 4 mm, which
agreed with theoretical resolution predictions. [Work sponsored by the
Office of Naval Research.]
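As a consistency sketch (not the authors' design calculation), the zone radii and focal-spot size quoted above follow from textbook zone-plate formulas; the sound speed of 1500 m/s is an assumed nominal value for water:

```python
import math

# Assumed nominal parameters; F and D are the measured/stated values above.
c = 1500.0            # assumed sound speed in water, m/s
f0 = 200e3            # nominal operating frequency, Hz
F = 0.125             # measured focal distance, m
D = 13 * 0.0254       # lens diameter (13 in.), m
lam = c / f0          # wavelength, about 7.5 mm

def zone_radius(n):
    """Textbook Fresnel-zone radius r_n = sqrt(n*lam*F + (n*lam/2)**2)."""
    return math.sqrt(n * lam * F + (n * lam / 2.0) ** 2)

# Diffraction-limited first-null radius at the focal plane, ~1.22*lam*F/D,
# for comparison with the measured ~4 mm:
r_null = 1.22 * lam * F / D
```

This gives a first zone radius of about 3.1 cm and a first-null radius of about 3.5 mm, consistent with the measured ~4 mm and the stated agreement with theoretical resolution predictions.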
9:30
4aEA5. Acoustical transduction in two-dimensional piezoelectric array.
Ola Nusierat, Lucien Cremaldi (Phys. and Astronomy, Univ. of MS, Oxford,
MS), and Igor Ostrovskii (Phys. and Astronomy, Univ. of MS, Lewis Hall,
Rm. 108, University, MS 38677, iostrov@phy.olemiss.edu)
The acoustical transduction in an array of ferroelectric domains with
alternating piezoelectric coefficients is characterized by multi-frequency
resonances, which occur at the boundary of the acoustic Brillouin zone
2252
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
(ABZ). The resonances correspond to two successive domain excitations in
the first and second ABZ correspondingly, where the speed of ultrasound is
somewhat different. An important parameter for acoustical transduction is
the electric impedance Z. The results of the theoretical and experimental
investigations of Z in a periodically poled LiNbO3 are presented. The magnitude and phase of Z depend on the array parameters including domain resonance frequency and domain number; Z of arrays consisting of up to 88
0.45-mm-long domains in the zx-cut crystal are investigated. Strong changes in Z-magnitude and phase are observed in the range of 3–4 MHz. The two resonance zones lie within 3.33 ± 0.05 MHz and 3.67 ± 0.05 MHz. The change in domain number influences Z and its phase. By varying
the number of inversely poled domains and resonance frequencies, one can
significantly control/change the electrical impedance of the multidomain
array. The findings may be used for developing new acoustic sensors and
transducers.
9:45
4aEA6. A non-conventional acoustic transduction method using fluidic
laminar proportional amplifiers. Michael V. Scanlon (RDRL-SES-P,
Army Res. Lab., 2800 Powder Mill Rd., Adelphi, MD 20783-1197, michael.
v.scanlon2.civ@mail.mil)
Pressure sensing using fluidic laminar proportional amplifiers (LPAs)
was developed at Harry Diamond Laboratories in the late 1970s and was
applied to acoustic detection and amplification. LPAs use a partially constrained laminar jet of low-pressure air as the sensing medium, which is
deflected by the incoming acoustic signal. LPA geometries enable pressure
gain by focusing incoming pressure fluctuations at the jet’s nozzle exit,
thereby applying leverage to create jet deflection over its short transit toward a splitter. With no input signal, the jet is not deflected and downstream
pressures on both sides of the splitter are equal. A differential input signal
of magnitude one, referenced to ambient pressure balancing the opposite side of the jet, produces a differential output signal of magnitude ten. This amplified signal can be differentially fed into the inputs on both sides of the next LPA for additional gain. By cascading LPAs together, very small signals can be amplified by a large amount. Although originally developed as DC pressure amplifiers, LPAs have exceptional infrasound response and excellent sensitivity, since there is no mass or stiffness associated with a diaphragm and the jet is matched to the environment. Standard microphones at the output ports can take advantage of the increased sensitivity and gain.
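A minimal sketch of the cascade arithmetic implied above, assuming ideal stages that each deliver the differential gain of ten described in the text:

```python
import math

STAGE_GAIN = 10.0   # differential pressure gain per LPA stage (from the text)

def cascade_gain(n_stages, stage_gain=STAGE_GAIN):
    """Ideal small-signal gain of n cascaded LPA stages (no loss, no noise)."""
    return stage_gain ** n_stages

# Three ideal stages: 10 * 10 * 10 = 1000x, i.e. 60 dB of pressure gain.
g3 = cascade_gain(3)
g3_db = 20.0 * math.log10(g3)
```

Real cascades fall short of this ideal because each stage adds jet noise and bandwidth limits, but the exponential growth is why a few stages suffice for very small signals.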
10:00–10:15 Break
10:15
4aEA7. Investigation of piezoelectric bimorph bender transducers to
generate and receive shear waves. Andrew R. McNeese, Kevin M. Lee,
Megan S. Ballard, Thomas G. Muir (Appl. Res. Labs., The Univ. of Texas
at Austin, 10000 Burnet Rd., Austin, TX 78758, mcneese@arlut.utexas.
edu), and R. Daniel Costley (U.S. Army Engineer Res. and Development
Ctr., Vicksburg, MS)
This paper further demonstrates the ability of piezoceramic bimorph
bender elements to preferentially generate and receive near-surface shear
waves for in situ sediment characterization measurements, in terrestrial as
well as marine clay soils. The bimorph elements are housed in probe transducers that can be manually inserted into the sediment and are based on the
work of Shirley [J. Acoust. Soc. Am. 63(5), 1643–1645 (1978)] and of
Richardson et al. [Geo.—Marine Letts. 196–203 (1997)]. The transducers
can discretely generate and receive horizontally polarized shear waves,
within their bimorph directivity patterns. The use of multiple probes allows
one to measure the shear wave velocity and attenuation parameters in the
sediment of interest. Measured shear wave data on a hard clay terrestrial
soil, as well as on soft marine sediments, are presented. These parameters
along with density and compressional wave velocity define the elastic moduli (Poisson’s ratio, shear modulus, and bulk modulus) of the sediment,
which are of interest in various areas of geophysics, underwater acoustics,
and geotechnical engineering. Discussion will focus on use of the probes in
both terrestrial and marine sediment environments. [Work supported by
ARL:UT Austin.]
10:30
4aEA8. Multi-mode seismic source for underground application. Abderrhamane Ounadjela (Sonic, Schlumberger, 2-2-1 Fuchinobe, Sagamihara, Kanagawa 252-0206, Japan, ounadjela1@slb.com), Henri Pierre Valero, Jean-Christophe Auchere (Sonic, Schlumberger, Sagamihara-Shi, Japan), and Olivier Moyal (Sonic, Schlumberger, Clamart, France)
A new multi-mode downhole acoustic source has been designed to fulfill the requirements of the oil business. Three acoustic modes of radiation, i.e., monopole, dipole, and quadrupole modes, are considered to assess the properties of the oil reservoir. Because of the geometry of the well, it is challenging to design an efficient, effective, and powerful device. This new source uses an apparatus to convert the axial motion of four motors distributed in azimuth into a radial one. To make this conversion effective, the transformation of axial motion into radial motion is performed by a rod rolling on a cone; this conversion minimizes frictional losses and is very effective. The conversion apparatus is also exploited to match the acoustic impedance of the surrounding medium. This new design is described in this paper, along with intensive modeling that allowed optimization of this multi-mode source device. Experimental data are in good agreement with numerical modeling.
10:45
4aEA9. Sound characteristics of the caxirola when used by different
uninstructed users. Talita Pozzer and Stephan Paul (UFSM, Tuiuti, 925.
Apto 21, Santa Maria, RS 97015661, Brazil, talita.pozzer@eac.ufsm.br)
While originally developed to be the official musical instrument of the 2014 Soccer World Cup, the caxirola was banned from the stadiums because it could be thrown onto the field by angry spectators. Nevertheless, outside the stadiums the caxirola was still used, so an investigation into the acoustics of the caxirola, already begun, was concluded. At a previous ASA meeting we presented the sound power level (SWL) of the caxirola for only the two most typical ways of use. Now we present data on the sound pressure level close to the user's ears (SPLcue) and the SWL, both measured in a reverberation room, from 30 subjects who used the caxirola according to their own understanding. It was found that the total SPLcue varies from 78 dB(A) up to 95 dB(A) and the global SWL of the caxirola varies from 72 dB to 84 dB. The distribution is not normal; the SWL has a median of 79 dB(A), which is very similar to the result obtained in the previous study. The SPLcue and the SPL measured for calculating the SWL differ, probably due to the variation in distance between the source and the user's ear, which sometimes places the ear in the near field.
11:00
4aEA10. A micro-machined hydrophone using the piezoelectric-gate-of-field-effect-transistor for low frequency sounds. Min Sung, Kumjae Shin (Dept. of Mech. Eng., Pohang Univ. of Sci. and Technology (POSTECH), PIRO 416, POSTECH, San31, Hyoja-dong, Nam-gu, Pohang, Kyungbuk 790784, South Korea, smmath2@postech.ac.kr), Cheeyoung Joh (Underwater Sensor Lab., Agency for Defense Development, Changwon, Kyungnam, South Korea), and Wonkyu Moon (Dept. of Mech. Eng., Pohang Univ. of Sci. and Technology (POSTECH), Pohang, Kyungbuk, South Korea)
A micro-sized piezoelectric body for a miniaturized hydrophone is known to have limitations at low frequencies due to its high impedance and low sensitivity. In this study, a new transduction mechanism named the PiGoFET (piezoelectric gate of field-effect transistor) is devised so that its application to a miniaturized hydrophone can overcome the limits of the micro-sized piezoelectric body. PiGoFET transduction is realized by combining a field-effect transistor with a small piezoelectric body on its gate. A micro-machined silicon membrane of 2 mm diameter was connected to the small piezoelectric body so that acoustic pressure can apply appropriate forces on the body on the FET gate. The electric field from the deformed piezoelectric body modulates the channel current of the FET directly; thus, the transduction effectively transfers the sound pressure to the source–drain current even at very low frequencies with a micro-sized piezoelectric body. Under the described concept, a hydrophone was fabricated by micro-machining and calibrated using the comparison method at low frequencies to investigate its performance. [Research funded by MRCnD.]
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA C/D, 8:00 A.M. TO 10:20 A.M.
Session 4aPAa
Physical Acoustics, Underwater Acoustics, Signal Processing in Acoustics, Structural Acoustics and
Vibration, and Noise: Borehole Acoustic Logging and Micro-Seismics for Hydrocarbon Reservoir
Characterization
Said Assous, Cochair
Geoscience, Weatherford, East Leake, Loughborough LE126JX, United Kingdom
David Eccles, Cochair
Weatherford, Geoscience, Loughborough, United Kingdom
Chair’s Introduction—8:00
Invited Papers
8:05
4aPAa1. Generalized collar waves and their characteristics. Xiuming Wang, Xiao He, and Xiumei Zhang (State Key Lab. of
Acoust., Inst. of Acoust., 21 North 4th Ring Rd., Haidian District, Beijing 100190, China, wangxm@mail.ioa.ac.cn)
A good acoustic logging-while-drilling (ALWD) tool is difficult to design because of collar waves that propagate along the tool; such acoustic waves always exist in ALWD. The collar wave arrivals can strongly interfere with formation compressional waves when picking wave slowness. In recent years, considerable research has been devoted to suppressing collar waves in order to accurately pick P- and S-wave slowness, yet the accuracy of the obtained P- and S-wave slowness is still a problem. In this work, numerical and physical experiments are conducted to tackle collar wave propagation problems. The physics of collar wave propagation is elaborated, and a generalized collar wave concept is proposed. It is shown that collar waves are much more complex than commonly assumed: they consist of two kinds, direct collar waves and indirect collar waves. Both make ALWD data difficult to process for picking formation wave slowness. Because of the drill-string structure, the complicated collar waves cannot be effectively suppressed with a groove isolator alone.
8:20
4aPAa2. Characterizing the nonlinear interaction of S (shear) and P (longitudinal) waves in reservoir rocks. Thomas L. Szabo
(Biomedical Dept., Boston Univ., 44 Cummington Mall, Boston, MA 02215, tlszabo@bu.edu), Thomas Gallot (Sci. Inst., Univ. of the
Republic, Montevideo, Uruguay), Alison Malcolm, Stephen Brown, Dan Burns, and Michael Fehler (Earth Resources Lab., Massachusetts Inst. of Technol., Cambridge, MA)
The nonlinear elastic response of rocks is known to be caused by internal microstructure, particularly cracks and fluids. In order to
quantify this nonlinearity, this paper presents a method for characterizing the interaction of two nonresonant traveling waves: a low-amplitude P-wave probe and a high-amplitude lower frequency S-wave pump with their particle motions aligned. We measure changes in
the arrival time of the P-wave probe as it passes through the perturbation created by a traveling S-wave pump in a sample of room-dry
Berea sandstone (15 × 15 × 3 cm). The velocity measurements are made at times too short for the shear wave to reflect back from the bottom of the sample and interfere with the measurement. The S-wave pump induces strains of 0.3–2.2 × 10⁻⁶, and we observe changes in the P-wave probe arrival time of up to 100 ns, corresponding to a change in elastic properties of 0.2%. By changing the relative time delay between the probe and pump signals, we record the measured changes in travel time of the P-wave probe to recover the nonlinear parameters β ~ 10² and δ ~ 10⁹ at room temperature. This work significantly broadens the applicability of dynamic acousto-elastic testing by utilizing both S and P traveling waves.
8:35
4aPAa3. A case study of multipole acoustic logging in heavy oil sand reservoirs. Peng Liu, Wenxiao Qiao, Xiaohua Che, Ruijia
Wang, Xiaodong Ju, and Junqiang Lu (State Key Lab. of Petroleum Resources and Prospecting, China Univ. of Petroleum (Beijing),
No. 18, Fuxue Rd., Changping District, Beijing, Beijing 102249, China, liupeng198712@126.com)
The multipole acoustic logging tool (MPAL) was tested in the heavy oil sand reservoirs of Canada. Compared with nearby shales, the P-wave slowness of heavy oil sands does not change appreciably, with a value of about 125 μs/ft, while the dipole shear slowness decreases significantly to 275 μs/ft. The heavy oil sands have a Vp/Vs value of less than 2.4. The slowness and amplitude of the dipole shear wave are good lithology discriminators, differing greatly between heavy oil sands and shales. The heavy oil sand reservoirs are anisotropic; the crossover phenomenon in the fast and slow dipole shear wave dispersion curves indicates that the anisotropy is induced by unbalanced horizontal stress in the region.
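The quoted Vp/Vs bound follows directly from the two slownesses, since the velocity ratio is the inverse ratio of the slownesses and the units cancel; a quick consistency check:

```python
# Slowness-to-velocity-ratio check for the heavy oil sands (values from
# the abstract above; units cancel in the ratio, so us/ft is fine as-is).
p_slowness_us_ft = 125.0   # compressional slowness
s_slowness_us_ft = 275.0   # dipole shear slowness

# Velocity ratio Vp/Vs is the inverse ratio of slownesses: 275/125 = 2.2.
vp_vs = s_slowness_us_ft / p_slowness_us_ft
```

The result, 2.2, is indeed below the 2.4 threshold cited for these sands.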
8:50
4aPAa4. Borehole sonic imaging applications. Jennifer Market (Weatherford, 19819 Hampton Wood Dr, Tomball, TX 77377, jennifer.market@weatherford.com)
The advent of azimuthal logging-while-drilling (LWD) sonic tools has opened up a wealth of near-real-time applications. Azimuthal
images of compressional and shear velocities allow for geosteering, fracture identification, stress profiling, production enhancement, and
3D wellbore stability analysis. Combining borehole sonic images with electrical, gamma ray, and density images yields a detailed picture of the near- and far-wellbore nature of the stress field and resultant fracturing. A brief review of the physics of azimuthal sonic logging will be presented, paying particular attention to azimuthal resolution and depth of investigation. Examples of combined
interpretations of sonic, density, and electrical images will be shown to illustrate fracture characterization, unconventional reservoir
completion planning, and geosteering. Finally, recommendations for the optimized acquisition of borehole sonic images will be
discussed.
Contributed Papers
9:05
4aPAa5. Numerical simulations of an electromagnetic actuator in a low-frequency range for dipole acoustic wave logging. Yinqiu Zhou, Penglai Xin, and Xiuming Wang (Inst. of Acoust., Chinese Acad. of Sci., 21 North 4th Ring Rd., Haidian District, Beijing 100190, China, zhouyinqiu@mail.ioa.ac.cn)
In dipole acoustic logging, transducers are required to work in a low frequency range, such as 0.5–5 kHz, to measure shear wave velocities so as to accurately analyze the anisotropy parameters of formations. In this paper, an electromagnetic actuator is designed for more effective low-frequency excitation than conventional piezoelectric bender-bar transducers. A numerical model has been set up to simulate the generation of flexural waves by electromagnetic actuators. The finite element method (FEM) has been applied to simulating the radiation modes and harmonic responses of the actuator in a fluid, such as air or water. In the frequency range of 0–5 kHz, the first ten vibration modes are simulated and analyzed. Simulations of the 3-D harmonic responses of the sound field, including the deformation, acoustic pressure, and directivity pattern, have been conducted to evaluate the radiation performance. From the simulation results, it is concluded that the second asymmetric mode at 670 Hz can be excited more easily than the others; this oscillating-vibration mode is well suited to a dipole source. The frequency response curve is broad and flat, and the electromagnetic actuator is well suited to generating wideband signals in the required low frequency range, especially below 1 kHz.
9:20
4aPAa6. Phase moveout method for extracting flexural mode dispersion and borehole properties. Said Assous, David Eccles, and Peter Elkington (GeoSci., Weatherford, East Leake, Loughborough, United Kingdom, david.eccles@eu.weatherford.com)
Among the dispersive modes encountered in acoustic well logging applications is the flexural mode associated with dipole source excitations, whose low frequency asymptote provides the only reliable means of determining shear velocity in slow rock formations. We have developed a phase moveout method for extracting flexural mode dispersion curves with excellent velocity resolution. The method is entirely data-driven, but in combination with a forward model able to generate theoretical dispersion curves, we are able to address the inverse problem and extract formation and borehole properties in addition to the rock shear velocity. The concept is demonstrated using data from isotropic and anisotropic formations.
9:35
4aPAa7. Borehole acoustic array processing methods: A review. Said Assous and Peter Elkington (GeoSci., Weatherford, East Leake, Loughborough LE126JX, United Kingdom, said.assous@eu.weatherford.com)
In this talk, we review the different borehole acoustic array processing methods and compare their effectiveness using simulated and real waveform examples, starting from the slowness-time coherence (STC) method and the weighted spectral semblance (WSS) method, and covering many other common dispersive processing approaches, including Prony's method, maximum-entropy (ARMA) methods, predictive array processing, and the matrix pencil technique. We also discuss methods based on phase minimization or coherency maximization and other phase-based approaches.
9:50
4aPAa8. Classifying and removing monopole mode propagating through drill collar. Naoki Sakiyama (Schlumberger K.K., 2-18-3-406, Bessho, Hachio-ji 192-0363, Japan, NSakiyama@slb.com), Alain Dumont (Schlumberger K.K., Kawasaki, Japan), Wataru Izuhara (Schlumberger K.K., Inagi, Japan), Hiroaki Yamamoto (Schlumberger K.K., Kamakura, Japan), Makito Katayama (Schlumberger K.K., Yamato, Japan), and Takeshi Fukushima (Schlumberger K.K., Hachio-ji, Japan)
Understanding the characteristics of the acoustic wave propagating through drill collars is important for formation evaluation with logging-while-drilling (LWD) sonic tools. Knowing the frequency-slowness behavior of the different types of wave propagating through the collar, we can minimize the unwanted collar arrivals by processing and robustly identify formation compressional and shear arrivals. Extensional modes of the steel drill collar are generally dispersive and range from 180 μs/m to 400 μs/m depending on the frequency band. The fundamental torsional mode of the drill collar is nondispersive, but its slowness is sensitive to the geometry of the drill collar. Depending on the geometry and shear modulus of the material, the slowness of the torsional mode can be greater than 330 μs/m. To identify the slowness of the formation arrivals, these different slownesses of the waves propagating through the collar need to be identified separately from those of the waves propagating through formations. Examining various types of acoustic wave propagating through a drill collar, we determined that the collar waves can be properly muted by processing applied to the semblance of waveforms acquired with LWD sonic tools.
10:05–10:20 Panel Discussion
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA C/D, 10:30 A.M. TO 12:00 NOON
Session 4aPAb
Physical Acoustics: Topics in Physical Acoustics I
Josh R. Gladden, Cochair
Physics & NCPA, University of Mississippi, 108 Lewis Hall, University, MS 38677
William Slaton, Cochair
Physics & Astronomy, The University of Central Arkansas, 201 Donaghey Ave, Conway, AR 72034
Contributed Papers
10:30
4aPAb1. Faraday waves on a two-dimensional periodic substrate. C. T.
Maki (Phys., Hampden-Sydney College, 1 College Rd., Hampden-Sydney,
VA 23943, MakiC15@hsc.edu), Peter Rodriguez, Purity Dele-Oni, PeiChuan Fu, and R. Glynn Holt (Mech. Eng., Boston Univ., Boston, MA)
A vertically oscillating body of liquid will exhibit Faraday waves when
forced above a threshold interface acceleration amplitude. The patterns and
their wavelengths at driving frequencies of order 100 Hz are well known in
the literature. However, wave interactions influenced by periodic structures
on a driving substrate are less well-studied. We report results of a Faraday
experiment with a specific periodically structured substrate in the strong
coupling regime where the liquid depth is of the order of the structure
height. We observe patterns and pattern wavelengths versus driving frequency over the range of 50–350 Hz. These observations may be of interest
in situations where Faraday waves appear or are applied.
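For a sense of scale, the pattern wavelength on a deep, flat layer can be estimated from the gravity-capillary dispersion relation with the subharmonic (half-drive-frequency) Faraday response. The water properties below are assumed, and the shallow structured layer of the experiment will shift these values:

```python
import math

# Assumed clean-water properties (not taken from the abstract):
g = 9.81        # gravity, m/s^2
sigma = 0.072   # surface tension, N/m
rho = 1000.0    # density, kg/m^3

def faraday_wavelength(drive_hz):
    """Deep-water gravity-capillary estimate of the Faraday wavelength.

    Faraday waves respond subharmonically at half the driving frequency;
    solve omega**2 = g*k + (sigma/rho)*k**3 for k by bisection.
    """
    omega = 2.0 * math.pi * (drive_hz / 2.0)
    lo, hi = 1e-3, 1e6   # bracketing wavenumbers, 1/m
    for _ in range(200):
        k = 0.5 * (lo + hi)
        if g * k + (sigma / rho) * k ** 3 < omega ** 2:
            lo = k
        else:
            hi = k
    return 2.0 * math.pi / k

lam_100 = faraday_wavelength(100.0)   # a few millimeters for clean water
```

At 100 Hz driving this gives a wavelength near 6 mm, which sets the scale against which a structured substrate of comparable period can compete in the strong-coupling regime studied here.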
10:45
4aPAb2. Substrate interaction in ranged photoacoustic spectroscopy of
layered samples. Logan S. Marcus, Ellen L. Holthoff, and Paul M. Pellegrino (U.S. Army Res. Lab., 2800 Powder Mill Rd., RDRL-SEE-E, Adelphi,
MD 20783, loganmarcus@gmail.com)
Photoacoustic spectroscopy (PAS) is a useful monitoring technique that
is well suited for ranged detection of condensed materials. Ranged PAS has
been demonstrated using an interferometer as the sensor. Interferometric
measurement of photoacoustic phenomena focuses on the measurement of
changes in path length of a probe laser beam. That probe beam measures,
without discrimination, the acoustic, thermal, and physical changes to the
excited sample and the layer of gas adjacent to the surface of the solid sample. For layered samples, the photoacoustic response of the system is influenced by the physical properties of the substrate as well as the sample under
investigation. We will discuss the effect that substrate absorption of the excitation source has on the spectra collected in PAS. We also discuss the role
that the vibrational modes of the substrate have in photoacoustic signal
generation.
11:00
4aPAb3. Difference frequency scattered waves from nonlinear interactions of a solid sphere. Chrisna Nguon (Univ. of Massachusetts Lowell, 63
Hemlock St., Dracut, MA 01826, chrisna_Nguon@student.uml.edu), Max
Denis (Mayo Clinic, Rochester, MN), Kavitha Chandra, and Charles
Thompson (Univ. of Massachusetts Lowell, Lowell, MA)
In this work, the generation of difference frequency waves arising from
the interaction of dual-incident beams on a solid sphere is considered. The
high-frequency incident beams induce a radiation force on the fluid-saturated sphere, causing the scatterer to vibrate. An analysis of the relative contributions
of the difference frequency sound and the radiation force pressure is of
particular interest. The scattered pressures due to the two primary waves are
obtained as solutions to the Kirchhoff–Helmholtz integral equation for the
fluid–solid boundary. Due to the contrasting material properties between the
host fluid and solid sphere, high-order approximations are used to evaluate
the integral equation.
2256
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
11:15
4aPAb4. Effect of surface irregularities on the stability of Stokes boundary. Katherine Aho, Jenny Au, Charles Thompson, and Kavitha Chandra
(Elec. and Comput. Eng., Univ. of Massachusetts Lowell, 1 University Ave,
Lowell, MA 01854, katherine_aho@student.uml.edu)
In this work, we examine the impact that wall surface roughness has
on the stability of an oscillatory Stokes boundary layer. The temporal
growth of three-dimensional disturbances excited by wall height variations
is of particular interest. Floquet theory is used to identify the linearly unstable
region in parameter space. It is shown that disturbances become unstable at a
critical value of the Taylor number for a given surface curvature. The case
of oscillatory flow in a two-dimensional rigid walled channel is considered
in detail.
11:30
4aPAb5. Novel optoacoustic source for arbitrarily shaped acoustic
wavefronts. Weiwei Chan, Yuanxiang Yang, Manish Arora, and Claus-Dieter Ohl (Phys. and Appl. Phys., School of Physical and Mathematical Sci.,
Nanyang Technolog. Univ., 21 Nanyang Link, Singapore 637371, Singapore, chan0700@e.ntu.edu.sg)
We present a novel approach to generate arbitrary acoustic wavefronts
using the optoacoustic effect on custom designed PDMS substrates. PDMS
blocks are cast into the desired shape with a 3D-printed mold and coated
with a layer of an optical absorber. An acoustic wavefront corresponding to the
geometry of the coated surface is generated by exposing this structure to a nanosecond laser pulse (Nd:YAG, λ = 532 nm). For a spherical shell design, pressure pulses of amplitude up to 6.1 bar peak-to-peak and frequency >30 MHz
could be generated. By utilizing other geometries, we focus the acoustic
waves from different sections of the transmitter onto a single focal point at
different time delays, thus permitting generation of a double-peak acoustic
pulse from a single laser pulse. Further modification of the structure permits
the design of multi-foci, multi-peak acoustic pulses from a single optical
pulse.
11:45
4aPAb6. Accuracy of local Kramers–Kronig relations between material
damping and dynamic elastic properties. Tamas Pritz (Budapest Univ. of
Technol. and Economics, Apostol u 23, Budapest 1025, Hungary, tampri@eik.bme.hu)
The local Kramers–Kronig (KK) relations are the differential form
approximations of the general KK integral equations linking the damping
properties (loss modulus or loss factor) and dynamic modulus of elasticity
(shear, bulk, etc.) of linear solid viscoelastic materials. The local KK
relations are not exact, and their accuracy is known to depend on the
rate of frequency variations of material dynamic properties. The accuracy of
the local KK relations is investigated in this paper under the assumption that
the frequency dependence of the loss modulus obeys a simple power law. It
is shown by analytic calculations that the accuracy of prediction of the local
KK relations is better than 10% if the exponent in the loss modulus-
frequency function is smaller than 0.35. This conclusion supports the result
of an earlier numerical study. Some experimental data verifying the theoretical results will be presented. The conclusions drawn in the paper can easily
be extended to acoustic wave propagation, namely to the accuracy of local
KK relations between attenuation and dispersion of phase velocity.
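For reference, the local KK relation being assessed is commonly written in the following differential form (a sketch of the standard approximation; the notation here is assumed rather than taken from the paper: M' is the dynamic modulus, M'' the loss modulus, and n the power-law exponent):

```latex
% Local (differential) Kramers--Kronig approximation: the loss modulus is
% proportional to the logarithmic frequency slope of the dynamic modulus.
M''(\omega) \;\approx\; \frac{\pi}{2}\,\frac{\mathrm{d}M'(\omega)}{\mathrm{d}\ln\omega}

% Power-law frequency dependence of the loss modulus assumed in the
% accuracy analysis; the prediction error stays below 10\% for n < 0.35.
M''(\omega) \;=\; M''(\omega_0)\,\Bigl(\frac{\omega}{\omega_0}\Bigr)^{n}
```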
THURSDAY MORNING, 30 OCTOBER 2014
MARRIOTT 1/2, 8:30 A.M. TO 12:00 NOON
Session 4aPP
Psychological and Physiological Acoustics: Physiological and Psychological Aspects of Central Auditory
Processing Dysfunction I
Frederick J. Gallun, Cochair
National Center for Rehabilitative Auditory Research, Portland VA Medical Center, 3710 SW US Veterans Hospital Rd.,
Portland, OR 97239
Adrian KC Lee, Cochair
Box 357988, University of Washington, Seattle, WA 98195
Chair’s Introduction—8:30
Invited Papers
8:35
4aPP1. Auditory processing disorder: Clinical and international perspective. David R. Moore (Commun. Sci. Res. Ctr., Cincinnati
Children’s Hospital, 240 Albert Sabin Way, Rm. S1.603, Cincinnati, OH 45229, david.moore2@cchmc.org)
APD may be considered a developmental, hearing, or neurological disorder, depending on etiology, but in all cases it is a listening
difficulty without an abnormality of pure tone sensitivity. It has been variously defined as a disorder of the central auditory system associated with impaired spatial hearing, auditory discrimination, temporal processing, and performance with competing or degraded sounds.
Clinical testing typically examines perception, intelligibility and ordering of both speech and non-speech sounds. While deficits in
higher-order cognitive, communicative, and language functions are excluded in some definitions, recent consensus accepts that these
functions may be inseparable from active listening. Some believe that APD in children is predominantly or exclusively cognitive in origin, while others insist that true APD has its origins within the auditory brainstem. However, children or their carers presenting at clinics
typically complain of difficulty hearing speech in noise, remembering or understanding instructions, and attending to sounds. APD usually occurs alongside other developmental disorders (e.g., language impairment) and may be indistinguishable from them. Consequently,
clinicians are uncertain how to diagnose or manage APD; both test procedures and interventions vary widely, even within a single clinic.
Effective remediation primarily consists of improving the listening environment and providing communication devices.
9:05
4aPP2. Caught in the middle: The central problem in diagnosing auditory-processing disorders in adults. Larry E. Humes (Indiana
Univ., Dept. Speech & Hearing Sci., Bloomington, IN 47405-7002, humes@indiana.edu)
It is challenging to establish the existence of higher-level auditory-processing disorders in military veterans with mild Traumatic
Brain Injury (TBI). Yet, mild TBI appears to be a highly prevalent disorder among U.S. veterans returning from recent military conflicts
in Iraq and Afghanistan. Recent prevalence estimates for mild TBI among these military veterans, for example, suggest a rate of 7–
9% [Carlson, K.F. et al. (2011), “Prevalence, assessment and treatment of mild Traumatic Brain Injury and Posttraumatic Stress Disorder:
a systematic review of the evidence,” J. Head Trauma Rehabil., 26, 103–115]. A key factor in diagnosing central components for auditory-processing disorders may lie in the potentially confounding influences of concomitant peripheral auditory and cognitive dysfunction
in many veterans with TBI. This situation is strikingly similar to that observed in many older adults. Many older adults, for example, exhibit peripheral hearing loss and typical cognitive-processing deficits often associated with healthy aging. These concomitant problems
make the diagnosis of centrally located auditory-processing problems in older adults extremely difficult. After building a case for many
similarities between young veterans with mild TBI and older adults with presbycusis, this presentation will focus on several of the lessons learned from research with older adults. [Work supported, in part, by research grant R01 AG008293 from the NIA.]
9:35
4aPP3. Lack of a coherent theory limits the diagnosis and prognostic value of the central auditory processing disorder. Anthony
T. Cacace (Commun. Sci. & Disord., Wayne State Univ., 207 Rackham, 60 Farnsworth, Detroit, MI 48202, cacacea@wayne.edu) and
Dennis J. McFarland (Lab. of Neural Injury and Repair, Wadsworth Labs, NYS Health Dept., Albany, NY)
Despite a history spanning almost six decades, CAPD, defined as a modality-specific perceptual dysfunction not due to peripheral hearing loss, still
remains controversial and requires further development if it is to become a useful clinical entity. Early attempts to quantify the effects of
central auditory nervous system lesions based on the use of filtered-speech material, dichotic presentation of digits, and various nonspeech tests have generally been abandoned due to a lack of success. Site-of-lesion approaches have given way to functional considerations whereby attempts to understand underlying processes, improve specificity of diagnosis, and delineate modality-specific (auditory)
disorders from “non-specific supramodal dysfunctions” like those related to attention and memory have begun to fill the gap. Furthermore, because previous work was generally limited to auditory tasks alone, functional dissociations could not be established; consequently, the need to show the modality-specific nature of the observed deficits has been compromised, further limiting progress in this
area. When viewed as a whole, including information from consensus conferences, organizational guidelines, representative studies,
etc., what is conspicuously absent is a well-defined theory that permeates all areas of this domain, including the neural substrates of auditory processing. We will discuss the implications of this shortcoming and propose ways to move forward in a meaningful manner.
10:05–10:30 Break
10:30
4aPP4. Cochlear synaptopathy and neurodegeneration in noise and aging: Peripheral contributions to auditory dysfunction with
normal thresholds. Sharon G. Kujawa (Dept. of Otology and Laryngology, Harvard Med. School and Massachusetts Eye and Ear Infirmary, Massachusetts Eye and Ear Infirmary, 243 Charles St., Boston, MA 02114, sharon_kujawa@meei.harvard.edu)
Declining auditory performance in listeners with normal audiometric thresholds is often attributed to changes in central circuits,
based on the widespread view that normal thresholds indicate a lack of cochlear involvement. Recent work in animal models of noise
and aging, however, demonstrates that there can be functionally important loss of sensory inner hair cell–afferent fiber communications
that go undetected by conventional threshold metrics. We have described a progressive cochlear synaptopathy that leads to proportional
neural loss with age, well before loss of hair cells or age-related changes in threshold sensitivity. Similar synaptic and neural losses occur
after noise, even when thresholds return to normal. Since the IHC-afferent fiber synapse is the primary conduit for information to flow
from the cochlea to the brain, and since each of these cochlear nerve fibers makes synaptic contact with one inner hair cell only, these
losses should have significant perceptual consequences, even if thresholds are preserved. The prevalence of such pathology in the human
is likely to be high, underscoring the importance of considering peripheral status when studying central contributions to auditory performance declines. [Research supported by R01 DC 008577 and P30 DC 05029.]
11:00
4aPP5. Quantifying supra-threshold sensory deficits in listeners with normal hearing thresholds. Barbara Shinn-Cunningham (Biomedical Eng., Boston Univ., 677 Beacon St., Boston, MA 02215-3201, shinn@bu.edu), Hari Bharadwaj, Inyong Choi, Hannah Goldberg
(Ctr. for Computational Neurosci. and Neural Technol., Boston Univ., Boston, MA), Salwa Masud, and Golbarg Mehraei (Speech and
Hearing BioSci. and Technol., Harvard/MIT, Boston, MA)
There is growing suspicion that some listeners with normal-hearing thresholds may be suffering from a specific form of sensory deficit—a loss of afferent auditory nerve fibers. We believe such deficits manifest behaviorally in conditions where perception depends
upon precise spectro-temporal coding of supra-threshold sound. In our lab, we find striking inter-subject differences in perceptual ability
even among listeners with normal hearing thresholds who have no complaints of hearing difficulty and have never sought clinical intervention. Among such ordinary listeners, those who perform relatively poorly on selective attention tasks (requiring the listener to focus
on one sound stream presented amidst competing sound streams) also exhibit relatively weak temporal coding in subcortical responses
and have poor thresholds for detecting fine temporal cues in supra-threshold sound. Here, we review the evidence for supra-threshold
hearing deficits and describe measures that reveal this sensory loss. Given our findings in ordinary adult listeners, it stands to reason that
at least a portion of the listeners who are diagnosed with central auditory processing dysfunction may suffer from similar sensory deficits, explaining why they have trouble communicating in many everyday social settings.
11:30
4aPP6. Neural correlates of central auditory processing deficits in the auditory midbrain in an animal model of age-related hearing loss. Joseph P. Walton (Commun. Sci. and Disord., Univ. of South Florida, 4202 Fowler Ave., PCD 1017, Tampa, FL 33620, jwalton1@usf.edu)
Age-related hearing loss (ARHL), clinically referred to as presbycusis, affects over 10 million Americans and is considered to be the
most common communication disorder in the elderly. Presbycusis can be associated with at least two underlying etiologies, a decline in
cochlear function resulting in sensorineural hearing loss, and deficits in auditory processing within the central auditory system. Previous
psychoacoustic studies have revealed that aged human listeners display deficits in temporal acuity that worsen with the addition of background noise. Spectral and temporal acuity is essential for following the rapid changes in frequency and intensity that comprise most natural sounds including speech. The perceptual analysis of complex sounds depends to a large extent on the ability of the auditory system
to follow and even sharpen neural encoding of rapidly changing acoustic signals, and the inferior colliculus (IC) is a key auditory nucleus involved in temporal and spectral processing. In this talk, I will review neural correlates of temporal and signal-in-noise processing
at the level of the auditory midbrain in an animal model of ARHL. Understanding the neural substrate of these perceptual deficits will
assist in their diagnosis and rehabilitation, and be crucial to further advances in the design of hearing aids and therapeutic interventions.
THURSDAY MORNING, 30 OCTOBER 2014
SANTA FE, 8:00 A.M. TO 10:20 A.M.
Session 4aSCa
Speech Communication: Subglottal Resonances in Speech Production and Perception
Abeer Alwan, Cochair
Dept. of Electrical Eng., UCLA, 405 Hilgard Ave., Los Angeles, CA 90095
Steven M. Lulich, Cochair
Speech and Hearing Sciences, Indiana University, 4789 N White River Drive, Bloomington, IN 47404
Mitchell Sommers, Cochair
Psychology, Washington University, Campus Box 1125, 1 Brookings Drive, Saint Louis, MO 63130
Chair’s Introduction—8:00
Invited Papers
8:05
4aSCa1. The role of subglottal acoustics in speech production and perception. Mitchell Sommers (Psych., Washington Univ., Saint Louis, MO),
Abeer Alwan (Elec. Eng., UCLA, Los Angeles, CA), and Steven Lulich (Dept. of Speech and Hearing Sci., Indiana Univ., Bloomington, IN, slulich@indiana.edu)
In this talk, we present an overview of subglottal acoustics, with emphasis on the significant anatomical structures that define subglottal resonances, and we present results from our experiments incorporating subglottal resonances into automatic speaker normalization and speech recognition technologies. Speech samples used in the modeling and perception studies were obtained from a new speech
corpus (the UCLA-WashU subglottal database) of simultaneous microphone and (subglottal) accelerometer recordings of 50 adult
speakers of American English (AE). We will discuss new findings about the Young’s Modulus of tracheal soft tissue, the viscosity of tracheal cartilage, and the effect of going from a circular cross-section to a rectangular cross-section in the conus elasticus. We also present
results from studies demonstrating a small, but significant, role of subglottal resonances in discriminating speaker height and of the interaction between subglottal resonances and formants in height discrimination.
8:25
4aSCa2. The effect of subglottal acoustics on vocal fold vibration. Ingo R. Titze (National Ctr. for Voice and Speech, 136 South
Main St., Ste. 320, Salt Lake City, UT 84101-3306, ingo.titze@ncvs2.org) and Ingo R. Titze (Dept. of Commun. Sci. and Disord., Univ.
of Iowa, Iowa City, IA)
Acoustic pressures above and below the vocal folds produce a push-pull action on the vocal folds which can either help or hinder
vocal fold vibration. The key variable is acoustic reactance, the energy-storage part of the complex acoustic impedance. For the subglottal airway, inertive (positive) reactance does not help vocal fold vibration, but helps to skew the glottal airflow waveform for high frequency harmonic excitation. Compliant (negative) reactance, on the contrary, helps vocal fold vibration but does not skew the
waveform. Thus, the benefit of subglottal reactance is mixed. For supraglottal reactance, the benefit is additive. Inertive supraglottal reactance helps vocal fold vibration and skews the waveform, whereas compliant supraglottal reactance does neither. The effects will be
demonstrated with source-filter interactive simulation.
8:45
4aSCa3. Impact of subglottal resonances on bifurcations and register changes in laboratory models of phonation. David Berry,
Juergen Neubauer, and Zhaoyan Zhang (Surgery, UCLA, 31-24 Rehab, Los Angeles, CA 90095-1794, daberry@ucla.edu)
Many laboratory studies of phonation have failed to fully specify the subglottal system employed during research. Many of these
same studies have reported a variety of nonlinear phenomena, such as bifurcations and vocal register changes. While such phenomena
are often presumed to result from changes in the biomechanical properties of the larynx, such phenomena may also be a manifestation of
coupling between the voice source and the subglottal tract. Using laboratory models of phonation, a variety of examples will be given of
nonlinear phenomena induced by both laryngeal and subglottal mechanisms. Moreover, using tracheal tube lengths commonly reported
in the literature, it will be shown that most of the nonlinear phenomena commonly reported in voice production may be replicated solely
through the acoustical resonances of the subglottal system. Finally, recommendations will be given regarding the experimental design of laboratory experiments which may allow laryngeally induced bifurcations to be distinguished from subglottally induced bifurcations.
9:05–9:25 Break
9:25
4aSCa4. Subglottal ambulatory monitoring of vocal function to improve voice disorder assessment. Robert E. Hillman, Daryush
Mehta, Jarrad H. Van Stan (Ctr. for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, One Bowdoin Square,
11th Fl., Boston, MA 02114, daryush.mehta@alum.mit.edu), Matias Zanartu (Dept. of Electron. Eng., Universidad Tecnica Federico
Santa María, Valparaiso, Chile), Marzyeh Ghassemi, and John V. Guttag (Comput. Sci. and Artificial Intelligence Lab., Massachusetts
Inst. of Technol., Cambridge, MA)
Many common voice disorders are chronic or recurring conditions that are likely to result from inefficient and/or abusive patterns of
vocal behavior, referred to as vocal hyperfunction. The clinical management of hyperfunctional disorders would be greatly enhanced by
the ability to monitor and quantify detrimental vocal behaviors during an individual’s activities of daily life. This presentation will provide an update about ongoing work that is using a miniature accelerometer on the subglottal neck surface to collect a large set of ambulatory data on patients with hyperfunctional voice disorders (before and after treatment) and matched control subjects. Three types of
analysis approaches are being employed in an effort to identify the best set of measures for differentiating among hyperfunctional and
normal patterns of vocal behavior: (1) previously developed ambulatory measures of vocal function that include vocal dosages; (2)
measures based on estimates of glottal airflow that are extracted from the accelerometer signal using a vocal system model, and (3) classification based on machine learning approaches that have been used successfully in analyzing long-term recordings of other physiologic
signals (e.g., electrocardiograms).
9:45
4aSCa5. Do subglottal resonances lead to quantal effects resulting in the features [back] and [low]?: A review. Helen Hanson
(ECE Dept., Union College, 807 Union St., Schenectady, NY 12308, helen.hanson@alum.mit.edu) and Stefanie Shattuck-Hufnagel
(Speech Commun. Group, Res. Lab. of Electronics, Massachusetts Inst. of Technol., Cambridge, MA)
A question of general interest is why languages have the sound categories that they do. K. N. Stevens proposed the Quantal Theory
of phonological contrasts, suggesting that regions of discontinuity in the articulatory-acoustic mapping serve as category boundaries. H.
M. Hanson and K. N. Stevens [Proc. ICPhS, 182–185, 1995] modeled the interaction of subglottal resonances with the vocal-tract filter,
showing that when a changing supraglottal formant strays into the territory of a stationary tracheal formant, a discontinuity in supraglottal formant frequency and attenuation of the formant peak occurs. They suggested that vowel space and quality could thus be affected.
K. N. Stevens [Acoustic Phonetics, MIT Press, 1998] went further, musing that because the first and second subglottal resonances lead
to instabilities in supraglottal formant frequency and amplitude, vowel systems would benefit by avoiding vowels with formants at these
frequencies. Avoiding the first subglottal resonance would naturally lead to the division of vowels into those with a low vs. non-low
tongue body; avoiding the second would lead to the division of vowels into those having a back vs. front tongue body. We will review
subsequent research that offers substantial support for this hypothesis, justifying inclusion of the effects of subglottal resonances in phonological models.
Contributed Paper
10:05
4aSCa6. Relationship between lung volumes and subglottal resonances.
Natalie E. Duvanenko (Speech and Hearing Sci., Indiana Univ., 2416 Cibuta
Court, West Lafayette, IN 47906, nduvanen@umail.iu.edu) and Steven M.
Lulich (Speech and Hearing Sci., Indiana Univ., Bloomington, IN)
Subglottal resonances are dependent on the anatomical structure of the
lungs, but efforts to detect changes in subglottal resonances throughout an
utterance have failed to show any effect of lung volume. In this study, we
present the results of an experiment investigating the relationship between
lung volumes and subglottal resonances. The pulmonary subdivisions for
several speakers were established using a whole-body plethysmograph. Subsequently, lung volume and subglottal resonances were recorded simultaneously using a spirometer and an accelerometer while the speakers produced
long sustained vowels.
THURSDAY MORNING, 30 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 12:00 NOON
Session 4aSCb
Speech Communication: Learning and Acquisition of Speech (Poster Session)
Maria V. Kondaurova, Chair
Otolaryngology – Head & Neck Surgery, Indiana University School of Medicine, 699 Riley Hospital Drive – RR044,
Indianapolis, IN 46202
All posters will be on display from 8:00 a.m. to 12:00 noon. To allow contributors the opportunity to see other posters, the contributors
of odd-numbered papers will be at their posters from 8:00 a.m. to 10:00 a.m. and contributors of even-numbered papers will be at their
posters from 10:00 a.m. to 12:00 noon.
Contributed Papers
4aSCb1. Labels facilitate the learning of competing abstract perceptual
mappings. Shannon L. Heald, Nina Bartram, Brendan Colson, and Howard
C. Nusbaum (Psych., Univ. of Chicago, 5848 S. University Ave., B406, Chicago, IL 60637, smbowdre@uchicago.edu)
4aSCb3. A comparison of acoustic and perceptual changes in children’s
productions of American English /r/. Sarah Hamilton (Commun. Sci. and
Disord., Univ. of Cincinnati, Cincinnati, OH), Casey Keck (Commun. Sci.
and Disord., Univ. of Cincinnati, 408 Glengarry Way, Fort Wright, KY
41011, stewarce@mail.uc.edu), and Suzanne Boyce (Commun. Sci. and
Disord., Univ. of Cincinnati, Cincinnati, OH)
Listeners are able to quickly adapt to synthetic speech, even though it
contains misleading and degraded acoustic information. Previous research
has shown that testing and training on a given synthesizer using only novel
words leads listeners to form abstract or generalized knowledge for how that
particular synthesizer maps different acoustic patterns onto their pre-existing phonological categories. Prior to consolidation, this knowledge has been
shown to be susceptible to interference. Given that labels have been argued
to stabilize abstract ideas in working memory and to help learners form category representations that are robust against interference, we examined how
learning for a given synthesizer is affected by labeled or unlabeled immediate training on an additional synthesizer, which uses a different acoustic to
phonetic mapping. We demonstrated that the learning of an additional synthesizer interferes with the retention of a previously learned synthesizer but
that this is ameliorated if the additional synthesizer is labeled. Our findings
indicate that labeling may be important in facilitating daytime learning for
competing abstract perceptual mappings prior to consolidation and suggest
that speech perception may be best understood through the lens of perceptual categorization.
Speech-language pathologists rely primarily on their perceptual judgments when evaluating whether children have made progress in speech
sound therapy. Speech sound perception in normal listeners has been characterized as largely categorical, such that slight articulatory changes may go
unnoticed unless they reach a specific acoustic signature assigned to a different category. While perception may be categorical, acoustic phenomena
are largely measured in continuous units, meaning that there is a potential
mismatch between the two methods of recording change. Clinicians, using
perceptual categorization, commonly report that some children make no
progress in therapy, yet acoustically, the children’s productions may be
shifting toward acceptable acoustic characteristics. Tracking subtle changes in
the acoustic signal during therapy could potentially prevent these clients
from being discharged due to a perceived lack of progress. This poster evaluates acoustic changes compared to perceptual changes in children’s productions of the American English phoneme /r/ after receiving speech
therapy using ultrasound supplemented with telepractice home practice. Preliminary data indicate that there are significant differences between participants’ acoustic values of /r/ and perceptual ratings by clinicians.
4aSCb2. When more is not better: Variable input in the formation of
robust word representations. Andrea K. Davis (Linguist, Univ. of Arizona, 1076 Palomino Rd., Cloverdale, CA 95425, davisak@email.arizona.edu)
and LouAnn Gerken (Linguist, Univ. of Arizona, Tucson, AZ)
4aSCb4. Perceptual categorization of /r/ for children with residual
sound errors. Sarah M. Hamilton, Suzanne Boyce, and Lindsay Mullins
(Commun. Sci. and Disord., Univ. of Cincinnati, 3433 Clifton Ave., Cincinnati, OH 45220, hamilsm@mail.uc.edu)
A number of studies with infants and with young children suggest that
hearing words produced by multiple talkers helps learners to develop more
robust word representations (Richtsmeier et al., 2009; Rost & McMurray,
2009, 2010). Native adult learners, however, do not seem to derive the same
benefit from multiple talkers. A word-learning study with native adults was
conducted, and a second study with second language learners will have been
completed by this fall. Native-speaking participants learned four new English-like minimal pair words either from a single talker or from multiple talkers. They were then tested with (a) a perceptual task, in which they
saw the two pictures corresponding to a minimal pair, heard one of the pair,
and had to choose the picture corresponding to the word they heard; (b) a
speeded production task, in which they had to repeat the words they had just
learned as quickly as possible. Unlike infants, the two groups did not differ
significantly in perceptual accuracy. However, the single talker group had
significantly higher variance in the speeded production task. It is hypothesized that this greater variance is due to individual differences in learning
strategies, which are masked when learning from multiple talkers.
Many studies have found that children with residual speech sound disorders
(RSSD) show (1) atypical category boundaries, and (2) difficulty identifying
whether their own productions are correct or misarticulated. Historically,
perceptual category discrimination tests use synthesized speech representing
incremental change along an acoustic continuum, while tests of a child’s
self-perception are confined to categorical correct vs. error choices. Thus, it
has not been possible to explore the boundaries of RSSD children’s categorical self-perception in any detail or to customize perceptual training for therapeutic purposes. Following an observation of Hagiwara (1995), who noted
that typical speakers show F3 values for /r/ between 80% and 60% of their
average vowel F3, Hamilton et al. (2014) found that this threshold largely
replicates adult listener judgments, such that productions above and below
the 80% threshold sounded consistently “incorrect” or “correct,” but that
productions closest to the 80% threshold were given more ambiguous judgments. In this study, we apply this notion of an F3 threshold to investigate
whether children with RSSD respond like adult listeners when presented with
natural-speech stimuli along a continuum of correct and incorrect /r/. Preliminary results indicate that children with RSSD do not make adult-like decisions when categorizing /r/ productions.
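The 80% threshold described in the abstract can be sketched as a simple decision rule. The snippet below is purely illustrative: the function name, the width of the ambiguity band, and the example values are assumptions for this sketch, not the authors' procedure.

```python
# Illustrative sketch of the F3-ratio criterion: productions with F3 below
# ~80% of the speaker's average vowel F3 tend to be judged "correct", those
# above "incorrect", and those near the 80% boundary receive ambiguous
# judgments. The ambiguity band width here is an assumption.

def classify_r(f3_r_hz, mean_vowel_f3_hz, threshold=0.80, ambiguous_band=0.05):
    """Categorize an /r/ token by its F3 relative to the average vowel F3."""
    ratio = f3_r_hz / mean_vowel_f3_hz
    if abs(ratio - threshold) <= ambiguous_band:
        return "ambiguous"  # near the 80% boundary: mixed listener judgments
    return "correct" if ratio < threshold else "incorrect"

# Example: a speaker whose average vowel F3 is 3000 Hz
print(classify_r(1950, 3000))  # ratio 0.65 -> "correct"
print(classify_r(2850, 3000))  # ratio 0.95 -> "incorrect"
print(classify_r(2430, 3000))  # ratio 0.81 -> "ambiguous"
```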
4aSCb5. A child-specific compensatory mechanism in the acquisition of
English /s/. Hye-Young Bang, Meghan Clayards, and Heather Goad (Linguist, McGill Univ., 1085 Dr. Penfield, Montreal, QC H3A 1A7, Canada,
hye-young.bang@mail.mcgill.ca)
This study examines corpus data involving word-initial [sV] productions
from 79 children aged 2–5 (Edwards & Beckman 2008) in comparison with
a corpus of word-initial [sV] syllables produced by 13 adults. We quantified
target-like /s/ production using spectral moment analysis on the frication
portion (high center of gravity, low SD, and low skewness). In adults, we
found that higher vowels (low F1 after normalization) were associated with
more target-like /s/ productions, likely reflecting a tighter constriction. In
children, older subjects produced more target-like outputs overall. However,
unlike adults, children’s outputs before low vowels were more target-like,
regardless of age. This is unexpected given the articulatory challenges of
producing /s/ in low vowel contexts. Further investigation found that high
F1 (low vowels) was associated with louder /s/ (relative to V) and more
encroachment of sibilant noise on the following vowel (high harmonics-to-noise ratio). This finding suggests that young children may be increasing airflow during /s/ production to compensate for a less tight constriction when
the jaw must lower for the following vowel. Thus, children may adopt a
more accessible mechanism, different from adults, to compensate for their
immature lingual gestures, possibly in an attempt to maximize phonological
contrasts in word-initial position.
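The spectral-moment measures used here (center of gravity, SD, skewness) can be computed from a frication window roughly as follows (a minimal sketch assuming a single Hann-windowed FFT frame, not the authors' actual analysis settings):

```python
import numpy as np

def spectral_moments(x, fs):
    """First three spectral moments of a signal's power spectrum:
    center of gravity (Hz), standard deviation (Hz), and skewness.
    A target-like adult /s/ typically shows a high center of gravity,
    low SD, and low skewness."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    p = spec / spec.sum()                           # normalize to a probability mass
    cog = np.sum(freqs * p)                         # 1st moment: center of gravity
    sd = np.sqrt(np.sum((freqs - cog) ** 2 * p))    # 2nd: spread
    skew = np.sum((freqs - cog) ** 3 * p) / sd**3   # 3rd: asymmetry
    return cog, sd, skew

# Synthetic check: a narrowband signal centered near 8 kHz (an /s/-like region)
rng = np.random.default_rng(0)
fs = 44100
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 8000 * t) + 0.01 * rng.standard_normal(4096)
cog, sd, skew = spectral_moments(x, fs)
print(round(cog))  # near 8000
```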
4aSCb6. Moving targets and unsteady states: “Shifting” productions of
sibilant fricatives by young children. Patrick Reidy (Dept. of Linguist,
The Ohio State Univ., 24A Oxley Hall, 1712 Neil Ave., Columbus, OH
43210, patrick.francis.reidy@gmail.com)
The English voiceless sibilant /s/–/S/ contrast is one that many children
do not acquire until their adolescent years. This protracted acquisition may
be due to the high level of articulatory control that is necessary to the successful production of an adult-like sibilant, which involves the coordination
of lingual, mandibular, and pulmonic gestures. Poor coordination among
these gestures can result in the acoustic properties of the noise source or the
vocal tract filter changing throughout the timecourse of the frication, to the
extent that the phonetic percept of the frication noise changes across its duration. The present study examined such “shifting” productions of sibilant
fricatives by native English-acquiring two- through five-year-old children,
which were identified from the Paidologos corpus as those productions
where the interval of frication was transcribed phonetically as a sequence of
fricative sounds. There were two types of shift in frication quality: (1) a
gradual change in the resonant frequencies in the spectrogram, suggesting a
repositioning of the oral constriction; and (2) an abrupt change in the level
of the frication, suggesting a switch in the noise source. Work is underway
to develop measures that differentiate these two types of shift, and that suggest their underlying articulatory causes.
4aSCb7. Effects of spectral smearing on sentence recognition by adults
and children. Joanna H. Lowenstein (Otolaryngology-Head & Neck Surgery, Ohio State Univ., 915 Olentangy River Rd., Ste. 4000, Columbus, OH
43212, lowenstein.6@osu.edu), Eric Tarr (Audio Eng. Technol., Belmont
Univ., Nashville, TN), and Susan Nittrouer (Otolaryngology-Head & Neck
Surgery, Ohio State Univ., Columbus, OH)
Children’s speech perception depends on dynamic formant patterns
more than that of adults. Spectral smearing of formants, as found with the
broadened auditory filters associated with hearing loss, should disproportionately affect children because of this greater dependence on formant patterns. Making formants more prominent, on the other hand, may result in
improved recognition. Adults (40) and children age 5 and 7 (20 of each age)
listened to 75 four-word syntactically correct, semantically anomalous sentences processed so that excursions around the mean spectral slope were
sharpened by 50% (making individual formants more prominent), flattened
by 50% (smearing individual formants), or left unchanged. These sentences
were presented to children and to half of the adults in speech-shaped noise
at 0 dB SNR. The rest of the adults listened to the sentences at -3 dB SNR.
Results indicate that all listeners did more poorly with the smeared formants, with 5-year-olds showing the largest decrement in performance at 0
dB SNR. However, adults at -3 dB SNR showed an even greater decrement
in performance. Making formants more prominent did not improve recognition, perhaps due to harmonic-formant mismatches. Thus, there is reason to
explore processing strategies that might enhance formant prominence for
listeners with hearing loss.
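The smearing/sharpening manipulation can be imitated, very roughly, by rescaling excursions of the log-magnitude spectrum around a fitted mean spectral slope (a sketch under assumed details; the study's actual processing is not specified beyond the ±50% description):

```python
import numpy as np

def rescale_spectral_excursions(x, factor):
    """Scale excursions of the log-magnitude spectrum around its mean
    spectral slope: factor > 1 makes formant peaks more prominent,
    factor < 1 smears (flattens) them. Illustrative sketch only."""
    X = np.fft.rfft(x)
    mag_db = 20 * np.log10(np.abs(X) + 1e-12)
    bins = np.arange(len(mag_db))
    slope, intercept = np.polyfit(bins, mag_db, 1)  # mean spectral slope (dB/bin)
    trend = slope * bins + intercept
    new_db = trend + factor * (mag_db - trend)      # rescale excursions around the trend
    new_mag = 10 ** (new_db / 20)
    Y = new_mag * np.exp(1j * np.angle(X))          # keep the original phase
    return np.fft.irfft(Y, n=len(x))

# factor=1.5 sharpens excursions by 50%, factor=0.5 flattens them by 50%,
# and factor=1.0 leaves the frame essentially unchanged:
rng = np.random.default_rng(0)
frame = rng.standard_normal(512)
identity = rescale_spectral_excursions(frame, 1.0)
print(np.allclose(frame, identity, atol=1e-6))  # True: factor 1.0 is a no-op
```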
4aSCb8. Acoustic-phonetic characteristics of older children’s spontaneous speech in interactions in conversational and clear speaking styles.
Valerie Hazan, Michèle Pettinato, Outi Tuomainen, and Sonia Granlund
(Speech, Hearing and Phonetic Sci., UCL, Chandler House, 2, Wakefield
St., London WC1N 1PF, United Kingdom, v.hazan@ucl.ac.uk)
This study investigated (a) the acoustic-phonetic characteristics of spontaneous speech produced by talkers aged 9–14 years in an interactive (diapix) task with an interlocutor of the same age and gender (NB condition)
and (b) the adaptations these talkers made to clarify their speech when
speech intelligibility was artificially degraded for their interlocutor (VOC
condition). Recordings were made for 96 child talkers (50 F, 46 M); the
adult reference values came from the LUCID corpus recorded under the
same conditions [Baker and Hazan, J. Acoust. Soc. Am. 130, 2139–2152
(2011)]. Articulation rate, pause frequency, fundamental frequency, vowel
area, and mean intensity (1–3 kHz range) were analyzed to establish
whether they had reached adult-like values and whether young talkers
showed similar clear speech strategies as adults in difficult communicative
situations. In the NB condition, children (including the 13–14 year group)
differed from adults in terms of their articulation rate, vowel area, median
F0, and intensity. Child talkers made adaptations to their speech in the VOC
condition, but adults and children differed in their use of F0 range, vowel
hyperarticulation, and pause frequency as clear speech strategies. This suggests that further developments in speech production take place during later
adolescence. [Work supported by ESRC.]
4aSCb9. Acoustic characteristics of infant-directed speech to normal-hearing and hearing-impaired twins with hearing aids and cochlear
implants: A case study. Maria V. Kondaurova, Tonya R. Bergeson-Dana
(Otolaryngol. – Head & Neck Surgery, Indiana Univ. School of Medicine,
699 Riley Hospital Dr. – RR044, Indianapolis, IN 46202, mkondaur@iupui.edu), and Neil A. Wright (The Richard and Roxelyn Pepper Dept. of Commun. Sci. and Disord., Northwestern Univ., Evanston, IL)
The study examined acoustic characteristics of maternal speech to normal-hearing (NH) and hearing-impaired (HI) twins who received hearing
aids (HAs) or a unilateral cochlear implant (CI). A mother of female-male
NH twins (NH-NH; age 15.8 months), a mother of two male twins, one NH
and another HI with HAs (NH-HA; age 11.8 months) and a mother of a NH
female twin and a HI male twin with a CI (NH-CI; age 14.8 months) were
recorded playing with their infants during three sessions across a 12-month
period. We measured pitch characteristics (normalized F0 mean, F0 range,
and F0 SD), utterance and pause duration, syllable number, and speaking
rate. ANOVAs demonstrated that speech to NH-NH twins was characterized
by lower, more variable pitch with greater pitch range as compared to
speech to NH-HA and NH-CI pairs. Mothers produced more syllables, had
faster speaking rate and longer utterance duration in speech to NH-NH than
the other pairs. The results suggest that the pediatric hearing loss in one sibling affects maternal speech properties to both NH and HI infants in the
same pair. Future research will investigate vowel space and lexical properties of IDS to three twin pairs as well as their language outcome measures.
4aSCb10. Effects of vowel position and place of articulation on voice
onset time in children: Longitudinal data. Elaine R. Hitchcock (Dept. of
Commun. Sci. and Disord., Montclair State Univ., 1515 Broad St., Bloomfield, NJ 07444, hitchcocke@mail.montclair.edu) and Laura L. Koenig
(Dept. of Commun. Sci. and Disord., Long Island Univ., Queens, NY)
Voice onset time (VOT) has been found to vary according to phonetic
context, but past studies report varying magnitudes of effect, and no past
work has evaluated the degree to which such effects are consistent over time
for a single speaker. This study explores the relationships between vowel
position, consonant place of articulation [POA], and voice onset time
(VOT) in children, comparing the results to past adult work. VOT in CV/
CVC words was measured in nine children ages 5;3–7;6 every two to four
4aSCb11. Longitudinal data on the production of content versus function words in children’s spontaneous speech. Jeffrey Kallay and Melissa
A. Redford (Linguist, Univ. of Oregon, 1455 Moss St., Apt. 215, Eugene,
OR 97403, jkallay@uoregon.edu)
Allen and Hawkins (1978; 1980) were among the first to note rhythmic
differences in the speech of children and adults. Sirsa and Redford (2011)
found that rhythmic differences between younger and older children’s
speech was best accounted for by age-related differences in function word
production. In other on-going work (Redford, Kallay & Dilley) we found an
effect of age on the perceived prominence of function words in children’s
speech, but no effect on content words. The current longitudinal study investigated the effect of word class (content versus function words) on the development of reduction in terms of syllable duration and pitch range (a
correlate of accenting). Spontaneous speech was elicited for 3 years from 36
children aged 5;2–6;11 at time of first recording. There were effects of
word class (content > function) and of time on median duration, but no
interaction between these factors. The median duration decreased 13% in
function words from the 1st to 3rd year; a similar decrease (15%) was found
for content words. Pitch range only varied systematically with word class.
Other spectral measures are being collected to further investigate the development of reduction in children’s speech. [Work supported by NICHD.]
4aSCb12. Audiovisual speech integration development at varying levels
of perceptual processing. Kaylah Lalonde (Speech and Hearing Sci., Indiana Univ., 200 South Jordan Ave., Bloomington, IN 47405, klalonde@indiana.edu) and Rachael Frush Holt (Speech and Hearing Sci., Ohio State
Univ., Columbus, OH)
There are multiple mechanisms of audiovisual (AV) speech integration
with independent maturational time courses. This study investigated development of both basic perceptual and speech-specific mechanisms of AV
speech integration by examining AV speech integration development across
three levels of perceptual processing. Twenty-two adults and 24 6- to 8-year-old children completed three auditory-only and AV yes/no tasks varying only in the level of perceptual processing required to complete them:
detection, discrimination, and recognition. Both groups demonstrated benefits from matched AV speech and interference from mismatched AV speech
relative to auditory-only conditions. Adults, but not children, demonstrated
greater integration effects at higher levels of perceptual processing (i.e., recognition). Adults seem to rely on both general perceptual mechanisms of
speech integration that apply to all levels of perceptual processing and
speech-specific mechanisms of integration that apply when making phonetic
decisions and/or accessing the lexicon; 6- to 8-year-old children seem to
rely only on general perceptual mechanisms of AV speech integration. The
general perceptual mechanism allows children to attain the same degree of
AV benefit to detection and discrimination as adults, but the lack of a
speech-specific mechanism in children might explain why they attain less
AV recognition benefit than adults.
4aSCb13. Developmental and linguistic factors of audiovisual speech
perception across different masker types. Rachel Reetzke, Boji Lam,
Zilong Xie, Li Sheng, and Bharath Chandrasekaran (Commun. Sci. and Disord., Univ. of Texas at Austin, The University of Texas at Austin, 2504A
Whitis Ave., Austin, TX 78751, rreetzke@gmail.com)
Developmental and linguistic factors have been found to influence listeners’ ability to recognize speech-in-noise. However, there is a paucity of
evidence exploring how these factors modulate speech perception in
everyday listening situations, such as multisensory environments and backgrounds with informational maskers. This study assessed sentence recognition for 30 children (14 monolingual, 16 simultaneous bilingual; ages 6–10)
and 31 adults (21 monolingual, 10 simultaneous bilingual; ages 18–22).
Our experimental design included three within-subject variables: (a) masker
type: pink noise or two-talker babble, (b) modality: audio-only and audiovisual, and (c) signal-to-noise ratio (SNR): 0 to -16 dB. Results revealed that
across both modalities and noise types, adults performed better than children, and simultaneous bilinguals performed similarly to monolinguals. The
age effect was largest at the lowest SNRs of -12 and -16 dB in the audiovisual two-talker babble condition. These findings suggest that children experience greater difficulty in segregation of target speech in informational
maskers relative to adults, even with audiovisual cues. This may provide
evidence for children’s less developed higher-level cognitive strategies in
dealing with speech-in-noise (e.g., selective attention). Findings from the
second analysis suggest that despite two competing lexicons, simultaneous
bilinguals do not experience a speech perception-in-noise deficit relative to
monolinguals.
4aSCb14. Experience-independent effects of matching and non-matching visual information on speech perception. D. Kyle Danielson, Alison J.
Greuel, Padmapriya Kandhadai, and Janet F. Werker (Psych., Univ. of Br.
Columbia, 2136 West Mall, Vancouver, BC V6T 1Z4, Canada, kdanielson@psych.ubc.ca)
Infants are sensitive to the correspondence between visual and auditory
speech. Infants exhibit the McGurk effect, and matching audiovisual information may facilitate discrimination of similar consonant sounds in an
infant’s native language (e.g., Teinonen et al., 2008). However, because
most existing research in audiovisual speech perception has been conducted
using native speech sounds with infants in their first year of life, little work
has explored whether this link between the auditory and visual modalities of
speech perception arises due to experience with the native language. In the
present set of studies, English-learning six- and ten-month-old infants are
tested for discrimination of a non-English speech contrast following familiarization with matching and mismatching audiovisual speech. Furthermore,
the looking fixation behaviors of the two age groups are compared between
the two conditions. Although it has been demonstrated that infants in the
younger age range attend preferentially to the eye region when viewing
matched audiovisual speech and that infants in the older age range temporarily attend to the mouth region (Lewkowicz & Hansen-Tift, 2012), here
deviations in this behavior for matching and mismatching non-native speech
are examined (a link that has only been previously explored in the native
language (Tomalski et al., 2013)).
4aSCb15. Switched-dominance bilingual speech production: Continuous usage versus early exposure. Michael Blasingame and Ann R.
Bradlow (Linguist, Northwestern Univ., 2016 Sheridan Rd., Evanston, IL
60208, mblasingame@u.northwestern.edu)
Switched-dominance bilinguals (i.e., “heritage speakers,” HS, with L2
rather than L1 dominance) have exhibited native-like heritage language (L1)
sound perception (e.g., Korean three-way VOT contrast discrimination by Korean HS; Oh, Jun, Knightly, & Au, 2003) and sound production (e.g., Spanish
VOT productions by Spanish HS; Au, Knightly, Jun, & Oh, 2002), but far
from native-like proficiency in other aspects of L1 function, including morphosyntax (Montrul, 2010). We investigated whether native-like L1 sound
production proficiency extended to heritage language sentence-in-noise intelligibility. We recorded English and Spanish sentences by Spanish HS (SHS)
and monolingual English controls (English only). Native listeners of each language transcribed these recordings under easy (-4 dB SNR) and hard (-8 dB
SNR) conditions. In easy conditions, SHS English and Spanish intelligibility
were not significantly different, yet in hard conditions, SHS English intelligibility was significantly higher than SHS Spanish intelligibility. Furthermore,
we observed no differences between SHS English and English-control intelligibility in both conditions. These results suggest that for SHS, while early Spanish
exposure provided some resistance to heritage language/L1 intelligibility degradation, the absence of continuous Spanish usage impacted intelligibility in
severely degraded conditions. In contrast, the absence of early English exposure was entirely overcome by later English dominance.
weeks for 10 months, for a total of 18 sessions yielding approximately
18,000 tokens for analysis. Bilabial and velar cognate pairs targeted a front-back vowel difference (/i/-/u/, /e/-/o/), while alveolar cognate pairs targeted
a mid high-low vowel difference (/o/-/ɑ/). VOT variability over time was
also evaluated. Preliminary results suggest that POA yields a robust pattern
of bilabial < alveolar < velar, but vowel effects are less clear. Vowel height
shows the most obvious effect with consistently longer VOT values
observed for mid high vowels. Front-back vowel comparisons yielded no
obvious differences. On the whole, contextual variations based on POA and
vowel context do not show clear correlations with overall VOT variation.
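The VOT-by-place comparison reduces to simple arithmetic on annotated burst and voicing-onset times; here is a minimal sketch with invented values chosen only to reproduce the reported bilabial < alveolar < velar ordering:

```python
# Hypothetical annotated tokens: (place of articulation, burst time in s,
# voicing onset in s). VOT is voicing onset minus burst release. The numbers
# are illustrative only, not data from the study.
tokens = [
    ("bilabial", 0.100, 0.112), ("bilabial", 0.200, 0.215),
    ("alveolar", 0.150, 0.170), ("alveolar", 0.300, 0.322),
    ("velar",    0.120, 0.155), ("velar",    0.250, 0.288),
]

vot_ms = {}
for place, burst, voicing in tokens:
    vot_ms.setdefault(place, []).append((voicing - burst) * 1000.0)

means = {p: sum(v) / len(v) for p, v in vot_ms.items()}
print(means)  # mean VOT per place, in milliseconds
assert means["bilabial"] < means["alveolar"] < means["velar"]
```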
4aSCb16. Genetic variation in catechol-O-methyl transferase activity
impacts speech category learning. Han-Gyol Yi (Commun. Sci. and Disord., The Univ. of Texas at Austin, 2504 Whitis Ave., A1100, Austin, TX
78712, gyol@utexas.edu), W. T. Maddox (Psych., The Univ. of Texas at
Austin, Austin, TX), Valerie S. Knopik (Behavioral Genetics, Rhode Island
Hospital, Providence, RI), John E. McGeary (Providence Veterans Affairs
Medical Ctr., Providence, RI), and Bharath Chandrasekaran (Commun. Sci.
and Disord., The Univ. of Texas at Austin, Austin, TX)
Learning non-native speech categories is a challenging task. Little is
known about the neurobiology underlying speech category learning. In
vision, two dopaminergic neurobiological learning systems have been identified: a rule-based reflective learning system mediated by the prefrontal cortex, wherein processing is under deliberative control, and an implicit
reflexive learning system mediated by the striatum. During speech learning,
successful learners initially use simple reflective rules but eventually
transition to a multidimensional reflexive strategy during later learning. We
use a neurocognitive-genetic approach to identify intermediate phenotypes
that modulate reflective brain function and examine their effects on speech
learning. We focus on the COMT Val158Met polymorphism, which is
linked to altered prefrontal function. The COMT-Val variant catabolizes dopamine more rapidly and is linked to poorer performance on prefrontally mediated tasks. Adults (Met-Met = 40; Met-Val = 75; Val-Val = 54) learned
to categorize non-native Mandarin tones over five blocks of feedback-based
training. Learning rates were highest for the Met-Met genotype; the Val-Val genotype was associated with poorer overall learning. Poorer learning
indicates increased perseveration of reflective unidimensional rule use,
thereby preventing the transition to the reflexive system. We conclude that
genetic variation is an important source of individual differences in complex
phenotypes such as speech learning.
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA G, 9:00 A.M. TO 10:00 A.M.
Session 4aSPa
Signal Processing in Acoustics: Imaging and Classification
Grace A. Clark, Chair
Grace Clark Signal Sciences, 532 Alden Lane, Livermore, CA 94550
Contributed Papers
9:00

4aSPa1. Optimal smoothing splines improve efficiency of entropy imaging for detection of therapeutic benefit in muscular dystrophy. Michael Hughes (Int. Med./Cardiology, Washington Univ. School of Medicine, 1632 Ridge Bend Dr., St Louis, MO 63108, mshatctrain@gmail.com), John McCarthy (Dept. of Mathematics, Washington Univ., St. Louis, MO), Jon Marsh (Int. Med./Cardiology, Washington Univ. School of Medicine, Saint Louis, MO), and Samuel Wickline (Dept. of Mathematics, Washington Univ., Saint Louis, MO)

We have reported previously on sensitivity comparisons of signal energy and several entropies to changes in skeletal muscle architecture in experimental muscular dystrophy before and after pharmacological therapeutic intervention [M. S. Hughes, IEEE Trans. UFFC 54, 2291–2299 (2007)]. That study was based on a moving-window analysis of simple cubic splines fit to the backscattered ultrasound and required that the radio frequency (RF) ultrasound be highly oversampled. The current study employs optimal smoothing splines instead to determine the effect of analyzing the same data with increasing levels of decimation. The RF data were obtained from selected skeletal muscles of muscular dystrophy mice (mdx: dystrophin -/-) that were randomly blocked into two groups: 4 receiving steroid treatment over 2 weeks, and 4 untreated positive controls. Ultrasonic imaging was performed on day 15. All mice were anesthetized, and each forelimb was imaged in transverse cross sections using a Vevo-660 with a single-element 40 MHz wobbler transducer (model RMV-704, Visualsonics). The result of each scan was a three-dimensional data set of size 384 × 8192 × (number of frames). We find the equivalent sensitivity of this new approach for detecting treatment benefits as before (p < 0.03), but now at a decimated sampling rate slightly below the Nyquist frequency. This implies that optimal smoothing splines are useful for analysis of data acquired from point-of-care imaging devices where hardware cost and power consumption must be minimized.

9:15

4aSPa2. Waveform processing using entropy instead of energy: A quantitative comparison based on the heat equation. Michael Hughes (Int. Med./Cardiology, Washington Univ. School of Medicine, 1632 Ridge Bend Dr., St Louis, MO 63108, mshatctrain@gmail.com), John McCarthy (Mathematics, Washington Univ., St Louis, MO), Jon Marsh (Int. Med./Cardiology, Washington Univ. School of Medicine, Saint Louis, MO), and Samuel Wickline (Mathematics, Washington Univ., Saint Louis, MO)

Virtually all modern imaging devices function by collecting electromagnetic or acoustic backscattered waves and using the energy carried by these waves to determine pixel values that build up what is basically an “energy” picture. However, waves also carry “information” that may likewise be used to compute the pixel values in an image. We have employed several measures of information, the most sensitive being the “joint entropy” of the backscattered wave and a reference signal. Numerous published studies have demonstrated the advantages of “information imaging” over conventional methods for materials characterization and medical imaging. A typical study comprises repeated acquisition of backscattered waves from a specimen that is changing slowly with acquisition time or location. The sensitivity of repeated experimental observations of such a slowly changing quantity may be defined as the mean variation (i.e., observed change) divided by the mean variance (i.e., observed noise). Assuming the noise is Gaussian and using Wiener integration to compute the required mean values and variances, solutions to the heat equation may be used to express the sensitivity for joint entropy and signal energy measurements. There always exists a reference such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching observations of several studies. A general prescription for finding an “optimal” reference for the joint entropy emerges, which has been validated in several studies.
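The joint entropy of a backscattered wave and a reference, the information measure highlighted in 4aSPa2, can be estimated crudely with a 2-D amplitude histogram (illustrative only; the abstract describes analytic expressions obtained via Wiener integration, not histogram estimates):

```python
import numpy as np

def joint_entropy(f, g, bins=64):
    """Joint Shannon entropy, in bits, of two waveforms f and g,
    estimated from a 2-D amplitude histogram. A coarse illustrative
    estimator, not the method of the cited work."""
    hist, _, _ = np.histogram2d(f, g, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096)
reference = np.sin(2 * np.pi * 5 * t)
echo_noisy = reference + 0.1 * rng.standard_normal(t.size)

# A wave that tracks the reference concentrates the joint histogram and so
# has lower joint entropy than an unrelated noise signal does:
h_matched = joint_entropy(echo_noisy, reference)
h_noise = joint_entropy(rng.standard_normal(t.size), reference)
print(h_matched < h_noise)
```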
9:30

4aSPa3. The classification of underwater acoustic target signals based on wave structure and support vector machine. Qingxin Meng, Shie Yang, and Shengchun Piao (Sci. and Technol. on Underwater Acoust. Lab., Harbin Eng. Univ., No. 145, Nantong St., Nangang District, Harbin City, Heilongjiang Province 150001, China, mengqingxin005@hrbeu.edu.cn)

The sound of the propeller is a distinctive feature of ship-radiated noise; its loudness and timbre are commonly used to identify types of ships. Since loudness and timbre information is carried in the wave structure of the time series, wave-structure features can be extracted to classify various underwater acoustic targets. In this paper, a method of feature-vector extraction for underwater acoustic signals based on wave structure is studied. Nine-dimensional features are constructed from signal statistics of zero-crossing wavelength, peak-to-peak amplitude, zero-crossing wavelength difference, and wave-train areas. A support vector machine (SVM) with a radial basis function (RBF) kernel is then applied as a classifier for two kinds of underwater acoustic target signals. By properly setting the penalty factor and the RBF parameter, the recognition rate reaches over 89.5%. Sea-test data demonstrate the validity of the method’s target recognition ability.

9:45

4aSPa4. Determination of Room Impulse Response for synthetic data acquisition and ASR testing. Philippe Moquin (Microsoft, One Microsoft Way, Redmond, WA 98052, pmoquin@microsoft.com), Kevin Venalainen (Univ. of Br. Columbia, Vancouver, BC, Canada), and Dinei A. Florêncio (Microsoft, Redmond, WA)

Automatic Speech Recognition (ASR) works best when the speech signal closely matches the signals used for training. Training, however, may require thousands of hours of speech, and it is impractical to acquire them directly in a realistic scenario. Some improvement can be obtained by incorporating typical building-acoustics measurement parameters such as RT, Cx, and LF, but with limited gain. Instead, we estimate Room Impulse Responses (RIRs) and convolve speech and noise signals with the estimated RIRs. This produces realistic signals, which can then be processed by the audio pipeline and used for ASR training. In our research, we use rooms with variable acoustics and repeatable source-receiver positions. The receivers are microphone arrays, making the relative phase and magnitude critical. A standard mouth simulator for voice signals at various positions in the room is under robot control. A limited corpus of speech data as well as noise sources is recorded, and the RIR at these 27 positions is determined using a variety of methods (chirp, MLS, impulse, and noise). The RIR convolved with the “clean speech” is compared to the actual measurements. The test methods used, differences from the measurements, and the difficulty of determining the unique RIR will be presented.
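The core recipe of 4aSPa4, convolving clean speech with an estimated RIR and mixing in noise, can be sketched as follows (a toy two-tap RIR and an assumed SNR-scaling convention; not the authors' pipeline):

```python
import numpy as np

def synthesize_far_talk(clean, rir, noise, snr_db):
    """Convolve close-talk speech with a room impulse response and add
    noise at a target SNR -- the basic recipe for generating realistic
    ASR training data described above. Sketch only; the SNR convention
    and truncation to the clean-signal length are assumptions."""
    reverberant = np.convolve(clean, rir)[: len(clean)]
    sig_pow = np.mean(reverberant ** 2)
    noise = noise[: len(reverberant)]
    noise_pow = np.mean(noise ** 2)
    scale = np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    return reverberant + scale * noise

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)     # stand-in for 1 s of speech at 16 kHz
rir = np.zeros(2000)
rir[0] = 1.0                           # direct path
rir[300] = 0.5                         # one strong reflection
noisy = synthesize_far_talk(clean, rir, rng.standard_normal(16000), snr_db=10)
print(noisy.shape)  # (16000,)
```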
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA G, 10:15 A.M. TO 12:00 NOON
Session 4aSPb
Signal Processing in Acoustics: Beamforming, Spectral Estimation, and Sonar Design
Brian E. Anderson, Cochair
Geophysics Group, Los Alamos National Laboratory, MS D443, Los Alamos, NM 87545
4a THU. AM
R. Lee Culver, Cochair
ARL, Penn State University, PO Box 30, State College, PA 16804
Contributed Papers
10:15

4aSPb1. Quantifying the depth profile of time reversal focusing in elastic media. Brian E. Anderson, Marcel C. Remillieux, Timothy J. Ulrich, and Pierre-Yves Le Bas (Geophys. Group (EES-17), Los Alamos National Lab., MS D446, Los Alamos, NM 87545, bea@lanl.gov)

A focus of elastic energy on the surface of a solid sample can be useful to nondestructively evaluate whether the surface or the near-surficial region is damaged. Time reversal techniques allow one to focus energy in this manner. In order to quantify the degree to which a time reversal focus can probe near-surficial features, the depth profile of a time reversal focus must be quantified. This presentation will discuss numerical modeling and experimental results used to quantify the depth profile. [This work was supported by the U.S. Dept. of Energy, Fuel Cycle R&D, Used Fuel Disposition (Storage) Campaign.]

10:30

4aSPb2. Competitive algorithm blending for enhanced source separation of convolutive speech mixtures. Keith Gilbert (Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth, 36 Walnut St., Berlin, MA 01503, kgilbert@umassd.edu), Karen Payton (Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth, N. Dartmouth, MA), Richard Goldhor, and Joel MacAuslan (Speech Technol. & Appl. Res., Corp., Bedford, MA)

This work investigates an adaptive filter network in which multiple blind source separation methods are run in parallel and their individual outputs are combined to produce estimates of acoustic sources. Each individual algorithm makes assumptions about the environment (dimensions of enclosure, reflections, reverberation, etc.) and the sources (speech, interfering noise, position, etc.), which constitute an individual hypothesis about the observed microphone outputs. The goal of this competitive algorithm blending (CAB) approach is to achieve the performance of the “true” method, i.e., the method that has full a priori knowledge of the environment’s and the sources’ characteristics, without any prior information. Results are given for time-invariant, critically- and over-determined convolutive mixtures of speech and interfering noise sources, and the performance of the CAB method is compared with the “true” method in both the transient adaptation phase and in steady state.
10:45
4aSPb3. Structural infrasound signals in an urban environment. Sarah
McComas, Henry Diaz-Alvarez, Mike Pace, and Mihan McKenna (US
Army Engineer Res. and Development Ctr., 3909 Halls Ferry Rd., Vicksburg, MS 39180, sarah.mccomas@usace.army.mil)
Historically, infrasound arrays have been deployed in rural environments where anthropogenic noise sources are limited. As interest in monitoring sources at local distances grows in the infrasound community, it will
be vital to understand how to monitor infrasound sources in an urban environment. Arrays deployed in urban centers have to overcome the decreased
signal to noise ratio and reduced amount of real estate available to deploy
an array. To advance the understanding of monitoring infrasound sources in
urban environments, we deployed local and regional infrasound arrays on
building rooftops of the campus of Southern Methodist University (SMU)
and collected data for one seasonal cycle. The data were evaluated for structural source signals (continuous-wave packets), and when a signal was identified, the back azimuth to the source was determined through frequency-wavenumber analysis. This information was used to identify hypothesized
structural sources; these sources were verified through direct measurement,
structural numerical modeling and/or full waveform propagation modeling.
Permission to publish was granted by Director, Geotechnical & Structures
Laboratory.
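Back-azimuth estimation from a small array, done above via frequency-wavenumber analysis, can be illustrated in the time domain with a least-squares plane-wave slowness fit (the sensor layout, sound speed, and delays below are invented for illustration):

```python
import numpy as np

def back_azimuth(coords, delays):
    """Estimate the back azimuth of a plane wave crossing a small array
    from inter-sensor delays via a least-squares slowness fit -- a
    time-domain stand-in for frequency-wavenumber analysis.

    coords: (n, 2) sensor positions in meters (x east, y north).
    delays: (n,) arrival times in seconds relative to sensor 0.
    """
    d = coords - coords[0]                           # offsets from the reference sensor
    s, *_ = np.linalg.lstsq(d, delays, rcond=None)   # slowness vector (s/m)
    # The wave propagates along +s, so the source lies in the -s direction;
    # azimuth is measured in degrees clockwise from north.
    return np.degrees(np.arctan2(-s[0], -s[1])) % 360.0

# Square 4-element array, 100 m aperture; simulate a wave arriving from due
# east (back azimuth 90 deg) at 340 m/s and recover the azimuth:
coords = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
slowness = np.array([-1.0 / 340.0, 0.0])   # traveling westward <=> from the east
delays = coords @ slowness
print(round(back_azimuth(coords, delays)))  # 90
```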
11:00
4aSPb4. Design of a speaker array system based on adaptive time reversal method. Gee-Pinn J. Too, Yi-Tong Chen, and Shen-Jer Lin (Dept. of
Systems and Naval Mechatronic Eng., National Cheng Kung Univ., No. 1
University Rd., Tainan 701, Taiwan, z8008070@email.ncku.edu.tw)
A system for focusing sound around desired locations by using a speaker
array of controlled sources is proposed. The main objective of this study is to increase the acoustic signal in certain locations, where the user is, while reducing it in certain other locations by controlling the source signals.
Based on adaptive time reversal theory, input weighting coefficients for
speakers are evaluated for the speaker sources. Experiments and simulations
with a speaker array of controlled sources are established in order to observe
the distribution of sound field under different boundary and control conditions. The results indicate that based on the current algorithm, the difference
of sound pressure level between bright point and dark point can be as high
as 12 dB with an eight speakers array system.
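As a rough sketch of the focusing idea (the geometry, frequency, and free-field Green's function are assumptions, not the authors' experimental setup), phase-conjugate ("time reversal") weighting concentrates energy at a bright point:

```python
import numpy as np

# Hypothetical single-frequency sketch of time-reversal focusing; geometry,
# frequency, and the free-field Green's function are assumptions, not the
# authors' experimental configuration.
k = 2 * np.pi * 1000.0 / 343.0                       # wavenumber at 1 kHz in air
speakers = np.c_[np.linspace(-0.7, 0.7, 8), np.zeros(8)]   # 8-speaker line array

def green(src, rcv):
    """Free-field Green's function from each source position to one receiver."""
    r = np.linalg.norm(rcv - src, axis=-1)
    return np.exp(1j * k * r) / (4 * np.pi * r)

bright = np.array([0.0, 2.0])                        # focus ("bright") point
dark = np.array([1.0, 2.0])                          # quiet ("dark") point
h_b = green(speakers, bright)
w = np.conj(h_b)                                     # time-reversal weights
p_bright = abs(w @ h_b)                              # coherent sum at the focus
p_dark = abs(w @ green(speakers, dark))              # incoherent sum elsewhere
contrast_db = 20 * np.log10(p_bright / p_dark)
```

Adaptive schemes additionally constrain the dark-point response when computing the weights; the phase-conjugate weighting above is the non-adaptive baseline.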
11:15
4aSPb5. Focusing the acoustic signal of a maneuvering rotorcraft. Geoffrey H. Goldman (U.S. Army Res. Lab., 2800 Powder Mill Rd., Adelphi,
MD 20783-1197, geoffrey.h.goldman.civ@mail.mil)
An algorithm was developed and tested to blindly focus the acoustic
spectra of a rotorcraft that was blurred by time-varying Doppler shifts and
other effects such as atmospheric distortion. First, the fundamental frequency generated by the main rotor blades of a rotorcraft was tracked using a fixed-lag smoother. Then, the frequency estimates were used to resample the data
in time using interpolation. Next, the motion compensated data were further
focused using a technique based upon the phase gradient autofocus algorithm. The performance of the focusing algorithm was evaluated by analyzing the increase in the amplitude of the harmonics. For most of the data, the
algorithm focused the harmonics between approximately 10–90 Hz to
within 1–2 dB of an estimated upper bound obtained from conservation of
energy and estimates of the Doppler shift. In addition, the algorithm was
able to separate two closely spaced frequencies in the spectra of the rotorcraft. The algorithm developed can be used to preprocess data for classification, nulling, and tracking algorithms.
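The resampling step of the focusing chain can be illustrated with a synthetic tone; the frequency track, sample rate, and modulation below are assumptions, not the rotorcraft data or the author's implementation:

```python
import numpy as np

# Hedged sketch of the motion-compensation (resampling) step; the frequency
# track, sample rate, and modulation depth are assumptions for illustration.
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
f_track = 20.0 + 2.0 * np.sin(2 * np.pi * 0.5 * t)   # tracked, Doppler-shifted fundamental
phase = 2 * np.pi * np.cumsum(f_track) / fs
x = np.cos(phase)                                    # "blurred" harmonic signal

f_mean = f_track.mean()
tau = np.cumsum(f_track) / fs / f_mean               # warped time: d(tau)/dt = f/f_mean
x_focused = np.interp(t, tau, x)                     # resample uniformly in warped time

# After warping, the spectral peak collapses onto the mean fundamental.
X = np.abs(np.fft.rfft(x_focused * np.hanning(x_focused.size)))
peak = np.fft.rfftfreq(x_focused.size, 1 / fs)[np.argmax(X)]
```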
11:30
4aSPb6. Representing the structure of underwater acoustic communication data using probabilistic graphical models. Atulya Yellepeddi (Elec.
Engineering/Appl. Ocean Phys. and Eng., Massachusetts Inst. of Technology/Woods Hole Oceanographic Inst., 77 Massachusetts Ave., Bldg. 36-683, Cambridge, MA 02139, atulya@mit.edu) and James C. Preisig (Appl.
Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Woods Hole,
MA)
Exploiting the structure in the output of the underwater acoustic communication channel in order to improve the performance of the communication
system is a problem that has received much recent interest. Methods such as
physical constraints and sparsity have been used to represent such structure
in the past. In this work, we consider representing the structure of the
received signal using probabilistic graphical models (more specifically Markov random fields), which capture the conditional dependencies amongst a
collection of random variables. In the frequency domain, the inverse covariance matrix of the received signal is shown to have a sparse structure. Under
the assumption that the signal may be modeled as a multivariate Gaussian
random variable, this corresponds to a Markov random field. It is argued
that the underlying cause of the structure is the cyclostationary nature of the
signal. In practice, the received signal is not exactly cyclostationary, but
data from the SPACE08 acoustic communication experiment is used to
demonstrate that field data exhibits exploitable structure. Finally, techniques
to exploit graphical model structure to improve the performance of wireless
underwater acoustic communication are briefly considered.
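The link between a sparse inverse covariance and a Markov random field can be shown in a few lines; the chain-structured precision matrix below is an illustrative stand-in (values assumed) for the frequency-domain structure described in the abstract:

```python
import numpy as np

# Illustrative chain-structured Gaussian Markov random field; the precision
# values are assumptions, standing in for the sparse frequency-domain
# precision structure described in the abstract.
n = 5
Q = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.8), 1)
     + np.diag(np.full(n - 1, -0.8), -1))   # tridiagonal precision: chain MRF
C = np.linalg.inv(Q)                        # covariance is dense...

# ...yet Q[0, 4] == 0 encodes that variables 0 and 4 are conditionally
# independent given the rest, even though their covariance C[0, 4] is nonzero.
```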
11:45
4aSPb7. Choice of acoustics signals family in multi-users environment.
Benjamin Ollivier, Frederic Maussang, and Rene Garello (ITI, Institut
Mines-Telecom / Telecom Bretagne - Lab-STICC, 655 Ave. du Technopole,
Plouzane 29200, France, benjamin.ollivier@telecom-bretagne.eu)
Our application concerns a system immersed in an underwater acoustic context, with Nt transmitters and Nr slowly moving receivers. The objective is for all receivers to detect the transmitted signals, in order to estimate the times of arrival (TOA) and thereby facilitate localization when several TOAs (more than 3) are present. We must choose a method to generate a number Ns of broad-band signals for Code Division Multiple Access (CDMA) modulation, which is specially adapted to our problem. This work is devoted to selecting Nt signals among the Ns available; the aim is to choose the most distinctly detectable ones. First, in a Doppler-free context, the signal selection criterion is based on the ratio between the maxima of the auto-correlation and the cross-correlation. Second, in the presence of Doppler, we rely on the ambiguity function, which represents the correlation function over a range of Doppler frequency shifts; the choice of the Nt signals is then based on the ratio between the maxima of the auto-ambiguity and cross-ambiguity functions. In this paper, we highlight the relevance of these criteria (correlation, ambiguity function) for choosing the most appropriate signals as a function of the multi-user context.
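The no-Doppler selection criterion can be sketched as follows; the code family (random binary sequences) and the sizes Ns and Nt are assumptions for illustration:

```python
import numpy as np

# Hedged sketch of the no-Doppler selection criterion; random binary codes
# and the sizes Ns = 6, Nt = 3 are assumptions, not the authors' signal set.
rng = np.random.default_rng(0)
codes = rng.choice([-1.0, 1.0], size=(6, 127))       # Ns candidate broadband codes

def xcorr_max(a, b):
    return np.abs(np.correlate(a, b, mode="full")).max()

def selection_ratio(i):
    auto = xcorr_max(codes[i], codes[i])             # autocorrelation peak
    cross = max(xcorr_max(codes[i], codes[j])
                for j in range(len(codes)) if j != i)
    return auto / cross                              # larger = more detectable

ranked = sorted(range(len(codes)), key=selection_ratio, reverse=True)
best_three = ranked[:3]                              # the Nt selected signals
```

In the Doppler case, the same ratio is applied to the peaks of the auto- and cross-ambiguity surfaces instead of the correlations.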
168th Meeting: Acoustical Society of America
2266
THURSDAY MORNING, 30 OCTOBER 2014
INDIANA F, 8:00 A.M. TO 11:05 A.M.
Session 4aUW
Underwater Acoustics: Shallow Water Reverberation II
Brian T. Hefner, Chair
Applied Physics Laboratory, University of Washington,
1013 NE 40th Street, Seattle, WA 98105
Chair’s Introduction—8:00
Contributed Paper
8:05
4aUW1. SONAR Equation perspective on TREX13 measurements. Dajun Tang and Brian T. Hefner (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, djtang@apl.washington.edu)
Modeling shallow water reverberation is a problem that can be approximated as two-way propagation (including multiple forward scatter) and a single backward scatter. This can be effectively expressed in terms of the SONAR equation: RL = SL − 2·TL + SS, where RL is the reverberation level, SL is the source level, TL is the one-way transmission loss, and SS is the integrated scattering strength. In order to understand the reverberation problem at the basic research level, both propagation and scattering physics need to be properly addressed. A major goal of TREX13 (Target and Reverberation EXperiment 2013) is to quantitatively investigate reverberation with sufficient environmental measurement to support full modeling of reverberation data. Along a particular reverberation track at the TREX13 site, TL and direct-path backscatter were separately measured. Environmental data were extensively collected along this track. This talk will bring together all the components of the SONAR equation measured separately at the TREX13 site to provide an assessment of the reverberation process along with the environmental factors impacting each of the components.
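The SONAR equation in 4aUW1 above can be checked with simple arithmetic; the levels below are hypothetical, not TREX13 values:

```python
# Quick arithmetic check of RL = SL - 2*TL + SS; the levels are hypothetical,
# not TREX13 measurements.
SL = 210.0   # source level, dB
TL = 60.0    # one-way transmission loss, dB
SS = -35.0   # integrated scattering strength, dB
RL = SL - 2 * TL + SS   # reverberation level = 210 - 120 - 35 = 55 dB
```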
Invited Papers
8:20
4aUW2. Environmental measurements collected during TREX13 to support acoustic modeling. Brian T. Hefner and Dajun Tang
(Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, hefner@
apl.washington.edu)
The major goal of TREX13 (Target and Reverberation EXperiment 2013) was to quantitatively investigate reverberation with sufficient environmental measurements to support full modeling of reverberation data. The collection of environmental data to support reverberation modeling is usually limited by the large ranges (tens of km) involved, the temporal and spatial variability of the environment, and the time variation of towed source/receiver locations within this environment. In order to overcome these difficulties, TREX13 was
carried out in a 20 m deep shelf environment using horizontal line arrays mounted on the seafloor. The water depth and well controlled
array geometry allowed environmental characterization to be focused on the main beam of the array, i.e., along a track roughly 5 km
long and 500 m wide. This talk presents an overview of the efforts made to characterize the sea surface, water column, seafloor, and subbottom along this track to support the modeling of acoustic data collected over the course of the experiment. [Work supported by ONR
Ocean Acoustics.]
8:40
4aUW3. Persistence of sharp acoustic backscatter transitions observed in repeat 400 kHz multibeam echosounder surveys offshore Panama City, Florida, over 1 and 24 months. Christian de Moustier (10dBx LLC, PO Box 81777, San Diego, CA 92138,
cpm@ieee.org) and Barbara J. Kraft (10dBx LLC, Barrington, New Hampshire)
The Target and Reverberation Experiment 2013 (TREX13), conducted offshore Panama City, FL, from April to June 2013, sought
to determine which environmental parameters contribute the most to acoustic reverberation and control sonar performance prediction
modeling for acoustic frequencies between 1 kHz and 10 kHz. In that context, a multibeam echosounder operated at 400 kHz was used
to map the seafloor relief and its high-frequency acoustic backscatter characteristics along the acoustic propagation path of the reverberation experiment. Repeat surveys were conducted a month apart, before and after the main reverberation experiment. In addition, repeat
surveys were conducted at 200 kHz in April 2014. Similar mapping work was also conducted in April 2011 during a pilot experiment
(GulfEx11) near the site chosen for TREX13. Both experiments revealed a persistent occurrence of sharp transitions from high to low
acoustic backscatter at the bottom of swales. Hypotheses are presented for observable differences in bathymetry and acoustic backscatter
in the overlap region between the GulfEx11 survey and the TREX13 surveys conducted 2 y apart. [Work supported by ONR 322 OA.]
2267
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2267
Contributed Papers
9:00
4aUW4. Roughness measurement by laser profiler and acoustic scattering strength of a sandy bottom. Nicholas P. Chotiros, Marcia J. Isakson, Oscar E. Siliceo, and Paul M. Abkowitz (Appl. Res. Labs., Univ. of Texas at Austin, PO Box 8029, Austin, TX 78713-8029, chotiros@arlut.utexas.edu)
The roughness of a sandy seabed off Panama City, FL, was measured with a laser profiler. This was the site of the target and reverberation experiment of 2013 (TREX13), in which propagation loss and reverberation strength were measured. The area may be characterized as having small-scale roughness due to bioturbation overlaying larger sand ripples due to current activity. The area was largely composed of sand with shell hash, crossed by ribbons of softer sediment at regular intervals. The roughness measurements were concentrated in the areas where the ribbons intersected the designated sound propagation track. Laser lines projected on the sand were imaged by a high-definition video recorder. The video images were processed to yield bottom profiles in three dimensions. Finally, the roughness data are used to estimate acoustic bottom scattering strength. [Work supported by the Office of Naval Research, Ocean Acoustics Program.]
9:15
4aUW5. Seafloor sub-bottom imaging along the TREX reverberation track. Joseph L. Lopes, Rodolf Arrieta, Iris Paustian, Nick Pineda (NSWC PCD, 110 Vernon Ave., Panama City, FL 32407-7001, joseph.l.lopes@navy.mil), and Kevin Williams (Appl. Phys. Lab. / Univ. of Washington, Seattle, WA)
The Buried Object Scanning Sonar (BOSS) integrated into a Bluefin12 autonomous underwater vehicle was used to collect seafloor sub-bottom data along the TREX reverberation track. BOSS is a downward-looking sonar and employs an omni-directional source to transmit a 3 to 20 kHz linear frequency modulated (LFM) pulse. Backscattered signals are received by two 20-channel linear hydrophone arrays. The BOSS survey was carried out to support long-range reverberation measurements at 3 kHz. The data were beamformed in three dimensions and processed into 10 cm × 10 cm × 10 cm voxel maps of backscattering to a depth of 1 m. Comparison of the BOSS imagery with 400 kHz multibeam sonar imagery of the seafloor allows muddy regions to be identified and shows differences rationalized by the differences in sediment penetration of the two frequency ranges utilized. Processed BOSS data are consistent with observations from diver cores and the reverberation data collected by stationary arrays deployed on the seafloor. Specifically, stronger and deeper backscattering from muddy regions is observed (relative to nearby sandy regions). This correlates well with the large amounts of detritus (e.g., shell fragments) and complicated vertical layering within cores, and the enhanced reverberation, from those regions. [Work supported by ONR.]
9:30
4aUW6. Seabed characterisation using a low cost digital thin line array: Results from the Target and Reverberation Experiments 2013. Unnikrishnan K. Chandrika, Venugopalan Pallayil (Acoust. Res. Lab., TMSI, National Univ. of Singapore, 18 Kent Ridge Rd., Singapore 119227, Singapore, venu@arl.nus.edu.sg), Nicholas Chotiros (Appl. Res. Lab., Univ. of Texas, Austin, TX), and Marcia Isakson (Appl. Res. Lab., Univ. of Texas, Austin, TX)
During the TREX13 experiments in the Gulf of Mexico in May 2013, the use of a low cost digital thin line array (DTLA) developed at the Acoustic Research Lab, National University of Singapore, was explored for sea-bottom characterisation. The array, developed for use from AUV platforms, was hosted on a Sea-eye ROV from UT Austin and towed using the R/V Smith, as no AUV platform was available during the course of the experiment. The ROV also hosted a wide-band acoustic source sending out chirp waveforms in the frequency range of 3 to 15 kHz. Despite the complexity of the set-up used, the array dynamics were well maintained during the tow test, and the data collected were useful for estimating the bottom type from reflection coefficient measurements and comparing with available models. Our analysis, matched filtering the received data and estimating the bottom reflection coefficient, showed that the bottom type at the experimental site was sandy-silt, which compared well with observations made by other means. Details of the experiments performed and the results of the data analysis will be presented at the meeting. Some suggestions for improvement for future experiments will also be discussed.
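The laser-profiler roughness processing described in 4aUW4 above reduces, in its simplest form, to spectral analysis of a height profile; the profile below is synthetic (assumed 50 cm ripples plus small-scale roughness), not the TREX13 data:

```python
import numpy as np

# Hedged sketch of roughness-profile processing; the profile is synthetic,
# not the TREX13 laser-profiler data.
dx = 0.002                                     # 2 mm sample spacing [m]
x = np.arange(0, 2.0, dx)
rng = np.random.default_rng(1)
z = 0.01 * np.sin(2 * np.pi * x / 0.5) + 0.001 * rng.standard_normal(x.size)

z = z - z.mean()
rms_height = np.sqrt(np.mean(z ** 2))          # RMS roughness
spec = np.abs(np.fft.rfft(z * np.hanning(z.size))) ** 2   # roughness spectrum
K = np.fft.rfftfreq(z.size, dx)                # spatial frequency [cycles/m]
dominant = 1.0 / K[np.argmax(spec[1:]) + 1]    # dominant ripple wavelength [m]
```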
9:45
4aUW7. Wide-angle reflection measurements (TREX13): Evidence of
strong seabed lateral heterogeneity at two scales. Charles W. Holland,
Chad Smith (Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA 16804, cwh10@psu.edu), Paul Hines (Elec. and Comput. Eng.,
Dalhousie Univ., Dalhousie, NS, Canada), Jan Dettmer, Stan Dosso (School
of Earth and Ocean Sci., Univ. of Victoria, Victoria, BC, Canada), and
Samuel Pinson (Appl. Res. Lab., The Penn State Univ., State College,
PA)
Broadband wide-angle reflection data possess high information content,
yielding both depth and frequency dependence of sediment wave velocities,
attenuations, and density. Measurements at two locations off Panama City,
FL (TREX13), however, presented a surprise: over the measurement aperture (a few tens of meters) the sediment was strongly laterally variable. This
prevented the usual analysis in terms of depth-dependent geoacoustic properties; only rough estimates could be made. On the other hand, the data provide clear evidence of lateral heterogeneity at the O(10⁰–10¹) m scale. The two sites were separated by ~6 km, one on a ridge (lateral dimension 10² m) and
one in a swale of comparable dimension; the respective sound speeds are
roughly 1680 m/s and 1585 m/s. The lateral variability, especially at the 1–
10 m scale is expected to impact both propagation and reverberation. Characteristics of the reflection data and its attendant “surprise” suggest the possibility of objectively separating the intermingled angle and range
dependence; this would open the door to detailed geoacoustic estimation in
areas of strong lateral variability. [Research supported by ONR Ocean
Acoustics.]
Invited Papers
10:00
4aUW8. Modeling reverberation in a complex environment with the finite element method. Marcia J. Isakson and Nicholas P. Chotiros (Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78713, misakson@arlut.utexas.edu)
Acoustic finite element models solve the Helmholtz equation exactly and are customizable to the scale of the discretization of the
environment. This makes them an ideal candidate for reverberation studies in complex environments. In this study, reverberation is calculated for a realistic shallow water waveguide. The environmental parameters are taken from the extensive characterization completed
for the Target and Reverberation Experiment (TREX) conducted off the coast of the Florida panhandle in 2013. Measured sound speed
profiles, sea surface roughness, bathymetry, and measured ocean bottom roughness are included in the model. Measurements of the normal incidence bottom loss are used as a proxy for range dependent sediment density. Results are compared with a closed form solution
for reverberation. [Work sponsored by ONR, Ocean Acoustics.]
10:20–10:35 Break
Contributed Papers
10:35
4aUW9. Normal incidence reflection measurements (TREX13): Inferences for lateral heterogeneity over a range of scales. Charles W. Holland, Chad Smith (Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA 16804, cwh10@psu.edu), and Paul Hines (Elec. and Comput. Eng., Dalhousie Univ., Dalhousie, NS, Canada)
Normal incidence seabed reflection data suffer from a variety of ambiguities that make quantitative interpretation difficult. The reflection coefficient has an inseparable ambiguity between bulk density and compressional sound speed. Even more serious, reflection data are a function of other sediment characteristics including interface roughness, volume heterogeneities, and local bathymetry. Seafloor interface curvature is especially important and can lead to focusing/defocusing of the reflected field. An attempt is made with ancillary data, including bathymetry, 400 kHz backscatter, and wide-angle seabed reflection data, to separate some of the mechanisms. The resulting analysis of 1–12 kHz reflection data suggests: (1) strong lateral sediment heterogeneity exists on scales of 10–100 m; (2) there are distinct geoacoustic regimes on the lee and stoss sides of the ridge crest, and also between the crest and the swale; and (3) the ridge crest geoacoustic properties are similar across distances of 6 km along two perpendicular transects (~1 correlation). [Research supported by ONR Ocean Acoustics.]
10:50
4aUW10. Acoustic measurements on mid-shelf sediments with cobble: Implications for reverberation. Charles W. Holland (Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA 16804, cwh10@psu.edu), Gavin Steininger, Jan Dettmer, Stan Dosso (School of Earth and Ocean Sci., Univ. of Victoria, Victoria, BC, Canada), and Allen Lowrie (Picayune, MS)
The vast majority of sediment acoustics research has focused on rather homogeneous sandy sediments. Measurements for sediments containing cobbles (grain size greater than 6 cm) are rare. Here, measurements are presented for mid-shelf sediments containing pebbles/cobbles mixed with other grain sizes spanning 7 orders of magnitude, including silty clay, sand, and shell hash. The 2 kHz sediment sound speed in two distinct layers with cobble is 1531 ± 5 m/s and 1800 ± 20 m/s at the 95% credibility interval. The dispersion over the 400–2000 Hz band was relatively weak, 2 and 7 m/s, respectively. The objective is to (1) present results for a sediment type about which little is known, (2) motivate development of theoretical wave propagation models for wide grain size distributions, and (3) speculate on the possibility of cobble as a scattering mechanism for mid-shelf reverberation. The presence of cobbles from 1 to 3 m (possibly extending to 6 m) sub-bottom suggests they are the dominant scattering mechanism at this site. Though sediments with cobbles might be considered unusual, especially on the mid-shelf, they may be more common than the paucity of measurements would suggest, since typical direct sampling techniques (e.g., cores and grab samples) have fundamental sampling limitations. [Research supported by ONR Ocean Acoustics.]
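The density/sound-speed ambiguity noted in 4aUW9 above follows directly from the normal-incidence reflection coefficient; the sediment values below are hypothetical:

```python
# The density/speed ambiguity in the normal-incidence reflection coefficient;
# the sediment values are hypothetical, not TREX13 inferences.
def refl(rho1, c1, rho2, c2):
    z1, z2 = rho1 * c1, rho2 * c2              # acoustic impedances
    return (z2 - z1) / (z2 + z1)

water = (1000.0, 1500.0)                       # density [kg/m^3], speed [m/s]
sand = refl(*water, 1900.0, 1680.0)
softer_faster = refl(*water, 1680.0, 1900.0)   # same impedance, different split
# Both sediments return the identical reflection coefficient: R alone cannot
# separate density from sound speed.
```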
THURSDAY AFTERNOON, 30 OCTOBER 2014
INDIANA G, 1:10 P.M. TO 5:45 P.M.
Session 4pAAa
Architectural Acoustics and Speech Communication: Acoustic Trick-or-Treat: Eerie Noises, Spooky
Speech, and Creative Masking
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Eric J. Hunter, Cochair
Department of Communicative Sci., Michigan State University, 1026 Red Cedar Road, East Lansing, MI 48824
Chair’s Introduction—1:10
Invited Papers
1:15
4pAAa1. Auditory illusions of supernatural spirits: Archaeological evidence and experimental results. Steven J. Waller (Rock Art
Acoust., 5415 Lake Murray Blvd. #8, La Mesa, CA 91942, wallersj@yahoo.com) and Miriam A. Kolar (Amherst College, Amherst,
MA 01002)
Sound reflection, reverberation, ricochets, and interference patterns were perceived in the past as eerie sounds attributable to invisible echo spirits, thunder gods, ghosts, and sound-absorbing bodies. These beliefs in the supernatural were recorded in ancient myths, and
expressed in tangible archaeological evidence including canyon petroglyphs, cave paintings, and megalithic stone circles including
Stonehenge. Disembodied voices echoing throughout canyons gave the impression of echo spirits calling out from the rocks. Thunderous
reverberation filling deep caves gave the impression of the same thundering stampedes of invisible hoofed animals that were believed to
accompany thunder gods in stormy skies. If you did not know about sound wave reflection, would the inexplicable noise of a ricochet in
a large room have given you the impression of a ghost moaning “BOOoo” over your shoulder? Mysterious silent zones in an open field
gave the impression of a ring of large phantom objects blocking pipers’ music. Complex behaviors of sound waves such as reflection
and interference (which scientists today dismiss as acoustical artifacts) can experimentally give rise to psychoacoustic misperceptions in
which such unseen sonic phenomena are attributed to the supernatural. See https://sites.google.com/site/rockartacoustics/ for further
details.
1:35
4pAAa2. Pututus, resonance and beats: Acoustic wave interference effects at Ancient Chavín de Huántar, Perú. Miriam A. Kolar
(Program in Architectural Studies and Dept. of Music, Amherst College, Barrett Hall, 21 Barrett Hill Dr., AC# 2255, PO Box 5000,
Amherst, MA 01002, mkolar@amherst.edu)
Acoustic wave interference produces audible effects observed and measured in archaeoacoustic research at the 3,000-year-old
Andean Formative site at Chavín de Huántar, Perú. The ceremonial center’s highly-coupled network of labyrinthine interior spaces is
riddled with resonances excited by the lower-frequency range of site-excavated conch shell horns. These pututus, when played together
in near-unison tones, produce a distinct “beat” effect heard as the result of the amplitude variation that characterizes this linear interaction. Despite the straightforward acoustic explanation for this architecturally enhanced instrumental sound effect, the performative act
reveals an intriguing perceptual complication. While playing pututus inside Chavín’s substantially intact stone-and-earthen-mortar buildings, pututu performers have reported an experience of having their instruments’ tones “guided” or “pulled” into tune with the dominant
spatial resonances of particular locations. In an ancient ritual context, the recognition and understanding of such a sensory component
would relate to a particular worldview beyond the reach of present-day investigators. Despite our temporal distance, an examination of
the intertwined acoustic phenomena operative to this architectural–instrumental–experiential puzzle enriches the interdisciplinary
research perspective, and substantiates perceptual claims.
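The beat effect has a compact mathematical form; as a minimal sketch with assumed tone frequencies (not measured pututu pitches):

```python
import numpy as np

# Minimal sketch of the beat effect; the tone frequencies are assumed, not
# measured pututu pitches.
fs = 8000.0
t = np.arange(0, 2.0, 1 / fs)
f1, f2 = 272.0, 274.0                          # two near-unison tones
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trigonometric identity: the sum is a carrier at the mean frequency whose
# amplitude fluctuates at the difference frequency.
envelope = 2 * np.abs(np.cos(np.pi * (f2 - f1) * t))
beat_rate = f2 - f1                            # beats per second, here 2 Hz
```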
1:55
4pAAa3. Tapping into the theatre of the mind; creating the eerie scene through sound. Jonathon Whiting (Media and Information,
Michigan State Univ., College of Commun. Arts and Sci., 404 Wilson Rd., Rm. 409, East Lansing, MI 48824, whitin26@msu.edu)
Jaws. Psycho. Halloween. Halo. Movies and video games depend on music and acoustics to evoke certain emotional states in the
audience or game player. But what is the recipe for creating a haunting scene? A creaky door, a scream, a minor chord on a piano. How
and why are certain emotions pulled out of a listener in response to sound? From sound environments to mental expectations, the media
industry uses a variety of techniques to elicit responses from an audience. This presentation will discuss and present examples of the
principles behind the sound of fright.
2:15
4pAAa4. Disquiet: Epistemological bogeymen and other exploits in audition. Ean White (unaffiliated, 1 Westinghouse Plaza C-216,
Boston, MA 02136-2079, ean@eanwhite.org)
Beginning with an interest in “physiological musics,” Ean White’s sound art exploits interstices in our sensory apparatus with
unnerving results. He will recount a series of audio experiments with effects ranging from involuntary muscle contractions to the creation of sounds eerily unique to each listener. The presentation will include discussion of his techniques and how they inform his artistic
practice.
2:35
4pAAa5. Removing the mask in multitrack music mixing. Alexander U. Case (Sound Recording Technol., Univ. of Massachusetts
Lowell, 35 Wilder St., Ste. 3, Lowell, MA 01854, alex@fermata.biz)
The sound recording heard via stereo loudspeakers and headphones is made up of many dozens—sometimes more than 100—discrete tracks of musical elements. Multiple individual performances across a variety of instruments are fused into the final, two-channel
recording—left and right—that is released to consumers. Achieving sonic success in this many-into-two challenge requires strategic,
creative release from masking. Part of the artistry of multitrack mixing includes finding innovative signal processing approaches that
enable the full arrangement and the associated interaction among the multitrack components of the music to be heard and enjoyed.
Masking among tracks clutters and obscures the music. But audio engineers are not afraid. They want you to hear what’s behind the mask.
Hear how. Happy Halloween.
2:55
4pAAa6. Documenting and identifying things that go bump in the night. Eric L. Reuter (Reuter Assoc., LLC, 10 Vaughan Mall, Ste.
201A, Portsmouth, NH 03801, ereuter@reuterassociates.com)
Acoustical consultants are occasionally asked to help diagnose mysterious noises in buildings, and it can be difficult to be present
and ready to make measurements when such noises occur. This paper will present some of the tools and methods the author uses for recording and analyzing these events. These include the use of tablet-based measurement devices and high-speed playback of long-term
recordings.
3:15–3:30 Break
3:30
4pAAa7. Inaudible information, disappearing declamations, misattributed locations, and other spooky ways your brain fools
you—every day. Barbara Shinn-Cunningham (Biomedical Eng., Boston Univ., 677 Beacon St., Boston, MA 02215-3201, shinn@bu.
edu)
We bumble through life convinced that our senses provide reliable, faithful information about the world. Yet on closer inspection,
our brains constantly misinform us, creepily convincing us of “truths” that are just plain false. We hear information that is not really
there. We are oblivious to sounds that are perfectly audible. For sounds that we do hear, we cannot tell when they actually occurred. We
completely overlook changes that even a simple acoustic analysis would detect with 100% accuracy. In short, we misinterpret the sounds
reaching our ears all the time, and do not even realize it. This talk will review the evidence for how unreliable and biased we are in interpreting the world—and why the chilling failures of our perceptual machinery may be excusable, or even useful, as we navigate the complex world in which we live.
3:50
4pAAa8. The mysterious case of the singing toilets and other nerve wracking tales of unwanted sound. David S. Woolworth
(Oxford Acoust., 356 CR 102, Oxford, MS 38655, dave@oxfordacoustics.com)
Lightweight construction nightmares, devilish designs that never see acoustic review, improper purposing of spaces, and other stories
involving the relentless torture of building occupants. Will they survive?
4:10
4pAAa9. Sound effects with AUditory syntaX—A high-level scripting language for sound processing. Bomjun J. Kwon (Hearing,
Speech and Lang., Gallaudet University, 800 Florida Ave NE, Washington, DC 20002, bomjun.kwon@gallaudet.edu)
AUditory syntaX (AUX) is a high-level scripting language specifically crafted for the generation and processing of auditory signals (Kwon, 2012; Behav Res 44, 361–373). AUX does not require knowledge of or prior experience in computer programming.
Rather, AUX provides an intuitive and descriptive environment where users focus on perceptual components of the sound, without tedious tasks unrelated to the perception such as memory management or array handling often required in other computer languages such as
C++ or MATLAB that are popularly used in auditory science. This presentation provides a demonstration of AUX for the generation and
processing of various sound effects, particularly “fun” or “spooky” sounds. Processing methods for sound effects widely used in arts,
films, and other media, such as reverberation, echoes, modulation, pitch shift, and flanger/phaser, will be reviewed, and AUX code to generate those effects, along with the generated sounds, will be demonstrated.
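AUX syntax itself is not reproduced here; as a language-neutral sketch with assumed parameters, one of the effects mentioned (echo) amounts to adding a delayed, attenuated copy of the signal:

```python
import numpy as np

# Language-neutral sketch (AUX syntax not reproduced; delay and gain values
# are assumed): an echo is a delayed, attenuated copy added to the signal.
def echo(x, fs, delay_s=0.25, gain=0.5):
    d = int(round(delay_s * fs))
    y = np.zeros(x.size + d)
    y[:x.size] += x                            # direct sound
    y[d:] += gain * x                          # delayed, attenuated copy
    return y

fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s test tone
y = echo(x, fs)
```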
4:30
4pAAa10. Eerie voices: Odd combinations, extremes, and irregularities. Brad H. Story (Speech, Lang., and Hearing Sci., Univ. of
Arizona, 1131 E. 2nd St., P.O. Box 210071, Tucson, AZ 85721, bstory@email.arizona.edu)
The human voice can project an eerie quality when certain characteristics are present in a particular context. Some types of eerie voices may be derived from physiological scaling of the speech production system that is either humanly impossible or nearly so. By combining previous work on adult speech with current research on speech development, this study aimed to simulate
vocalizations and speech based on unusual configurations of the vocal tract and vocal folds, and by imposing irregularities on movement
and vibration. The resulting sound contains qualities that are human-like, but not typical, and hence may give the perceptual impression
of eeriness. [Supported in part by NIH R01-DC011275.]
4:50
4pAAa11. Segregation of ambiguous pulse-echo streams and suppression of clutter masking in FM bat sonar by anticorrelation
signal processing. James A. Simmons (Neurosci., Brown Univ., 185 Meeting St., Box GL-N, Providence, RI 02912, james_simmons@
brown.edu)
Big brown bats often fly in conditions where the density and spatial extent of clutter requires a high rate of pulse emissions. Echoes
from one broadcast still are arriving when the next broadcast is sent out, creating ambiguity about matching echoes to corresponding
broadcasts. Biosonar sounds are widely beamed and impinge on the entire surrounding scene. Numerous clutter echoes typically are
received from different directions at similar times. The multitude of overlapping echoes and the occurrence of pulse-to-echo ambiguity
compromise the bat’s ability to peer into the upcoming path and determine whether it is free of collision hazards. Bats have to associate
echoes with their corresponding broadcasts to prevent ambiguity, and off-side clutter echoes have to be segregated from on-axis echoes
that inform the bat about its immediate forward path. In general, auditory streaming to resolve elements of an auditory scene depends on
differences in pitch and temporal pattern. Bats use a combination of temporal and spectral pitch to assign echoes to “target” and “clutter”
categories within the scene, which prevents clutter masking, and they associate incoming echoes with the corresponding broadcast by
treating the mismatch of echoes with the wrong broadcast as a type of clutter. [Supported by ONR.]
5:10
4pAAa12. Are you hearing voices in the high frequencies of human speech and voice? Brian B. Monson (Pediatric Newborn Medicine, Brigham and Women’s Hospital, Harvard Med. School, 75 Francis St., Boston, MA 02115, bmonson@research.bwh.harvard.edu)
The human voice produces acoustic energy at frequencies above 6 kHz. Energy in this high-frequency region has long been known
to affect perception of speech and voice quality, but also provides non-qualitative information about a speech signal. This presentation
will demonstrate how much useful information can be gleaned from the high frequencies with a report on studies where listeners were
presented with only high-frequency energy extracted from speech and singing. Come to test your own abilities and decide if you can
hear strange voices or just chirps and whistles in the high frequencies of human speech and voice.
Contributed Paper
5:30
4pAAa13. Measuring the impact of room acoustics on emotional
responses to music using functional neuroimaging: A pilot study. Martin
S. Lawless and Michelle C. Vigeant (Graduate Program in Acoust., The
Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802,
msl224@psu.edu)
Past cognitive neuroscience studies have established links between
music and an individual’s emotional response. Specifically, music can
induce activations in brain regions most commonly associated with reward
and pleasure (Blood/Zatorre PNAS 2001). To further develop concert hall
design criteria, functional magnetic resonance imaging (fMRI) techniques
can be used to investigate listeners’ emotional responses to and preferences for room acoustics
stimuli. Auralizations were created under various settings ranging from
anechoic to extremely reverberant. These stimuli were presented to five participants in an MRI machine, and the subjects were prompted to rate the
stimuli in terms of preference. Noise stimuli that matched the acoustic stimuli temporally and spectrally were also presented to the participants for the
analysis of main contrasts of interest. In addition, the participants were first
tested in a mock scanner to acclimatize them to the environment and to
later validate the results of the study. Voxel-wise region-of-interest analysis
was used to locate the emotion and reward epicenters of the brain that were
activated when the subjects enjoyed a hall’s acoustics. The activation levels
of these regions, which are associated with positive-valence emotions, were
examined to determine if the activations correlate with preference ratings.
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 7/8, 1:15 P.M. TO 3:20 P.M.
Session 4pAAb
Architectural Acoustics, Speech Communication, and Noise: Room Acoustics Effects on Speech
Comprehension and Recall II
Lily M. Wang, Cochair
Durham School of Architectural Engineering and Construction, University of Nebraska - Lincoln, PKI 101A, 1110 S. 67th St.,
Omaha, NE 68182-0816
David H. Griesinger, Cochair
Research, David Griesinger Acoustics, 221 Mt Auburn St #504, Cambridge, MA 02138
Invited Papers
1:15
4pAAb1. Challenges for second-language learners in difficult acoustic environments. Catherine L. Rogers (Dept. of Commun. Sci.
and Disord., Univ. of South Florida, USF, 4202 E. Fowler Ave., PCD1017, Tampa, FL 33620, crogers2@usf.edu)
Most anyone who has lived in a foreign country for any length of time knows that even everyday tasks can become tiring and frustrating when one must accomplish them while navigating a seemingly endless maze of unfamiliar social customs, vocabulary and speech
that seem far removed from one’s language laboratory experience. Add to these challenges noise, reverberation, and/or cognitive
demand (e.g., learning calculus, responding to multiple customer and co-worker demands) and even experienced learners may begin to
question their proficiency. This presentation will provide an overview of the speech perception and production challenges faced by
second-language learners in difficult acoustic environments that we may encounter every day, such as large lecture halls and retail or customer service settings. Past and current research investigating the effects of various environmental challenges on both relatively
early and later learners of a second language will be considered, as well as strategies that may mitigate challenges for both speakers and
listeners in some of these conditions.
1:35
4pAAb2. Development of speech perception under adverse listening conditions. Tessa Bent (Dept. of Speech and Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN 47405, tbent@indiana.edu)
Speech communication success is dependent on interactions among the talker, listener, and listening environment. One such important interaction is between the listener’s age and the noise and reverberation in the environment. Previous work has demonstrated that
children have greater difficulty than adults in noisy and highly reverberant environments, such as those frequently found in classrooms. I
will review research that considers how a talker’s production patterns also contribute to speech comprehension, focusing on nonnative
talkers. Studies from my lab have demonstrated that children have more difficulty than adults perceiving speech that deviates from
native language norms, even in quiet listening conditions in which adults are highly accurate. When a nonnative talker’s voice was combined with noise, children’s word recognition was particularly poor. Therefore, similar to the developmental trajectory for speech perception in noise or reverberation, the ability to accurately perceive speech produced by nonnative talkers continues to develop well into
childhood. Metrics to quantify speech intelligibility in specific rooms must consider listener characteristics, talker characteristics,
and their interaction. Future research should investigate how children’s speech comprehension is influenced by the interaction between
specific types of background noise and reverberation and talker production characteristics. [Work supported by NIH-R21DC010027.]
1:55
4pAAb3. Measurement and prediction of speech intelligibility in noise and reverberation for different sentence materials, speakers, and languages. Anna Warzybok, Sabine Hochmuth (Cluster of Excellence Hearing4All, Medical Phys. Group, Universität Oldenburg, Oldenburg D-26111, Germany, a.warzybok@uni-oldenburg.de), Jan Rennies (Cluster of Excellence Hearing4All, Project Group
Hearing, Speech and Audio Technol., Fraunhofer Inst. for Digital Media Technol. IDMT, Oldenburg, Germany), Thomas Brand, and
Birger Kollmeier (Cluster of Excellence Hearing4All, Medical Phys. Group, Universität Oldenburg, Oldenburg, Germany)
The present study investigates the role of the speech material type, speaker, and language for speech intelligibility in noise and reverberation. The experimental data are compared to predictions of the speech transmission index. First, the effect of noise only, reverberation only, and the combination of noise and reverberation was systematically investigated for two types of sentence tests. The hypothesis
to be tested was that speech intelligibility is more affected by reverberation when using an open-set speech material consisting of everyday sentences than when using a closed-set test with syntactically fixed and semantically unpredictable sentences. In order to distinguish
between the effect of speaker and language on speech intelligibility in noise and reverberation, the closed-set speech material was
recorded using bilingual speakers of German-Spanish and German-Russian. The experimental data confirmed that the effect of
reverberation was stronger for an open-set test than for a closed-set test. However, this cannot be predicted by the speech transmission
index. Furthermore, the inter-language differences in speech reception thresholds were on average up to 5 dB, whereas inter-talker differences were about 3 dB. The Spanish language suffered more under reverberation than German and Russian, which again challenged
the predictions of the speech transmission index.
2:15
4pAAb4. Speech comprehension in realistic classrooms: Effects of room acoustics and foreign accent. Zhao Peng, Brenna N.
Boyd, Kristin E. Hanna, and Lily M. Wang (Durham School of Architectural Eng. and Construction, Univ. of Nebraska-Lincoln, 1110
S. 67th St., Omaha, NE 68182, zpeng@huskers.unl.edu)
The current classroom acoustics standard (ANSI S12.60) recommends that core learning spaces not exceed a reverberation time
(RT) of 0.6 s and a background noise level (BNL) of 35 dBA, based on speech intelligibility performance mainly by the native English-speaking population. This paper presents two studies on the effects of RT and BNL on more realistic classroom learning experiences. How do native and non-native English-speaking listeners perform on speech comprehension tasks under adverse acoustic
conditions, if the English speech is produced by talkers whose native language is English (Study 1) versus Mandarin Chinese (Study 2)?
Speech comprehension materials were played back in a listening chamber to individual listeners: native and non-native English-speaking
in Study 1; native English, native Mandarin Chinese, and other non-native English-speaking in Study 2. Each listener was screened for
baseline English proficiency for use as a covariate in the statistical analysis. Participants completed dual tasks simultaneously (speech
comprehension and adaptive dot-tracing) under 15 different acoustic conditions, comprised of three BNL conditions (RC-30, 40, and 50)
and five RT scenarios (0.4–1.2 s). Results do show distinct differences between the listening groups. [Work supported by a UNL Durham
School Seed Grant and the Paul S. Veneklasen Research Foundation.]
Contributed Papers
2:35
4pAAb5. Speech clarity in lively theatres. Gregory A. Miller and Carl
Giegold (Threshold Acoust., LLC, 53 W. Jackson Boulevard, Ste. 815, Chicago, IL 60604, gmiller@thresholdacoustics.com)
By their very nature, theatres must be “lively” acoustic spaces. The audience must hear one another, so laughter and applause can ripple around the
room, and they must have the aural sensation of being in a large space, which
heightens the excitement of being at a live performance. Similarly, the theatre must reflect sound back to the actors in a way that helps them to gauge
how well their voices are filling the room, and to gauge audience response
throughout the performance. And yet this liveliness runs counter to much of
conventional wisdom regarding the acoustic conditions to support speech
clarity. This paper will describe ways in which the acoustic response of a
room can be built up to support both speech clarity and liveliness, with a
particular emphasis on theatre spaces in which the actors are placed in the
same volume as the audience (thrust and surround stages).
2:50
4pAAb6. Speech communication in noise to validate a virtual sound capturing system. Hyung Suk Jang, Seongmin Oh, and Jin Yong Jeon (Dept.
of Architectural Eng., Hanyang Univ., Seoul 133-791, South Korea, janghyungs@gmail.com)
Four microphone systems were designed to capture the real sound field
for the creation of a remote virtual coexistence space: an omnidirectional
microphone, a binaural dummy head, a linear microphone array, and a spherical
microphone array. The captured signals were synthesized into binaural signals using head-related transfer
functions (HRTFs) and presented through headphones. For validation, sentence
recognition tests were carried out to quantify speech perception by normal-hearing listeners. In addition, readability
and naturalness ratings were used to assess the quality of the synthesized
sounds. Different noise environments were applied at different signal-to-noise ratios, and an efficient sound capturing system was suggested by
comparing the results of the sentence recognition tests.
3:05
4pAAb7. Quantifying a measure and exploring the effect of varying
reflection densities from realistic room impulse responses. Hyun Hong
and Lily M. Wang (Durham School of Architectural Eng. and Construction,
Univ. of Nebraska-Lincoln, 1110 S. 67th St., Omaha, NE 68182-0816,
hhong@huskers.unl.edu)
Perceptual studies using objective acoustic metrics calculated from
room impulse responses, such as reverberation time and clarity index, are
common. Less work has been conducted looking explicitly at the reflection
density, or the number of reflections per second. The reflection density,
though, may well have its own perceptual influence when reverberation
time and source-receiver distances are controlled, particularly in relation to
room size perception. This paper presents first an investigation into quantifying the reflection density from realistic room impulse responses that may
be measured or simulated. The resolution of the sampling frequency, time
window applied, and cut-off level for including a reflection in the count are
considered. The quantification method is subsequently applied to select a
range of realistic RIRs for use in a perceptual study on determining the maximum audible reflection density by humans, using both speech and clapping
signals. Results from this study are compared to those from similar previous
work by the authors which used artificially simulated impulse responses
with constant reflection densities over time.
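The counting idea described above (sampling resolution, time window, and a cut-off level for including a reflection) can be sketched in a few lines of Python. This is a simplified illustration of such a quantification, not the authors' exact procedure; the peak-picking rule and all parameter names are assumptions.

```python
def reflection_density(rir, fs, cutoff_db, window_s):
    """Count reflections per second in the first window_s seconds of a
    room impulse response: samples that are local amplitude peaks and
    exceed a cutoff (in dB relative to the strongest arrival)."""
    n = int(window_s * fs)
    seg = [abs(x) for x in rir[:n]]
    threshold = max(seg) * 10 ** (cutoff_db / 20.0)  # cutoff_db is negative
    count = sum(
        1 for i in range(1, len(seg) - 1)
        if seg[i] >= threshold and seg[i] > seg[i - 1] and seg[i] >= seg[i + 1]
    )
    return count / window_s  # reflections per second

# Toy RIR at fs = 1000 Hz: impulses of 1.0 and 0.5 survive a -20 dB
# cutoff (threshold 0.1); the 0.05 impulse does not.
rir = [0.0] * 1000
rir[100], rir[200], rir[300] = 1.0, 0.5, 0.05
rate = reflection_density(rir, 1000, -20.0, 1.0)  # 2.0 reflections/s
```

Lowering the cut-off level admits weaker arrivals into the count, which is exactly the sensitivity the abstract proposes to study.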
THURSDAY AFTERNOON, 30 OCTOBER 2014
LINCOLN, 1:15 P.M. TO 5:15 P.M.
Session 4pAB
Animal Bioacoustics and Acoustical Oceanography: Use of Passive Acoustics for Estimation of Animal
Population Density II
Tina M. Yack, Cochair
Bio-Waves, Inc., 364 2nd Street, Suite #3, Encinitas, CA 92024
Danielle Harris, Cochair
Centre for Research into Ecological and Environmental Modelling, University of St. Andrews, The Observatory, Buchanan
Gardens, St. Andrews KY16 9LZ, United Kingdom
Chair’s Introduction—1:15
Invited Papers
1:20
4pAB1. Estimating singing fin whale population density using frequency band energy. David K. Mellinger (Cooperative Inst. for
Marine Resources Studies, Oregon State Univ., 2030 SE Marine Sci. Dr., Newport, OR 97365, David.Mellinger@oregonstate.edu),
Elizabeth T. Küsel (NW Electromagnetics and Acoust. Res. Lab., Portland State Univ., Portland, OR), Danielle Harris, Len Thomas
(Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, St. Andrews, United Kingdom), and Luis Matias (Instituto
Dom Luiz, Faculdade de Ciências, Universidade de Lisboa, Lisbon, Portugal)
Fin whale (Balaenoptera physalus) song occurs in a narrow frequency band between approximately 15 and 25 Hz. During the breeding season, the sound from many distant fin whales in tropical and subtropical parts of the world may be seen as a “hump” in this band
of the ocean acoustic spectrum. Since a higher density of singing whales leads to more energy in the band, the size of this hump—the
total received acoustic energy in this frequency band—may be used to estimate the population density of singing fin whales in the vicinity of a sensor. To estimate density, a fixed density of singing whales is simulated; using acoustic propagation modeling, the energy they
emit is propagated to the sensor, and the received level calculated. Since received energy in the fin whale band increases proportionally
with the density of whales, the density of whales may then be estimated from the measured received energy. This method is applied to a
case study of sound recorded on ocean-bottom recorders southwest of Portugal; issues covered include variance due to acoustic propagation modeling, reception area, variation in whale song acoustic level and frequency, and elimination of interfering sounds. [Funding
from ONR.]
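The proportionality at the heart of this method can be sketched in a few lines of Python: simulate a known singer density, model the band energy it would deliver to the sensor, then scale by the measured energy. The function name and all numbers below are hypothetical illustrations, not values from the study.

```python
def estimate_singer_density(e_measured, e_simulated, d_simulated):
    """Scale a simulated singer density by the ratio of measured to
    modeled received band energy, valid because received energy in the
    fin whale band grows proportionally with the density of singers."""
    return d_simulated * (e_measured / e_simulated)

# Hypothetical numbers: a simulation of 5 whales per unit area predicts
# a received band energy of 2.0e-10 (linear units); the sensor measures
# 3.0e-10, implying 1.5 times the simulated density.
density = estimate_singer_density(3.0e-10, 2.0e-10, 5.0)  # 7.5 whales per unit area
```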
1:40
4pAB2. Large-scale passive-acoustics-based population estimation of African forest elephants. Yu Shiu, Sara Keen, Peter H.
Wrege, and Elizabeth Rowland (BioAcoust. Res. Program, Cornell Univ., 159 Sapsucker Woods Rd, Ithaca, NY 14850, atoultaro@
gmail.com)
African forest elephants (Loxodonta cyclotis) live in tropical rainforests in Central Africa and often use low-frequency vocalizations
for long-distance communication and coordination of group activities. There is great interest in monitoring population size in this species; however, the dense rainforest canopy severely limits visibility, making it difficult to estimate abundance using traditional methods
such as aerial surveys. Passive acoustic monitoring offers an alternative approach to estimating abundance in this low-visibility environment. The work we present here can be divided into three steps. First, we apply an automatic elephant call detector, which enables the
processing of large-scale acoustic signals in a reasonable amount of time. Second, we apply a density estimation method designed
for a single microphone, because microphones are often positioned far apart to cover a large area of rainforest, so that
the same call will not produce multiple arrivals on different recording units. Lastly, we examine results from our historical data across five
years in six locations in central Africa, comprising over 1000 days of recorded sound. We will address the feasibility of long-term population monitoring and also the potential impact of human activity on elephant calling behavior.
2:00
4pAB3. A generalized random encounter model for estimating animal density with remote sensor data. Elizabeth Moorcroft, Tim
C. D. Lucas (Ctr. for Mathematics, Phys. and Eng. in the Life Sci. and Experimental Biology, UCL, CoMPLEX, University College
London, Gower St., London WC1E 6BT, United Kingdom, e.moorcroft@ucl.ac.uk), Robin Freeman, Marcus J. Rowcliffe (Inst. of Zoology, Zoological Society of London, London, United Kingdom), and Kate E. Jones (Ctr. for Biodiversity and Environment Res., UCL,
London, United Kingdom)
Acoustic detectors are commonly used to monitor wildlife. Current estimators of abundance or density require recognition of
individuals or the distance of the animal from the sensor, which is often difficult. The random encounter model (REM) has been successfully applied to count data without these requirements. However, count data from acoustic detectors do not fit the assumptions of the
REM due to the directionality of animal signals. We developed a generalized REM (gREM), to estimate animal density from count data,
derived for different combinations of sensor detection widths and animal signal widths. We tested the accuracy and precision of this
model using simulations for different combinations of sensor detection and animal signal widths, number of captures, and animal movement models. The gREM produces accurate estimates of absolute animal density. However, larger sensor detection and animal signal
widths, and larger numbers of captures, give more precise estimates. Different animal movement models had no effect on the gREM. We
conclude that the gREM provides an effective method to estimate animal densities in both marine and terrestrial environments. As
acoustic detectors become more ubiquitous, the gREM will be increasingly useful for monitoring animal populations across broad spatial, temporal, and taxonomic scales.
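The baseline REM that the gREM generalizes can be sketched as follows. This is a hedged illustration of the published camera-trap form of the estimator (encounter rate scaled by the sensor's effective detection profile); the gREM replaces the profile term with one that also depends on the animal's signal width, which is not reproduced here, and the parameter values in the test are hypothetical.

```python
import math

def rem_density(n_detections, effort_time, speed, radius, theta):
    """Baseline random encounter model: animal density from a detection
    count, given the sensor detection radius, the sensor detection angle
    theta (radians), and the average animal movement speed.
    D = (y / t) * pi / (v * r * (2 + theta))."""
    encounter_rate = n_detections / effort_time  # detections per unit time
    return encounter_rate * math.pi / (speed * radius * (2 + theta))
```

As the abstract notes, precision (not this point estimate) is what improves with wider detection zones and more captures; the formula itself is unbiased across movement models.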
2:20
4pAB4. Using sound propagation modeling to estimate the number of calling fish in an aggregation from single-hydrophone
sound recordings. Mark W. Sprague (Phys., East Carolina Univ., M.S. 563, Greenville, NC 27858, spraguem@ecu.edu) and Joseph J.
Luczkovich (Biology, East Carolina Univ., Greenville, NC)
Many fishes make sounds during spawning events that can be used to estimate abundance. Spawning stock size is a measure of fish
population size that is used by fishery biologists to manage harvest levels. It is desirable that such an estimate be assessed easily and
remotely using passive acoustics. Passive acoustics techniques (hydrophones) can be used to identify sound-producing species, but it is
difficult to count individual sound sources in the sea, where it is dark and background noise levels can be high, even though species can be identified
by their sounds. We have developed a method that can estimate the density of calling fish in an aggregation from single-hydrophone
recordings. Our method requires a sound propagation model for the area in which the aggregation is located. We generate a library of
modeled sounds from Monte Carlo-generated virtual distributions of fish to determine the range of fish population densities that match the
characteristics of a single-hydrophone sound recording. Such a model could be used from a fixed station (e.g., an observatory) to estimate
the population size of the sound producers. In this presentation, we will show some calculations made using this method and will
examine the benefits and limitations of the technique.
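The matching step of such a method can be sketched as follows: compare the level predicted for each candidate density against the measured level and keep the candidates that fall within a tolerance. The toy propagation model (a simple logarithmic relation) and all values are placeholders for the authors' site-specific model, used only to make the sketch runnable.

```python
import math

def matching_densities(measured_level, modeled_level, candidate_densities, tol_db=1.0):
    """Return the candidate fish densities whose modeled received level
    falls within tol_db of a single-hydrophone measurement."""
    return [d for d in candidate_densities
            if abs(modeled_level(d) - measured_level) <= tol_db]

# Toy stand-in for a propagation model: with incoherent summation of
# many callers, received level rises roughly as 10*log10(density).
toy_model = lambda d: 10.0 * math.log10(d) + 100.0
matches = matching_densities(110.0, toy_model, [1, 5, 10, 20])  # [10]
```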
Contributed Papers
2:40
4pAB5. An experimental evaluation of the performance of acoustic recording systems for estimating avian species richness and abundance.
Antonio Celis Murillo (Natural Resources and Environmental Sci., Univ. of
Illinois at Urbana-Champaign, 1704 Harrington Dr., Champaign, IL 61821,
celismu1@illinois.edu), Jill Deppe (Biological Sci., Eastern Illinois Univ.,
Champaign, IL), Jason Riddle (Natural Resources, Univ. of Wisconsin at
Stevens Point, Stevens Point, WI), Michael P. Ward (Natural Resources and
Environmental Sci., Univ. of Illinois at Urbana-Champaign, Champaign,
IL), and Theodore Simons (USGS Cooperative Fish and Wildlife Res. Unit,
North Carolina State Univ., Raleigh, NC)
Comparisons between field observers and acoustic recording systems
have shown great promise for sampling birds using acoustic methods.
Comparisons provide information about the performance of recording systems and field observers but do not provide a robust validation of their true
sampling performance, i.e., precision and accuracy relative to known population size and richness. We used a 35-speaker bird song simulation system
to experimentally test the accuracy and precision of two stereo (Telinga and
SS1) and one quadraphonic recording system (SRS) for estimating species
richness, abundance, and total abundance (across all species) of vocalizing
birds. We simulated 25 bird communities under natural field conditions by
placing speakers in a wooded area at 4–119 m from the center of the survey
at differing heights and orientations. We assigned recordings randomly to
one of eight skilled observers. We found a significant difference among
microphones in their ability to accurately estimate richness (p = 0.0019) and
total bird abundance (p < 0.0001). Our study demonstrates that acoustic
recording systems can potentially estimate bird abundance and species richness accurately; however, their performance is likely to vary with their technical
characteristics (recording pattern, microphone arrangement, etc.).
2:55–3:15 Break
3:15
4pAB6. Spatial variation of the underwater soundscape over coral reefs
in the Northwestern Hawaiian Islands. Simon E. Freeman (Marine Physical Lab., Scripps Inst. of Oceanogr., 7038 Old Brentford Rd., Alexandria,
VA 22310, simon.freeman@gmail.com), Lauren A. Freeman (Marine Physical Lab., Scripps Inst. of Oceanogr., La Jolla, CA), Marc O. Lammers
(Oceanwide Sci. Inst., Honolulu, HI), and Michael J. Buckingham (Marine
Physical Lab., Scripps Inst. of Oceanogr., La Jolla, CA)
Coral reefs create a complex acoustic environment, dominated by
sounds produced by benthic creatures such as crustaceans and echinoderms.
While there is growing interest in the use of ambient underwater biological
sound as a gauge of ecological state, extracting meaningful information
from recordings is a challenging task. Single-hydrophone (omnidirectional)
recorders can provide summary time and frequency information, but as the
spatial distribution of reef creatures is heterogeneous, the properties of reef
sound arriving at the receiver vary with position and arrival angle. Consequently, the locations and acoustic characteristics of individual sound producers remain unknown. An L-shaped hydrophone array, providing
direction-and-range sensing capability, can be used to reveal the spatial variability of reef sounds. Comparisons can then be made between sound sources and other spatially referenced information such as photographic data.
During the summer of 2012, such an array was deployed near four different
benthic ecosystems in the Northwestern Hawaiian Islands, ranging from
high-latitude coral reefs to communities dominated by algal turf. Using conventional and adaptive acoustic focusing (equivalent to curved-wavefront
beamforming), time-varying maps of sound production from benthic organisms were created. Comparisons with the distribution of nearby sea floor
features, and the makeup of benthic communities, will be discussed.
3:30
4pAB7. Density estimates of odontocetes in an active military base using
passive acoustic monitoring. Bethany L. Roberts (School of Biology,
Univ. of St. Andrews, Sea Mammal Res. Unit, St. Andrews, Fife KY16
8LB, United Kingdom, blr2@st-andrews.ac.uk), Zach Swaim, and Andrew
J. Read (Duke Marine Lab, Duke Univ., Beaufort, NC)
We deployed passive acoustic monitoring devices at Camp Lejeune,
North Carolina, USA, to estimate the density of odontocete populations. Four
C-PODs (echolocation click detectors) were deployed in water depths ranging from 13 to 21 meters from 30 November 2012 to 13 November 2013.
Two species of odontocetes are known to inhabit the survey area: bottlenose
dolphins and Atlantic spotted dolphins. The density estimation methods incorporate (i) the
rate at which the animals produce echolocation cues, (ii) the probability of
detecting cues, and (iii) the false positive rate of detections. To determine
the cue rate of bottlenose dolphins, we attached DTAGs to 14 bottlenose
dolphins during 2012 and 2013 in Sarasota, Florida. To determine cue rate
of spotted dolphins, we used six recordings of focal follows from 2001-2003
in an area adjacent to C-POD deployment locations. Echolocation playbacks
to C-PODs were used to obtain false positive rate and detection radius of
each C-POD. Furthermore, we obtained proportions of bottlenose and spotted dolphins in the survey area from concurrent line transect surveys. Preliminary results indicate that dolphins were detected on all four C-PODs
during every month of the survey period. Future studies in areas where multiple species are present could potentially use methods described here.
3:45
4pAB8. Preliminary calculation of individual echolocation signal emission
rate of Franciscana dolphins (Pontoporia blainvillei). Artur Andriolo (Zoology Dept., Federal Univ. of Juiz de Fora, Universidade Federal de Juiz de
Fora, Rua Jose Lourenço Kelmer, s/n - Campus Universitario Bairro São
Pedro, Juiz de Fora, Minas Gerais 36036-900, Brazil, artur.andriolo@ufjf.edu.
br), Federico Sucunza (Ecology Graduate Program, Federal Univ. of Juiz de
Fora, Juiz de Fora, Brazil), Alexandre N. Zerbini (Ecology, Instituto Aqualie,
Juiz de Fora, Brazil), Daniel Danilewicz (Zoology Graduate Program, State
Univ. of Santa Cruz, Ilheus, Brazil), Marta J. Cremer (Biological Sci., Univ. of
Joinville Region, Joinville, Brazil), and Annelise C. Holz (Graduate Program
in Health and Environment, Univ. of Joinville Region, Joinville, Brazil)
Calculation of the echolocation signal emission rate is necessary to estimate how many individuals are vocalizing, especially if passive acoustic
density estimation methods are to be implemented. We calculated the individual emission rate of echolocation signals of the franciscana dolphin. Fieldwork took place between 22 and 31 January 2014 at Babitonga Bay, Brazil.
Acoustic data and group size were registered when animals were within visual range at a maximum distance of 50 meters. We used a Cetacean
Research™ hydrophone. The sound was digitized by an IOtech analog/digital converter, stored as WAV files, and analyzed with Raven software. A band-limited energy detector was set to automatically extract echolocation signals.
The emission rate was calculated by dividing the clicks registered in each file
by the file duration and by the number of individuals in the group. We analyzed 240 min of sound from 36 groups. A total of 29,164 clicks were detected.
The median individual click rate was 0.290 clicks/s (10th percentile = 0.036; 90th percentile = 1.166). The result is a general approximation of the individual echolocation signal emission rate. Sound production rates are potentially dependent on a number of factors, such as season, group size, sex, or even
density itself. [This study was supported by IWC/Australia, Petrobras,
Fundo de Apoio a Pesquisa/UNIVILLE.]
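The per-individual rate described above is a simple double division; the numbers in this sketch are hypothetical, not the study's data.

```python
def individual_click_rate(n_clicks, file_duration_s, group_size):
    """Clicks per second per individual: the clicks detected in a file
    divided by the file duration and by the number of animals in the
    visually counted group."""
    return n_clicks / (file_duration_s * group_size)

# Hypothetical file: 1200 clicks in a 600-s recording of a group of 4.
rate = individual_click_rate(1200, 600.0, 4)  # 0.5 clicks/s per animal
```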
4:00
4pAB9. Investigating the potential of a wave glider for cetacean density
estimation—A Scottish study. Danielle Harris (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St. Andrews, The Observatory, Buchanan Gardens, St. Andrews KY16 9LZ, United Kingdom, dh17@st-andrews.ac.uk) and Douglas Gillespie (Sea Mammal Res. Unit, Univ. of St.
Andrews, St. Andrews, United Kingdom)
A major advantage of autonomous vehicles is their ability to provide both
spatial and temporal coverage of an area during a survey. However, there is a
need to assess whether these technologies are suitable for monitoring cetacean
population densities. Data are presented from a Wave Glider deployed off the
east coast of Scotland between March and April 2014. Key areas of survey
design, data collection, and analysis were investigated. First, the ability of the
glider to complete a designed line transect survey was assessed. Second, the
encounter rates of all detected species were estimated. Harbour porpoise (Phocoena phocoena) was the most commonly encountered species and became
the focal species in this study. Using the harbor porpoise encounter rate, the
amount of survey effort required to estimate density with a suitable level of
uncertainty was estimated. A separate experiment was designed to estimate
the average probability of harbor porpoise detection by the glider. The glider
was deployed near an array of nine C-PODs (odontocete detection instruments) and the same harbor porpoise click events were matched across instruments. Such matches can be analyzed using spatially explicit capture recapture
methods, which allow the detection efficiency of the glider to be estimated.
4:15
4pAB10. Toward acoustically derived population estimates in marine
conservation: An application of the spatially-explicit capture-recapture
methodology for North Atlantic right whales. Danielle Cholewiak, Steven
Brady, Peter Corkeron, Genevieve Davis, and Sofie Van Parijs (Protected
Species Branch, NOAA Northeast Fisheries Sci. Ctr., 166 Water St., Woods
Hole, MA 02543, danielle.cholewiak@noaa.gov)
Passive acoustics provides a flexible tool for developing an understanding of the ecology and behavior of vocalizing marine animals. Yet despite a robust capacity for detecting species presence, our ability to estimate population abundance from acoustics remains poor. Critically, abundance estimates
are precisely what conservation practitioners and policymakers often
require. In the current study, we explored the application of acoustic data in
the spatially-explicit capture-recapture (SECR) methodology, to evaluate
whether acoustics can be used to infer abundance in the endangered North
Atlantic right whale. We sub-sampled a year-long acoustic dataset from
archival recorders deployed in Massachusetts Bay. Multichannel data were
reviewed for the presence of up-calls. A total of 1659 unique up-calls were
detected. Estimates of up-call density ranged from zero to 608 (±70 SE) up-calls/hour. Estimates of daily abundance, when corrected for average calling rate, ranged from 0 to 69 (±21 SE) individuals per day. These results
qualitatively compare well with patterns in right whale occurrence reported
from aerial-based visual surveys. Since acoustic abundance calculations are
affected by variation in calling behavior, estimates should be interpreted
cautiously; however, these results indicate that passive acoustics has the
potential to directly inform conservation and management strategies.
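The cue-counting correction described above, dividing an acoustic call-density estimate by an assumed average calling rate to obtain an animal-density estimate, can be sketched as follows. This is an illustrative calculation only; the function name and numbers are invented, not taken from the study.

```python
# Illustrative sketch (not the authors' code): converting a detected call
# rate into an estimated number of calling animals by correcting for an
# assumed average per-animal calling rate, as in cue-counting methods.

def animals_from_calls(call_rate_per_hour, calls_per_animal_per_hour):
    """Estimated number of animals given a detected call rate and an
    assumed average calling rate per animal."""
    if calls_per_animal_per_hour <= 0:
        raise ValueError("calling rate must be positive")
    return call_rate_per_hour / calls_per_animal_per_hour

# Hypothetical numbers: 60 up-calls/hour detected, an assumed average of
# 2 up-calls/animal/hour, giving roughly 30 animals.
print(animals_from_calls(60.0, 2.0))
```

In practice the calling rate itself carries substantial uncertainty, which is one reason the abstract urges cautious interpretation.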
4:30
4pAB11. Statistical mechanics techniques applied to the analysis of
humpback whale inter-call intervals. Gerald L. D’Spain (Scripps Inst. of
Oceanogr., Univ. of California, San Diego, 291 Rosecrans St., San Diego,
CA 92106, gdspain@ucsd.edu), Tyler A. Helble (SPAWAR SSC Pacific,
San Diego, CA), Heidi A. Batchelor, and Dennis Rimington (Scripps Inst.
of Oceanogr., Univ. of California, San Diego, San Diego, CA)
Techniques developed in statistical mechanics recently have been applied
to the analysis of the topology of complex human communication networks.
These methods examine the network’s macroscopic statistical properties
rather than the details of individual interactions. Here, these methods are
applied to the analysis of the time intervals between humpback whale calls
detected in passive acoustic monitoring data collected by the bottom-mounted
hydrophones on the Pacific Missile Range Facility (PMRF) west of Kauai,
Hawaii. Recently developed localization and tracking algorithms for use with
PMRF data have been applied to separate the calls of an individual animal
from those of a collection of animals. As with the distributions of time intervals between human communications, the distributions of time intervals between humpback whale call detections are distinctly different from those expected for a purely independent, random (Poisson) process. This conclusion holds both for time intervals between calls from individual animals and for the collection of animals vocalizing simultaneously, although significant differences in these probability distributions occur. A model based on the migration of clusters of animals is developed to fit the distributions. Possible
mechanisms giving rise to aspects of the distributions are discussed. [Work
supported by the Office of Naval Research, Code 322-MMB.]
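The comparison to a Poisson process can be illustrated simply: a Poisson call process yields exponentially distributed inter-call intervals, so one elementary check is a Kolmogorov-Smirnov test against a fitted exponential. The sketch below uses synthetic intervals, not the PMRF data, and is not the statistical-mechanics analysis of the abstract.

```python
# Illustrative sketch: testing whether inter-call intervals are consistent
# with a memoryless (Poisson) process by comparing them to a fitted
# exponential distribution. Synthetic data stand in for measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

poisson_like = rng.exponential(scale=30.0, size=2000)   # memoryless intervals
bursty = rng.pareto(a=1.5, size=2000) * 10.0            # heavy-tailed intervals

for name, intervals in [("poisson-like", poisson_like), ("bursty", bursty)]:
    # Fit an exponential by its mean (location fixed at 0), then test fit.
    scale = intervals.mean()
    stat, p = stats.kstest(intervals, "expon", args=(0, scale))
    print(f"{name}: KS statistic {stat:.3f}, p-value {p:.3g}")
```

A heavy-tailed interval distribution, as reported for both human communications and the whale calls here, departs strongly from the exponential baseline.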
4:45–5:15 Panel Discussion
168th Meeting: Acoustical Society of America
THURSDAY AFTERNOON, 30 OCTOBER 2014
INDIANA A/B, 1:30 P.M. TO 5:15 P.M.
Session 4pBA
Biomedical Acoustics: Mechanical Tissue Fractionation by Ultrasound: Methods, Tissue Effects, and
Clinical Applications II
Vera A. Khokhlova, Cochair
University of Washington, 1013 NE 40th Street, Seattle, WA 98105
Jeffrey B. Fowlkes, Cochair
Univ. of Michigan Health System, 3226C Medical Sciences Building I, 1301 Catherine Street, Ann Arbor, MI 48109-5667
Invited Papers
1:30
4pBA1. High intensity focused ultrasound-induced bubbles stimulate the release of nucleic acid cancer biomarkers. Tatiana
Khokhlova (Medicine, Univ. of Washington, Harborview Medical Ctr., 325 9th Ave. Box 359634, Seattle, WA 98104, tdk7@uw.edu),
John R. Chevillet (Inst. for Systems Biology, Seattle, WA), George R. Schade (Urology, Univ. of Washington, Seattle, WA), Maria D.
Giraldez (Medicine, Univ. of Michigan, Ann Arbor, MI), Yak-Nam Wang (Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Joo
Ha Hwang (Medicine, Univ. of Washington, Seattle, WA), and Muneesh Tewari (Medicine, Univ. of Michigan, Ann Arbor, MI)
Recently, several nucleic acid cancer biomarkers, e.g., microRNA and mutant DNA, have been identified and shown promise for
improving cancer diagnostics. However, the abundance of these biomarker classes in the circulation is low, impeding reliable detection
and adoption into clinical practice. Here, the ability of HIFU-induced bubbles to stimulate release of cancer-associated microRNAs by
tissue fractionation or permeabilization was investigated in a heterotopic syngeneic rat prostate cancer model. A 1.5 MHz HIFU transducer was used to either mechanically fractionate subcutaneous tumor with boiling histotripsy (BH) (~20 kW/cm2, 10 ms pulses, and
duty factor 0.01) or to permeabilize tumor tissue with inertial cavitation activity (p- = 16 MPa, 1 ms pulses, duty factor 0.001). Blood
was collected immediately prior to and serially up to 24 hours after treatments. Plasma concentrations of microRNAs were measured by quantitative RT-PCR. Both exposures resulted in a rapid (within 15 min), short-lived (3 h), and dramatic (over ten-fold) increase in relative plasma concentrations of tumor-associated microRNAs. Histologic examination of excised tumor confirmed complete fractionation of
targeted tumor by BH and localized areas of intraparenchymal hemorrhage and tissue disruption by cavitation-based treatment. These
data suggest a clinically useful application of HIFU-induced bubbles for non-invasive molecular biopsy. [Grant support: NIH
1K01EB015745, R01CA154451, R01DK085714.]
1:50
4pBA2. Tissue decellularization with boiling histotripsy and the potential in regenerative medicine. Yak-Nam Wang (APL, CIMU,
Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, ynwang@u.washington.edu), Tatiana Khokhlova (Dept. of Medicine, Univ.
of Washington, Seattle, WA), Adam Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA), Wayne Kreider (APL, CIMU,
Univ. of Washington, Seattle, WA), Ari Partanen (Clinical Sci. MR Therapy, Philips Healthcare, Andover, MA), Navid Farr
(Dept. of BioEng., Univ. of Washington, Seattle, WA), George Schade (Dept. of Urology, Univ. of Washington, Seattle, WA), Michael
Bailey (APL, CIMU, Univ. of Washington, Seattle, WA), and Vera Khokhlova (Dept. of Acoust., Phys. Faculty, Moscow State Univ.,
Moscow, Russian Federation)
There have been major advances in the development of replacement organs by tissue engineering (TE); however, one of the holy grails remains the development of biomimetic structures that replicate the complex 3-D vasculature. Creation of bioartificial organs by
decellularization shows greater promise in reaching the clinic compared to TE. However, current decellularization techniques require
the use of chemical and biological agents, often in combination with physical force, which could result in damage to the matrix. Here
we evaluate the use of boiling histotripsy (BH) to selectively decellularize large volumes of tissue. BH lesions (10–20 mm diameter)
were produced in bovine liver with a clinical 1.2 MHz MR-HIFU system (Sonalleve, Philips, Finland), using thirty 10 ms pulses, and
pulse repetition frequencies of 1–10 Hz. Peak acoustic powers corresponding to an estimated in situ shock front amplitude of 65 MPa
were used. Macroscopic and histological evaluation revealed treatment conditions that produced decellularized lesions in which major fibrous structures such as stroma and vasculature remained intact while parenchymal cells were mostly lysed. With further tailoring of the
pulsing scheme parameters, this treatment modality could potentially be optimized for organ decellularization. [Work supported by NIH
EB007643, K01-EB-015745-01, T32-DK007779, and NSBRI NASA-NCC 9-58.]
2:10
4pBA3. Destruction of microorganisms by high-energy pulsed focused ultrasound. Timothy A. Bigelow (Elec. and Comput. Eng.,
Mech. Eng., Iowa State Univ., 2113 Coover Hall, Ames, IA 50011, bigelow@iastate.edu)
The use of high-energy ultrasound pulses to generate and excite clouds of microbubbles has shown great potential to mechanically
destroy soft tissue in a wide range of clinical applications. In our work, we have focused on extending the application of cavitation-based histotripsy to the destruction of microorganisms such as bacterial biofilms and microalgae. Bacterial biofilms pose a significant problem when treating infections on medical implants, while the efficient fractionation of microalgae could lower the production cost of biofuels. In the past, we have shown a 4.4-log10 reduction of viable Escherichia coli bacteria capable of forming a colony in a biofilm following a high-energy pulsed focused ultrasound exposure. We have also shown complete removal of Pseudomonas aeruginosa biofilms from a pyrolytic graphite substrate based on fluorescence imaging following live/dead staining. We also showed minimal
temperature increase when the appropriate ultrasound pulse parameters were utilized. Recently, we have shown that high-energy pulsed
ultrasound at 1.1 MHz can fractionate the microalgae model system Chlamydomonas reinhardtii for lipid extraction/biofuel production
in both flow and stationary exposure systems with improved efficiency over traditional sonicators. In these studies, the fractionation of
the cells was quantified by protein and chlorophyll release following exposure.
Contributed Papers
2:30
4pBA4. Dependence of ablative ability of high-intensity focused ultrasound cavitation-based histotripsy on mechanical properties of agar. Jin Xu (Eng., John Brown Univ., Siloam Springs, AR), Timothy Bigelow (Elec. and Comput. Eng., Iowa State Univ., 2113 Coover Hall, Ames, IA 50011, bigelow@iastate.edu), Gabriel Davis, Alex Avendano, Pranav Shrotriya, Kevin Bergler (Mech. Eng., Iowa State Univ., Ames, IA), and Zhong Hu (Elec. and Comput. Eng., Iowa State Univ., Ames, IA)
Cavitation-based histotripsy uses high-intensity focused ultrasound (HIFU) at low duty factor to create bubble clouds inside tissue to liquefy a region, and it provides better fidelity to planned lesion coordinates and the ability to perform real-time monitoring. The goal of this study was to identify the most important mechanical properties for predicting lesion dimensions, among these three: Young's modulus, bending strength, and fracture toughness. Lesions were generated inside tissue-mimicking agar, and correlations were examined between the mechanical properties and the lesion dimensions, quantified by lesion volume and by the width and length of the equivalent bubble cluster. Histotripsy was applied to agar samples with varied properties. A cuboid of 4.5 mm width (lateral to focal plane) and 6 mm depth (along beam axis) was scanned in a raster pattern with respective step sizes of 0.75 mm and 3 mm. The exposure at each treatment location was 15 s, 30 s, or 60 s long. Results showed that only Young's modulus influenced histotripsy's ablative ability and was significantly correlated with lesion volume and bubble cluster dimensions. The other two properties had negligible effects on lesion formation. Also, exposure time differentially affected the width and depth of the bubble cluster volume.
2:45
4pBA5. Shear waves induced by Lorentz force in soft tissues. Stefan Catheline, Pol Grasland-Mongrain, Ali Zorgani, Remi Souchon, Cyril Lafon, and Jean-Yves Chapelon (LabTAU, INSERM, Univ. of Lyon, 151 cours Albert Thomas, Lyon 69003, France, stefan.catheline@inserm.fr)
This study presents the observation of elastic shear waves generated in soft solids using a dynamic electromagnetic field. The first and second experiments of this study show that Lorentz force can induce a displacement in a soft phantom and that this displacement is detectable by an ultrasound scanner using speckle-tracking algorithms. For a 100 mT magnetic field and a 10 ms, 100 mA peak-to-peak electrical burst, the displacement reached a magnitude of 1 µm. In the third experiment, we show that Lorentz force can induce shear waves in a phantom. A physical model using electromagnetic and elasticity equations is proposed, and computer simulations are in good agreement with experimental results. The shear waves induced by Lorentz force are used in the last experiment to estimate the elasticity of a swine liver sample.
3:00–3:15 Break
3:15
4pBA6. Acoustic field characterization of the Waterlase2: Acoustic characterization and high speed photomicrography of a clinical laser generated shock wave therapy device for the treatment of periodontal biofilms in orthodontics and periodontics. Camilo Perez, Yak-Nam Wang (BioEng. and Ctr. for Industrial and Medical Ultrasound, CIMU, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105-6698, camipiri@uw.edu), Alina Sivriver, Dmitri Boutoussov, Vladimir Netchitailo (Biolase Inc., Irvine, CA), and Thomas J. Matula (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
Recent applications in endodontics and periodontics use erbium solid state lasers with fiber delivery in order to effectively kill bacteria and biofilms. In this paper, the acoustic field together with the bubble dynamics of a clinical portable Er,Cr:YSGG laser-generating device (Waterlase 2) was characterized. Field mapping with a calibrated PVDF hydrophone together with high speed imaging was performed in water for two different tip geometries (flat or tapered), three different tip diameters (200, 300, or 400 µm), and two different laser pulse durations (60 or 700 µs) at several laser pulse energy settings (5 mJ–400 mJ) for individual pulses and at different pulse repetition frequencies (5, 20, and 100 Hz). Peak positive pressures 5–50 mm away from the tip ranged from 0.1 to 2 MPa, while peak negative pressures ranged from 0.1 to 1.2 MPa. There was a strong correlation between the acoustic emissions generated by the bubble and the high speed imaging dynamics of the bubble. An initial thermoelastic response, initial bubble collapse, and further rebounds were analyzed individually and compared across different test parameters. For the initial thermoelastic pulse (laser generated), pulse rise times ranged from 40 to 200 ns. Differences between flat and tapered tips will be discussed.
3:30
4pBA7. Simulations of focused shear shock waves in soft solids and the brain. Bruno Giammarinaro, François Coulouvrat, and Gianmarco Pinton (Institut Jean le Rond d'Alembert UMR 7190, CNRS, Université Pierre et Marie Curie, case 162, 4 Pl. Jussieu, Paris cedex 05 75252, France, bruno.giam@hotmail.fr)
Because of their very small speed, shear waves in soft solids are extremely nonlinear, with nonlinearities four orders of magnitude larger than in classical solids. Consequently, these nonlinear shear waves can transition from a smooth to a shock profile in less than one wavelength. We hypothesize that traumatic brain injuries (TBI) could be caused by the sharp gradients resulting from shear shock waves. However, shear shock waves are not currently modeled by simulations of TBI. The objective of this paper is to describe shear shock wave propagation in soft solids within the brain, with source geometry determined by the skull. A 2D nonlinear paraxial equation with cubic nonlinearities is used as a starting point. We present a numerical scheme based on a second-order operator splitting which allows the application of optimized numerical methods for each term. We then validate the scheme with Guiraud's nonlinear self-similarity law applied to cusped caustics. Once validated, the numerical scheme is then applied to a blast wave problem. A CT measurement of the human skull is used to determine the initial conditions, and shear shock wave simulations are presented to demonstrate the focusing effects of the skull geometry.
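Second-order operator splitting of the kind mentioned above can be illustrated on a toy equation with a linear and a cubic term, advancing each sub-problem with its exact flow and composing them in the Strang (half/full/half) pattern. The equation and coefficients here are invented for illustration and are not the paper's paraxial model.

```python
# Toy demonstration of second-order Strang operator splitting:
# y' = a*y + b*y**3 is split into a linear part and a cubic part,
# each advanced with its exact flow.
import math

a, b = -0.5, -1.0  # illustrative coefficients

def flow_linear(y, h):   # exact flow of y' = a*y
    return y * math.exp(a * h)

def flow_cubic(y, h):    # exact flow of y' = b*y**3
    return y / math.sqrt(1.0 - 2.0 * b * h * y * y)

def strang_step(y, h):   # half linear, full cubic, half linear
    y = flow_linear(y, 0.5 * h)
    y = flow_cubic(y, h)
    return flow_linear(y, 0.5 * h)

def exact(y0, t):        # closed form via the Bernoulli substitution u = y**-2
    u = (y0 ** -2 + b / a) * math.exp(-2.0 * a * t) - b / a
    return 1.0 / math.sqrt(u)

y, h, n = 1.0, 0.01, 100  # integrate to t = 1
for _ in range(n):
    y = strang_step(y, h)
print(abs(y - exact(1.0, 1.0)))  # small: the splitting is second-order accurate
```

Halving the step size reduces the error by roughly a factor of four, the signature of a second-order scheme; the same composition idea carries over when the sub-steps are PDE solvers rather than scalar flows.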
3:45
4pBA8. Tissue damage produced by cavitation: The role of viscoelasticity. Eric Johnsen (Mech. Eng., Univ. of Michigan, 1231 Beal Ave., Ann
Arbor, MI 48104, ejohnsen@umich.edu) and Matthew Warnez (Eng. Phys.,
Univ. of Michigan, Ann Arbor, MI)
Cavitation may cause damage at the cellular level in a variety of medical
applications, e.g., therapeutic and diagnostic ultrasound. While cavitation
damage to bodies in water has been studied for over a century, the dynamics
of bubbles in soft tissue remain vastly unexplored. One difficulty lies in the
viscoelasticity of tissue, which introduces additional physics and time
scales. We developed a numerical model to investigate acoustic cavitation
in soft tissue, which accounts for liquid compressibility, full thermal effects,
and viscoelasticity (including nonlinear relaxation and elasticity). The bubble dynamics are represented by a Keller-Miksis formulation and a spectral
collocation method is used to solve for the stresses in the surrounding medium. Our numerical studies of a gas bubble exposed to a relevant waveform
indicate that under inertial conditions high pressures and velocities are generated at collapse, though they are lower than those observed in water due to
the elasticity and viscosity of the medium. We further find that significant
deviatoric stresses and increased heating in tissue are attributable to viscoelasticity, due to material properties and different bubble responses compared
to water.
4:00
4pBA9. Comparison of Gilmore-Akulichev’s, Keller-Miksis’s and Rayleigh-Plesset’s equations on therapeutic ultrasound bubble cavitation.
Zhong Hu (Elec. and Comput. Eng., Mech. Eng., Iowa State Univ., 2201
Coover Hall, Ames, IA 50011, zhonghu@iastate.edu), Jin Xu (Eng., John
Brown Univ., Siloam Springs, AR), and Timothy A. Bigelow (Elec. and
Comput. Eng., Mech. Eng., Iowa State Univ., Ames, IA)
Many models have been utilized to simulate inertial cavitation for ultrasound therapies such as histotripsy. The models range from the very simple
Rayleigh-Plesset model to the complex Gilmore-Akulichev model. The
computational time increases with the complexity of the model, so it is important to know when the results from the simpler models are sufficient. In
this paper the simulation performance of the widely used Rayleigh-Plesset
model, Keller-Miksis model, and Gilmore-Akulichev model both with and
without gas diffusion are compared by calculating the bubble radius
response and bubble wall velocity as a function of the ultrasonic pressure and frequency. The bubble oscillates similarly with the three models within the first collapse for small pressures (<3 MPa), but the Keller-Miksis model diverges at higher pressures. In contrast, the maximum expansion radius of the bubble is similar at all pressures with the Rayleigh-Plesset and Gilmore-Akulichev models, although the collapse velocity is unrealistically high with the Rayleigh-Plesset model. After multiple cycles, the Rayleigh-Plesset model starts to behave disparately in both the expansion and collapse stages. The inclusion of rectified gas diffusion lengthens the collapse time and increases the expansion radius. However, for frequencies below 1 MHz, the impact of gas diffusion is not significant.
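For readers unfamiliar with these models, a minimal Rayleigh-Plesset integration can be sketched as follows. The parameter values are illustrative, not the authors' simulation setup, and the Keller-Miksis and Gilmore-Akulichev models add liquid-compressibility terms that are omitted here.

```python
# Minimal Rayleigh-Plesset sketch (illustrative parameters): radius
# response of a small air bubble in water to a single-frequency drive.
import numpy as np
from scipy.integrate import solve_ivp

rho, p0, sigma, mu = 998.0, 101325.0, 0.072, 1.0e-3  # water near 20 C
R0, kappa = 5.0e-6, 1.4                              # 5 um bubble, polytropic gas
f, pa = 1.0e6, 0.2e6                                 # 1 MHz, 0.2 MPa drive

def p_drive(t):
    return -pa * np.sin(2.0 * np.pi * f * t)

def rhs(t, y):
    R, Rdot = y
    # Gas pressure from the equilibrium condition, compressed polytropically.
    p_gas = (p0 + 2.0 * sigma / R0) * (R0 / R) ** (3.0 * kappa)
    p_wall = p_gas - 2.0 * sigma / R - 4.0 * mu * Rdot / R
    Rddot = (p_wall - p0 - p_drive(t)) / (rho * R) - 1.5 * Rdot ** 2 / R
    return [Rdot, Rddot]

sol = solve_ivp(rhs, (0.0, 3.0 / f), [R0, 0.0], rtol=1e-8, atol=1e-12,
                max_step=1.0 / (50.0 * f))
print(f"max R/R0 = {sol.y[0].max() / R0:.2f}")
```

Swapping `rhs` for a Keller-Miksis right-hand side changes only the acceleration expression, which is what makes side-by-side model comparisons like the one in this paper straightforward to organize.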
4:15
4pBA10. Removal of residual bubble nuclei to enhance histotripsy soft
tissue fractionation at high rate. Alexander P. Duryea, Charles A. Cain
(Biomedical Eng., Univ. of Michigan, 2131 Gerstacker Bldg., 2200 Bonisteel Blvd., Ann Arbor, MI 48109, duryalex@umich.edu), William W. Roberts (Urology, Univ. of Michigan, Ann Arbor, MI), and Timothy L. Hall
(Biomedical Eng., Univ. of Michigan, Ann Arbor, MI)
Previous work has shown that the efficacy of histotripsy soft tissue fractionation is dependent on pulse repetition frequency, with histotripsy delivered at low rates producing more efficient homogenization of the target
volume in comparison to histotripsy delivered at high rates. This is attributed to the cavitation memory effect: microscopic residual cavitation nuclei
that persist for hundreds of milliseconds following bubble cloud collapse
can seed the repetitive nucleation of cavitation at a discrete set of sites
within the target volume, producing heterogeneous lesion development. To
mitigate this effect, we have developed low amplitude (MI<1) acoustic
pulses to actively remove residual nuclei from the field. These bubble removal pulses utilize the Bjerknes forces to stimulate the aggregation and
subsequent coalescence of remnant nuclei, consolidating the population
from a very large number to a countably small number of remnant bubbles
within several milliseconds. The effect is attainable in soft tissue mimicking
phantoms following a very minimal degree of fractionation (within the first
ten histotripsy pulses). Incorporation of this bubble removal scheme in histotripsy tissue phantom treatments at high rate (100 pulses/second) resulted
in highly homogeneous lesions that closely approximated those achieved
using an equal number of pulses applied at low rate (1 pulse/second); lesions
generated at high rate without bubble removal had heterogeneous structure
with increased collateral damage.
4:30
4pBA11. Two-dimensional speckle tracking using zero phase crossing
with Riesz transform. Mohamed Khaled Almekkawy (Elec. Eng., Western
New England, 2056 Knapp St., Saint Paul, MN 55108, alme0078@umn.
edu), Yasaman Adibi, Fei Zheng (Elec. Eng., Univ. of Minnesota, Minneapolis, MN), Mohan Chirala (Samsung Res. America, Richardson, TX), and
Emad S. Ebbini (Elec. Eng., Univ. of Minnesota, Minneapolis, MN)
Ultrasound speckle tracking provides robust estimates of fine tissue displacements along the beam direction due to the analytic nature of echo data.
We introduce a new multi-dimensional speckle tracking method (MDST) with subsample accuracy in all dimensions. The algorithm is based on the gradient of the magnitude and the zero-phase crossing of the 2D complex correlation of the generalized analytic signal. The generalization utilizes the Riesz transform, which is the vector extension of the Hilbert transform. Robustness of the tracking algorithm is investigated using realistic synthetic data sequences created with Field II, for which the benchmark displacement was known. In addition, the new MDST method is used to estimate flow and surrounding tissue motion in the human carotid artery in vivo. The data were collected at 325 fps using a linear array probe on a Sonix RP ultrasound scanner. The vessel diameter was calculated from the displacements of the upper and lower vessel walls and clearly shows a pattern resembling a blood pressure wave. The results obtained show that using the Riesz transform produces
more robust estimation of the true displacement of the simulated model
compared to previously published results. This could have significant impact
on strain calculations near vessel walls.
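The subsample peak-finding idea behind speckle tracking can be illustrated in one dimension with a real-valued correlation and parabolic refinement. This is a simplified stand-in for the Riesz-transform, zero-phase-crossing method of the abstract, using synthetic data.

```python
# Sketch of 1-D speckle tracking with subsample accuracy: find the
# cross-correlation peak, then refine it with a three-point parabolic fit.
import numpy as np

def track_shift(ref, shifted):
    """Estimate the lag (in samples, possibly fractional) of `shifted`
    relative to `ref` from the cross-correlation peak."""
    xc = np.correlate(shifted, ref, mode="full")
    k = int(np.argmax(xc))
    lag = k - (len(ref) - 1)
    if 0 < k < len(xc) - 1:  # parabolic subsample refinement
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        lag += 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return lag

rng = np.random.default_rng(1)
t = np.arange(256)
# Band-limited speckle-like signal: white noise smoothed by a short window.
speckle = np.convolve(rng.standard_normal(300), np.hanning(9), "same")[:256]
true_shift = 3.4  # samples
shifted = np.interp(t - true_shift, t, speckle)
print(f"estimated shift: {track_shift(speckle, shifted):.2f} samples")
```

Working on the complex analytic signal, as MDST does, replaces the parabolic fit with a zero crossing of the correlation phase, which is typically less biased; the real-valued version above conveys the structure of the estimator.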
4:45
4pBA12. 1-MHz ultrasound stimulates in vitro production of cardiac
and cerebrovascular endothelial cell vasodilators. Azzdine Y. Ammi
(Knight Carviovascular Inst., OHSU, 3181 SW Sam Jackson Park Rd., Portland, OR 97239, ammia@ohsu.edu), Catherine M. Davis (Dept. of Anesthesiology and Perioperative Medicine, OHSU, Portland, OR), Brian Mott
(Knight Carviovascular Inst., OHSU, Portland, OR), Nabil J. Alkayed
(Dept. of Anesthesiology and Perioperative Medicine, OHSU, Portland,
OR), and Sanjiv Kaul (Knight Carviovascular Inst., OHSU, Portland,
OR)
Ultrasound exposure of the heart and brain during vessel occlusion
reduces infarct size. Our aim was to study the production of vasodilatory
compounds by endothelial cells after ultrasound stimulation. A 1.05-MHz
single element transducer was used to insonify primary mouse endothelial
cells (ECs) from heart and brain with a 50-cycle tone burst at a pulse repetition frequency of 50 Hz. Two time points were studied after ultrasound exposure: 15 and 45 minutes. In heart ECs, EETs levels increased significantly
with 0.5 MPa (139 ± 16%, p<0.05) and 0.3 MPa (137 ± 15%, p<0.05) at 15 and 45 min post stimulation, respectively. HETEs and DHETs did not change significantly. There was a trend toward increased adenosine, with maximum release at 0.5 MPa (332 ± 73% vs. 100% control, p<0.05). The
trend toward increased eNOS phosphorylation was greater at 15 than 45
min. In brain ECs, adenosine release was increased; however, the increase in eNOS phosphorylation was not significant. 11,12- and 14,15-EETs were
increased while 5- and 15-HETEs were decreased. Pulsed ultrasound at 1.05
MHz has the ability to increase adenosine, p-eNOS, and EET production by
cardiac and cerebrovascular ECs. Interestingly, in brain ECs, the vasoconstricting HETEs were decreased.
5:00
4pBA13. Ultrasound-induced fractionation of the intervertebral disk.
Delphine Elbes, Olga Boubriak, Shan Qiao, Michael Molinari (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Oxford, United Kingdom), Jocelyn Urban (Dept. of Physiol., Anatomy and Genetics, Univ. of
Oxford, Oxford, United Kingdom), Robin Cleveland, and Constantin Coussios (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Inst. of
Biomedical Eng., Old Rd. Campus Res. Bldg., Oxford, Oxfordshire, United
Kingdom, constantin.coussios@eng.ox.ac.uk)
Current surgical treatments for lower back pain, which is strongly associated with degeneration of the intervertebral disk, are highly invasive and
have low long-term success rates. The present work thus aims to develop a
novel, minimally invasive therapy for disk replacement without the need for
surgical incision. Using ex vivo bovine coccygeal spinal segments as an experimental model, two confocally aligned 0.5 MHz HIFU transducers were positioned with their focus inside the disk and used to generate peak rarefactional pressures in the range of 1–12 MPa. Cavitation activity was monitored, characterized, and localized in real time using both a single-element
passive cavitation detector and a 2D Passive Acoustic Mapping array. The
inertial cavitation threshold in the central portion of the disk, the nucleus
pulposus (NP), was first determined both in the absence and in the presence
of externally injected cavitation nuclei. HIFU exposure parameters were
subsequently optimized to maximize sustained inertial cavitation over 10
min and achieve fractionation of the NP. Following sectioning of treated
disks, staining of live and dead cells as well as microscopy under polarized
light were used to assess the impact of the treatment on cell viability and
collagen structure within the NP, inner annulus and outer annulus.
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 9/10, 1:30 P.M. TO 4:00 P.M.
Session 4pEA
Engineering Acoustics: Acoustic Transduction: Theory and Practice II
Roger T. Richards, Chair
US Navy, 169 Payer Ln, Mystic, CT 06355
Contributed Papers
1:30
4pEA1. Vibration sensitivity measurements of silicon and acoustic-gradient microphones. Marc C. Reese (Harman Embedded Audio, Harman Int., 6602 E 75th St. Ste. 520, Indianapolis, IN 46250, marc.reese@harman.com)
Microphones are often required to record audio while in a vibration
environment. Therefore, it is important to maximize the acoustic-to-vibration sensitivity of such microphones. It has previously been shown that the
vibration sensitivity of a microphone is, to first order, proportional to the
mass per unit area of the diaphragm including the air loading effect.
Although the air loading is generally minimal for omnidirectional condenser
microphones with thick diaphragms, these measurements show that it cannot
be ignored for newer silicon-based micro-electro-mechanical-system
(MEMS) and acoustic-gradient microphones. Additionally, since microphone vibration sensitivities are typically not reported by microphone manufacturers, nor measured using standardized equipment, the setup of an
inexpensive vibration measurement apparatus and associated challenges are
discussed.
1:45
4pEA2. Non-reciprocal acoustic devices based on spatio-temporal angular-momentum modulation. Romain Fleury, Dimitrios Sounas, and Andrea
Alu (ECE Dept., The Univ. of Texas at Austin, 1 University Station C0803,
Austin, TX 78712, romain.fleury@utexas.edu)
Acoustic devices that break reciprocity, for instance acoustic isolators or
circulators, may find exciting applications in a variety of fields, including
imaging, acoustic communication systems, and noise control. Non-reciprocal acoustic propagation has typically been achieved using non-linear phenomena, which require high input power levels and introduce distortions. In
contrast, we have recently demonstrated compact linear isolation for audible
airborne sound by means of angular momentum bias [Fleury et al., Science
343, 516 (2014)], exploiting modal splitting in a ring cavity polarized by an
internal, constantly circulating fluid, whose motion is imparted using low-noise CPU fans. We present here an improved design with no moving parts,
which is directly scalable to ultrasonic frequencies and fully integrable.
Instead of imparting angular momentum in the form of a moving medium as
in our previous approach, we make use of spatio-temporal acoustic modulation of three coupled acoustic cavities, a strategy that can be readily implemented in integrated ultrasonic devices, for instance, using piezoelectric
effects. In this new paradigm, the required modulation frequency is orders
of magnitude lower than the signal frequency, and the modulation efficiency
is maximized. This constitutes a pivotal step towards practically realizing
compact, linear, noise-free, tunable non-reciprocal acoustic components for
full-duplex acoustic communications and isolation.
2:00
4pEA3. An analysis of multi-year acoustic and energy performance
data for bathroom and utility residential ventilation fans. Wongyu Choi,
Antonio Gomez, Michael B. Pate, and James F. Sweeney (Mech. Eng.,
Texas A&M Univ., 2401 Welsh Ave. Apt. 615, 615, College Station, TX
77845, wongyuchoi@tamu.edu)
Loudness levels have been established as a new requirement in residential ventilation standards and codes including ASHRAE and IECC. Despite
the extensive application of various standards and codes, the control of loudness has not been a common target in past whole-house ventilation standards
and codes. In order to evaluate the appropriate loudness of ventilation fans,
especially in terms of leading standards and codes, a statistical analysis is
necessary. Therefore, this paper provides statistical data for bathroom and utility ventilation fans over a nine-year period from 2005 to 2013. Specifically, this paper presents an evaluation of changes in fan loudness over the nine-year test period and the relevance of loudness to leading standards including
HVI and ASHRAE. The loudness levels of brushless DC-motor fans are
also evaluated in comparison to the loudness of AC-motor fans. For AC and
DC motor fans, relationships between loudness and efficacy were determined
and then explained with regression models. Based on observations, this paper introduces a new “loudness-to-energy ratio” coefficient, L/E, which is a
measure of the acoustic and energy performance of a fan. Relationships
between acoustic and energy performances are established by using L/E
coefficients with supporting statistics for bathroom and utility fans.
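One plausible reading of the L/E coefficient described above, rated loudness divided by efficacy, can be sketched as follows. The definition, units (HVI sones and cfm/W), and sample values are assumptions for illustration, not figures from the paper.

```python
# Hypothetical sketch of a "loudness-to-energy ratio": a fan's rated
# loudness paired with its efficacy (airflow per unit power). All names
# and numbers are invented for illustration.

def loudness_to_energy_ratio(sones, cfm, watts):
    """L/E: rated loudness divided by efficacy (cfm per watt)."""
    efficacy = cfm / watts
    return sones / efficacy

# Two invented fans: a quiet, efficient DC-motor unit vs. a louder,
# less efficient AC-motor unit. Lower L/E is better on both axes.
print(loudness_to_energy_ratio(sones=0.3, cfm=80.0, watts=5.9))
print(loudness_to_energy_ratio(sones=2.0, cfm=80.0, watts=20.0))
```

A single figure of merit like this lets fans with different airflow ratings be compared on combined acoustic and energy performance.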
4p THU. PM
2:15
4pEA4. Non contact ultrasound stethoscope. Nathan Jeger, Mathias Fink,
and Ros Kiri Ing (Institut Langevin, ESPCI ParisTech, 1 rue Jussieu, Paris
75005, France, nathan.jeger@espci.fr)
Heartbeat and respiration are important vital signs that indicate the health and psychological state of a person. Recent technologies allow both parameters to be detected on a human subject using techniques with and without contact. Noncontact systems often use electromagnetic waves, but approaches based on ultrasound, laser, or video processing have also been proposed. Here, an alternative ultrasound system for noncontact, local measurement is presented. The system works in echographic mode, and the ultrasound signals are processed using two methods. The experimental setup uses an elliptic mirror to focus ultrasonic waves onto the skin surface. Backscattered waves are recorded by a microphone located close to the emitting transducer. Heartbeat and respiration signals are determined from the skin displacement caused by chest-wall motion. For comparison purposes, the cross-correlation method, which uses a broadband signal, and the Doppler method, which uses a narrowband signal, are both applied to measure the skin displacement, and the sensitivity and accuracy of the two methods are compared. Finally, since the measurement is local, the system can act as a noncontact stethoscope, listening to the internal sounds of the human body even through the light clothing of the patient.
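The broadband cross-correlation idea can be sketched generically as follows (function names, sampling rate, and pulse shape are illustrative assumptions, not the authors' implementation): the displacement between two successive echoes is taken from the lag that maximizes their cross-correlation, halved to account for the round trip.

```python
import numpy as np

def displacement_from_echoes(echo1, echo2, fs, c=343.0):
    """Estimate surface displacement between two successive echoes from the
    lag (in samples) that maximizes their cross-correlation; the factor 0.5
    accounts for the round-trip propagation path."""
    xcorr = np.correlate(echo2, echo1, mode="full")
    lag = np.argmax(xcorr) - (len(echo1) - 1)  # >0 means a later arrival
    return 0.5 * c * lag / fs                  # displacement in meters

# Synthetic check: a broadband pulse delayed by 5 samples at fs = 1 MHz.
fs = 1_000_000
t = np.arange(256)
pulse = np.exp(-((t - 50) / 8.0) ** 2) * np.sin(2 * np.pi * 0.2 * t)
delayed = np.roll(pulse, 5)
d = displacement_from_echoes(pulse, delayed, fs)
```

For the 5-sample delay above, d evaluates to about 0.86 mm; the narrowband Doppler variant would instead track the phase of a single-frequency echo.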
2:30
4pEA5. High sensitivity imaging of resin-rich regions in graphite/epoxy
laminates using joint entropy. Michael Hughes (Int. Med./Cardiology,
Washington Univ. School of Medicine, School of Medicine Campus Box
8215, St. Louis, MO 63108, mshatctrain@gmail.com), John McCarthy
(Mathematics, Washington Univ., St. Louis, MO), Jon Marsh, and Samuel
Wickline (Int. Med./Cardiology, Washington Univ. School of Medicine,
Saint Louis, MO)
The continuing difficulty of detecting critical flaws in advanced materials requires novel approaches that enhance sensitivity to defects that might impact performance. This study compares different approaches for imaging a near-surface resin-rich defect in a thin graphite/epoxy plate using backscattered ultrasound. The specimen, having a resin-rich void immediately below the top surface ply, was scanned with a 1 in. diameter, 5 MHz center frequency, 4 in. focal length transducer. A computer-controlled apparatus comprising an x-y-z motion controller, a digitizer (LeCroy 9400A), and an ultrasonic pulser/receiver (Panametrics 5800) was used to acquire data on a 100 × 100 grid of points covering a 3 × 3 in. square. At each grid point, 256 512-word, 8-bit backscattered waveforms were digitized, signal averaged, and then stored on computer for off-line analysis. The same backscattered waveforms were used to produce peak-to-peak, signal energy, and entropy images. All of the entropy images exhibit better border delineation and defect contrast than either the peak-to-peak or signal energy images. The best results are obtained using the joint entropy of the backscattered waveforms with a reference function. Two different references are examined: a reflection from a stainless steel reflector, and an approximate optimum obtained from an iterative parametric search. The joint entropy images produced using the optimum reference exhibit ~3 times the contrast obtained in previous studies.
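The joint entropy of a backscattered waveform with a reference can be estimated from their joint amplitude histogram; the following is a minimal sketch under that assumption (bin count and test signals are illustrative, not the study's processing):

```python
import numpy as np

def joint_entropy(f, g, bins=32):
    """Shannon joint entropy (bits) of two equal-length waveforms,
    estimated from their joint amplitude histogram."""
    hist, _, _ = np.histogram2d(f, g, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty cells; 0*log(0) is taken as 0
    return -np.sum(p * np.log2(p))

# Synthetic check: a waveform shares more structure with (and so has lower
# joint entropy against) a correlated reference than with independent noise.
rng = np.random.default_rng(0)
f = np.sin(np.linspace(0.0, 8.0 * np.pi, 4096))
g_corr = f + 0.05 * rng.standard_normal(f.size)
g_ind = rng.standard_normal(f.size)
h_corr = joint_entropy(f, g_corr)
h_ind = joint_entropy(f, g_ind)
```

Lower joint entropy against a well-chosen reference signals shared structure, which is one intuition for why an optimized reference can raise image contrast.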
2:45
4pEA6. New compensation factors for the apparent propagation speed
in transmission line matrix uniform grid meshes. Alexandre Brandao
(Graduate Program in Elec. and Telecommunications Eng., Universidade
Federal Fluminense, Rua Passo da Patria, 156, Sao Domingos, Niteroi, RJ
24210-240, Brazil, abrand@operamail.com), Edson Cataldo (Appl. Mathematics Dept., Universidade Federal Fluminense, Niteroi, RJ, Brazil), and
Fabiana R. Leta (Mech. Eng. Dept., Universidade Federal Fluminense,
Niteroi, RJ, Brazil)
Numerical models consisting of two-dimensional (2D) and three-dimensional (3D) uniform grid meshes for the Transmission Line Matrix Method (TLM) use sqrt(2) and sqrt(3), respectively, to compensate for the apparent
sound speed. In this work, new compensation factors are determined from a
priori simulations, performed without compensation, in 2D and 3D TLM
one-section cylindrical waveguide acoustic models. The mistuned resonance
peaks obtained from these simulations are substituted in the corresponding
equations for the resonance frequencies in one-section cylindrical acoustical
waveguides to find the mesh apparent sound speed and, thus, the necessary
compensation. The TLM meshes are constructed over the voxels (Volumetric Picture Elements) of segmented MRI volumes, so that the extracted
mesh fits the segmented object. The TLM method provides a direct simulation approach instead of solving a PDE by variational methods that must
consider the plane wave assumption to run properly. Results confirm the
improvement over the conventional compensation factors, particularly for
frequencies above 4 kHz, providing a concrete reduction of the topology-dependent numerical dispersion for both 2D and 3D TLM lattices. Since this
dispersion problem is common to all TLM applications using uniform grids,
investigators in other areas of wave propagation can also benefit from these
findings.
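The inverse step described above can be sketched generically; assuming an open-open one-section cylindrical waveguide with resonances f_n = n·c/(2L), the mesh's apparent sound speed and the required compensation factor follow from a mistuned peak (all numbers below are illustrative, not the paper's results):

```python
import math

def apparent_speed(f_measured_hz, n, length_m):
    """Apparent mesh sound speed recovered from the nth resonance of an
    open-open cylindrical waveguide, f_n = n * c / (2 * L)."""
    return 2.0 * length_m * f_measured_hz / n

def compensation_factor(f_measured_hz, n, length_m, c_physical=343.0):
    """Factor by which the mesh propagation speed must be scaled so the
    simulated resonance lands on the physical one."""
    return c_physical / apparent_speed(f_measured_hz, n, length_m)

# Illustrative numbers: a 0.175 m open-open tube resonates at n = 1 near
# 980 Hz for c = 343 m/s; if an uncompensated 2D simulation reported the
# peak at 693 Hz, the recovered factor is close to the classical sqrt(2).
k = compensation_factor(693.0, 1, 0.175)
```

The same recipe applied to higher resonances would expose any frequency dependence of the apparent speed, which is what the topology-dependent dispersion correction targets.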
3:00–3:15 Break
3:15
4pEA7. A low-cost alternative power supply for integrated electronic
piezoelectric transducers. Ricardo Brum, Sergio L. Aguirre, Stephan Paul,
and Fernando Corrêa (Centro de Tecnologia, Universidade Federal de Santa
Maria, Rua Erly de Almeida Lima, 650, Santa Maria, RS 97105-120, Brazil,
ricardozbrum@yahoo.com.br)
Commercial hardware compatible with IEPE precision sensors is normally expensive and often coupled to proprietary, equally expensive software packages. Commercially available sound cards are a low-cost option for A/D conversion, but are incompatible with IEPE sensors. Commercial solutions exist to supply the 4 mA constant current required by IEPE transducers, and labs have also created their own, e.g., ITA at RWTH Aachen University. Unfortunately, commercially available circuits are still too expensive for large-scale classroom use in Brazil, and circuits created elsewhere contain parts subject to US export restrictions or require machinery for circuit fabrication. Thus, based on a previous project, a new low-cost prototype was mounted on phenolic board. The circuit was tested with an IEPE microphone connected to a commercial sound card and ITA-Toolbox software and compared to a commercial hardware/software package. The results were very similar in the frequency range between 20 Hz and 10 kHz. The difference below 20 Hz probably occurs due to the different high-pass filters in the A/D cards, while the differences in the high-frequency range are very likely due to differences in the electrical background noise. The results suggest the device works well and is a good alternative for measurements with IEPE sensors.
3:30
4pEA8. Determination of the characteristic impedance and the complex
wavenumber of an absorptive material used in dissipative silencer. Key
F. Lima, Nilson Barbieri (Mech. Eng., PUCPR, Imaculada Conceição, 1155,
Curitiba, Parana 80215901, Brazil, keyflima@gmail.com), and Renato Barbieri (Mech. Eng., UDESC, Joinville, Brazil)
Silencers are acoustic filters whose purpose is to reduce unwanted noise emitted by engines or equipment to acceptable levels. Vehicular silencers have large volumes and dissipative properties; dissipative silencers contain absorptive material, typically fibrous, with good acoustic dissipation. However, few works describe the acoustic behavior of silencers with absorptive materials. The difficulty in evaluating this type of silencer lies in determining the acoustic properties of the absorptive material: the characteristic impedance and the complex wavenumber. This work presents an inverse methodology for determining the acoustic properties of the absorptive material used in silencers. First, the silencer's acoustic efficiency is found in terms of the experimental sound transmission loss. Second, the absorptive material properties are determined by parameter adjustment through a direct-search optimization algorithm; in this step, the adjustment applies the Finite Element Method to compute the silencer's efficiency. The final step is to verify the difference between the experimental and computational results. This work uses the acoustic efficiency of a silencer already published in the literature; the results show good agreement.
3:45
4pEA9. Flat, lightweight, transparent acoustic transducers based on dielectric elastomer and gel. Kun Jia (The State Key Lab. for Strength and Vib. of Mech. Structures, Xian Jiaotong Univ., South 307 Rm., 1st Teaching Bldg., West of the Xianning Rd. No. 28, Xian, Shaanxi 710049, China, kunjia@mail.xjtu.edu.cn)
The advances in flat-panel displays and Super Hi-Vision with a 22.2 multichannel sound system offer an entirely new viewing and listening environment for the audience; however, flat and lightweight acoustic transducers are required to fulfill this prospect. In this paper, a flat, lightweight acoustic transducer with a rather simple structure is proposed. A polyacrylic elastomer membrane (VHB4905, 3M Corporation) with 4 mm diameter and 0.5 mm thickness is biaxially prestretched and fixed on a polyurethane ring as the vibrator; ionic gel is then painted on the center region of the membrane as electrodes; finally, conducting wires, also made of ionic gel, are attached to the edge of the electrodes for applying the AC voltage with a DC bias. The ultrahigh transmittance of the VHB4905, the gel, and the polyurethane makes the transducer totally transparent, which is of great interest in advanced media technology. The dynamic properties of the membrane are studied experimentally along with its acoustic performance. The behavior of the dielectric elastomer membrane is found to be quite complicated: both in-plane and out-of-plane vibration modes exist. The transducer performs best below 10 kHz owing to the low elastic modulus of the membrane.
THURSDAY AFTERNOON, 30 OCTOBER 2014
SANTA FE, 1:00 P.M. TO 4:10 P.M.
Session 4pMU
Musical Acoustics: Assessing the Quality of Musical Instruments
Andrew C. H. Morrison, Chair
Joliet Junior College, 1215 Houbolt Rd., Natural Science Department, Joliet, IL 60431
Invited Papers
1:00
4pMU1. Bamboo musical instruments: Some physical and mechanical properties related to quality. James P. Cottingham (Phys.,
Coe College, 1220 First Ave., Cedar Rapids, IA 52402, jcotting@coe.edu)
Bamboo is one of the most widely used materials in musical instruments, including string instruments and percussion as well as
wind instruments. Bamboo pipe walls are complex, composed of a layered structure of fibers. The pipe walls exhibit non-uniformity in
radial structure and density, and there is a significant difference between the elastic moduli parallel to and perpendicular to the bamboo
fibers. This paper presents a summary of results from the literature on bamboo as a material for musical instruments. In addition, results
are presented from recent measurements of the physical and mechanical properties of materials used in some typical instruments. In particular, a case study will be presented comparing measurements made on reeds and pipes from two Southeast Asian khaen. Of the two
khaen discussed, one is a high quality khaen made by craftsmen in northeastern Thailand, while the other is an inexpensive instrument
purchased at an import shop. For this pair of instruments, analysis and comparison have been made of the material properties of the bamboo pipes and the composition and mechanical properties of the metal alloy reeds.
1:20
4pMU2. Descriptive maps to illustrate the quality of a clarinet. Whitney L. Coyle (The Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, wlc5061@psu.edu), Philippe Guillemain, Jean-Baptiste Doc, Alexis Guilloteau, and Christophe Vergez (Laboratoire de Mécanique et d'Acoustique, Marseille, France)
Generally, subjective opinions and decisions are made when judging the quality of musical instruments. In an attempt to become
more objective, this research presents methods to numerically and experimentally create maps, over a range of control parameters, that
describe instrument behavior for a variety of different sounds features or “quality markers” (playing regime, intonation, loudness, etc.).
The behavior of instruments is highly dependent on the control parameters that are adjusted by the musician. Observing this behavior as
a function of one control parameter (e.g., blowing pressure) can hide diversity of the overall behavior. An isovalue quality marker can
be obtained for a multitude of control parameter combinations. Using multidimensional maps, where quality markers are a function of
two or more control parameters, can solve this problem. Numerically: in two dimensions, a regular discretization on a subspace of control parameters can be implemented while conserving a reasonable calculation time. However, in higher dimensions (if, for example,
aside from the blowing pressure and the lip force, we vary the reed parameters), it is necessary to use auto-adaptive sampling methods.
Experimentally: the use of an artificial mouth allows us to maintain control conditions while creating these maps. We can also use an
instrumented mouthpiece: this allows us to measure simultaneously and instantly these control parameters and create the maps “on the
fly.”
1:40
4pMU3. Recent works on the (psycho-)acoustics of wind instruments. Adrien Mamou-Mani (IRCAM, 1 Pl. Stravinsky, Paris 75004,
France, adrien.mamou-mani@ircam.fr)
Two experiments aiming at linking acoustical properties and perception of wind instruments will be presented. The first one is a
comparison between five oboes of the same model type. An original methodology is proposed, based on discrimination tests in playing
conditions and default detection using acoustical measurements. The second experiment has been done on a simplified bass clarinet with
an embedded active control system. A comparison of perceptual attributes, like sound color and playability, for different acoustical configurations (frequency and damping of resonances) is possible to test using a single system. A specific methodology and first results will
be presented.
2:00
4pMU4. The importance of structural vibrations in brass instruments. Thomas R. Moore (Dept. of Phys., Rollins College, 1000
Holt Ave., Winter Park, FL 32789, tmoore@rollins.edu) and Wilfried Kausel (Inst. of Musical Acoust., Univ. of Music and Performing
Arts, Vienna, Austria)
It is often thought that the input impedance uniquely determines the quality of a brass wind instrument. However, it is known that
structural vibrations can also affect the playability and perceived sound produced by these instruments. The processes by which the
structural vibrations affect the quality of brass instruments are not completely understood, but it is likely that vibrations of the metal couple to the lips as well as introduce small changes in the input impedance. We discuss the mechanisms by which structural vibrations
can affect the quality of a brass instrument and suggest methods of incorporating these effects into an objective assessment of instrument
quality.
2:20–2:40 Break
Contributed Papers
2:40
4pMU5. Investigating the colloquial description of sound by musicians and non-musicians. Jack Dostal (Phys., Wake Forest Univ., P.O. Box 7507, Winston-Salem, NC 27109, dostalja@wfu.edu)
What is meant by the words used in a subjective judgment of sound? Interpreting these words accurately allows musical descriptions of sound to be related to scientific descriptions of sound. But do musicians, scientists, instrument makers, and others mean the same things by the same words? When these groups converse about qualities of sound, they often use an expansive lexicon of terms (bright, brassy, dark, pointed, muddy, etc.). It may be inaccurate to assume that the same terms and phrases have the same meaning for these different groups of people, or even remain self-consistent for a single individual. To investigate the use of words and phrases in this lexicon, subjects with varying musical and scientific backgrounds were surveyed. The subjects were asked to listen to different pieces of recorded music and to use their own colloquial language to describe the musical qualities and differences they perceived in these pieces. In this talk, I describe some qualitative results of this survey and identify some of the more problematic terms used by these various groups to describe sound quality.
2:55
4pMU6. Chaotic behavior of the piccolo? Nicholas Giordano (Phys., Auburn Univ., College of Sci. and Mathematics, Auburn, AL 36849, njg0003@auburn.edu)
A direct numerical solution of the Navier-Stokes equations has been used to calculate the sound produced by a model of the piccolo. At low to moderate blowing speeds and at appropriate blowing angles, the sound pressure is approximately periodic with the expected frequency. As the blowing speed is increased or as the blowing angle is varied, the time dependence of the sound pressure becomes more complicated, and examination of the spectrum and the sensitivity of the sound pressure to initial conditions suggest that the behavior becomes chaotic. Similarities with the behavior found in Taylor-Couette and Rayleigh-Bénard instabilities of fluids are noted, and possible implications for the nature of the piccolo tone are discussed.
3:10
4pMU7. Modeling the low-frequency response of an acoustic guitar. Micah R. Shepherd (Appl. Res. Lab, Penn State Univ., PO Box 30, mailstop 3220B, State College, PA 16801, mrs30@psu.edu)
The low-frequency response of an acoustic guitar is strongly influenced by the combined behavior of the air cavity and the top plate. The sound hole–air cavity resonance (often referred to as the Helmholtz resonance) interacts with the first elastic mode of the top plate, creating a coupled oscillator with two resonance frequencies that are shifted away from the frequencies of the two original, uncoupled oscillators. This effect was modeled using finite elements for the top plate and boundary elements for the air cavity, with rigid sides and back and no strings. The natural frequencies of the individual and combined oscillators were then predicted and compared to measurements. The model predicts the mode shapes, natural frequencies, and damping well, thus validating the modeling procedure. The effect of changing the cavity volume was then simulated to predict the behavior for a deeper air cavity.
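The two-mode coupling described in 4pMU7 can be illustrated with a standard two-degree-of-freedom eigenvalue sketch (the frequencies and coupling strength below are invented for illustration and are not the paper's finite/boundary element model); the coupled resonances are pushed away from the uncoupled ones:

```python
import numpy as np

# Uncoupled resonances (Hz): illustrative values for the Helmholtz
# (sound hole-air cavity) resonance and the first top-plate mode.
f_air, f_plate = 100.0, 200.0
coupling = 60.0  # illustrative coupling strength, Hz

# Frequency-squared eigenvalue problem for two coupled oscillators.
A = np.array([[f_air**2, coupling**2],
              [coupling**2, f_plate**2]])
f_coupled = np.sqrt(np.linalg.eigvalsh(A))  # ascending order

# The coupled pair straddles the uncoupled frequencies: the lower
# resonance falls below f_air and the upper rises above f_plate.
```

Enlarging the cavity in such a sketch lowers f_air, which shifts both coupled resonances downward, qualitatively matching the deeper-cavity simulation mentioned above.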
3:25
4pMU8. Experiment to evaluate musical qualities of violin strings. Maxime Baelde (Acoust. & Environ. HydroAcoust. Lab, Universite Libre de
Bruxelles, 109 rue Barthelemy Delespaul, Lille 59000, France, maxime.
baelde@centraliens-lille.org), Jessica De Saedeleer (Jessica De Saedeleer
Luthier, Brussels, Belgium), and Jean-Pierre Hermand (Acoust. & Environ.
hydroAcoust. lab, Universite Libre de Bruxelles, Brussels, Belgium)
Most violin strings on the market are made of different materials and sizes, and they have different musical qualities: full, mellow, warm, and round, for example. Nevertheless, this description is subjective and tied to the string manufacturers. The aim of this study is to provide an experiment that evaluates the musical qualities of strings. The study is based on "musical descriptors," which give information about a musical sound, and on psychoacoustics, in order to match the musician's point of view. "Musical descriptors" are also used for music classification. We use two sets of top-end string models from two different brands. These strings are mounted on two similar violins, and the strings are excited at their normal modes with a harpsichord-like damper mechanism and other means. The sound radiated is
recorded with a microphone, and the vibration of the string with a coil-magnet device, so as to capture both intrinsic and extrinsic string properties. Some musicians tried these strings and expressed their impressions. These acoustical and psychoacoustical analyses will give luthiers information about which string properties allow a given adjustment, in order to provide better advice independent of the string manufacturers' descriptions.
3:40
4pMU9. Vibration study of Indian folk instrument sambal. Ratnaprabha F. Surve (Phys., Nowrosjee Wadia College, 15 Tulip, Neco Gardens, Viman Nagar, Pune 21 411001, India, rfsurve@hotmail.com), Keith Desa (Phys., Nowrosjee Wadia College, 27, Maharashtra, India), and Dilip S. Joaj (Phys., Univ. of Pune, Pune, Maharashtra, India)
The folk category of the percussion instrument family has many instruments, such as the Dholki, Dimdi, Duff, Halagi, and Sambal. The Sambal is a folk membranophone made of wood, played mainly in western India. A traditional drum, it is used in some religious functions and is played by people who are believed to be servants of the goddess Mahalaxmi Devi. The instrument consists of two approximately cylindrical wooden drums united along a common edge, with skin membranes stretched over their mouths. It is played using two wooden sticks, one of which has a curved end. The right-hand drum's pitch is higher than the left's; its membrane is excited by striking repeatedly to generate sound of a constant pitch. This paper relates to vibrational analysis of the Sambal. A study has been carried out to check its vibrational properties, such as the modes of vibration, by spectrum analysis (Fast Fourier Transform) using a simple digital storage oscilloscope. The tonal quality of the wood used for the cylinders and the membrane is compared.
3:55
4pMU10. Experimental investigation of crash cymbal acoustic quality. Devyn P. Curley (Mech. Eng., Tufts Univ., 200 College Ave., Medford, MA 02155), Zachary A. Hanan (Elec. Eng., Univ. of Colorado, Boulder, CO), Dan Luo (Mech. Eng., Tufts Univ., Medford, MA), Christopher W. Penny (Phys., Tufts Univ., Medford, MA), Christopher F. Rodriguez (Elec. and Comput. Eng., Tufts Univ., Medford, MA), Paul D. Lehrman (Music, Tufts Univ., Medford, MA), Chris B. Rogers, and Robert D. White (Mech. Eng., Tufts Univ., Medford, MA, r.white@tufts.edu)
A methodology to quantitatively evaluate the quality of the transmitted acoustic signature of cymbals is under development. High-speed video of a percussionist striking both a Zildjian 14 in. A-custom crash cymbal and a Zildjian Gen 16 low-volume 16 in. crash cymbal was recorded and used to determine biometrically accurate crash and ride striking motions. A two-degree-of-freedom robotic arm has been developed to mimic human striking motion. The robotic arm includes a high-torque elbow joint driven in closed-loop trajectory tracking and an impedance-controlled wrist joint to approximate the variable stiffness of the stick grip. A quantitative comparison of robotic and human strikes will be made using high-speed video. Repeatable strikes will be carried out using the robotic system in an anechoic chamber for different grades of Zildjian cymbals, including low-volume Gen 16 cymbals. Acoustic features of the measured sound output will be compared to seek quantitative metrics for evaluating cymbal sound quality that compare favorably with the results of the qualitative human assessments currently in use by the industry. Preliminary results indicate noticeable differences in cymbal acoustic output, including variations in modal density, decay time, and beating phenomena.
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 3/4, 1:15 P.M. TO 4:20 P.M.
Session 4pNS
Noise: Virtual Acoustic Simulation
Stephen A. Rizzi, Cochair
NASA Langley Research Center, 2 N Dryden St, MS 463, Hampton, VA 23681
Patricia Davies, Cochair
Ray W. Herrick Labs., School of Mechanical Engineering, Purdue University, 177 South Russell Street,
West Lafayette, IN 47907-2099
Chair’s Introduction—1:15
Invited Papers
1:20
4pNS1. Recent advances in aircraft source noise synthesis. Stephen A. Rizzi (AeroAcoust. Branch, NASA Langley Res. Ctr., 2 N
Dryden St., MS 463, Hampton, VA 23681, stephen.a.rizzi@nasa.gov), Daniel L. Palumbo (Structural Acoust. Branch, NASA Langley
Res. Ctr., Hampton, VA), Jonathan R. Hardwick (Dept. of Mech. Eng., Virginia Tech, Blacksburg, VA), and Andrew Christian (National
Inst. of Aerosp., Hampton, VA)
For several decades, research and development has been conducted at the NASA Langley Research Center directed at understanding
human response to aircraft flyover noise. More recently, a technology development effort has focused on the simulation of aircraft flyover noise associated with future, large commercial transports. Because recordings of future aircraft are not available, the approach
taken utilizes source noise predictions of engine and airframe components which serve as a basis for source noise syntheses. Human subject response studies have been conducted aimed at determining the fidelity of synthesized source noise, and the annoyance and
detectability once the noise is propagated (via simulation) to the ground. Driven by various factors, human response to less common
noise sources is gaining interest. Some have been around for a long time (rotorcraft), some have come and gone and are back again
(open rotors), and some are entirely new (distributed electric driven propeller systems). Each has unique challenges associated with
source noise synthesis. Discussed in this work are some of those challenges including source noise characterization from wind tunnel
data, flight data, or prediction; factors affecting perceptual fidelity including tonal/broadband separation, and amplitude and frequency
modulation; and a potentially expansive range of operating conditions.
1:40
4pNS2. An open architecture for auralization of dynamic soundscapes. Aric R. Aumann (Analytical Services & Mater., Inc., 107
Res. Dr., Hampton, VA 23666-1340, aric.r.aumann@nasa.gov), William L. Chapin (AuSIM, Inc., Mountain View, CA), and Stephen A.
Rizzi (AeroAcoust. Branch, NASA Langley Res. Ctr., Hampton, VA)
An open architecture for auralization has been developed by NASA to support research aimed at understanding human response to
sound within a complex and dynamic soundscape. The NASA Auralization Framework (NAF) supersedes an earlier auralization tool set
developed for aircraft flyover noise auralization and serves as a basis for a future auralization plug-in for the NASA Aircraft Noise Prediction Program (ANOPP2). It is structured as a set of building blocks in the form of dynamic link libraries, so that other soundscapes,
e.g., those involving ground transportation, wind turbines, etc., and other use cases, e.g., inverse problems, may easily be accommodated.
The NAF allows users to access auralization capabilities in several ways. The NAF’s built-in functionality may be exercised utilizing either basic (e.g., console executable) or advanced (e.g., MATLAB, LabView, etc.) host environments. The NAF’s capabilities can also be
extended by augmenting or replacing major activities through programming its open architecture. In this regard, it is envisioned that
third parties will develop plug-in capabilities to augment those included in the NAF.
2:00
4pNS3. Simulated sound in advanced acoustic model videos. Kenneth Plotkin (Wyle, 200 12th St. South, Ste. 900, Arlington, VA
22202, kenneth.plotkin@wyle.com)
The Advanced Acoustic Model (AAM) and other time-step aircraft noise simulation models developed by Wyle can generate video
animations of the noise environment. The animations are valuable for understanding details of noise footprints and for community outreach. Using algorithms developed by NASA, audio simulation for jet aircraft noise has recently been added to the video capability.
Input data for the simulations consist of AAM’s one-third octave band sound level time history output, plus flight path geometry and
ground properties. Working at an audio sample rate of 44.1 kHz and a sample “hop” period of 0.0116 s, a random phase narrow band
sample is shaped to match spectral amplitudes. Ground reflection and low frequency oscillation are added to the hops, which are merged
into a WAV file. The WAV file is then mixed with an existing animation generated from the same AAM run. The process takes place in
near-real time, based on a location that a user selects from a site map. The presentation includes demonstrations of the results for a simple level flyover and for the departure of a high performance jet aircraft from an airbase.
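The synthesis step described above might be sketched as follows, assuming (beyond what the abstract states) stepwise band shaping in the frequency domain and 50% overlap-add of Hann-windowed hops; band edges and levels below are illustrative, not AAM output:

```python
import numpy as np

FS = 44100  # audio sample rate from the abstract
HOP = 512   # ~0.0116 s at 44.1 kHz

def shaped_hop(band_edges_hz, band_levels_db, rng):
    """One random-phase hop whose magnitude spectrum follows stepwise
    band levels (a stand-in for one-third octave band shaping)."""
    freqs = np.fft.rfftfreq(HOP, 1.0 / FS)
    mag = np.zeros_like(freqs)
    for (lo, hi), level in zip(band_edges_hz, band_levels_db):
        mag[(freqs >= lo) & (freqs < hi)] = 10.0 ** (level / 20.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    return np.fft.irfft(mag * np.exp(1j * phase), n=HOP)

def synthesize(band_edges_hz, level_history_db, rng):
    """Merge successive shaped hops by 50% overlap-add with a Hann window."""
    out = np.zeros(HOP * (len(level_history_db) + 1) // 2 + HOP)
    win = np.hanning(HOP)
    for i, levels in enumerate(level_history_db):
        start = i * HOP // 2
        out[start:start + HOP] += win * shaped_hop(band_edges_hz, levels, rng)
    return out

rng = np.random.default_rng(1)
edges = [(500.0, 630.0), (630.0, 800.0), (800.0, 1000.0)]
history = [[0.0, -3.0, -6.0]] * 8  # constant band levels over 8 hops
y = synthesize(edges, history, rng)
```

In the full pipeline the level history would come from AAM's one-third octave band time histories, with ground reflection and low-frequency oscillation applied to the hops before merging.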
2:20
4pNS4. Combining local source propagation modeling results with a global acoustic ray tracer. Michael Williams, Darrel Younk,
and Steve Mattson (Great Lakes Sound and Vib., 47140 N. Main St., Houghton, MI 49931, mikew@glsv.com)
A common method of sound auralization in large virtual environments is through acoustic ray tracing. The purpose of an acoustic
ray tracer is to supply accurate source-to-listener impulse response functions for a virtual scene. Currently, sources are modeled as omnidirectional point sources in the ray tracer. This limits the fidelity of the results and is not accurate for complicated noise sources
involving multiple audible parts. The proposed method is to simulate local source propagation to a sphere using various energy modeling
techniques. These results may be used to increase the fidelity of a ray trace by giving directionality to the source and allowing for source
audio to be mixed from recordings of components of the source. This is especially relevant when a full source has not yet been constructed, so there are many real-world applications in engineering, architecture, and other fields that need high-fidelity auralization of future products.
2:40
4pNS5. Modelling sound propagation in the presence of atmospheric turbulence for the auralization of aircraft noise. Frederik Rietdijk, Kurt Heutschi (Acoust. / Noise Control, Empa, Überlandstrasse 129, Dübendorf, Zurich 8600, Switzerland, frederik.rietdijk@empa.ch), and Jens Forssén (Appl. Acoust., Chalmers Univ. of Technol., Gothenburg, Sweden)
A new tool for the auralization of aircraft noise in an urban environment is in development. When listening to aircraft noise, sound level fluctuations caused by atmospheric turbulence are clearly audible; to create a realistic auralization of aircraft noise, atmospheric turbulence therefore needs to be included. Due to spatial inhomogeneities of the wind velocity and temperature in the atmosphere, acoustic scattering occurs, affecting the transfer function between source and receiver. Both these inhomogeneities and the aircraft position are time-dependent, so the transfer function varies with time, resulting in the audible fluctuations; even assuming a stationary (frozen) atmosphere, the movement of the aircraft alone gives rise to fluctuations. A simplified model describing the influence of turbulence on a moving elevated source is developed, which can then be used to simulate the influence of atmospheric turbulence in the auralization of aircraft noise.
3:00–3:20 Break
3:20
4pNS6. Simulation of excess ground attenuation for aircraft flyover noise synthesis. Brian C. Tuttle (Analytical Mech. Assoc., Inc.,
1318 Wyndham Dr., Hampton, VA 23666, btuttle1@gmail.com) and Stephen A. Rizzi (AeroAcoust. Branch, NASA Langley Res. Ctr.,
Hampton, VA)
Subjective evaluations of noise from proposed aircraft and flight operations can be performed using simulated flyover noise. Such
simulations typically involve three components: generation of source noise, propagation of that noise to a receiver on or near the ground,
and reproduction of that sound in a subjective test environment. Previous work by the authors focused mainly on development of high-fidelity source noise synthesis techniques and sound reproduction methods while assuming a straight-line propagation path with standard
atmospheric absorption and simple (plane-wave) ground reflection models. For aircraft community noise applications, this is usually sufficient because the aircraft are nearly overhead. However, when simulating noise sources at low elevation angles, the plane-wave
assumption is no longer valid and must be replaced by a model that takes into account the reflection of spherical waves from a ground
surface of finite impedance. Recent additions to the NASA Community Noise Test Environment (CNoTE) software suite have improved
real-time simulation capabilities of ground-plane reflections for low incidence angles. The models are presented along with the resulting
frequency response of the filters representing excess ground attenuation. Discussion includes an assessment of the performance and limitations of the filters in a real-time simulation.
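The grazing-incidence limitation noted above can be seen directly from the locally reacting ground model. The sketch below (an illustrative ground impedance value, not a CNoTE parameter) evaluates the plane-wave reflection coefficient and shows it tending to -1 at grazing incidence, where direct and reflected waves cancel and a spherical-wave correction becomes necessary:

```python
import math

def plane_wave_reflection(zeta, psi):
    """Plane-wave reflection coefficient of a locally reacting ground of
    normalized specific impedance zeta at grazing angle psi (radians):
        R = (zeta*sin(psi) - 1) / (zeta*sin(psi) + 1)
    As psi -> 0, R -> -1 for any finite zeta: the direct and reflected waves
    cancel, which is why the plane-wave model fails near grazing incidence."""
    zs = zeta * math.sin(psi)
    return (zs - 1) / (zs + 1)

# Illustrative grass-like normalized ground impedance (placeholder value only).
zeta = complex(8.0, 9.0)
for deg in (45.0, 10.0, 1.0, 0.0):
    R = plane_wave_reflection(zeta, math.radians(deg))
    print(f"grazing angle {deg:4.1f} deg: |R| = {abs(R):.3f}")
```

At zero grazing angle the coefficient is exactly -1 regardless of the impedance, illustrating why a finite-impedance spherical-wave model must replace the plane-wave assumption for low elevation angles.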
3:40
4pNS7. Evaluation of the perceptual fidelity of a novel rotorcraft noise synthesis technique. Jonathan R. Hardwick (Dept. of Mech.
Eng., Virginia Polytechnic Inst. and State Univ., Blacksburg, VA), Andrew Christian (National Inst. of Aerosp., 100 Exploration Way,
Hampton, VA 23666, andrew.christian@nasa.gov), and Stephen A. Rizzi (AeroAcoust. Branch, NASA Langley Res. Ctr., Hampton,
VA)
A human subject experiment was recently conducted at the NASA Langley Research Center to evaluate the perceptual fidelity of
synthesized rotorcraft source noise. The synthesis method combines the time record of a single blade passage (i.e., of a main or tail rotor)
with amplitude and frequency modulations observed in recorded rotorcraft noise. Here, the single blade passage record can be determined from a time-averaged recording or from a modern aeroacoustic analysis. Since there is no predictive model available, the amplitude and frequency modulations were derived empirically from measured flyover noise. Thus, one research question was directed at
determining the fidelity of four synthesis implementations (unmodulated and modulated main rotor only, and unmodulated and modulated main and tail rotor) under thickness and loading noise dominated conditions, using modulation data specific to those conditions. A
second research question was aimed at understanding the sensitivity of fidelity to the choice of modulation method. In particular, can
generic modulation data be used in lieu of data specific to the condition of interest, and how do modifications of generic and specific
modulation data affect fidelity? The latter is of importance for applying the source noise synthesis to the simulation of complete flyover
events.
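The synthesis idea described above, tiling a single blade-passage record and imposing amplitude and frequency modulations, can be sketched as follows. The waveform, envelopes, and parameter values are hypothetical placeholders, not the authors' empirically derived modulation data; frequency modulation is realized here by variable-rate resampling, one plausible implementation among several:

```python
import math

def synthesize_rotor(blade_passage, n_repeats, am, fm, sr=44100.0):
    """Tile one blade-passage time record and impose slow amplitude (am) and
    frequency (fm) modulation envelopes, each a function of time in seconds.
    Frequency modulation is applied by reading the tiled record at a
    time-varying rate with linear interpolation."""
    base = blade_passage * n_repeats          # tiled single-passage record
    out = []
    pos, t = 0.0, 0.0
    while pos < len(base) - 1:
        i = int(pos)
        frac = pos - i
        sample = base[i] * (1.0 - frac) + base[i + 1] * frac
        out.append(am(t) * sample)
        pos += fm(t)                          # rate factor 1.0 => nominal BPF
        t += 1.0 / sr
    return out

# Synthetic "blade passage": a 100-sample Gaussian pulse, repeated 50 times,
# with 5% amplitude ripple at 4 Hz and 2% frequency wobble at 3 Hz.
passage = [math.exp(-((n - 50) / 10.0) ** 2) for n in range(100)]
y = synthesize_rotor(passage, 50,
                     am=lambda t: 1.0 + 0.05 * math.sin(2 * math.pi * 4 * t),
                     fm=lambda t: 1.0 + 0.02 * math.sin(2 * math.pi * 3 * t))
print(len(y))
```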
4:00
4pNS8. A comparison of subjects’ annoyance ratings of rotorcraft noise in two different testing environments. Andrew McMullen
and Patricia Davies (Purdue Univ., 177 S Russel Dr, West Lafayette, IN 47906, almvz5@mail.missouri.edu)
4p THU. PM
Two subjective tests were conducted to investigate people’s responses to rotorcraft noise. In one test, subjects heard the sounds in a
room designed to simulate aircraft flyovers. The frequency range of the Exterior Effects Room (EER) at NASA Langley is 17 Hz to
18,750 Hz. In the other test, subjects heard the sounds over earphones and the frequency range of the playback was 25 Hz–16 kHz.
Some of the sounds in this earphone test, high-pass filtered at 25 Hz, were also played in the EER. Forty subjects participated in each of
the tests. Subjects’ annoyance responses in each test were highly correlated with EPNL, ASEL, and Loudness exceeded 20% of the time
(correlation coefficient close to 0.9). However, at some metric values there was a large variation in response levels, which could be
linked to characteristics of harmonic families present in the sound. While the results for both tests are similar, subjects in the EER generally found the sounds less annoying than the subjects who heard the sounds over earphones. Certain groups of signals were rated similarly in one test environment, but differently in the other. This may be due to playback method, subject population, or other factors.
THURSDAY AFTERNOON, 30 OCTOBER 2014
INDIANA C/D, 1:30 P.M. TO 4:45 P.M.
Session 4pPA
Physical Acoustics: Topics in Physical Acoustics II
Josh R. Gladden, Cochair
Physics & NCPA, University of Mississippi, 108 Lewis Hall, University, MS 38677
William Slaton, Cochair
Physics & Astronomy, The University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72034
Contributed Papers
1:30
4pPA1. Aeroacoustic response of coaxial Helmholtz resonators in a low-speed wind tunnel. William Slaton (Phys. & Astronomy, The Univ. of Central Arkansas, 201 Donaghey Ave., Conway, AR 72034, wvslaton@uca.edu)
The aeroacoustic response of coaxial Helmholtz resonators with different neck geometries in a low-speed wind tunnel has been investigated. Experimental test results of this system reveal strong aeroacoustic response
over a Strouhal number range of 0.25–0.1 for both increasing and decreasing
the flow rate in the wind tunnel. Ninety-degree bends in the resonator necks
do not significantly change the aeroacoustic response of the system. Aeroacoustic response in the low-amplitude range has been successfully modeled
by describing-function analysis. This analysis, coupled with a turbulent flow
velocity distribution model, gives reasonable values for the location in the
flow of the undulating stream velocity that drives vortex shedding at the resonator mouth. Having an estimate for the stream velocity that drives the
flow-excited resonance is crucial when employing the describing-function
analysis to predict aeroacoustic response of resonators.
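The responsive band quoted above is set by the Strouhal number of the shear layer at the resonator mouth together with the Helmholtz resonance frequency. A minimal sketch using the textbook relations, with assumed dimensions that are not the authors' apparatus, is:

```python
import math

def helmholtz_frequency(c, neck_area, cavity_volume, neck_length_eff):
    """Classical Helmholtz resonance: f0 = (c / 2*pi) * sqrt(A / (V * L_eff))."""
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (cavity_volume * neck_length_eff))

def strouhal_number(f, length, flow_speed):
    """St = f * L / U, with L the characteristic length of the neck opening."""
    return f * length / flow_speed

# Hypothetical resonator: 2 cm diameter neck, 5 cm effective neck length,
# 1 L cavity, c = 343 m/s (illustrative values only).
area = math.pi * 0.01 ** 2
f0 = helmholtz_frequency(343.0, area, 1.0e-3, 0.05)
# Flow speed that puts the shear layer mid-band (St = 0.2) for this resonator:
U = f0 * 0.02 / 0.2
print(f"f0 = {f0:.1f} Hz, responsive near U = {U:.1f} m/s")
```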
1:45
4pPA2. Separation of acoustic waves in isentropic flow perturbations.
Christian Henke (ATLAS ELEKTRONIK, Sebaldsbruecker Heerstrasse
235, Bremen 28309, Germany, christian.henke@atlas-elektronik.com)
The present contribution investigates the mechanisms of sound generation and propagation in the case of highly-unsteady flows. It is based on the
linearisation of the isentropic Navier-Stokes equation around a new pathline-averaged base flow. As a consequence of this unsteady and non-radiating base flow, the perturbation equations satisfy a conservation law. It is
demonstrated that these flow perturbations can be split into acoustic and vorticity modes, with the acoustic modes being independent of the vorticity
modes. Moreover, we conclude that the present acoustic perturbation is
propagated by the convective wave equation and fulfills Lighthill’s acoustic
analogy. Therefore, we can define the deviations from the convective wave
equation as the “true” sound sources. In contrast to other authors, no
assumptions on a slowly varying or irrotational flow are necessary.
2:00
4pPA3. The sliding mode controller on Rijke-type combustion systems with mean temperature gradients. Dan Zhao and Xinyan Li (Mech.
and Aerosp. Eng., Nanyang Technolog. Univ., 50 Nanyang Ave. Singapore,
Singapore 639798, Singapore, xli037@e.ntu.edu.sg)
Thermoacoustic instabilities are typically generated due to the dynamic
coupling between unsteady heat release and acoustic pressure waves. To
eliminate thermoacoustic instability, the coupling must be somehow interrupted. In this work, we designed and implemented a sliding mode controller to mitigate self-sustained thermoacoustic oscillations in a Rijke-type
combustion system. An acoustically-compact heat source is confined and
modeled by using a modified King’s Law. The mean temperature gradient is
considered by expanding the acoustic waves via Galerkin series. Coupling
the unsteady heat release with the acoustic model enables the flow disturbances to be calculated, thus providing a platform on which to evaluate the
performance of the controller. As the controller is actuated, the limit cycle
oscillations are quickly dampened and the thermoacoustic system with multiple eigenmodes is stabilized. The successful demonstration indicates that
the sliding mode controller can be applied to stabilize unstable thermoacoustic systems.
2:15
4pPA4. Feedback control of thermoacoustic oscillation transient growth
of a premixed laminar flame. Dan Zhao and Xinyan Li (Aerosp. Eng. Div.,
Nanyang Technolog. Univ., 50 Nanyang Ave., Singapore, Singapore,
XLI037@e.ntu.edu.sg)
Transient growth of combustion-excited oscillations could trigger thermoacoustic instability in a combustion system with nonorthogonal eigenmodes. In this work, feedback control of the transient growth of combustion-excited oscillations in a simplified thermoacoustic system with Dirichlet boundary conditions is considered. For this, a thermoacoustic model of a premixed laminar flame with an actuator is developed. It is formulated in state space by expanding acoustic disturbances via Galerkin series, linearizing the flame model, and recasting it into the classical time-lag N-τ form for controller
implementation. As a linear-quadratic-regulator (LQR) controller is implemented, the system becomes asymptotically stable. However, it is associated
with transient growth of thermoacoustic oscillations, which may potentially
trigger combustion instability. To eliminate this transient growth, a strict dissipativity controller is then implemented. Comparison is
then made between the performances of these controllers. It is found that
the strict dissipativity controller achieves both exponential decay of the
oscillations and unity maximum transient growth.
2:30
4pPA5. Nonlinear self-sustained thermoacoustic instability in a combustor with three bifurcating branches. Dan Zhao and Shihuai Li (Aerosp.
Eng. Div., Nanyang Technolog. Univ., 50 Nanyang Ave., Singapore, Singapore, LISH0025@e.ntu.edu.sg)
In this work, experimental investigations of a bifurcating thermoacoustic
system are conducted first. It has a mother tube splitting into three bifurcating branches. It is surprisingly found that the flow oscillations in the bifurcating branches resulting from unsteady combustion in the bottom stem are
at different temperatures. Flow visualization reveals that one branch is associated with “cold” pulsating flow, while the other two branches are “hot.”
Such unique flow characteristics cannot be predicted by simply assuming that the bifurcating combustor consists of three curved Rijke tubes. Three-dimensional numerical investigations are then conducted. Three parameters are identified and
studied one by one: (1) the heat source location, (2) the heat flux, and (3)
the flow direction in the bifurcating branches. As each of the parameters is
varied, the heat-driven acoustic signature is found to change. The main nonlinearity is identified in the heat fluxes. Comparison of the numerical and experimental results shows good agreement in terms of mode frequencies, mode shapes, sound pressure level, and supercritical Hopf
bifurcating behavior.
2:45
4pPA6. Application of Mach-Zehnder interferometer to measure irregular reflection of a spherically divergent N-wave from a plane surface in air. Maria M. Karzova (LMFA UMR CNRS 5509, École Centrale de Lyon, Université Lyon I, Leninskie Gory 1/2, Phys. Faculty, Dept. of Acoust., Moscow 119991, Russian Federation, masha@acs366.phys.msu.ru), Petr V. Yuldashev (Phys. Faculty, M.V. Lomonosov Moscow State Univ., Moscow, Russian Federation), Sébastien Ollivier (LMFA UMR CNRS 5509, École Centrale de Lyon, Université Lyon I, Lyon, France), Vera A. Khokhlova (Phys. Faculty, M.V. Lomonosov Moscow State Univ., Moscow, Russian Federation), and Philippe Blanc-Benon (LMFA UMR CNRS 5509, École Centrale de Lyon, Université Lyon I, Lyon, France)
The Mach stem is a well-known structure typically observed when strong (acoustical Mach numbers greater than 0.4) step-shock waves reflect from a rigid boundary. However, this phenomenon has been much less studied for weak shocks in nonlinear acoustic fields, where Mach numbers are in the range 0.001 to 0.01 and pressure waveforms have a more complicated temporal structure than step shocks. In this work, results are reported for Mach stem formation observed in experiments on the reflection of N-waves from a plane surface. Spherically divergent N-waves were generated by a spark source in air and were measured using a Mach-Zehnder interferometer. Pressure waveforms were reconstructed by applying the inverse Abel transform to the phase of the interferometer signal. A temporal resolution of 0.4 μs was achieved. Both regular and irregular types of reflection were observed. It was shown that the length of the Mach stem increased linearly as the N-wave propagated along the surface. In addition, preliminary results on the influence of surface roughness on Mach stem formation will be presented. [Work supported by the President of Russia grant MK-5895.2013.2, a student stipend from the French Government, and LabEx CeLyA ANR-10-LABX-60/ANR-11-IDEX-0007.]
3:00–3:15 Break
3:15
4pPA7. Statistical inversion approach to estimating water content in an aquifer from seismic data. Timo Lähivaara (Appl. Phys., Univ. of Eastern Finland, P.O. Box 1627, Kuopio 70211, Finland, timo.lahivaara@uef.fi), Nicholas F. Dudley Ward (Otago Computational Modelling Group Ltd., Dunedin, New Zealand), Tomi Huttunen (Kuava Ltd., Kuopio, Finland), and Jari P. Kaipio (Mathematics, Univ. of Auckland, Auckland, New Zealand)
This study focuses on developing computational tools to estimate water content in an aquifer from seismic measurements. The poroelastic signature from an aquifer is simulated, and methods that use this signature to estimate the water table level and aquifer thickness are investigated. In this work, the spectral-element method is used to solve the forward model that characterizes the propagation of seismic waves. The inverse problem is formulated in the Bayesian framework, so that all uncertainties are explicitly modelled as probability distributions, and the solution is given as summary statistics over the posterior distribution of parameters relative to data. For the inverse problem, we use the Bayesian approximation error method, which reduces the overall computational demand. In this study, results for the two-dimensional case with simulated data are presented.
3:30
4pPA8. Surfactant-free emulsification in microfluidics using strongly oscillating bubbles. Siew-Wan Ohl, Tandiono Tandiono, Evert Klaseboer (Inst. of High Performance Computing, 1 Fusionopolis Way, #16-16 Connexis North, Singapore 138632, Singapore, ohlsw@ihpc.a-star.edu.sg), Dave Ow, Andre Choo (Bioprocessing Technol. Inst., Singapore, Singapore), Fenfang Li, and Claus-Dieter Ohl (Division of Phys. and Appl. Phys., School of Physical and Mathematical Sci., Nanyang Technol. Univ., Singapore, Singapore)
In this study, two immiscible liquids in a microfluidics channel have been successfully emulsified by acoustic cavitation bubbles. These bubbles are generated by attached piezo transducers driven to oscillate at the resonant frequency of the system (about 100 kHz) [1,2]. The bubbles oscillate and induce strong mixing in the microchamber. They induce rupture of the thin liquid layer along the bubble surface due to the high shear stress and fast liquid jetting at the interface, and they cause large droplets to fragment into small droplets. Both water-in-oil and oil-in-water emulsions with viscosity ratios up to 1000 have been produced using this method without the application of surfactant. The system is highly efficient, as submicron monodisperse emulsions (especially water-in-oil) could be created within milliseconds. It is found that with longer ultrasound exposure, the size of the droplets in the emulsions decreases and the uniformity of the emulsion increases. References: [1] Tandiono, S. W. Ohl et al., “Creation of cavitation activity in a microfluidics device through acoustically driven capillary waves,” Lab Chip 10, 1848–1855 (2010). [2] Tandiono, S. W. Ohl et al., “Sonochemistry and sonoluminescence in microfluidics,” Proc. Natl. Acad. Sci. U.S.A. 108(15), 5996–5998 (2011).
3:45
4pPA9. Ultrasonic scattering from poroelastic materials using a mixed displacement-pressure formulation. Max Denis (Mayo Clinic, 200 First St. SW, Rochester, MN 55905, denis.max@mayo.edu), Chrisna Nguon, Kavitha Chandra, and Charles Thompson (Univ. of Massachusetts Lowell, Lowell, MA)
In this work, a numerical technique suitable for evaluating ultrasonic scattering from a three-dimensional poroelastic material is presented. Following Biot’s derivation of the macroscopic governing equations for a fluid-saturated poroelastic material, the two predicted propagating wave equations are formulated in terms of displacement and pressure. Assuming that porosity variations on a microscopic scale have a cumulative effect in generating a scattered field, the scattering attenuation coefficient of a Biot medium can be determined. The scattered fields of the wave equations are numerically evaluated as Neumann series solutions of the Kirchhoff-Helmholtz integral equation. A Padé approximant technique is employed to extrapolate beyond the Neumann series’ radius of convergence (weak scattering regime). In the case of bovine trabecular bone, the relationship between the scattering attenuation coefficient and the structural and mechanical properties of the trabecular bone is of particular interest. The results demonstrate the validity of the linear frequency-dependent assumption for the attenuation coefficient in the low-frequency range. Further comparisons between measured observations and the numerical results will be discussed.
4:00
4pPA10. High temperature resonant ultrasound spectroscopy study on
Lead Magnesium Niobate—Lead Titanate (PMN-PT) relaxor ferroelectric material. Sumudu P. Tennakoon and Joseph R. Gladden (Phys. and
Astronomy, Univ. of MS, 1 Coliseum Dr., Phys.& NCPA, Univ. of MS,
University, MS 38677, sptennak@go.olemiss.edu)
Lead magnesium niobate-lead titanate [(1-x)PbMg1/3Nb2/3O3-xPbTiO3]
is a perovskite relaxor ferroelectric material exhibiting superior electromechanical coupling compared to the conventional piezoelectric materials. In
this work, non-poled single crystal PMN-PT material with the composition
near the morphotropic phase boundary (MPB) was investigated in the temperature range 400 K–800 K, where the material is reported to be in the cubic phase. The high-temperature resonant ultrasound spectroscopy (HT-RUS) technique was used to probe the temperature dependence of elastic constants derived from the measured resonant modes. Non-monotonic resonant frequency trends in this temperature regime indicate stiffening of the material, followed by the gradual softening typically observed in heated materials. The elastic constants confirmed this stiffening over the range 400 K–673 K, where the stiffness constants C11 and C44 increased by approximately 40% and 33%, respectively. Acoustic attenuation, derived from the quality factor (Q), exhibits a minimum around the temperature where the stiffness is maximum, and significantly higher attenuation is observed at temperatures below 400 K. The range 395 K–405 K was identified as a transition temperature range, where the material shows an abrupt change in the resonant spectrum and emerges from the MPB regime characterized by this very high acoustic attenuation. This transition temperature compares favorably with dielectric constant measurements reported in the literature.
4:15
4pPA11. Structure of cavitation zones in a heavy magma under explosive character of its decompression. Valeriy Kedrinskiy (Physical Hydrodynam., Lavrentyev Inst. of Hydrodynam., Russian Acad. of Sci., Lavrentyev prospect 15, Novosibirsk 630090, Russian Federation, kedr@hydro.nsc.ru)
The paper investigates the dynamics of the state and structure of a compressed magma flow saturated with gas and microcrystallites, a flow characterized by phase transitions, diffusive processes, growth of the magma viscosity by orders of magnitude, and the development of bubbly cavitation behind the decompression wave front formed as a result of volcanic channel depressurization. A multi-phase mathematical model is considered, which includes the well-known conservation laws for mean pressure, mass velocity, and density, as well as a system of kinetic equations describing the physical processes that occur in a compressed magma during its explosive decompression. The results of numerical analysis show that the saturation of the magma by cavitation nuclei, as their number density increases by a few orders of magnitude, leads to the formation of a separate zone with anomalously high values of the flow parameters. This abnormal zone turns out to be located in the vicinity of the free surface of the cavitating magma column. The mechanism of its formation is determined by the redistribution of diffusion flows as the nuclei density increases, as well as by a change in the distribution of the main flow parameters in the abnormal zone from a gradual to an abrupt increase of their values at the lower zone boundary. Notably, the jump in mass velocity, an order of magnitude relative to the main flow, suggests that disintegration of the flow at the lower boundary of the zone is quite probable. [Supported by the RAS Presidium Program, Project 2.6.]
4:30
4pPA12. Cavity collapse in a bubbly liquid. Ekaterina S. Bolshakova (Phys., Novosibirsk State Univ., Novosibirsk, Russian Federation) and Valeriy Kedrinskiy (Physical Hydrodynam., Lavrentyev Inst. of Hydrodynam., Russian Acad. of Sci., Lavrentyev prospect 15, Novosibirsk 630090, Russian Federation, kedr@hydro.nsc.ru)
The effect of the ambient liquid state on the dynamics of a spherical cavity under atmospheric hydrostatic pressure p and an extremely low initial gas pressure p(0) inside was investigated. An equilibrium bubbly (cavitating) medium, whose sound velocity C is a function of the gas-phase concentration k, was considered as the ambient liquid model. The cavity dynamics is analyzed within the framework of the Herring equation for the following parameter ranges: k = 0–5% and h(0) = 0.02–10^(−6) atm. Numerical analysis has shown that decreasing C by two orders of magnitude influences neither the asymptotic value of the collapsed cavity radius nor the acoustic losses during collapse; on the whole, the integral acoustic losses remain invariant. However, the collapse dynamics of the cavity and the structure of its radiation change essentially: from numerous pulsations with decreasing amplitudes to a single collapse, and from a wave packet to a single wave, correspondingly. It turns out that the acoustic corrections in the Herring equation have practically no influence on the cavity dynamics if the term of the equation containing dH/dt is absent. Naturally, the reduced C exerts an essential influence on the dynamics of an empty cavity. The curves of dR/Cdt as a function of R/Ro for different values of C lie above the data of the classical models of Herring, Gilmore, and Hunter: the value M = 1 is reached at R/Ro = 0.023 for k = 0, and at 0.23 when k = 5%. [Support: RFBR grant 12-01-00314.]
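As a rough companion to the cavity-collapse discussion above, the incompressible limit of such cavity models, the Rayleigh-Plesset equation, can be integrated in a few lines. This toy sketch uses assumed water-like parameters and an adiabatic gas law; it is not the Herring-equation bubbly-medium model of the abstract, which adds finite-sound-speed corrections:

```python
def rayleigh_plesset_collapse(R0=1e-3, p_inf=101325.0, p_g0=2026.5,
                              rho=1000.0, gamma=1.4, dt=1e-9, t_max=2e-4):
    """Semi-implicit Euler integration of the incompressible Rayleigh-Plesset
    equation  R*R'' + 1.5*R'**2 = (p_g - p_inf)/rho  with adiabatic gas
    pressure p_g = p_g0*(R0/R)**(3*gamma); returns the minimum radius ratio
    R_min/R0 reached during the first collapse and rebound."""
    R, V, t = R0, 0.0, 0.0
    R_min = R0
    while t < t_max:
        p_g = p_g0 * (R0 / R) ** (3.0 * gamma)
        acc = ((p_g - p_inf) / rho - 1.5 * V * V) / R
        V += acc * dt
        R += V * dt
        if R <= 1e-7:        # guard against numerical blow-up at collapse
            break
        R_min = min(R_min, R)
        t += dt
    return R_min / R0

# Millimetre bubble in water at 1 atm with internal gas at 0.02 atm.
ratio = rayleigh_plesset_collapse()
print(f"R_min/R0 = {ratio:.3f}")
```

The low initial gas pressure drives a deep collapse before the compressed gas arrests it, the regime the abstract probes with far smaller p(0).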
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 1/2, 1:30 P.M. TO 5:00 P.M.
Session 4pPP
Psychological and Physiological Acoustics: Physiological and Psychological Aspects of Central Auditory
Processing Dysfunction II
Frederick J. Gallun, Cochair
National Center for Rehabilitative Auditory Research, Portland VA Medical Center, 3710 SW US Veterans Hospital Rd.,
Portland, OR 97239
Adrian KC Lee, Cochair
University of Washington, Box 357988, University of Washington, Seattle, WA 98195
Invited Papers
1:30
4pPP1. Aging as a window into central auditory dysfunction: Combining behavioral and electrophysiological approaches. David
A. Eddins, Erol J. Ozmeral, and Ann C. Eddins (Commun. Sci. & Disord., Univ. of South Florida, 4202 E. Fowler Ave., PCD 1017,
Tampa, FL 33620, deddins@usf.edu)
Central auditory processing involves a complex set of feed-forward and feedback processes governed by a cascade of dynamic
neuro-chemical mechanisms. Central auditory dysfunction can arise from disruption of one or more of these processes. With hearing
loss, dysfunction may begin with reduced and/or altered input to the central auditory system followed by peripherally induced central
plasticity. Similar central changes may occur with advancing age and neurological disorders even in the absence of hearing loss. Understanding the behavioral and physiological consequences of this plasticity on the processing of basic acoustic features is critical for effective clinical management. Major central auditory processing deficits include reduced temporal processing, impaired binaural hearing,
and altered coding of spectro-temporal features. These basic deficits are thought to be primary contributing factors to the common complaint of difficulty understanding speech in noisy environments in persons with hearing loss, brain injury, and advanced age. The results
of investigations of temporal, spectral, and spectro-temporal processing, binaural hearing, and loudness perception will be presented
with a focus on central auditory deficits that occur with advancing age and hearing loss. Such deficits can be tied to reduced peripheral
input, altered central coding, and complex changes in cortical representations.
2:00
4pPP2. Age-related declines in hemispheric asymmetry as revealed in the binaural interaction component. Ann C. Eddins, Erol J.
Ozmeral, and David A. Eddins (Commun. Sci. & Disord., Univ. of South Florida, 4202 E. Fowler Ave., PCD 1017, Tampa, FL 33620,
aeddins@usf.edu)
The binaural interaction component (BIC) is a physiological index of binaural processing. The BIC is defined as the brain activity
resulting from binaural (diotic or dichotic) stimulus presentation minus the brain activity summed across successive monaural stimulus
presentations. Smaller binaural-induced activity relative to summed monaural activity is thought to reflect neural inhibition in the central
auditory pathway. Since aging is commonly associated with reduced inhibitory processes, we evaluate the hypothesis that the BIC is
reduced with increasing age. Furthermore, older listeners typically have reduced hemispheric asymmetry relative to younger listeners,
interpreted in terms of compensation or recruitment of neural resources and considered an indication of age-related neural plasticity.
Binaural stimuli designed to elicit a lateralized percept generate maximum neural activity in the hemisphere opposite the lateralized
position. In this investigation, we evaluated the hypothesis that the BIC resulting from stimuli lateralized to one side (due to interaural
time differences) results in less hemispheric asymmetry in older than younger listeners with normal hearing. Behavioral data were
obtained to assess the acuity of binaural processing. Data support the interpretation that aging is marked by reduced central auditory inhibition, reduced temporal processing, and broader distribution of activity across hemispheres compared to young adults.
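The BIC definition used above (binaural response minus the sum of the two monaural responses) reduces to a waveform subtraction. A schematic sketch on synthetic evoked responses, with all signals hypothetical, is:

```python
import math

def binaural_interaction_component(binaural, left, right):
    """BIC(t) = binaural evoked response minus the sum of the two monaural
    responses; a negative deflection indicates binaural activity smaller than
    the monaural sum, conventionally read as central inhibition."""
    return [b - (l + r) for b, l, r in zip(binaural, left, right)]

# Synthetic evoked responses (hypothetical): identical monaural waveforms;
# the binaural response is only 80% of their sum, mimicking inhibition.
t = [i * 2e-5 for i in range(500)]                      # 10-ms epoch
mono = [math.sin(2 * math.pi * 500 * ti) * math.exp(-ti / 0.003) for ti in t]
bic = binaural_interaction_component([1.6 * m for m in mono], mono, mono)
print(min(bic) < 0)   # True: inhibition appears as a negative deflection
```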
2:30
4pPP3. Effects of blast exposure on central auditory processing. Melissa Papesh, Frederick Gallun, Robert Folmer, Michele Hutter,
M. Samantha Lewis, Heather Belding (National Ctr. for Rehabilitative Auditory Res., Portland VA Medical Ctr., 6303 SW 60th Ave.,
Portland, OR 97221, Melissa.Papesh@va.gov), and Marjorie Leek (Res., Loma Linda VA Medical Ctr., Loma Linda, CA)
Exposure to high-intensity blasts is the most common cause of injury in recent U.S. military conflicts. Prior work indicates that
blast-exposed Veterans report significantly more hearing handicap than non-blast-exposed Veterans, often in spite of clinically normal
hearing thresholds. Our research explores the auditory effects of blast exposure using a combination of self-report, behavioral, and electrophysiological measures of auditory processing. Results of these studies clearly indicate that blast-exposed individuals are significantly
more likely to perform poorly on tests requiring the use of binaural information and tests of pitch sequencing and temporal acuity
compared to non-blast-exposed control subjects. Behavioral measures are corroborated by numerous objective electrophysiological
measures, and are not necessarily attributable to peripheral hearing loss or impaired cognition. Thus, evidence indicates that blast exposure can lead to acquired deficits in central auditory processing (CAP) which may persist for at least 10 years following blast exposure.
Future studies of these deficits in this and other adult populations are needed to address important issues such as individual susceptibility,
anatomical, and physiological changes in auditory pathways which contribute to symptoms of these types of deficits, and development
of effective evidence-based methods of rehabilitation in adult patients. [Work supported by the VA Rehabilitation Research & Development Service and the VA Office of Academic Affiliations.]
3:00–3:30 Break
3:30
4pPP4. Auditory processing demands and working memory span. Margaret K. Pichora-Fuller (Dept. of Psych., Univ. of Toronto,
3359 Mississauga Rd., Mississauga, ON L5L 1C6, Canada, k.pichora.fuller@utoronto.ca) and Sherri L. Smith (Audiologic Rehabilitation Lab., Veterans Affairs, Mountain Home, TN)
The (in)dependence of auditory and cognitive processing abilities is a controversial topic for hearing researchers and clinicians.
Some advocate for the need to isolate auditory and cognitive factors. In contrast, we argue for the need to understand how they interact.
Working memory span (WMS) is a cognitive measure that has been related to language comprehension in general and also to speech
understanding in noise. In healthy adults with normal hearing, there is typically a strong correlation between reading and listening measures of WMS. Some investigators have opted to use visually presented stimuli when testing people who do not have normal hearing in
order to avoid the influence of modality-specific auditory processing deficits on WMS. However, tests conducted using auditory stimuli
are necessary to evaluate how cognitive processing is affected by the auditory processing demands experienced by different individuals
over a range of conditions in which the tasks to be performed, the availability of supportive context, and the acoustical and linguistic
characteristics of targets and maskers are varied. Attempts to measure auditory processing independent of cognitive processing will fall
short in assessing listening function in realistic conditions.
4:00
4pPP5. Auditory perceptual learning as a gateway to rehabilitation. Beverly A. Wright (Commun. Sci. and Disord., Northwestern
Univ., 2240 Campus Dr., Evanston, IL 60202, b-wright@northwestern.edu)
A crucial aspect of the central nervous system is that it can be modified through experience. Such changes are thought to occur in
two learning phases: acquisition—the actual period of training—and consolidation—a post-training period during which the acquired information is transferred to long-term memory. My coworkers and I have been addressing these principles in auditory perceptual learning
by characterizing the factors that induce and those that prevent learning during the acquisition and consolidation phases. We also have
been examining how these factors change during development and aging and are affected by hearing loss and other conditions that alter
auditory perception. Application of these principles could improve clinical training strategies. Further, though learning is the foundation
for behavioral rehabilitation, the capacity to learn can itself be impaired. Therefore, an individual’s response to perceptual training could
be used as an objective, clinical measure to guide diagnosis and treatment of a cognitive disorder. [Work supported by NIH.]
4:30–5:00 Panel Discussion
2292
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2292
THURSDAY AFTERNOON, 30 OCTOBER 2014
MARRIOTT 5, 1:00 P.M. TO 4:00 P.M.
Session 4pSC
Speech Communication: Voice (Poster Session)
Richard J. Morris, Chair
Communication Science and Disorders, Florida State University, 201 West Bloxham Road, 612 Warren Building,
Tallahassee, FL 32306-1200
All posters will be on display from 1:00 p.m. to 4:00 p.m. To allow contributors an opportunity to see other posters, contributors of odd-numbered papers will be at their posters from 1:00 p.m. to 2:30 p.m. and contributors of even-numbered papers will be at their posters from 2:30 p.m. to 4:00 p.m.
Contributed Papers
4pSC1. Acoustical bases for the perception of simulated laryngeal vocal tremor. Rosemary A. Lester, Brad H. Story, and Andrew J. Lotto (Speech, Lang., and Hearing Sci., Univ. of Arizona, P.O. Box 210071, Tucson, AZ 85721, rosemary.lester@gmail.com)

Vocal tremor involves atypical modulation of the fundamental frequency (F0) and intensity of the voice. Previous research on vocal tremor has focused on measuring the modulation rate and extent of the F0 and intensity without characterizing other modulations present in the acoustic signal (i.e., modulation of the harmonics). Characteristics of the voice source and vocal tract filter are known to affect the amplitude of the harmonics and could potentially be manipulated to reduce the perception of vocal tremor. The purpose of this study was to determine the adjustments that could be made to the voice source or vocal tract filter to alter the acoustic output and reduce the perception of modulation. This research was carried out using a computational model of speech production that allows for precise control and modulation of the glottal and vocal tract configurations. Results revealed that listeners perceived a higher magnitude of voice modulation when simulated samples had a higher mean F0, a greater degree of vocal fold adduction, and a vocal tract shape for /i/ vs. /A/. Based on regression analyses, listeners' judgments were predicted by modulation information present in both low and high frequency bands. [Work supported by NIH F31-DC012697.]

4pSC2. Perception of breathiness in pediatric speakers. Lisa M. Kopf, Rahul Shrivastav (Communicative Sci. and Disord., Michigan State Univ., Rm. 109, Oyer Speech and Hearing Bldg., 1026 Red Cedar Rd., East Lansing, MI 48824, kopflisa@msu.edu), David A. Eddins (Commun. Sci. and Disord., Univ. of South Florida, Tampa, FL), and Mark D. Skowronski (Communicative Sci. and Disord., Michigan State Univ., East Lansing, MI)

Extensive research has been done to determine acoustic metrics for voice quality. However, few studies have focused on voice quality in the pediatric population. Instead, metrics evaluated on adults have been applied directly to children's voices. Some variables that differ between adult and pediatric voices, such as pitch, have been shown to be critical in the perception of breathiness. Furthermore, it is not known whether adults perceive voice quality similarly for pediatric and adult speakers. In this experiment, 10 listeners judged breathiness for 28 stimuli using a single-variable matching task. The stimuli were modeled after four pediatric speakers and synthesized using a Klatt synthesizer to have a wide range of aspiration noise and open quotient. Both of these variables have been shown to influence the perception of breathiness. The resulting data were compared to those previously obtained for adult speakers using the same matching task. Comparison of adult and pediatric voices will help identify differences in the perception of breathiness for these groups of speakers and help develop more accurate metrics for voice quality in children. [Research supported by NIH (R01 DC009029).]

4pSC3. Combining differentiated electroglottograph and differentiated audio signals to reliably measure vocal fold closed quotient. Richard J. Morris (Commun. Sci. and Disord., Florida State Univ., 201 West Bloxham Rd., 612 Warren Bldg., Tallahassee, FL 32306-1200, richard.morris@cci.fsu.edu), Shonda Bernadin (Elec. and Comput. Eng., Florida A & M Univ., Tallahassee, FL), David Okerlund (College of Music, Florida State Univ., Tallahassee, FL), and Lindsay B. Wright (Commun. Sci. and Disord., Florida State Univ., Tallahassee, FL)

Over the past few decades, researchers have explored the use of the electroglottograph (EGG) as a non-invasive method for representing vocal fold contact during vowel production and for measuring the closed quotient (CQ) and open quotient (OQ) of the glottal cycle. The first derivative of the EGG signal (dEGG) can be used to indicate these moments (Childers & Krishnamurthy, 1985). However, there can be double positive peaks in the dEGG as well as a variety of negative peak patterns (Herbst et al., 2010). Such variations will alter any measurements made from the signal. Recently, the use of the dEGG together with the differentiated audio signal (dAudio) was reported as a means for more reliable measurement of the CQ from the EGG signal in combination with a time-synchronized audio signal. The purpose of this study is to demonstrate the reliability of the dEGG and dAudio for determining CQ across a variety of vocal conditions. Files recorded from a group of 15 trained female singers singing an octave that included their primo passaggio provided the data. Preliminary results indicate high reliability of the CQ measurements in both the chest and middle registers of all of the singers.

4pSC4. A reduced-order three-dimensional continuum model of voice production. Zhaoyan Zhang (UCLA School of Medicine, 1000 Veteran Ave., 31-24 Rehab Ctr., Los Angeles, CA 90095, zyzhang@ucla.edu)

Although vocal fold vibration largely occurs in the transverse plane, control of voice is mainly achieved by adjusting vocal fold stiffness along the anterior–posterior direction through muscle activation. Thus, models of voice control need to be at least three-dimensional on the structural side. Modeling the detailed three-dimensional interaction between vocal fold dynamics, glottal aerodynamics, and the sub- and supra-glottal acoustics is computationally expensive, which prevents parametric studies of voice production using three-dimensional models. In this study, a Galerkin-based reduced-order three-dimensional continuum model of phonation is presented. Preliminary results showed that this model was able to qualitatively reproduce previous experimental observations. The model is computationally efficient and thus ideal for parametric studies in phonation research as well as practical applications such as speech synthesis. [Work supported by NIH.]
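The dEGG-based closed-quotient measure discussed in 4pSC3 can be sketched in a few lines. The following minimal Python illustration is not the authors' dEGG+dAudio implementation; it idealizes each cycle to a single positive dEGG peak (closing instant) and a single negative peak (opening instant), and the synthetic EGG signal and its parameters are assumptions for demonstration only:

```python
import numpy as np

def closed_quotient(egg, fs, f0):
    """Estimate the closed quotient from an EGG signal: the closing instant is
    taken at the positive peak of dEGG and the opening instant at its negative
    peak within each cycle (single-peak idealization)."""
    degg = np.diff(np.asarray(egg, float))
    period = int(fs / f0)                      # nominal cycle length in samples
    cqs = []
    for start in range(0, len(degg) - period, period):
        cycle = degg[start:start + period]
        i_close, i_open = np.argmax(cycle), np.argmin(cycle)
        if i_open > i_close:
            cqs.append((i_open - i_close) / period)
    return float(np.mean(cqs))

# Idealized EGG: vocal fold contact high for 40% of each cycle (true CQ = 0.4)
fs, f0 = 44100, 110.0
t = np.arange(int(0.5 * fs)) / fs
egg = np.where((t * f0) % 1.0 < 0.4, 1.0, 0.0)
cq = closed_quotient(egg, fs, f0)
```

On this idealized signal the estimate recovers the 0.4 closed quotient; real EGG traces with double peaks are exactly the cases the dAudio cross-check in 4pSC3 is meant to resolve.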
4pSC5. The influence of attentional focus on voice control. Eesha A. Zaher
and Charles R. Larson (Commun. Sci. and Disord., Northwestern Univ., 2240
Campus Dr., Evanston, IL 60208, EeshaZaheer2014@u.northwestern.edu)
The present study tested the role of attentional focus on control of voice
fundamental frequency (F0). Subjects vocalized an "ah" sound while hearing the auditory feedback of their voice randomly shifted upward or downward in pitch. In the "UP" condition, subjects vocalized, listened for, and pressed
a button for each upward pitch shift stimulus. In the “DOWN” condition,
subjects listened for and pressed a button for each downward shift. In the
CONTROL condition, subjects vocalized without paying attention to the
stimulus direction or pressing a button. Data were analyzed by averaging
voice F0 contours across several trials for each pitch shift stimulus in all
conditions. Response magnitudes were larger for the CONTROL than for
the UP or DOWN conditions. Responses for the UP and DOWN conditions
did not differ. Results suggest that when subjects focus their attention on identifying specific stimuli and producing a non-vocal motor response conditional upon the identification, the engagement of the neural mechanisms involved in voice control is reduced, possibly because of a reduction in the error signal resulting from the comparison of the efference copy of voice output with auditory feedback. Thus, focusing attention away from vocal control reduces the neural resources involved in the control of voice F0.
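The trial-averaging step described in 4pSC5 (averaging voice F0 contours across trials for each pitch-shift stimulus) can be sketched numerically. This is a minimal illustration, not the study's analysis pipeline; the baseline-correction choice, window lengths, and synthetic trace are assumptions:

```python
import numpy as np

def average_response(f0_trace, onset_times, fs, pre=0.1, post=0.5):
    """Average an F0 trace time-locked to stimulus onsets, baseline-corrected
    to the pre-stimulus interval (one common way to summarize such trials)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in onset_times:
        i = int(onset * fs)
        if i - n_pre >= 0 and i + n_post <= len(f0_trace):
            seg = np.asarray(f0_trace[i - n_pre:i + n_post], float)
            epochs.append(seg - seg[:n_pre].mean())   # baseline correction
    return np.mean(epochs, axis=0)

# Synthetic trace: 200-Hz voice with a +5-Hz response lasting 0.2 s after each shift
fs = 100
f0 = np.full(1000, 200.0)
for onset in (2.0, 5.0):
    i = int(onset * fs)
    f0[i:i + 20] += 5.0
avg = average_response(f0, [2.0, 5.0], fs)
```

The averaged contour is flat before the onset and shows the +5-Hz compensatory-style deflection afterward; response magnitude comparisons across conditions then reduce to comparing such averaged contours.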
4pSC6. Attention-related modulation of involuntary audio-vocal
response to pitch feedback errors. Hanjun Liu, Huijing Hu, and Ying Liu
(Rehabilitation Medicine, The First Affiliated Hospital of Sun Yat-sen
Univ., 58 Zhongshan 2nd Rd., Guangzhou, Guangdong 510080, China,
lhanjun@mail.sysu.edu.cn)
It has been demonstrated that unexpected alterations in auditory feedback elicit fast compensatory adjustments in vocal production. Although these adjustments are generally thought to be involuntary, whether they can be influenced by cognitive functions such as attention remains unknown. The present event-related potential (ERP) study investigated whether the neurobehavioral processing of auditory-vocal integration can be affected by attention.
While sustaining a vowel phonation and hearing pitch-shifted feedback, participants were required to either ignore the auditory feedback perturbation,
or attend to it under two levels of attention load. The results revealed an enhancement of the P2 response to the attended auditory perturbation at the low load level as compared to the unattended perturbation. Moreover, increased auditory attention load led to a significant decrease of the P2
response. By contrast, there was no attention-related change of vocal
response. These findings provide the first neurophysiological evidence that
involuntary auditory-vocal integration can be modulated as a function of auditory attention. Furthermore, it is suggested that auditory attention load can
result in a decrease of the cortical processing of auditory-vocal integration
in pitch regulation.
4pSC7. A study on the effect of intraglottal vortical structures on vocal
fold vibration. Mehrdad H Farahani and Zhaoyan Zhang (Head and Neck
Surgery, UCLA, 31-24 Rehab Ctr., UCLA School of Medicine, 1000 Veteran Ave., Los Angeles, CA 90095, mh.farahani@gmail.com)
Recent investigations have suggested the possible formation of vortical structures in the intraglottal region during the closing phase of the phonation cycle. Vortical regions in the flow field are locations of negative pressure, and it has been hypothesized that this negative pressure might facilitate glottal closure and thus affect the vibration pattern and voice production at high subglottal pressures. However, it is unclear whether the vortex-induced negative pressure is large enough, compared with vocal fold inertia
and elastic recoil, to have a noticeable effect on glottal closure. In addition,
the intraglottal vortical structures generally exist only for a small fraction of
the closing phase when the glottis becomes divergent enough to induce flow
separation. In the current work, oscillation of the vocal folds and the flow
field are modeled using a non-linear finite element solver and a reduced
order flow solver, respectively. The effect of vortical structures is modeled
as a sinusoidal negative pressure wave applied to vocal fold surface between
the flow separation point and the superior edge of the vocal folds. The
effects of this vortex-induced negative pressure are quantified at different
conditions of vocal fold stiffness and subglottal pressures. [Work supported
by NIH.]
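The vortex-induced load described in 4pSC7 (a sinusoidal negative pressure applied between the flow-separation point and the superior edge) can be sketched as a simple spatial profile. This is an illustrative assumption, not the authors' solver: the half-sine shape in space, the parameter names, and the amplitude are all hypothetical:

```python
import numpy as np

def vortex_pressure(x, amp, x_sep, x_sup):
    """Vortex-induced load sketched as a negative half-sine between the
    flow-separation point x_sep and the superior edge x_sup; zero elsewhere.
    (The half-sine spatial shape is an assumption for illustration.)"""
    x = np.asarray(x, float)
    xi = (x - x_sep) / (x_sup - x_sep)          # normalized position in [0, 1]
    p = -amp * np.sin(np.pi * np.clip(xi, 0.0, 1.0))
    p[(xi < 0) | (xi > 1)] = 0.0                # no load outside the region
    return p

x = np.linspace(0.0, 0.3, 7)                    # positions along the glottal surface
p = vortex_pressure(x, amp=100.0, x_sep=0.1, x_sup=0.3)
```

In a coupled simulation such a profile would be scaled by a sinusoidal time factor during the closing phase and applied as a surface traction in the finite element model.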
4pSC8. Effects of thyroarytenoid muscle activation on phonation in an
in vivo canine larynx model. Georg Luegmair, Dinesh Chhetri, and
Zhaoyan Zhang (Dept. of Head and Neck Surgery, Univ. of California Los
Angeles, 1000 Veteran Ave., Rehab 31-24, Los Angeles, CA 90095, gluegmair@ucla.edu)
Previous studies have shown that the thyroarytenoid (TA) muscle plays
an important role in the control of vocal fold adduction and stiffness. The
effects of TA stimulation on vocal fold vibration, however, are still unclear.
In this study, the effects of TA muscle activation on phonation were investigated in an in vivo canine larynx model. Laryngeal muscle activation was
achieved through parametric stimulation of the thyroarytenoid, the lateral
cricoarytenoid (LCA), and the cricothyroid (CT) muscles. For each stimulation level, the subglottal pressure was gradually increased to produce phonation. The subglottal pressure, the volume flow, and the outside acoustic
pressure were measured together with high-speed recording of vocal fold
vibration from a superior view. The results show that, without TA activation, phonation was limited to conditions of medium to high levels of LCA
and CT activations. TA activation allowed phonation to occur at a much
lower activation level of the LCA and CT muscles. Compared to conditions
of no TA activation, TA activation led to a decreased open quotient. Increasing TA activation also allowed phonation to occur over a much larger range of subglottal pressures while still maintaining a certain degree of glottal closure during vibration. [Work supported by NIH.]
4pSC9. Voice accumulation and voice disorders in primary school
teachers. Pasquale Bottalico (Dept. of Communicative Sci. and Disord.,
Michigan State Univ., 1026 Red Cedar Rd., East Lansing, MI 48824, pasqualebottalico@yahoo.it), Lorenzo Pavese, Arianna Astolfi (Dipartimento
di Energia, Politecnico di Torino, Torino, Italy), and Eric J. Hunter (Dept.
of Communicative Sci. and Disord., Michigan State Univ., East Lansing,
MI)
Statistics on professional voice users with vocal health issues demonstrate the significance of the problem. However, such disorders are not currently recognized as an occupational disease in Italy. Conducting studies
examining the vocal health of occupational voice users is an important step in identifying this as a public health issue. The current study was conducted in six primary schools in Italy with 25 teachers, one of the most affected occupational categories. A clinical examination was conducted (consisting of hearing and voice screening, a Voice Handicap Index (VHI), etc.). On this basis, teachers were divided into three groups: healthy subjects, subjects with logopaedic
disorders, and subjects with objectively measured pathological symptoms.
The distributions of voicing and silence periods for the teachers at work
were collected using the Ambulatory Phonation Monitor (APM3200), a device for long-term monitoring of vocal parameters. The APM senses the
vocal fold vibrations at the base of the neck by means of a small accelerometer. Correlations were calculated between the voice accumulation slope
(obtained by multiplying the number of occurrences for each period by the
corresponding duration) and the clinical status of the teachers. The differences in voice accumulation distributions among the three groups were
analyzed.
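The voice accumulation quantity in 4pSC9 (the number of occurrences for each voicing-period bin multiplied by the corresponding duration) can be sketched directly. The following is a minimal illustration with made-up durations and bin edges, not data or code from the study:

```python
import numpy as np

def voice_accumulation(voiced_durations, bin_edges):
    """Accumulated phonation time per duration bin: the count of voiced
    segments in each bin multiplied by the bin's representative duration."""
    d = np.asarray(voiced_durations, float)
    counts, edges = np.histogram(d, bins=bin_edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts * centers            # occurrences x duration

# Illustrative voiced-segment durations (s) from a monitoring session
durations = [1.0, 1.0, 1.0, 2.0, 2.0, 3.0]
centers, accum = voice_accumulation(durations, [0.5, 1.5, 2.5, 3.5])
```

The slope of the resulting accumulation distribution is the quantity correlated with clinical status in the study.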
4pSC10. Room acoustics and vocal comfort in untrained vocalists. Eric
J. Hunter, Pasquale Bottalico, Simone Graetzer, and Russell Banks (Dept. of
Communicative Sci. and Disord., Michigan State Univ., 1026 Red Cedar
Rd., East Lansing, MI 48824, ejhunter@msu.edu)
Talkers' speech accommodation strategies in noise have long been studied, as have vocal effort and comfort in noisy situations. In this study, untrained vocalists were exposed to a range of room
acoustic conditions. In each environment, the subject performed a vocal
task, with a goal of being “heard” by a listener 5 m away. After each task,
the subject completed a series of questions addressing vocal effort and comfort. Additionally, using a head and torso simulator (HATS), the environment was assessed using a sine sweep presented at the HATS mouth and
recorded at the ears. It was found that vocal clarity (C50) and the initial reflection were related to vocal comfort. The results are not only relevant to room
design but also to understanding talkers’ acuity to acoustic conditions and
their adjustments to them.
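The clarity index C50 used in 4pSC10 is the standard early-to-late energy ratio of a room impulse response, in dB, with a 50-ms split. A minimal sketch (the onset detection and toy impulse response are simplifications for illustration):

```python
import numpy as np

def clarity_c50(ir, fs):
    """C50: ratio (dB) of impulse-response energy in the first 50 ms after the
    direct sound to the energy arriving later."""
    ir = np.asarray(ir, float)
    onset = np.argmax(np.abs(ir))               # crude direct-sound detection
    split = onset + int(0.05 * fs)
    early = np.sum(ir[onset:split] ** 2)
    late = np.sum(ir[split:] ** 2)
    return 10.0 * np.log10(early / late)

# Toy impulse response: direct sound plus one reflection 100 ms later at half amplitude
fs = 1000
ir = np.zeros(300)
ir[0] = 1.0
ir[100] = 0.5
c50 = clarity_c50(ir, fs)
```

Here the late reflection carries a quarter of the direct-sound energy, giving C50 of about 6 dB; measured sine-sweep responses would be deconvolved to an impulse response before applying the same ratio.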
4pSC11. Flow vibrato in singers. Srihimaja Nandamudi and Ronald C. Scherer (Commun. Sci. and Disord., Bowling Green State Univ., 200 Health and Human Services Bldg., Bowling Green, OH 43403, nandas@bgsu.edu)
Frequency (F0) vibrato is well known; less familiar is flow vibrato,
the mean flow variation that accompanies frequency vibrato. Two classically trained singers, each with over 20 years professional experience, a soprano and a tenor, recorded /pa:pa:pa:/ sequences on three pitches (C4, A4,
and G5 for the soprano, D3, D4, and G4 for the tenor) and three loudness
levels (p, mf, and f) at each pitch. Each vowel had 3–6 frequency vibrato
cycles. For both singers, flow vibrato (obtained using the Glottal Enterprises
aerodynamic system) was present, and the lowest pitch had the most variability; otherwise, flow vibrato was fairly sinusoidal in shape. For the soprano, flow vibrato cycle extents were: 21–88 cc/s, lowest pitch; 60–147 cc/
s, middle pitch; 115–214 cc/s, highest pitch, across loudness levels. For the
soprano, the phase difference for the flow was 120–180 degrees ahead of the
F0 vibrato. For the tenor, the flow vibrato cycle extents were: 32–85 cc/s,
lowest pitch; 98–113 cc/s, middle pitch; 76–240 cc/s, highest pitch, across
loudness levels. Flow vibrato for the tenor led the F0 vibrato typically by
40–120 degrees. For both subjects, some flow vibrato cycles had double
peaks. Flow vibrato needs further study to determine its origin, shapes, and
magnitudes.
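The phase lead of flow vibrato relative to F0 vibrato reported in 4pSC11 can be estimated by complex demodulation at the vibrato rate. This is an illustrative sketch on synthetic traces, not the study's aerodynamic analysis; the vibrato rate, amplitudes, and the 120-degree lead below are assumed values:

```python
import numpy as np

def phase_lead_deg(sig_a, sig_b, fs, f_mod):
    """Phase (degrees) by which the modulation of sig_a leads that of sig_b,
    estimated by complex demodulation at the modulation frequency f_mod."""
    t = np.arange(len(sig_a)) / fs
    ref = np.exp(-2j * np.pi * f_mod * t)
    ph_a = np.angle(np.sum(sig_a * ref))
    ph_b = np.angle(np.sum(sig_b * ref))
    return np.degrees((ph_a - ph_b) % (2 * np.pi))

# Synthetic vibrato traces: 5-Hz modulation, flow leading F0 by 120 degrees
fs, f_mod = 1000, 5.0
t = np.arange(0, 1.0, 1.0 / fs)
flow = 150.0 + 80.0 * np.cos(2 * np.pi * f_mod * t + np.radians(120.0))
f0 = 440.0 + 3.0 * np.cos(2 * np.pi * f_mod * t)
lead = phase_lead_deg(flow, f0, fs, f_mod)
```

With real recordings the traces would first be demeaned and windowed to an integer number of vibrato cycles so the demodulation isolates the modulation component cleanly.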
4pSC12. Impact of vocal tract resonance on the perception of voice
quality changes caused by vocal fold stiffness. Rosario Signorello,
Zhaoyan Zhang, Bruce Gerratt, and Jody Kreiman (Head and Neck Surgery,
Univ. of California Los Angeles David Geffen School of Medicine, 31-24
Rehab Ctr., UCLA School of Medicine, 1000 Veteran Ave., Los Angeles,
CA 90095, rsignorello@ucla.edu)
Experiments using animal and human larynx models are often conducted
without a vocal tract. While it is reasonable to assume the absence of a vocal
tract has only small effects on vocal fold vibration, it is unclear how sound
production and its perception will be affected. In this study, the validity of
using data obtained in the absence of a vocal tract for voice perception studies was investigated. Using a two-layer self-oscillating physical model, three
series of voice stimuli were created: one produced with conditions of left-right symmetric vocal fold stiffness, and two with left-right asymmetries in
vocal fold body stiffness. Each series included a set of stimuli created with a
physical vocal tract, and a second set created without a physical vocal tract.
Stimuli were re-synthesized to equalize the mean F0 for each series and normalized for amplitude. Listeners were asked to evaluate the three series in a
sort-and-rate task. Multidimensional scaling analysis will be applied to
examine the perceptual interaction between the voice source and the vocal
tract resonances. [Work supported by NIH.]
4pSC13. Perceptual differences among models of the voice source: Further evidence. Marc Garellek (Linguist, UCSD, La Jolla, CA), Gang Chen
(Elec. Eng., UCLA, Los Angeles, CA), Bruce R. Gerratt (Head and Neck
Surgery, UCLA, 31-24 Rehab Ctr., 1000 Veteran Ave., Los Angeles, CA
90403), Abeer Alwan (Elec. Eng., UCLA, Los Angeles, CA), and Jody
Kreiman (Head and Neck Surgery, UCLA, Los Angeles, CA, jkreiman@
ucla.edu)
Models of the voice source differ in how they fit natural voices, but it is
still unclear which differences in fit are perceptually salient. This study
describes ongoing analyses of differences in the fit of six voice source models to 40 natural voices, and how these differences relate to perceptual similarities among stimuli. Listeners completed a visual sort-and-rate task to
compare versions of each voice created with the different source models,
and the results were analyzed using multi-dimensional scaling (MDS). Perceptual spaces were interpreted in terms of variations in model fit in both
the time and spectral domains. The discussion will focus on the perceptual
importance of matches to both time-domain and spectral features of the
voice. [Work supported by NIH/NIDCD grant DC01797 and NSF grant IIS-1018863.]
4pSC14. The biological function of fundamental frequency in leaders’
charismatic voices. Rosario Signorello (Head and Neck Surgery, Univ. of
California Los Angeles David Geffen School of Medicine, 31-24 Rehab
Ctr., UCLA School of Medicine, 1000 Veteran Ave., Los Angeles, CA
90095, rsignorello@ucla.edu)
Charismatic leaders use voice based on two functions: a primary biological function and a secondary language and culture-based function (Signorello, 2014). In order to study the primary function in more depth, we
conducted acoustic and perceptual studies on the use of F0 by French, Italian and Brazilian charismatic political leaders. Results show that leaders
manipulate F0 in significantly different manners relative to: (1) the context
of communication (persuasive goal, the place where communication occurs
and the type of audience) in order to be recognized as the leader of the
group; and (2) the passage of time (from the beginning to the end of the speech) in order to create a climax with the audience. Results of a perceptual test show that a leader's use of a low-F0 voice results in the perception of the leader as dominant or threatening, whereas the use of a higher F0 conveys sincere, calm, and reassuring leadership. These results show cross-language and cross-cultural similarities in leaders' vocal behavior and
listeners’ perception, and robustly demonstrate the two different functions
of leaders’ voices.
4pSC15. Voice quality variation and gender. Kara Becker, Sameer ud
Dowla Khan, and Lal Zimman (Linguist, Reed College, 3203
SE Woodstock Boulevard, Portland, OR 97202, kbecker@reed.edu)
Recent work on American English has established that speakers increasingly use creaky phonation to convey pragmatic information, with young
urban women assumed to be the most active users of this phonetic feature.
However, no large-scale acoustic or articulatory study has established the
actual range and diversity of voice quality variation along gender identities,
encompassing different sexual orientations, regional backgrounds, and socioeconomic statuses. The current study does exactly that, through four methods: (1) subjects identifying with a range of gender and other demographic
identities were audio recorded while reading wordlists as well as a scripted
narrative assuming characters' voices designed to elicit variation in voice quality. Simultaneously, (2) electroglottographic readings were taken and
analyzed to determine the glottal characteristics of this voice quality variation. (3) Subjects were then asked to rate recordings of other people’s voices
to identify the personal characteristics associated with the acoustic reflexes
of phonation; in the final task, (4) subjects were explicitly asked about their
language ideologies as they relate to gender. Thus, the current study
explores the relation between gender identity and phonetic features, measured acoustically, articulatorily, and perceptually. This work is currently
underway and preliminary results are being compiled at this time.
4pSC16. Towards standard scales for dysphonic voice quality: Magnitude estimation of reference stimuli. David A. Eddins (Commun. Sci. &
Disord., Univ. of South Florida, 4202 E. Fowler Ave., PCD 1017, Tampa,
FL 33620, deddins@usf.edu) and Rahul Shrivastav (Communicative Sci. &
Disord., Michigan State Univ., East Lansing, MI)
This work represents a critical step in developing standard measurement
scales for the dysphonic voice qualities of breathiness and roughness. Methods such as Likert ratings, visual analog scales and magnitude estimation
result in arbitrary units, limiting their clinical usefulness. A single-variable
matching task can quantify voice quality in terms of physical units but is too time-consuming for clinical use. None of these methods results in information that has a direct or intuitive relationship with the underlying percept. A proven approach for the perception of loudness is the sone scale, which ties physical units to perceptual estimates of loudness magnitude. As a
first step in developing such a scale for breathiness and roughness, here we
establish the relationship between the change in perceived VQ magnitude
and the change in physical units along the continuum of each VQ dimension. A group of 25 listeners engaged in a magnitude estimation task to
determine perceived magnitude associated with the comparison stimuli used
in our single-variable matching tasks. This relationship is analogous to mapping intensity in dB to loudness level in phons and is a critical step in developing a sone-like scale for breathiness and roughness.
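The magnitude-estimation analysis in 4pSC16 follows the Stevens tradition, in which perceived magnitude grows as a power function of the physical variable and the exponent is recovered by a linear fit in log-log coordinates. A minimal sketch with made-up numbers (the data and exponent below are not from the study):

```python
import numpy as np

def fit_power_law(physical, magnitude):
    """Fit magnitude = k * physical**a by linear regression in log-log space,
    as in Stevens-style magnitude-estimation analyses."""
    a, log_k = np.polyfit(np.log10(physical), np.log10(magnitude), 1)
    return 10.0 ** log_k, a

# Illustrative magnitude estimates following a power law with exponent 0.6
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # physical units along a VQ continuum
m = 3.0 * x ** 0.6                              # geometric-mean listener estimates
k, a = fit_power_law(x, m)
```

Anchoring such a fitted function to a reference stimulus is what turns arbitrary magnitude estimates into a sone-like scale.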
4pSC17. Divergent or convergent glottal angles: Which give greater flow? Ronald Scherer (Commun. Sci. and Disord., Bowling Green State Univ., 200 Health Ctr., Bowling Green, OH 43403, ronalds@bgsu.edu)

During phonation, the glottis alternates between convergent and divergent angles. For the same angle value, diameter, and transglottal pressure, which angle, divergent or convergent, results in greater flow? The symmetric glottal angles of the physical static model M5 were used. Characteristics (life-size) of the model were: axial glottal length 0.30 cm; angles of 5, 10, 20, and 40 degrees; diameters of 0.005, 0.01, 0.02, 0.04, 0.08, 0.16, and 0.32 cm; transglottal pressures from 1 to 25 cm H2O; resulting in flows from 2.7 to 1536 cc/s and Reynolds numbers from 29.4 to 13,058. Results: (1) For diameters of 0.04, 0.08, and 0.16 cm, the divergent angle always gave more flow than the convergent angle (about 5–25%); (2) for the smallest (0.005 cm) and largest (0.32 cm) diameters, the divergent angles always gave less flow (10–30%); (3) for diameters of 0.01 and 0.02 cm, flow was greater for divergent 5 and 10 degrees, and less for divergent 20 and 40 degrees. These results suggest that the divergent glottal angle will increase the glottal flow for midrange glottal diameters (skewing the glottal flow further "to the right"?), and create less flow at very small diameters (increasing energy in the higher harmonics?).

4pSC18. Methodological issues when estimating subglottal pressure from oral pressure. Brittany Frazer (Commun. Sci. and Disord., Bowling Green State Univ., 200 Marie Pl., Perrysburg, OH 43551, bfrazer@bgsu.edu) and Ronald C. Scherer (Commun. Sci. and Disord., Bowling Green State Univ., Bowling Green, OH)

A noninvasive method to estimate subglottal pressure for vowel productions is to smoothly utter a CVCV string such as /p:i:p:i:p:i:…/ using a short tube in the mouth with the tube attached to a pressure transducer. The pressure during the lip occlusion estimates the subglottal pressure during the adjacent vowel. What should the oral pressure look like for it to provide accurate estimates? The study compared results using various conditions against a standard condition that required participants to produce /p:i:p:i:…/ syllables smoothly and evenly at approximately 1.5 syllables per second. The non-standard tasks were: performing the task without training, increasing syllable rate, using a voiced /b/ instead of a voiceless /p/ initial syllable, adding a lip or velar leak, or using a two-syllable production ("peeper") instead of a single-syllable production. Lip leak, velar leak, and lack of time to equilibrate air pressure throughout the airway caused estimates of subglottal pressure to be inaccurate. Accuracy was better when estimates of subglottal pressure were obtained using the voiced initial consonant and the two-syllable word. Training improved the consistency of the oral pressure profiles and thus the assurance in estimating the subglottal pressure. Oral pressures with flat plateaus appear most accurate.
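The plateau-based estimate described in 4pSC18 (reading subglottal pressure from the flat portion of the oral-pressure trace during each /p/ occlusion) can be sketched as follows. This is a minimal illustration, not the study's procedure; the threshold rule, the "central half" plateau definition, and the toy trace are assumptions:

```python
import numpy as np

def subglottal_from_oral(p_oral, threshold):
    """Estimate subglottal pressure as the mean of the central half of each
    oral-pressure plateau (samples above `threshold`) during /p/ occlusions.
    Assumes the trace starts and ends below the threshold."""
    p = np.asarray(p_oral, float)
    above = (p > threshold).astype(int)
    starts = np.flatnonzero(np.diff(above) == 1) + 1
    ends = np.flatnonzero(np.diff(above) == -1) + 1
    estimates = []
    for s, e in zip(starts, ends):
        quarter = (e - s) // 4
        plateau = p[s + quarter: e - quarter]   # central half of the occlusion
        if plateau.size:
            estimates.append(float(plateau.mean()))
    return estimates

# Toy trace: two occlusions with plateaus near 8 and 9 cm H2O on a zero baseline
trace = np.concatenate([np.zeros(200), np.full(100, 8.0),
                        np.zeros(200), np.full(100, 9.0), np.zeros(100)])
ps = subglottal_from_oral(trace, threshold=4.0)
```

Averaging only the central portion of each plateau is one way to avoid the rise and fall transients that the abstract identifies as sources of inaccuracy.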
THURSDAY AFTERNOON, 30 OCTOBER 2014
INDIANA F, 1:00 P.M. TO 4:45 P.M.
Session 4pUW
Underwater Acoustics: Shallow Water Reverberation III
Kevin L. Williams, Chair
Applied Physics Lab., University of Washington, 1013 NE 40th St., Seattle, WA 98105
Contributed Paper
1:00

4pUW1. Seafloor sound-speed profile and interface dip angle measurement by the image source method: Application to real data. Samuel Pinson (Laboratório de Vibrações e Acústica, Universidade Federal de Santa Catarina, LVA Dept. de Engenharia Mecânica, UFSC, Bairro Trindade, Florianópolis, SC 88040-900, Brazil, samuelpinson@yahoo.fr) and Charles W. Holland (Penn State Univ., State College, PA)

The image source method characterizes the sediment sound-speed profile from seafloor reflection data at a low computational cost compared with inversion techniques. Recently, the method has been extended to treat non-parallel sediment layering. The method is applied to data from an autonomous underwater vehicle (AUV) towing a source (1600–3500 Hz) and a horizontal array of hydrophones. AUV reflection measurements were acquired every 3 m along 10 criss-cross lines over a 1 km² area with evidently dipping layers. Mapping the along-track sound-speed profiles in geographical coordinates results in a pseudo-3D (N×2D) sediment structure characterization of the area down to several tens of meters in the sub-bottom. The sound-speed profile agreement at crossing points is quite good.
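The core idea behind the image source method in 4pUW1 is that each sub-bottom interface produces a reflection that appears to come from a mirrored image of the source. A minimal constant-speed, flat-layer travel-time sketch (a strong simplification of the method, which handles depth-dependent speeds and dipping layers; the geometry and values below are assumed):

```python
import numpy as np

def image_arrival_times(r, z_src, z_rcv, layer_depths, c):
    """Arrival time of the reflection from each sub-bottom interface, modeled
    as a straight ray from a mirrored image source (constant sound speed c,
    flat layering)."""
    d = np.asarray(layer_depths, float)         # interface depths below the seafloor
    return np.sqrt(r ** 2 + (z_src + z_rcv + 2.0 * d) ** 2) / c

# Source and receiver 10 m above the seafloor, 50 m apart; seafloor plus one interface
times = image_arrival_times(r=50.0, z_src=10.0, z_rcv=10.0,
                            layer_depths=[0.0, 5.0], c=1500.0)
```

Inverting such arrival times (together with arrival angles) for the image positions is what yields the layer depths and interval sound speeds in the full method.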
Invited Papers
1:15
4pUW2. Requirements, technology, and science drivers of applied reverberation modeling. Anthony I. Eller and Kevin D. Heaney
(OASIS, Inc., 11006 Clara Barton Dr., Fairfax Station, VA 22039, ellera@oasislex.com)
The historical development of reverberation modeling is a story driven by both supporting and sometimes conflicting features of
application requirements, measurement and computing capability, and scientific understanding. This paper presents an overview of how
underwater reverberation modeling technology has responded to application needs and how this process has helped the community to
identify and resolve related science issues. Emphasis is on the areas of System Design and Acquisition Support, Deployment and Operational Support, and Training Support. Gaps in our scientific knowledge are identified and recent advances are described that help push
forward our collective understanding of how to predict and mitigate reverberation.
1:35
4pUW3. Reverberation models as an aid to interpret data and extract environmental information. Dale D. Ellis (Phys., Mount
Allison Univ., 18 Hugh Allen Dr., Dartmouth, NS B2W 2K8, Canada, daledellis@gmail.com) and John R. Preston (Appl. Res. Lab.,
The Penn State Univ., State College, PA)
Reverberation measurements obtained with towed arrays are a valuable tool to extract information about the ocean environment.
Preston pioneered the use of polar plots to display reverberation and superimpose the beam time series on bathymetry maps. As part of
Rapid Environmental Assessment (REA) exercises, Ellis and Preston [J. Marine Syst. 78, S359–S371, S372–S381] have used directional
reverberation measurements to detect uncharted bottom features, and to extract environmental information using model-data comparisons. One enthusiast declared “This is like doing 100 simultaneous transmission loss runs and having the results available immediately.”
Though that was clearly an exaggeration and the results are not precise, the approach provides valuable information to direct more accurate and detailed surveys. The early work used range-independent (flat bottom) models for the model-data comparisons, while current
work includes a range-dependent model based on adiabatic normal modes. A model has been developed that calculates reverberation from range-dependent bottom bathymetry, together with echoes from targets and discrete clutter objects, and then outputs beam time series directly comparable with measured ones. Recent work has identified interesting effects of sea-bottom sand dunes in the TREX experiments. This paper will provide an overview of the earlier work and examples from the recent TREX experiment.
1:55
4pUW4. Reverberation data/model comparisons using transport theory. Eric I. Thorsos, Jie Yang, Frank S. Henyey, and W. T.
Elam (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, eit@apl.washington.edu)
Transport theory has been developed for modeling shallow water propagation and reverberation at mid frequencies (1–10 kHz)
where forward scattering from a rough sea surface is taken into account in a computationally efficient manner. The method is based on a
decomposition of the field in terms of unperturbed modes, and forward scattering at the sea surface leads to mode coupling that is treated
with perturbation theory. Reverberation measurements made during TREX13 combined with extensive environmental measurements
provide an important test of transport theory predictions. Modeling indicates that the measured reverberation was dominated by bottom
reverberation, and the reverberation level in the 2–4 kHz band was observed to decrease as the sea surface conditions increased from a
low sea state to a higher sea state. This suggests that surface forward scattering was responsible for the change in reverberation level.
Results of data/model comparisons examining this effect will be shown. [Work supported by ONR Ocean Acoustics.]
Contributed Papers
2:15
4pUW5. Physics of backscattering in terms of mode coupling applied to measured seabed roughness spectra in shallow water. David P. Knobles (ARL, UT at Austin, PO Box 8029, Austin, TX 78713-8029, dpknobles@yahoo.com)
Energy-conserving coupled integral equations for the forward- and backward-propagating modal components have been previously developed [J. Acoust. Soc. Am. 130, 2673–2680 (2011)]. A rough seabed surface leads to a backscattered field and modifies the interference structure of the forward-propagating field. Perturbation theory applied to the basic coupled integral equations allows for physical insight into the correlation of the roughness spectrum to the forward and backward modal intensities and cross-mode coherence. This study applies the Nx2D integral-equation coupled-mode approach to 3-D roughness measurements, examines the physics of the coupling of the forward and backward field components, and computes the modal intensities as a function of azimuth. The roughness measurements were made in about 20 m of water off Panama City, Florida. [Work supported by ONR Code 322 OA.]
2:30
4pUW6. Energy conservation via coupled modes in waveguides with an impedance boundary condition. Steven A. Stotts and Robert A. Koch (Environ. Sci. Lab., Appl. Res. Labs./The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78758, stotts@arlut.utexas.edu)
A statement of energy conservation for a coupled-mode formulation with real mode functions and eigenvalues has been demonstrated to be consistent with the statement of conservation derivable from the Helmholtz equation. The restriction to real mode functions and eigenvalues precludes coupled-mode descriptions with waveguide absorption or untrapped modes. The demonstration, along with the derivation of the coupled-mode range equation, relies on orthonormality in terms of a product of two modal depth functions integrated to infinite depth. This paper shows that energy conservation and the derivation of the coupled-mode range equation can be extended to complex mode functions and eigenvalues, and that energy is conserved for ocean waveguides with a penetrable bottom boundary at a finite depth beneath any range dependence. For this, the penetrable bottom boundary is specified by an impedance condition for the mode functions. The new derivations rely on completeness and a modified orthonormality statement. Mode coupling is driven solely by waveguide range dependence. Thus, the form of the range equation and the values of the coupling coefficients are unaffected by a finite-depth waveguide. Applications of energy conservation to examine the accuracy of a numerical coupled-mode calculation are presented.
2:45
4pUW7. Effect of channel impulse response on matched filter performance in the 2013 Target and Reverberation Experiment. Mathieu E. Colin (Acoust. and Sonar, TNO, Postbus 96864, Den Haag 2509 JG, Netherlands, mathieu.colin@tno.nl), Michael A. Ainslie (Acoust. and Sonar, TNO, The Hague, Netherlands), Peter H. Dahl, David R. Dall’Osto (Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Sean Pecknold (Underwater Surveillance and Communications, Defence Res. and Development Canada, Dartmouth, NS, Canada), and Robbert van Vossen (Acoust. and Sonar, TNO, The Hague, Netherlands)
Active sonar performance is determined by the characteristics of the target, the sonar system, and the effect of the environment on the received waveform. The two main influences of the environment are propagation effects and the contamination of the target echo with a background. The ambient noise and reverberation are mitigated by means of signal processing, mostly through beamforming and matched filtering. The improvement can be quantified by the signal-to-noise ratios before and after processing. Propagation effects can have a large influence on the gains obtained by the processing. To study the effect of the channel on the matched filter performance, broadband channel impulse responses were modeled and compared to measurements acquired during the Office of Naval Research-funded 2013 Target and Reverberation Experiment (TREX). In shallow water, a large time spread is often observed, reducing the effectiveness of the matched filter. TREX data show, however, a limited time spread. Model predictions indicate that this could be caused by a rough sea surface, which, while increasing propagation loss, at the same time increases matched filter gain.
3:00–3:15 Break
3:15
4pUW8. Using physical oceanography to improve transmission loss calculations in undersampled environments. Cristina Tollefsen and Sean Pecknold (Defence Res. and Development Canada, P. O. Box 1012, Dartmouth, NS B2Y 3Z7, Canada, cristina.tollefsen@gmail.com)
The vertical sound speed profile (SSP) is a critical input to any acoustic propagation model. However, even when measured SSPs are available, they are frequently noisy “snapshots” of the SSP at a single moment in time and space and do not fully capture changes such as solar heating and wind-driven mixing that can significantly affect shallow-water propagation on time scales of less than a day. Furthermore, SSPs measured in the field may not extend to the ocean bottom and are often based on measured profiles of temperature with an implicit assumption of constant salinity. In April–May 2013, the Target and Reverberation Experiment (TREX) was conducted in the northeastern Gulf of Mexico near Panama City, Florida, a region strongly affected by local wind forcing, freshwater inputs, and the presence of a warm-core Gulf of Mexico Loop Current eddy (“Eddy Kraken”) offshore of the experimental site. “Synthetic” SSPs were constructed for the trial area by combining knowledge of the physical oceanography and water masses in the area with the measured SSPs that were available. Transmission loss was modelled using both synthetic and measured SSPs, and the results will be compared with measured transmission loss.
3:30
4pUW9. Analytic formulation for broadband rough surface and volumetric scattering including matched-filter range resolution. Wei Huang, Delin Wang, and Purnima Ratilal (Elec. and Comput. Eng., Northeastern Univ., 006 Hayden Hall, 370 Huntington Ave., Boston, MA 02115, huang.wei1@husky.neu.edu)
An analytic formulation is derived for the broadband scattered field from a randomly rough surface based on Green’s theorem employing perturbation theory. The matched filter is applied to resolve the scattered field within the range resolution footprint of a broadband imaging system. Statistical moments of the scattered field are then expressed in terms of the second-moment characterization of the scattering surface. The broadband diffuse reverberation depends on the rough surface spectrum evaluated over a range of wavenumbers, centered at the Bragg wavenumber corresponding to the center frequency of the broadband pulse and extending to wavenumbers proportional to the signal bandwidth. A corresponding analytic broadband volume scattering model is derived from the Rayleigh–Born approximation to Green’s theorem.
3:45
4pUW10. Objective identification of the dominant seabed scattering mechanism. Gavin Steininger (SEOS, U Vic, 201 1026 Johnson St., Victoria, BC V7V 3N7, Canada, gavin.amw.steininger@gmail.com), Charles W. Holland (SEOS, U Vic, State College, Pennsylvania), Stan E. Dosso, and Jan Dettmer (SEOS, U Vic, Victoria, BC, Canada)
This paper develops and applies a quantitative inversion procedure for scattering-strength data to determine the dominant scattering mechanism (surface and/or volume scattering) and to estimate the relevant scattering parameters and their uncertainties. The classification system is based on trans-dimensional Bayesian inversion, with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases, where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data from the Malta Plateau, where volume scattering is determined as the dominant scattering mechanism. This conclusion and the scattering/geoacoustic parameters estimated in the inversion are consistent with properties from previous inversions and/or with core measurements from the site. In particular, the scattering parameters are converted from the continuous scattering models used in the inversion to the equivalent discrete scattering parameters, which are found to be consistent with properties of the cores. [Work supported by ONR.]
4:00
4pUW11. Laboratory measurements of backscattering strengths from two types of artificially roughened sandy bottoms. Su-Uk Son (Dept. of Marine Sci. and Convergent Technol., Hanyang Univ., 55 Hanyangdaehak-ro, Sangnok-gu, Ansan, Gyeonggi-do 426-791, South Korea, suuk2@hanyang.ac.kr), Sungho Cho (Maritime Security Res. Ctr., Korea Inst. of Ocean Sci. & Technol., Ansan, Gyeonggi-do, South Korea), and Jee Woong Choi (Dept. of Marine Sci. and Convergent Technol., Hanyang Univ., Ansan, Gyeonggi-do, South Korea)
In the case of a sandy bottom, backscattering from the interface roughness is significantly dominant compared to that from the volume inhomogeneities, and the power spectrum of the interface roughness thus becomes the most important factor controlling the scattering mechanism. Backscattering strength measurements with a 50-kHz signal were made for two types of roughness (smooth and rough interfaces) that were artificially formed on a 0.5-m-thick sandy bottom in a 5-m-deep water tank. The roughness profiles were estimated by arrival-time analysis of 5-MHz backscattering signals emitted by a transducer moving parallel to the interface at a speed of 1 cm/s, and were then Fourier transformed to yield power spectra. In this talk, measurements of backscattering strength as a function of grazing angle in the range of 35° to 90° are presented. Finally, the effect of the different roughness types on the scattering strength will be discussed in comparison with predictions obtained by a theoretical scattering model including the perturbation and Kirchhoff approximations. [This research was supported by the Agency for Defense Development, Korea.]
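[Editor’s illustration for 4pUW9: the Bragg-wavenumber band that the abstract describes can be sketched numerically. The sound speed, center frequency, and bandwidth below are assumed values for illustration only, not taken from the abstract.]

```python
import math

def bragg_wavenumber(f_hz, c=1500.0):
    # Backscatter Bragg wavenumber K = 2k = 4*pi*f/c
    # (grazing-angle factor omitted for simplicity)
    return 4.0 * math.pi * f_hz / c

fc, bw = 3000.0, 1000.0  # assumed center frequency and bandwidth (Hz)
K_center = bragg_wavenumber(fc)
K_band = (bragg_wavenumber(fc - bw / 2), bragg_wavenumber(fc + bw / 2))
print(f"K_center = {K_center:.2f} rad/m, "
      f"band = {K_band[0]:.2f}-{K_band[1]:.2f} rad/m")
```

The band edges scale linearly with the signal bandwidth, which is the wavenumber spread the abstract refers to.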
4:15
4pUW12. Multistatic performance prediction for Doppler-sensitive
waveforms in a shallow-water environment. Cristina Tollefsen (Defence
Res. and Development Canada, P. O. Box 1012, Dartmouth, NS B2Y 3Z7,
Canada, cristina.tollefsen@gmail.com)
Navies worldwide are now operationally capable of exploiting multistatic sonar technology. One purported advantage of multistatics when detecting directional targets is the increased probability of receiving a strong reflection at one of the multistatic receivers. However, it
is not yet clear (or intuitive) how best to deploy multistatic-capable assets to
achieve particular mission objectives. The Performance Assessment for Tactical Systems (PATS) software was recently developed by Maritime Way
Scientific under contract to Defence Research and Development Canada as
a research tool to assist in exploring different approaches to multistatic performance modelling. Beginning with a user-defined environment and sensor
layout, PATS uses transmission loss and reverberation model results to calculate signal excess at each grid point in the model domain. Monte Carlo
simulations using many realizations of target tracks allow for the calculation
of the cumulative probability of detection as a means to assess performance.
Results will be presented comparing the shallow-water performance of
monostatic and multistatic sensors using frequency-modulated and Doppler-sensitive waveforms as well as omnidirectional and directional targets in a
variety of realistic military scenarios.
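[Editor’s illustration for 4pUW12: the cumulative probability of detection accumulated over a Monte Carlo target track can be sketched as below. The logistic mapping from signal excess to single-look detection probability and the track values are hypothetical stand-ins, not the PATS implementation.]

```python
import math

def p_detect(signal_excess_db, sigma_db=8.0):
    # Map signal excess (dB) to a single-look detection probability;
    # a logistic curve stands in for the usual log-normal assumption.
    return 1.0 / (1.0 + math.exp(-signal_excess_db / sigma_db))

def cumulative_pd(se_track_db):
    # P_cum = 1 - prod(1 - p_i) over successive looks along one track
    miss = 1.0
    for se in se_track_db:
        miss *= 1.0 - p_detect(se)
    return 1.0 - miss

track = [-6.0, -2.0, 3.0, 5.0, -1.0]  # assumed signal excess (dB) per look
print(f"cumulative Pd = {cumulative_pd(track):.3f}")
```

Averaging this quantity over many simulated track realizations gives the performance metric described in the abstract.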
4:30
4pUW13. Twinkling exponents for backscattering by spheres in the vicinity of Airy caustics associated with reflections by corrugated surfaces.
Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ.,
Pullman, WA 99164-2814, marston@wsu.edu)
High frequency sound reflected by corrugated surfaces produces caustic
networks relevant to sea surface reflection [Williams et al., J. Acoust. Soc.
Am. 96, 1687–1702 (1994)]. When a sphere is positioned sufficiently far
from the reflecting surface, it may be close to an Airy caustic which causes
a significant increase in the backscattering [Dzikowicz and Marston, J.
Acoust. Soc. Am. 116, 2751–2758 (2004)] for signals that bounce only once
off of the focusing surface. For simplicity, here, it is assumed that those signals may be distinguished from the earlier direct echo from the sphere and
the later (and sometimes stronger) doubly focused echo from the sphere
[Dzikowicz and Marston, J. Acoust. Soc. Am. 118, 2811–2819 (2005)]. In
1977, M. V. Berry noticed that the third and higher intensity moments of wavefields containing caustics can increase in proportion to k^ν, where k is the wavenumber and ν is a “twinkling exponent” determined by the dependencies of the intensity and focal volume on k. Assuming that the sphere is impenetrable and sufficiently large that its direct scattering depends only weakly on k, for the single-bounce backscattering by a sphere considered here (the easiest situation for applying Berry’s analysis) the predicted exponent for the third moment is ν = 1/3.
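[Editor’s note: one compact way to recover the quoted value ν = 1/3, under the standard fold-caustic (Airy) scalings and treating the single-bounce focal region as effectively one-dimensional; a sketch, not Berry’s full argument. The peak intensity at a fold caustic grows as k^(1/3) while the transverse focal width shrinks as k^(-2/3), so]

```latex
I_{\max} \propto k^{1/3}, \qquad \Delta x \propto k^{-2/3}
\;\Longrightarrow\;
\langle I^{3} \rangle \;\sim\; I_{\max}^{3}\,\Delta x
\;\propto\; k \cdot k^{-2/3} \;=\; k^{1/3},
\qquad \nu_{3} = \tfrac{1}{3}.
```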
THURSDAY EVENING, 30 OCTOBER 2014
7:30 P.M. TO 9:00 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Tuesday, Wednesday, and Thursday
evenings. On Tuesday the meetings will begin at 8:00 p.m., except for Engineering Acoustics which will hold its meeting starting at
4:30 p.m. On Wednesday evening, the Technical Committee on Biomedical Acoustics will meet starting at 7:30 p.m. On Thursday evening, the meetings will begin at 7:30 p.m.
These are working, collegial meetings. Much of the work of the Society is accomplished by actions that originate and are taken in
these meetings including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially
invited to attend these meetings and to participate actively in the discussion.
Committees meeting on Thursday are as follows:
Animal Bioacoustics: Lincoln
Musical Acoustics: Santa Fe
Noise: Marriott 3/4
Psychological and Physiological Acoustics: Marriott 1/2
Signal Processing in Acoustics: Indiana G
Underwater Acoustics: Indiana F
FRIDAY MORNING, 31 OCTOBER 2014
INDIANA A/B, 8:00 A.M. TO 12:30 P.M.
Session 5aBA
Biomedical Acoustics: Cavitation Control and Detection Techniques
Kevin J. Haworth, Cochair
Univ. of Cincinnati, 231 Albert Sabin Way, CVC3940, Cincinnati, OH 45209
Oliver D. Kripfgans, Cochair
Dept. of Radiology, Univ. of Michigan, Ann Arbor, MI 48109-5667
Invited Papers
8:00
5aBA1. Detection and control of cavitation during blood–brain barrier opening: Applications and clinical considerations.
Meaghan A. O’Reilly, Ryan M. Jones, Alison Burgess, Cassandra Tyson, and Kullervo Hynynen (Physical Sci., Sunnybrook Res. Inst.,
2075 Bayview Ave., Rm. C713, Toronto, ON M4N3M5, Canada, moreilly@sri.utoronto.ca)
Microbubble-mediated opening of the blood–brain barrier (BBB) using ultrasound is a targeted technique that provides a transient
time window during which circulating therapeutics that are normally restricted to the vasculature can pass into the brain. This effect has
been associated with increases in cavitation activity of the circulating microbubbles, and our group has previously described a method to
actively control treatments in pre-clinical rodent models based on acoustic emissions recorded by a single transducer. Recently, we have
developed a clinical-scale receiver array capable of detecting bubble activity through ex vivo human skullcaps starting at pressure levels
below the threshold for BBB opening. The use of this array to spatially map cavitation activity in the brain during ultrasound therapy
will be discussed, including considerations for compensating for the distorting effects of the skull bone. Additionally, results from preclinical investigations examining safety and therapeutic potential will be presented, and receiver design considerations for both pre-clinical and clinical scale systems will be discussed.
8:20
5aBA2. Passive acoustic mapping of stable and inertial cavitation during ultrasound therapy. Christian Coviello (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Oxford, United Kingdom), James Choi (Dept. of BioEng., Imperial College, London,
United Kingdom), Jamie Collin, Robert Carlisle (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Oxford, United Kingdom), Miklós Gyöngy (Faculty of Information Technol. and Bionics, Pázmány Péter Catholic Univ., Budapest, Hungary), and Constantin
C. Coussios (Inst. of Biomedical Eng., Dept. of Eng. Sci., Univ. of Oxford, Inst. of Biomedical Eng., Old Rd. Campus Res. Bldg.,
Oxford, Oxfordshire OX3 7DQ, United Kingdom, constantin.coussios@eng.ox.ac.uk)
Accurate spatio-temporal characterization, quantification, and control of the type and extent of cavitation activity is crucial for a
wide range of therapeutic ultrasound applications, ranging from ablation to sonothrombolysis, opening of the blood-brain barrier and
drug delivery for cancer. Passive Acoustic Mapping (PAM) is a technique that utilizes arrays of acoustic detectors, typically coaxially
aligned or coincident with the therapeutic elements, to receive acoustic emissions outside the main frequency band of the therapy pulse.
The signals received by each detector are then filtered in the frequency domain into harmonics and ultra/subharmonics of the fundamental therapeutic frequency and other broadband components, and subsequently beamformed using a multi-correlation algorithm, which
uses measures of similarity between the signals rather than time-of-flight information in order to map sources of non-linear emissions in
real time. 2D and 3D cavitation maps obtained using time exposure acoustics beamforming will be presented and juxtaposed with the greater spatial resolution but increased computational complexity afforded by more advanced algorithms such as the Robust Capon Beamformer (RCB). The spatial correlation between cavitation maps produced using PAM and the associated therapeutic effect will also be discussed in the context of cavitation-enhanced ablation and drug delivery.
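[Editor’s illustration for 5aBA2: the frequency-domain separation step in PAM, in which received signals are filtered into harmonics of the therapeutic frequency, can be sketched with a toy band-selection filter. The sampling rate, drive frequency, and band widths here are assumptions for the sketch, not values from the paper.]

```python
import numpy as np

def harmonic_band_mask(n, fs, f0, n_harm, half_width):
    # Boolean rFFT mask selecting narrow bands around the harmonics
    # 2*f0 .. n_harm*f0, i.e., emissions outside the main band at f0.
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = np.zeros(freqs.shape, dtype=bool)
    for m in range(2, n_harm + 1):
        mask |= np.abs(freqs - m * f0) <= half_width
    return mask

def extract_harmonics(x, fs, f0, n_harm=4, half_width=50e3):
    # Zero all rFFT bins outside the harmonic bands, then invert.
    X = np.fft.rfft(x)
    X[~harmonic_band_mask(len(x), fs, f0, n_harm, half_width)] = 0.0
    return np.fft.irfft(X, n=len(x))

# Toy received trace: therapy fundamental at 0.5 MHz plus a weak 2nd harmonic.
fs, f0, n = 20e6, 0.5e6, 4000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)
y = extract_harmonics(x, fs, f0)  # fundamental suppressed, harmonic kept
```

In PAM proper, each channel’s band-filtered trace would then be passed to the correlation beamformer; only the band-selection step is sketched here.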
8:40
5aBA3. Image-guided sonothrombolysis in a stroke model with a cavitation delivery and monitoring system. Francois Vignon,
William T. Shi (Ultrasound Imaging and Therapy, Philips Res. North America, 345 Scarborough Rd., Briarcliff Manor, NY 10510,
francois.vignon@philips.com), Jeffry Powers (Philips Ultrasound, Bothell, WA), Feng Xie, Juefei Wu, Shunji Gao, John Lof, and
Thomas R. Porter (Cardiology, Univ. of NE Medical Ctr., Omaha, NE)
Microbubbles (MB) and ultrasound (US) can dissolve intra-arterial thrombi. In order to reproducibly deliver the correct cavitation dose and ensure treatment efficacy and safety, we designed a therapeutic US mode with cavitation monitoring. Therapy delivery and recording of the MB signal are achieved with a sector imaging probe. Monitoring is achieved by spectrally analyzing the MB signal: ultraharmonics are a marker of stable cavitation (SC) and broadband noise characterizes inertial cavitation (IC). We used the system in a pig model. Thrombotic occlusions were created by injecting 4-hour-old clots bilaterally into the internal carotids. Forty pigs were randomized to either 2.4 MI, 5 µs pulses with MBs; 1.7 MI, 20 µs pulses with MBs; or 2.4 MI, 5 µs pulses without MBs. Angiographic recanalization rates were compared. Cavitation as a function of MI was estimated in vivo. Dominant SC started at an applied MI of 0.6 (0.3 MI in situ after derating by skull attenuation). Dominant IC was estimated to start at an applied MI of 0.9 (0.6 in situ). Thus, all therapy settings were in the IC regime. The 2.4 MI + MB setting was the most effective (100% recanalization) vs. 38% for the 1.7 MI + MB and 50% for 2.4 MI without MBs (both p < 0.05 compared to 2.4 MI + MB). No signs of hemorrhage were found in any animal. In conclusion, higher IC levels are most effective for thrombus dissolution. Spectral analysis techniques can be used to plan and monitor the therapy.
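[Editor’s illustration for 5aBA3: the applied-to-in-situ MI conversion is a simple pressure derating. The sketch assumes MI scales linearly with peak negative pressure; the 6 dB skull-loss figure is an assumed example, not a value stated in the abstract.]

```python
def in_situ_mi(applied_mi, attenuation_db):
    # MI scales with peak negative pressure, so a fixed dB pressure loss
    # through the skull multiplies MI by 10**(-attenuation_db / 20).
    return applied_mi * 10 ** (-attenuation_db / 20)

# A 6 dB one-way pressure loss roughly halves the applied MI,
# which is consistent with a 0.6 applied -> ~0.3 in situ derating.
print(in_situ_mi(0.6, 6.0))
```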
9:00
5aBA4. Timing of high intensity pulses for myocardial cavitation-enabled therapy. Douglas Miller, Chunyan Dou (Radiology,
Univ. of Michigan, 3240A Medical Sci. I, 1301 Catherine St., Ann Arbor, MI 48109-5667, douglm@umich.edu), Gabe E. Owens
(Pediatrics, Univ. of Michigan, Ann Arbor, MI), and Oliver Kripfgans (Radiology, Univ. of Michigan, Ann Arbor, MI)
Ultrasound pulses intermittently triggered from an ECG signal can interact with circulating microbubbles to produce myocardial cavitation microlesions, which may enable tissue-reduction therapy. The timing of therapy pulses relative to the ECG was investigated to
identify the optimal trigger point with regard to physiological response and microlesion production. Rats were anesthetized, prepared for
ultrasound, placed in a heated water bath, and treated with 1.5 MHz focused ultrasound pulses aimed by 8 MHz imaging. Initially, rats
were treated for 1 min with triggering at each of six different points in the ECG while monitoring blood pressure. Premature complexes,
a useful indicator of efficacy, were seen in the ECG, except during early systole. Premature complexes corresponded with blood pressure
pulses for triggering during diastole, but not during systole. Next, triggering at three of the time points (end diastole, end systole, or mid-diastole) was tested for the impact on microlesion creation. Microlesions stained by Evans blue dye were scored in frozen sections. There
was no statistically significant variation in cardiomyocyte injury. The end of systole was identified as an optimal trigger time point which
yielded ECG complexes and substantial cardiomyocyte injury, but minimal cardiac functional disruption.
9:20
5aBA5. Cavitation threshold determination—Can we do it? Gail ter Haar, John Civale, Ian Rivens, and Marcia Costa (Phys., Inst. of
Cancer Res., Phys. Dept., Royal Marsden Hospital, Sutton, Surrey SM2 5PT, United Kingdom, gail.terhaar@icr.ac.uk)
As clinical applications that harness acoustic cavitation become more commonplace, it becomes increasingly important to be able to determine the threshold pressures at which cavitation is likely to occur. In our studies, we have used a suite of different detection techniques in an effort to determine these thresholds. These include passive cavitation detection, transducer impedance monitoring, and visual appearance. Different methods of acoustic signal processing have been compared. The resultant cavitation thresholds will be discussed.
9:40
5aBA6. Monitoring boiling histotripsy with bubble-based ultrasound techniques. Vera Khokhlova (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, va.khokhlova@gmail.com), Michael Canney (INSERM U556, Lyon, France), Julianna Simon, Tatiana Khokhlova, Joo-Ha Hwang, Adam Maxwell, Michael Bailey, Oleg Sapozhnikov, Wayne Kreider, and Lawrence Crum (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA)
Cavitation phenomena have always been considered a predominant mechanism of concern in mechanical tissue damage induced by therapeutic ultrasound, and corresponding methods have been developed to monitor cavitation. Recently, a new high intensity focused
ultrasound technology, called boiling histotripsy (BH), was introduced, in which the major physical phenomenon that initiates mechanical tissue damage is vapor bubble growth associated with rapid tissue heating to boiling temperatures. Caused by nonlinear propagation
effects and the development of high-amplitude shocks, this tissue heating is localized in space and can lead to boiling within milliseconds. Once a boiling bubble is created, interaction of shock waves with the cavity results in tissue disintegration. While the incident
shocks can lead to cavitation phenomena and accompanying broadband emissions, the presence of a millimeter-sized vapor cavity in tissue produces strong echogenicity in ultrasound (US) imaging that can be exploited with B-mode diagnostic ultrasound. Various other
methods of imaging boiling histotripsy, including passive cavitation detection (PCD), Doppler or nonlinear pulse-inversion techniques,
and high speed photography in transparent gel phantoms are also overviewed. The role of shock amplitude as a metric for mechanical
tissue damage is discussed. [Work supported by NIH EB007643, T32DK007779, and NSBRI through NASA NCC 9-58.]
10:00
5aBA7. Control of cavitation through coalescence of cavitation nuclei. Timothy L. Hall, Alex Duryea, and Hedieh Tamaddoni
(Univ. of Michigan, 2200 Bonisteel Blvd., Ann Arbor, MI 48109, hallt@umich.edu)
Therapeutic ultrasound in the form of SWL, HIFU, or histotripsy frequently generates cavitation nuclei (bubbles of 1–10 µm radius),
which can persist up to about 1 s before dissolving. These nuclei can attenuate and reflect propagation of acoustic fields reducing SWL
efficiency, enhancing HIFU heating, or shifting the location of a histotripsy focal zone making procedures less predictable. Depending
on their location, nuclei can also directly cause tissue damage when a high amplitude sound field causes them to undergo inertial cavitation. These undesirable effects can be reduced by using a low amplitude sound field (MI <1) to stimulate coalescence of nuclei through
primary and secondary Bjerknes forces. We will show nuclei coalescence significantly reduces sound field attenuation, improves SWL
breakup of model kidney stones, and reduces collateral damage in soft tissues. We also show techniques for designing the non-focal
acoustic fields for efficient coalescence with 3D printed acoustic lenses. Timothy Hall has a consulting arrangement with Histosonics,
Inc., which has licensed intellectual property related to this abstract.
10:20–10:30 Break
Contributed Papers
10:30
5aBA8. Ultraharmonic intravascular ultrasound imaging with commercial 40 MHz catheter: A feasibility study. Himanshu Shekhar, Ivy Awuor,
Steven Huntzicker, and Marvin M. Doyley (Univ. of Rochester, 345 Hopeman Bldg., University of Rochester River Campus, Rochester, NY 14627,
himanshuwaits@gmail.com)
The abnormal growth of the vasa vasorum is characteristic of life-threatening atherosclerotic plaques. Intravascular ultraharmonic imaging is an
emerging technique that could visualize the vasa vasorum and help clinicians identify life-threatening plaques. Implementing this technique on commercial intravascular ultrasound (IVUS) systems could accelerate its
clinical translation. Our previous work has demonstrated ultraharmonic
IVUS imaging with a modified clinical system that was equipped with a
commercial 15 MHz peripheral imaging catheter. In the present study, we
investigated the feasibility of ultraharmonic imaging with a commercially
available 40 MHz coronary imaging catheter. We imaged a flow phantom
that had contrast agent microbubbles (Targestar-P-HF, Targeson Inc., CA)
perfused in side channels parallel to its main lumen. The transducer was
excited at 30 MHz using 10% bandwidth chirp-coded pulses. The ultraharmonic response at 45 MHz was isolated and preferentially visualized using
pulse inversion and digital filtering. Side channels with 900 µm and 500 µm
diameter were detected with contrast-to-tissue ratios approaching 10 dB for
clinically relevant microbubble concentrations. The results of this study
indicate that ultraharmonic imaging is feasible with commercially available
coronary IVUS catheters, which may facilitate its widespread application in
preclinical research and clinical imaging.
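[Editor’s illustration for 5aBA8: the contrast-to-tissue ratio quoted in dB is an amplitude ratio between the agent-filled channel and surrounding tissue regions. The amplitude values below are made up for the sketch.]

```python
import math

def contrast_to_tissue_ratio_db(channel_rms, tissue_rms):
    # CTR (dB): ratio of mean echo amplitude in the contrast-filled
    # channel to that in the tissue-mimicking background.
    return 20.0 * math.log10(channel_rms / tissue_rms)

# An amplitude ratio of about 3.16 corresponds to roughly 10 dB.
print(round(contrast_to_tissue_ratio_db(3.16, 1.0), 2))
```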
10:45
5aBA9. A method to calibrate the absolute receive sensitivity of spherically focused, single-element transducers. Kyle T. Rich and T. Douglas
Mast (Biomedical Eng., Univ. of Cincinnati, 3938 Cardiovascular Res. Ctr.,
231 Albert Sabin Way, Cincinnati, OH 45267-0586, doug.mast@uc.edu)
Quantitative acoustic measurements of microbubble behavior, including
scattering and emissions from cavitation, would be facilitated by improved
calibration of transducers making absolute pressure measurements. In particular, appropriate methods are needed for wideband calibration of focused
passive cavitation detectors. Here, a substitution method was developed to
characterize the absolute receive sensitivity of two spherically focused, single-element transducers (center transmit frequencies 4 and 10 MHz).
Receive calibrations were obtained by transmitting and receiving a broadband pulse between the two focused transducers in a pitch-catch, confocally
aligned configuration, separated by a distance equal to the sum of the two
focal lengths. A calibrated hydrophone was substituted to measure the pressure field in the plane of each receiver’s surface. The frequency-dependent
receive sensitivity at the focus was then calculated for each transducer as
the ratio of the receiver-measured voltage and the average hydrophone-measured pressure amplitude across the receiver surface. Calibrations were
validated by generating an approximately spherically spreading, broadband
pressure wave at the focus of each transducer using a 2-mm diameter transducer and comparing the absolute acoustic pressure measured by each
focused transducer to that measured by a calibrated hydrophone.
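The sensitivity calculation the abstract describes, the ratio of the receiver-measured voltage spectrum to the spatially averaged hydrophone-measured pressure spectrum, can be sketched as below. All names, shapes, and units are illustrative assumptions, not from the abstract.

```python
import numpy as np

def receive_sensitivity(voltage_trace, hydrophone_traces, fs):
    """Frequency-dependent receive sensitivity M(f) = V(f) / <P(f)>.

    voltage_trace: received voltage waveform (V) from the focused transducer.
    hydrophone_traces: array (n_points, n_samples) of pressure waveforms (Pa)
        measured at points across the receiver surface by the hydrophone.
    fs: sampling rate (Hz).  Returns (frequencies, sensitivity in V/Pa).
    """
    V = np.abs(np.fft.rfft(voltage_trace))
    # Average the hydrophone-measured pressure amplitude across the surface
    P = np.abs(np.fft.rfft(hydrophone_traces, axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(len(voltage_trace), d=1.0 / fs)
    return freqs, V / P
```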
11:00
5aBA10. Instigation and monitoring of inertial cavitation from nanoscale particles using a diagnostic imaging platform and passive acoustic
mapping. Christian Coviello, James Kwan, Susan Graham, Rachel Myers,
Apurva Shah, Penny Probert Smith, Robert Carlisle, and Constantin Coussios (Inst. of Biomedical Eng., Univ. of Oxford, ORCRB, Oxford OX3
7DQ, United Kingdom, christian.coviello@eng.ox.ac.uk)
Inertial cavitation nucleated by microbubble contrast agents has been
recently shown to enhance extravasation and improve the distribution of
anti-cancer agents during ultrasound (US)-enhanced delivery. However,
microbubbles require frequent replenishment due to their rapid clearance
and destruction upon US exposure and are unable to extravasate into tumor
tissue due to their large size. A new generation of gas-stabilizing polymeric
2302
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
cup-shaped nanoparticles, or “nanocups” (NCs), have been formulated to a
size that enables exploitation of the enhanced permeability and retention
effect for intratumoral accumulation. NCs provide sustained inertial cavitation, as characterized by broadband emissions, at peak rarefactional pressures readily achievable by diagnostic ultrasound systems. This enables the
use of a single low-cost system for B-mode treatment guidance, instigation
of cavitation, and real-time passive acoustic mapping (PAM) of the location
and extent of cavitation activity during therapy. The significant lowering of
the inertial cavitation threshold in the presence of NCs as characterized by
PAM is first quantified in-vitro. In-vivo and ex-vivo results in xenograft-implanted tumor-bearing mice further evidence the strong presence of inertial cavitation detectable in the tumor at diagnostic levels of US intensity, as
confirmed by PAM images overlaid on B-mode in real-time.
11:15
5aBA11. Passive cavitation imaging with nucleic acid-loaded microbubbles in mouse tumors. Man M. Nguyen, Jonathan A. Kopechek, Bima
Hasjim, Flordeliza S. Villanueva, and Kang Kim (Dept. of Medicine, Univ.
of Pittsburgh, 3550 Terrace St., 562 Scaife Hall, Pittsburgh, PA 15261,
manmnguyen@gmail.com)
Ultrasound-targeted microbubble (MB) destruction has been used to
deliver nucleic acids to cancer cells for therapeutic effect. Identifying both
the location and cavitation activities of the MBs is needed for efficient and
effective treatment. In this study, we implemented passive cavitation imaging into a commercially available ultrasound open platform (Verasonics) for
a 128-element linear array transducer, centered at 5 MHz, and applied it to
an in-vivo mouse tumor model. Cationic lipid MBs were loaded with a transcription factor decoy that suppresses STAT3 signaling and inhibits tumor
growth in murine squamous cell carcinomas. During systemic MB infusion,
ultrasound pulses (4 or 20 cycles) were delivered with a 1-MHz single-element transducer (0.4–1.4 MPa peak pressures). Channel data were beamformed offline, band-pass filtered, subtracted from reference images acquired
without MBs, and co-registered with B-mode images. During MB infusion,
harmonics and broadband emissions were detected in the tumor with both
frequency spectra and cavitation images. For 4-cycle 0.4 MPa pulses, harmonic signals at 5 MHz and broadband signals at 3–7 MHz were 23 dB and at
least 5 dB greater with MBs than without MBs, respectively. These preliminary results demonstrate the feasibility of in-vivo passive cavitation imaging
and could lead to further studies for optimizing US/MB-mediated delivery
of nucleic acids to tumors.
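Passive cavitation imaging as described above is commonly formulated as delay-and-sum energy mapping. The toy sketch below illustrates that idea only; array geometry, windowing, apodization, and the band-pass and reference-subtraction steps of the abstract are omitted, and all names are assumptions.

```python
import numpy as np

def passive_cavitation_map(channels, elem_x, fs, c, grid_x, grid_z):
    """Minimal delay-and-sum passive acoustic map.

    channels: (n_elem, n_samp) received RF data; elem_x: element x-positions
    (m) on a linear array at z = 0.  For each image pixel, channel signals
    are advanced by the one-way travel time from the pixel and summed, and
    the energy of the sum is the pixel value.
    """
    n_elem, n_samp = channels.shape
    pam = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            delays = np.sqrt((elem_x - x) ** 2 + z ** 2) / c  # seconds
            shifts = np.round(delays * fs).astype(int)
            acc = np.zeros(n_samp)
            for e in range(n_elem):
                s = shifts[e]
                if s < n_samp:
                    acc[: n_samp - s] += channels[e, s:]
            pam[iz, ix] = np.sum(acc ** 2)
    return pam
```

A source at a pixel's true location arrives coherently and the summed energy peaks there, which is how the cavitation location is recovered without knowing the emission time.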
11:30
5aBA12. Non-focal acoustic lens designs for cavitation bubble consolidation. Hedieh A. Tamaddoni, Alexander Duryea, and Timothy L. Hall
(Univ. of Michigan, 2740 Barclay Way, Ann Arbor, MI 48105, alavi@
umich.edu)
During shockwave lithotripsy, cavitation bubbles form on the surface of
urinary stones aiding in the fragmentation process. However, shockwaves
can also produce pre-focal bubbles, which may shield or block subsequent
shockwaves and potentially induce collateral tissue damage. We have previously shown in-vitro that low amplitude acoustic waves can be applied to
actively stimulate bubble coalescence and help alleviate this effect. A traditional elliptical transducer lens design produces the maximum focal gain
possible for a given aperture. From experiments and simulation, we have
found that this design is not optimal for bubble consolidation as the primary
and secondary Bjerknes forces may act against each other and the effective
field volume is too small. For this work, we designed and constructed non-focal transducer lenses with complex surface geometries using rapid-prototyping stereolithography to produce more effective acoustic fields for bubble
consolidation during lithotripsy or ultrasound therapy. We demonstrate a
design methodology using an inverse problem technique to map the desired
acoustic field back to the surface of the transducer lens to determine the correct phase shift at every point on the lens surface. This method could be
applied to other acoustics problems where non-focused acoustic fields are
desired.
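The phase-to-lens-surface mapping the abstract mentions ultimately reduces, in the simplest ray picture, to choosing a local lens thickness that imparts a desired phase shift relative to the surrounding water. The sketch below shows only that basic relation, not the authors' inverse-problem method; the material sound speeds are illustrative assumptions.

```python
import numpy as np

def lens_thickness(phase_shift, freq, c_lens=2500.0, c_water=1480.0):
    """Thickness that imparts a desired phase shift relative to water.

    A thickness d of lens material replacing a water path gives
        dphi = 2*pi*f * d * (1/c_lens - 1/c_water),
    which is negative (a phase advance) when c_lens > c_water, so a
    positive thickness corresponds to a negative phase_shift here.
    """
    omega = 2.0 * np.pi * freq
    return np.asarray(phase_shift) / (omega * (1.0 / c_lens - 1.0 / c_water))
```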
168th Meeting: Acoustical Society of America
11:45

5aBA13. Scavenging dissolved oxygen via acoustic droplet vaporization. Kirthi Radhakrishnan, Christy K. Holland, and Kevin J. Haworth (Internal Medicine, Univ. of Cincinnati, Cardiovascular Ctr. 3972, 231 Albert Sabin Way, Cincinnati, OH 45267, radhakki@ucmail.uc.edu)

Acoustic droplet vaporization (ADV) has been investigated for capillary hemostasis, thermal ablation, and ultrasound imaging. The maximum diameter of a microbubble produced by ADV depends on the gas saturation of the surrounding fluid. This dependence is due to diffusion of dissolved gases from the fluid into the perfluoropentane (PFP) microbubble. This study investigated the change in oxygen concentration in the surrounding fluid after ADV. Albumin-shelled PFP droplets in air-saturated saline (1:30, v/v) were continuously pumped through a flow system and insonified by a focused 2-MHz single-element transducer to induce ADV. B-mode image echogenicity was used to determine the ADV threshold pressure amplitude. The dissolved oxygen concentration in the fluid upstream and downstream of the insonation region was measured using inline sensors. Droplet size distributions were measured before and after ultrasound exposure to determine the ADV transition efficiency. The ADV pressure threshold at 2 MHz was 1.7 MPa (peak negative). Exposure of PFP droplets to ultrasound at 5 MPa peak negative pressure caused the dissolved oxygen content in the surrounding fluid to decrease from 88 ± 3% to 20 ± 4%. The implications of oxygen scavenging during ADV will be discussed.

12:00

5aBA14. Effects of rose bengal on cavitation cloud behavior in optically transparent gel phantom investigated by high-speed observation. Jun Yasuda, Takuya Miyashita (Dept. of Commun. Eng., Tohoku Univ., 6-6-065 Aramakiazaaoba, Aoba, Sendai, Miyagiken 980-0801, Japan, j_yasuda@ecei.tohoku.ac.jp), Kei Taguchi (Dept. of Biomedical Eng., Tohoku Univ., Sendai, Japan), Shin Yoshizawa (Dept. of Commun. Eng., Tohoku Univ., Sendai, Japan), and Shin-ichiro Umemura (Dept. of Biomedical Eng., Tohoku Univ., Sendai, Japan)

Sonodynamic treatment is a non-thermal ultrasonic method that exploits the sonochemical effect of cavitation bubbles. Rose bengal (RB) is sonochemically active and reduces the cavitation threshold, and therefore has potential as an agent for sonodynamic treatment. For the effectiveness and safety of the treatment, controlling cavitation is crucial. In our previous study, we proposed high-intensity focused ultrasound (HIFU) employing second-harmonic superimposition, which can control cavitation cloud generation by superimposing the second harmonic onto the fundamental. In this study, to investigate the effects of RB on cavitation behavior, a polyacrylamide gel phantom containing RB was exposed to second-harmonic superimposed ultrasound and the generated cavitation bubbles were observed with a high-speed camera. The gel contained three different concentrations of RB: 0, 1, and 10 mg/L. The ultrasonic intensity and exposure duration were 40 kW/cm² and 100 μs, respectively, and the fundamental frequency was 0.8 MHz. The amount of incepted cavitation clouds increased and the lifetime of the bubbles lengthened as the RB concentration increased, with high reproducibility. The observed RB concentration dependence suggests that the amount of cavitation bubbles can be controlled using second-harmonic superimposition. The observed lifetime extension of the bubbles can not only promote sonochemical effects but also enhance thermal bioeffects.
12:15–12:30 Panel Discussion
FRIDAY MORNING, 31 OCTOBER 2014
INDIANA E, 10:00 A.M. TO 1:00 P.M.
Session 5aED
Education in Acoustics: Hands-On Acoustics Demonstrations for Indianapolis Area Students
Uwe J. Hansen, Cochair
Chemistry & Physics, Indiana State University, 64 Heritage Dr., Terre Haute, IN 47803-2374
Andrew C. H. Morrison, Cochair
Natural Science Department, Joliet Junior College, 1215 Houbolt Rd., Joliet, IL 60431
Acoustics has a long and rich history of physical demonstrations of fundamental (and not so fundamental) acoustics principles and phenomena. In this session, “Hands-On” demonstrations will be set up for a group of middle school students from the Indianapolis
area. The goal is to foster curiosity and excitement in science and acoustics at this critical stage in the students’ educational development
and is part of the larger “Listen Up” education outreach effort by the ASA.
Each station will be manned by an experienced acoustician who will help the students understand the principle being illustrated in
each demo. Any acousticians wanting to participate in this fun event should email Uwe Hansen (uhansen@indstate.edu) or Andrew
C. H. Morrison (amorriso@jjc.edu).
FRIDAY MORNING, 31 OCTOBER 2014
MARRIOTT 7/8, 9:45 A.M. TO 12:05 P.M.
Session 5aNS
Noise: Transportation Noise, Soundscapes, and Related Topics
Alan T. Wall, Chair
Battlespace Acoustics Branch, Air Force Research Laboratory, Bldg. 441, Wright-Patterson AFB, OH 45433
Chair’s Introduction—9:45
Contributed Papers
9:50
5aNS1. Traffic monitoring with noise: Investigations on an urban seismic network. Nima Riahi and Peter Gerstoft (Marine Physical Lab., Scripps
Inst. of Oceanogr., 9500 Gilman Dr., MC 0238, La Jolla, CA 92093-0238,
nriahi@ucsd.edu)
Traffic in urban areas generates not only acoustic noise but also much
seismic noise. The latter is typically not perceptible by humans but could, in
fact, offer an interesting data source for traffic information systems. To
explore this potential, we study a 5300-geophone network, which covered an area of over 70 km² in Long Beach, CA, and was deployed as
part of a hydrocarbon industry survey. The sensors have a typical spacing of
about 100 m, which presents a two-sided processing challenge here: signals
beyond a few receiver spacings from the sources are often strongly attenuated and scattered whereas nearby receiver signals may contain complicated
near-field effects. We illustrate how we address this issue and give three
simple applications: counting cars on a highway section, classifying different types of vehicles passing along a road, and measuring time and take-off
velocity of aircraft at Long Beach airport. We discuss future work toward
traffic monitoring and also possible connections with acoustical problems.
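One of the applications mentioned, counting vehicles on a road section, can be illustrated with a toy short-term-energy detector on a single geophone trace. This is only an illustration of the idea, not the authors' processing chain; window length and threshold are arbitrary assumptions.

```python
import numpy as np

def count_events(trace, fs, win=1.0, thresh_factor=5.0):
    """Count vehicle-like events in a single geophone trace.

    Short-term energy in win-second windows is compared against a multiple
    of the median window energy; runs of consecutive loud windows are merged
    into a single event, and the number of events is returned.
    """
    n = int(win * fs)
    n_win = len(trace) // n
    energy = np.array([np.sum(trace[i * n:(i + 1) * n] ** 2)
                       for i in range(n_win)])
    loud = energy > thresh_factor * np.median(energy)
    # Count rising edges of the boolean mask (start of each loud run)
    edges = np.diff(np.concatenate(([False], loud)).astype(int))
    return int(np.sum(edges == 1))
```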
10:05
5aNS2. Impact of AMX-A1 military aircraft operations on the acoustical environment close to a Brazilian airbase. Olmiro C. de Souza and
Stephan Paul (UFSM, Acampamento, 569, Santa Maria, Santa Maria
97050003, Brazil, olmirocz.eac@gmail.com)
Military aircraft operating from airbases usually have a considerable impact on the neighborhood. While the impact of civil aircraft operations can be modeled by commercially available software, the same is hardly possible for military aircraft, as no EPNL data are available for such aircraft and flight paths are not restricted to those used by civilian operations. Therefore, in this work, the noise impact of AMX-A1 aircraft operating at a Brazilian airbase was evaluated from measurements originally intended for calibration of the noise map of a university in the vicinity. From the data, it was possible to obtain LAeq10min with and without jet noise to determine how much AMX operations influence the total measurement. Sound exposure levels (SEL) were also calculated. It was found that, depending on the AMX procedure (approach, departure, touch-and-go, etc.), jet noise increases the LAeq10min by up to 10 dB, and SEL values reach 96.6 dBA in sensitive areas. It will be discussed whether the A-weighted sound power level can be estimated from the data by considering the aircraft as a point source in free field.
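The relation between the two metrics in this abstract is standard: the sound exposure level compresses the energy of an LAeq measured over a duration T into a 1-second reference, SEL = LAeq,T + 10·log10(T / 1 s). A minimal sketch (the 600 s argument matches the abstract's 10-minute interval; the input level is illustrative):

```python
import math

def sel_from_laeq(laeq_db, duration_s):
    """Sound exposure level from an equivalent continuous level over T:
    SEL = LAeq,T + 10*log10(T / 1 s)."""
    return laeq_db + 10.0 * math.log10(duration_s / 1.0)
```

For a 10-minute interval the correction is 10·log10(600) ≈ 27.8 dB, which is why SEL values in the abstract sit well above the LAeq10min values.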
10:20

5aNS3. The effect of long-range propagation on contra-rotating open rotor en-route noise levels. Upol Islam (Inst. of Sound and Vib. Res. (ISVR), Univ. of Southampton, Highfield Campus, Bldg. 13, Rm. 2009, Southampton, Hampshire SO17 1BJ, United Kingdom, ui1d11@soton.ac.uk)

The purpose of this work was to calculate the en-route noise level produced by an advanced contra-rotating open rotor (CROR) powered aircraft. En-route noise is defined as the noise produced by an aircraft in high-altitude operation (>3000 m), measured by a microphone 1.2 m above ground level. Calculations were performed for three aircraft operating conditions: cruise, climb, and descent. For each calculation, the aircraft noise source was modeled as an isolated CROR engine. This noise model was determined from experimental measurements made in a transonic wind tunnel using a 1/6th-scale open rotor rig. En-route noise levels were calculated using the whole-aircraft noise prediction code SOPRANO. The CROR noise model was input into SOPRANO, and long-distance propagation was calculated using the ray-tracing code APHRODITE, which is implemented within SOPRANO. This ray-tracing code requires atmospheric wind speed, wind direction, temperature, and humidity profiles, which were collected from historical data around Europe. The ray-tracing method divides the atmosphere into a number of layers; meteorological parameters were assumed to vary linearly between the values specified at the boundaries of each layer. Numerous simulations were conducted using different atmospheres in order to assess the impact of atmospheric conditions on en-route noise levels.

10:35

5aNS4. Gaps in the literature on the effects of aircraft noise on children’s cognitive performance. Matthew Kamrath and Michelle C. Vigeant (Graduate Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, kamrath64@gmail.com)

In the past two decades, several major studies have indicated that chronic aircraft noise exposure negatively impacts children’s cognitive performance. For example, the longitudinal Munich airport study (Hygge, Am. Psychol. Soc., 2002) demonstrated that noise adversely affects reading ability, memory, attention, and speech perception. Moreover, the cross-sectional RANCH study (Stansfeld, Lancet, 2005) found a linear correlation between extended noise exposure and reduced reading comprehension and recognition memory. This presentation summarizes these and other recent studies and discusses four key areas in need of further research (ENNAH Final Report Project No. 226442, 2013). First, future studies should account for all of the following confounding factors: socioeconomic variables; daytime and nighttime aircraft, road, and train noise; and air pollution. Second, multiple noise metrics should be evaluated to determine if the character of the noise alters the relationship between noise and cognition. Third, future research should explore the mitigating effects of improved classroom acoustics and exterior sound insulation. Finally, additional longitudinal studies are necessary: (1) to establish a causal relationship between aircraft noise and cognition; and (2) to understand how changes in the duration of the exposure and in the age of the students influence the relationship. [Work supported by FAA PARTNER Center of Excellence.]
10:50

5aNS5. Acoustic absorption of green roof samples commercially available in southern Brazil. Ricardo Brum, Stephan Paul (Centro de Tecnologia, Universidade Federal de Santa Maria, Rua Erly de Almeida Lima, 650, Santa Maria, RS 97105-120, Brazil, ricardozbrum@yahoo.com.br), Andrey R. da Silva (Centro de Engenharias da Mobilidade, Universidade Federal de Santa Catarina, Joinville, Brazil), and Tenile Piovesan (Centro de Tecnologia, Universidade Federal de Santa Maria, Santa Maria, RS, Brazil)

Previous investigations have shown that green roofs provide many environmental benefits, such as thermal conditioning, air cleaning, and rain water absorption. Nevertheless, information regarding acoustic properties, such as sound absorption and transmission loss, is still sparse. This work presents measurements of the sound absorption coefficient of two types of green roofs commercially available in Brazil: the alveolar and the hexa system. Measurements were made in a reverberant chamber according to ISO 354 for different variations of both systems: the alveolar system with 2.5 cm of substrate with and without grass and with 4 cm of substrate only. The hexa system was measured with layers of 4 and 6 cm of substrate without vegetation and with 6 cm of substrate with a layer of vegetation of the sedum type. For all systems, high absorption coefficients were found at medium and high frequencies (α ≥ 0.7) and low absorption at low frequencies (α ≤ 0.2). This was expected due to the highly porous structure of the substrate. The results suggest that the types of green roofs evaluated in this work could be a good approach to noise control in urban areas.
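The reverberation-chamber method behind these measurements derives the random-incidence absorption coefficient from the change in reverberation time between the empty chamber and the chamber with the sample installed. A simplified sketch of that calculation (single sound speed, no air-absorption correction; all input values below are illustrative):

```python
def absorption_coefficient(t_empty, t_sample, volume, sample_area, c=343.0):
    """Random-incidence absorption coefficient, ISO 354 approach.

    Equivalent absorption area added by the sample:
        A = 55.3 * V * (1/(c*T2) - 1/(c*T1)),
    with T1, T2 the reverberation times (s) of the empty chamber and of
    the chamber with the sample; alpha_s = A / S for sample area S (m^2).
    """
    a_added = 55.3 * volume * (1.0 / (c * t_sample) - 1.0 / (c * t_empty))
    return a_added / sample_area
```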
11:05
5aNS6. The perceived annoyance of urban soundscapes. Adam Craig,
Don Knox, and David Moore (School of Eng. and Built Environment, Glasgow Caledonian Univ., 70 Cowcaddens Rd., Glasgow G4 0BA, United
Kingdom, Adam.Craig@gcu.ac.uk)
Annoyance is one of the main factors that contribute to a negative view
of environmental noise, and can lead to stress-related health conditions.
Subjective perception of environmental sounds is dependent upon a variety
of factors related to the sound, the geographical location, and the listener.
Noise maps used to communicate information to the public about environmental noise in a given geographic location are based on simple noise level
measurements and do not include any information regarding how perceptually annoying or otherwise the noise might be. This study involved subjective assessment by a large panel of listeners (N = 200) of a corpus of 60
pre-recorded urban soundscapes collected from a variety of locations around
Glasgow City Centre. Binaural recordings were taken at three points during
each 24 hour period in order to capture urban noise during day, evening, and
night. Perceived annoyance was measured using Likert and numerical scales
and each soundscape measured in terms of arousal and positive/negative valence. The results shed light on the subjective annoyance of environmental
sound in a range of urban locations around Glasgow, and form the basis for
development of environmental noise maps, which more fully communicate
the effects of environmental noise to the public.
11:20
5aNS7. What comprises a healthy soundscape for the captive Southern
White Rhinoceros (Ceratotherium simum simum)? Suzi Wiseman (Environ. Geography, Texas State Univ.-San Marcos, 3901 North 30th St., Waco,
TX 76708, sw1210txstate@gmail.com), Preston S. Wilson (Mech. Eng.,
Univ. Texas at Austin, Austin, TX), and Frank Sepulveda (Geophys., Baylor
Univ., Killeen, TX)
Many creatures, including the myopic rhinoceros, depend upon hearing
and smell to determine their environment. Nature is dominated by meaningful biophonic and geophonic information quickly absorbed by soil and vegetation, while anthrophonic urban soundscapes exhibit vastly different
physical and semantic characteristics, sound repeatedly reflecting off hard
geometric surfaces, distorting and reverberating, and becoming noise. Noise
damages humans physiologically, including reproductively, and likely damages other mammals. Rhinos vocalize sonically and infrasonically, but
audiograms are unavailable. They generally breed poorly in urban zoos,
where infrasonic noise can be chronic. Biological and social factors are
studied, but little attention if any is paid to soundscape. We present a methodology to analyze the soundscapes of captive animals according to their
hearing range. Sound metrics determined from recordings at various institutions can then be compared and correlations with the health and wellbeing
of their animals can be sought. To develop this methodology we studied the
sonic, infrasonic, and seismic soundscape experienced by the white rhinos
at Fossil Rim Wildlife Center, one of the few U.S. facilities to successfully
breed this species in recent years. Future analysis can seek particular parameters known to be injurious to human mammals, plus parameters known to
invoke response in animals.
11:35
5aNS8. Shape optimization of acoustic horns using few design variables.
Nilson Barbieri (Mech. Eng., PUCPR, Rua Imaculada Conceiç~ao, 1155,
Curitiba, Parana 80215-901, Brazil, nilson.barbieri@pucpr.br), Renato
Barbieri (Mech. Eng., UDESC, Joinville, Santa Catarina, Brazil), Clebe T.
Vitorino, and Key F. Lima (Mech. Eng., PUCPR, Curitiba, Brazil)
The main steps for design of the optimal geometry of acoustic horns
employing numerical methods are: the definition of the domain and the
restrictions and control of the boundary, the definition of the objective function and the frequency range of interest, the evaluation of the objective function value, and the selection of a robust optimization technique to calculate
the optimal value. During the optimization process, the profile is changing
continuously until obtaining the optimal horn profile. The main focus of this
work was to obtain optimal geometries with the use of few design variables.
Two different methods to control the horn profile during the optimization
process are used: approximation of the contour of the horn with Hermite
polynomials and sinusoidal functions. The numerical results show the efficiency of these methods, and it was also found (at least from the engineering point of view) that the optimal horn geometry is not unique for single-frequency optimization. Results for optimization over more than one frequency are also shown.
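The low-dimensional profile control described above can be illustrated with a sinusoidal parameterization: a straight throat-to-mouth taper plus a few sine terms, one design variable each. This is only an illustration of the idea; the paper's actual Hermite and sinusoidal bases are not specified in the abstract.

```python
import numpy as np

def horn_profile(x, r_throat, r_mouth, amps):
    """Horn radius r(x) for x in [0, 1]:
        r(x) = r_t + (r_m - r_t)*x + sum_k a_k * sin(k*pi*x).
    The sine terms vanish at both ends, so the throat and mouth radii
    are preserved while an optimizer varies only the few a_k.
    """
    x = np.asarray(x, dtype=float)
    r = r_throat + (r_mouth - r_throat) * x
    for k, a in enumerate(amps, start=1):
        r = r + a * np.sin(k * np.pi * x)
    return r
```

With, say, three sine amplitudes, the optimizer searches a 3-dimensional space instead of moving every boundary node independently.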
11:50
5aNS9. Parametric study of a PULSCO vent silencer. Usama Tohid
(Eng., PULSCO, 17945 Sky Park Circle, Ste. G, Irvine, CA 92614, u.
tohid@pulsco.com)
We have conducted a parametric study via numerical simulations of a
PULSCO vent silencer. The overall objective is to demonstrate the existence
of an optimum system performance for a given set of operating conditions
by modifying the corresponding geometry of the device. The vent silencer
under consideration consists of a perforated diffuser, the silencer body, and
a tube module. The tube module consists of a set of tubes through which the
working fluid passes. The flow tubes are perforated and surrounded with
acoustic packing that is responsible for the attenuation. The mathematical
model of the vent silencer is built upon the Helmholtz equation for the plane-wave solution and the Delany-Bazley model for the acoustic packing. The
geometrical parameters chosen for the parametric study include: the porosity
of the diffuser and the flow tubes, the type of packing material used for the
tube module, bulk density for the acoustic packing, and the hole diameter of
the perforated diffuser and flow tubes. The equations of the mathematical
model are discretized over the computational domain and solved with a finite element method. Numerical results in terms of system transmission loss indicate that a diffuser hole size of 1/4 in. with a porosity of 0.1, a flow tube hole size of 1/8 in. with a porosity of 0.23, and packing densities of 16 kg/m³ for TRS-10 and 100 kg/m³ for Advantex provided the optimum results for the chosen set of conditions. The numerical results were found to be in agreement with experimental data.
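The Delany-Bazley model named in this abstract is a standard empirical fit for fibrous absorbers, parameterized by the flow resistivity. A minimal implementation follows; the air properties are nominal values and the e^{jωt} sign convention is assumed, neither being stated in the abstract.

```python
import numpy as np

def delany_bazley(freq, sigma, rho0=1.21, c0=343.0):
    """Delany-Bazley characteristic impedance Zc and wavenumber k of a
    fibrous absorber with flow resistivity sigma (Pa*s/m^2).

    X = rho0 * f / sigma  (the fit is valid roughly for 0.01 < X < 1)
    Zc = rho0*c0 * (1 + 0.0571*X**-0.754 - 1j*0.087*X**-0.732)
    k  = (2*pi*f/c0) * (1 + 0.0978*X**-0.700 - 1j*0.189*X**-0.595)
    """
    X = rho0 * freq / sigma
    zc = rho0 * c0 * (1 + 0.0571 * X ** -0.754 - 1j * 0.087 * X ** -0.732)
    k = (2 * np.pi * freq / c0) * (1 + 0.0978 * X ** -0.700
                                   - 1j * 0.189 * X ** -0.595)
    return zc, k
```

In a finite-element transmission-loss model like the one described, Zc and k characterize the packing regions while the Helmholtz equation governs the air domains.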
FRIDAY MORNING, 31 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 10:00 A.M.
Session 5aPPa
Psychological and Physiological Acoustics: Psychological and Physiological Acoustics Potpourri
(Poster Session)
Noah H. Silbert, Chair
Communication Sciences & Disorders, University of Cincinnati, 3202 Eden Avenue, 344 French East Building,
Cincinnati, OH 45267
All posters will be on display from 8:00 a.m. to 10:00 a.m. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 8:00 a.m. to 9:00 a.m. and contributors of even-numbered papers will be at their posters from 9:00 a.m. to 10:00 a.m.
Contributed Papers
5aPPa1. Is age-related hearing loss predominantly metabolic? Robert H.
Withnell (Speech and Hearing Sci., Indiana Univ., Bloomington, IN) and
Margarete A. Ueberfuhr (Systemic NeuroSci., Ludwig-Maximilians Univ.,
Großhaderner Str. 2, D-82152 Planegg-Martinsried, Munich, Germany, margarete.ueberfuhr@gmx.de)
Studies in animals have shown that age-related hearing loss is predominantly metabolic in origin. In humans, direct access to the cochlea is not
usually possible and so non-invasive methods of assessing cochlear mechanical function are required. This study used a non-invasive assay of cochlear
mechanical function, otoacoustic emissions, to examine a metabolic versus
hair-cell-loss origin for age-related hearing loss. Three subject groups were
examined: adult females with clinically normal hearing, adult females with
age-related hearing loss, and adult males with noise-induced hearing loss.
Contrasting otoacoustic emission input-output functions were obtained for
the three groups, suggesting a causal relationship between age-related hearing loss and strial dysfunction.
5aPPa2. Further modeling of temporal effects in two-tone suppression.
Erica L. Hegland and Elizabeth A. Strickland (Speech, Lang., and Hearing
Sci., Purdue Univ., Heavilon Hall, 500 Oval Dr., West Lafayette, IN 47907,
ehegland@purdue.edu)
Two-tone suppression, a nearly instantaneous reduction in cochlear gain
and a by-product of the active process, has been extensively studied both
physiologically and psychoacoustically. Some physiological data suggest
that the medial olivocochlear reflex (MOCR), which reduces the gain of the
active process in the cochlea, may also reduce suppression. The interaction
of these two gain reduction mechanisms is complex and has not been widely
studied or understood. Therefore, a model of the auditory periphery that
includes the MOCR time course was used to systematically investigate this
interaction of gain reduction mechanisms. This model was used to closely
examine two-tone suppression at the level of the basilar membrane using
suppressors lower in frequency than the probe tone. Results were compared
both with and without elicitation of the MOCR. Preliminary results indicate
that elicitation of the MOCR reduces two-tone suppression when measured
as the total basilar membrane response at the characteristic frequency (CF)
of the probe. The purpose of this study was to investigate further by separating the frequency components of the basilar membrane response at CF to
determine the excitation produced by the probe and by the suppressor with
and without MOCR elicitation. [Research supported by NIH(NIDCD)R01
DC008327 and T32 DC00030.]
5aPPa3. Characterization of cochlear implant-related artifacts during
sound-field recording of the auditory steady state response using an amplitude modulated stimulus: A comparison among normal hearing
adults, cochlear implant recipients, and implant-in-a-box. Shruti B.
Deshpande (Commun. Sci. & Disord., Univ. of Cincinnati, 3202 Eden Ave.,
Cincinnati, OH 45267-0379, balvalsn@mail.uc.edu), Michael P. Scott (Div.
of Audiol., Cincinnati Children’s Hospital Medical Ctr., Cincinnati, OH),
Fawen Zhang, Robert W. Keith (Commun. Sci. & Disord., Univ. of Cincinnati, Cincinnati, OH), and Andrew Dimitrijevic (Commun. Sci. Res. Ctr.,
Cincinnati Children’s Hospital, Dept. of Otolaryngol., Univ. of Cincinnati,
Cincinnati, OH)
Recent work has investigated the use of electric stimuli to evoke auditory steady state response (ASSR) in cochlear implant (CI) users. While
more control can be exerted using electric stimuli, acoustic stimuli present
natural listening environment for CI users. However, ASSR using acoustic
stimuli in the presence of a CI could lead to artifacts. Five experiments
investigated the presence and characteristics of CI-artifacts during soundfield ASSR using amplitude modulated (AM) stimulus (carrier frequency: 2
kHz; modulation frequency: 82.031 Hz). Experiment 1 investigated differences between 10 normal hearing (NH) and 10 CI participants in terms of
ASSR amplitude versus intensity and onset phase versus intensity. Experiment 2 explored similar relationships for an implant-in-a-box. Experiment 3
investigated correlations between electrophysiological ASSR thresholds
(ASSRe) and behavioral thresholds to the AM stimulus (BTAM) for the NH
and CI groups. Mean threshold differences (ASSRe-BTAM) were computed
for each group and group differences were studied. Experiment 4 investigated the presence of transducer-related artifacts using masking. Experiment
5 investigated the effect of manipulation of intensity and external components of the CI on the ASSR. Overall, results of this study provide the first
comprehensive description of the characteristics of CI-artifacts during
sound-field ASSR. Implications for future research to further characterize
CI-artifacts, and thereby develop strategies to minimize them, are discussed.
5aPPa4. Baseline neurophysiological noise levels in children with auditory processing disorder. Kyoko Nagao (Biomedical Res., Nemours/Alfred
I. duPont Hospital for Children, 1701 Rockland Rd., CPASS, Wilmington,
DE 19803, knagao@nemours.org), L. Ashleigh Greenwood (Audiol. Services, Pediatrix , Falls Church, VA), Raj C. Sheth, Rebecca G. Gaffney (Biology, Univ. of Delaware, Newark, DE), Matthew R. Cardinale (College of
Osteopathic Medicine, New York Inst. of Technol., New York, NY), and
Thierry Morlet (Biomedical Res., Nemours/Alfred I. duPont Hospital for
Children, Wilmington, DE)
The current study compared baseline neurophysiological responses between children with auditory processing disorder (APD) and a control group. Auditory event-related potentials were recorded in 23 children with
APD (ages 7–12 years, mean age = 8.9 years) and 25 age-matched control
2306
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2306
5aPPa5. Speech spectral intensity discrimination at frequencies above 6
kHz. Brian B. Monson (Dept. of Pediatric Newborn Medicine, Brigham and
Women’s Hospital, Harvard Med. School, 75 Francis St., Boston, MA
02115, bmonson@research.bwh.harvard.edu), Andrew J. Lotto, and Brad H.
Story (Speech, Lang., and Hearing Sci., Univ. of Arizona, Tucson, AZ)
Recent efforts have been made to extend the bandwidths of hearing aids and other communication devices (e.g., mobile phones) to represent
higher frequencies. The impact of this expansion on speech perception is
not well characterized. To assess human sensitivity to speech high-frequency energy (HFE, defined here as energy in the 8- and 16-kHz octave
bands), difference limens for HFE level changes in male and female speech
and singing were obtained. Listeners showed significantly greater ability to
detect level changes in singing vs. speech, but not in female vs. male
speech. Mean difference limen scores for speech and singing were about 5
dB in the 8-kHz octave (5.6–11.3 kHz) but 8–10 dB in the 16-kHz octave
(11.3–22 kHz). These scores are lower (better) than scores previously
reported for isolated vowels and some musical instruments, and similar to
scores previously reported for white noise.
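The octave-band limits quoted above follow from the usual definition of a one-octave band, with edges a factor of the square root of two below and above the center frequency; a quick check (the 22-kHz upper limit in the abstract presumably reflects the playback bandwidth rather than the nominal band edge):

```python
import math

def octave_band(fc):
    """Edges of the one-octave band centered at fc: (fc/sqrt(2), fc*sqrt(2))."""
    return fc / math.sqrt(2), fc * math.sqrt(2)

lo8, hi8 = octave_band(8000)     # about 5657-11314 Hz, i.e., 5.6-11.3 kHz
lo16, hi16 = octave_band(16000)  # about 11314-22627 Hz, i.e., 11.3-22.6 kHz
```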
5aPPa6. Duration perception of time-varying sounds: The role of the
amplitude decay and rise-time modulator. Lorraine Chuen (Psych., Neurosci. & Behaviour, McMaster Univ., Psych. Bldg. (PC), Rm. 102, 1280
Main St. West, Hamilton, ON L8S 4K1, Canada, chuenll@mcmaster.ca)
and Michael Schutz (School of the Arts, McMaster Univ., Hamilton, ON,
Canada)
It is well known that ramped (rising energy) sounds are perceived as longer in duration than damped (falling energy) sounds that are time-reversed,
but otherwise identical versions of one another (Schlauch, Ries & DiGiovanni, 2001; Grassi & Darwin, 2006). This asymmetry has generally been
attributed to the under-estimation of damped sound duration, rather than the
over-estimation of ramped sound duration. As previous literature most commonly employs exponential amplitude modulators, in the present experiment, we investigate whether altering the nature of this amplitude decay- or
rise-time modulator (linear or exponential) would influence this typically
observed perceptual asymmetry. Participants performed an adaptive, 2AFC
task that assessed the point of subjective equality (PSE) between a standard
tone with a constant ramped/damped envelope, and a comparator tone with
a “flat,” steady-state envelope whose duration varied according to a 1-up, 1-down rule. Preliminary results replicated previous findings that ramped
sounds are perceived as longer than their time-reversed, damped counterparts. However, for sounds with a linear amplitude modulator, this perceptual asymmetry is partially accounted for by ramped tone over-estimation,
contrasting previous findings in the literature conducted with exponential
amplitude modulators.
al., 1998; Parbery-Clark, 2009). Among the auditory skills in musicians that
have been studied are gap detection measures of temporal acuity (Mishra &
Panda, 2014; Payne, 2012). These studies typically have compared the gap
detection thresholds of musicians and non-musicians. The present work
relates gap detection performance to musical aptitude rather than to reported
musical training history. In addition, in the present study, gap detection was
measured under two different stimulus conditions: the within-channel (WC)
condition (in which the sound that precedes the gap is spectrally identical to
the sound following the gap) and the across-channel (AC) condition (in
which the pre- and post-gap sounds are spectrally different). Results indicate
a significant correlation between across-channel gap detection thresholds
and musical aptitude and no correlation between within-channel performance and musical aptitude. These results have important implications for
temporal acuity as it relates to musical aptitude.
5aPPa8. Modeling response times to analyze perceptual interactions in
complex non-speech perception. Noah H. Silbert (Commun. Sci. & Disord., Univ. of Cincinnati, 3202 Eden Ave., 344 French East Bldg., Cincinnati, OH 45267, noah.silbert@uc.edu) and Joseph W. Houpt (Psych.,
Wright State Univ., Dayton, OH)
General recognition theory (GRT) provides a powerful framework for
modeling interactions between perceptual dimensions in identification-confusion data. The linear ballistic accumulator (LBA) model provides powerful methods for analyzing multi-choice (2+) response time (RT) data as a
function of evidence accumulation and response thresholds. We extend
(static) GRT to the domain of RTs by fitting LBA models to RTs collected
in two auditory GRT experiments. Although the mapping between the constructs of GRT (e.g., perceptual separability, perceptual independence) and
the components of the LBA (e.g., drift rates, response thresholds) is complex, the dimensional interactions defined in GRT can be indirectly
addressed in the LBA framework by testing for invariance of LBA parameters across appropriate subsets of the data. The present work focuses on correspondences between (invariance of) parameters in LBA and perceptual
separability and independence in GRT.
5aPPa9. The effect of experience on environmental sound identification.
Rachel E. Bash, Brandon J. Cash, and Jeremy Loebach (Psych., St. Olaf
College, 1520 St. Olaf Ave., Northfield, MN 55057, bash@stolaf.edu)
The perception of environmental stimuli was compared across normal
hearing (NH) listeners exposed to an eight-channel sinewave vocoder and
experienced bilateral, unilateral, and bimodal cochlear implant (CI) users.
Three groups of NH listeners underwent no training (control), one day of
training with environmental stimuli (exposure), or four days of training with
a variety of speech and environmental stimuli (experimental). A significant
effect of training was observed. The experimental group performed significantly better than the exposure and control groups, on par with bilateral CI users, but
worse than bimodal users. Participants were divided into low-, medium-, and
high-performing groups using a two-step cluster algorithm. High-performing members were only observed for the CI and experimental conditions,
and significantly more low-performing members were observed for exposure and control conditions, demonstrating the effectiveness of training. A
detailed item-analysis revealed that the most accurately identified sounds
were often temporal in nature or contained iconic repeating patterns (e.g., a
horse galloping). Easily identified stimuli were common across all groups,
with experimental subjects identifying more short or spectrally driven stimuli, and CI users identifying more animal vocalizations. These data demonstrate that explicit training in identifying environmental stimuli improves
sound perception, and could be beneficial for new CI users.
children in response to a /da/ presented to each ear separately (right and left
ear conditions). A no-sound condition was recorded as well. Baseline neurophysiological activity was measured as the root mean square amplitude of
the 100 ms pre-stimulus period. Preliminary analysis of data from 19 children with APD and 13 controls indicated that the APD group showed significantly greater pre-stimulus amplitude than the control group in the left ear
condition, F(1, 30) = 4.415, p = 0.044, but we did not find significant group
differences in the no-sound and right ear conditions, F(1, 30) = 2.237, p =
0.15 and F(1, 30) = 0.088, p = 0.77, respectively. The results suggest that
children with APD may need a longer time period to return to a resting state
than control children when the left ear is stimulated. Hence, these results
may indicate asymmetrical neural activities of the auditory pathways in
APD.
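The baseline measure described above, the root-mean-square amplitude of the 100-ms pre-stimulus window, can be sketched as follows; the sampling rate and amplitude values are hypothetical, not taken from the abstract:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a window of samples."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

# 100-ms pre-stimulus window at an assumed 1-kHz EEG sampling rate
pre_stimulus = [0.5, -0.5] * 50  # hypothetical amplitudes (microvolts)
baseline = rms(pre_stimulus)     # 0.5 for this alternating toy window
```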
5aPPa7. Relationship between gap detection thresholds and performance on the Advanced Measures of Music Audiation Test. Matthew
Hoch (Music, Auburn Univ., Auburn, AL), Judith Blumsack, and Lindsey
Soles (CMDS, Auburn Univ., 1199 Haley Ctr., Auburn, AL 36849-5232,
blumsjt@auburn.edu)
Considerable neurophysiological, neural imaging, and behavioral
research indicates that auditory processing in musicians differs from that of
non-musicians (e.g., Musacchia et al., 2007; Ohnishi et al., 2001; Pantev et
5aPPa10. The U.S. National Hearing Test, a 2013–2014 progress report.
Charles S. Watson (Res., Commun. Disord. Technol., Inc., CDT, Inc., 3100
John Hinkle Pl, Bloomington, IN 47408, watson@indiana.edu), Gary R.
Kidd (Speech and Hearing, Indiana Univ., Bloomington, IN), James D.
Miller (Res., Commun. Disord. Technol., Inc., Bloomington, IN), Jill E. Preminger (Surgery, Univ. of Louisville, Louisville, KY), Alex Crowley, and
Daniel P. Maki (Res., Commun. Disord. Technol., Inc., Bloomington,
IN)
A telephone-administered screening test for sensorineural hearing loss
was made publicly available in the United States in September 2013. This
test is similar to the digits-in-noise test developed by Smits and colleagues
in the Netherlands, versions of which are now in use in most European
countries and in Australia. The test was initially offered in the United States
for a small fee ($8, then $4), but after a year of promotion it became clear
that either the fee or the complexity of paying it was discouraging use. During the
first month in which the test was subsequently offered free of charge,
31,806 calls were made to the test line, of which 26,507 were completed
tests. Analyses of test performance suggest that about 81% of the test takers
had at least a mild hearing loss, and 40% had a substantial loss (estimated to
be in excess of 45 dB PTA). Follow-up studies are being conducted to determine whether those who failed the test sought a full-scale hearing assessment, and whether those advised to obtain hearing aids did so. [Work
funded by Grant No. 5R44DC009719 from the National Institute on Deafness and Other Communication Disorders.]
FRIDAY MORNING, 31 OCTOBER 2014
MARRIOTT 1/2, 10:15 A.M. TO 12:15 P.M.
Session 5aPPb
Psychological and Physiological Acoustics: Perceptual and Physiological Mechanisms, Modeling, and
Assessment
Anna C. Diedesch, Chair
Hearing & Speech Sciences, Vanderbilt University, Nashville, TN 37209
Contributed Papers
10:15
5aPPb1. Modest, reliable spectral peaks in preceding sounds influence
vowel perception. Christian Stilp and Paul Anderson (Dept. of Psychol. and
Brain Sci., Univ. of Louisville, 308 Life Sci. Bldg., Louisville, KY 40292,
christian.stilp@louisville.edu)
Sensory systems excel at extracting predictable signal properties in order
to be optimally sensitive to unpredictable, more informative properties.
Studies of auditory perceptual calibration (Kiefte & Kluender, 2008 JASA;
Alexander & Kluender, 2010 JASA) showed that when precursor sounds
were filtered to emphasize frequencies matching the second formant (F2) of
the subsequent target vowel, vowel perception decreased its reliance on F2
(predictable cue) and increased reliance on spectral tilt (unpredictable cue).
Perceptual calibration occurred when reliable spectral peaks were 20 dB or
larger, but findings in profile analysis and spectral contrast detection predict
sensitivity to more modest spectral peaks. The present experiments tested
identification of vowels varying in F2 (1000–2200 Hz) and spectral tilt (-12 to 0 dB/octave), perceptually varying from /u/ to /i/. Listeners first identified
vowels in isolation, then following a sentence filtered to add a reliable +2 to
+15 dB spectral peak centered at F2 of the target vowel. Changes in perceptual weights (standardized logistic regression coefficients) across sessions
were indices of perceptual calibration. Vowel identification weighted F2 significantly less when reliable peaks were at least +5 dB, but increases in
spectral tilt weights were very modest. Results demonstrate high sensitivity
to predictable acoustic properties in the sensory environment.
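The perceptual weights referred to above are standardized logistic regression coefficients relating each acoustic cue to the listener's vowel response. A minimal sketch, fitting such weights by plain gradient descent on synthetic (hypothetical) data in which an F2-like cue dominates a tilt-like cue:

```python
import math, random

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Gradient-descent logistic regression; the fitted coefficients act as
    perceptual weights for each cue."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            z = max(-30.0, min(30.0, z))  # clamp to keep exp() in range
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Synthetic trials: cue 0 (standardized F2) drives /i/ responses more
# strongly than cue 1 (spectral tilt).
random.seed(0)
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y = [1 if 2.0 * f2 + 0.5 * tilt > 0 else 0 for f2, tilt in X]
w, b = fit_logistic(X, y)  # expect w[0] > w[1] > 0: F2 carries more weight
```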
10:30
5aPPb2. Testing the contribution of spectral cues on pitch strength
judgments in normal-hearing listeners. William Shofner (Speech and
Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN 47405,
wshofner@indiana.edu) and Marisa Marsteller (Speech, Lang. and Hearing
Sci., Univ. of Arizona, Tucson, AZ)
When a wideband harmonic tone complex (wHTC) is passed through a
noise vocoder, the resulting sounds can have harmonic structures with large
peak-to-valley ratios in the spectra, but little or no periodicity strength in the
autocorrelation functions. Noise-vocoded wHTCs evoke simultaneous noise
percepts and pitch percepts similar to those evoked by iterated rippled
noises. We have previously shown that spectral cues do not appear to control behavioral responses of chinchillas to noise-vocoded wHTCs in a stimulus generalization task, but do appear to contribute to pitch strength
judgments in normal-hearing listeners for noise-vocoded wHTCs relative to
non-vocoded wHTCs. To further test the role of spectral cues, normal-hearing listeners judged the pitch strengths of noise-vocoded wHTCs relative to
infinitely-iterated rippled noise (IIRN). Stimuli had harmonic structures
with a fixed fundamental frequency of 500 Hz and were presented monaurally at 50 dB SL. Listeners’ judgments of pitch strength evoked by vocoded
wHTCs were generally consistent with peak-to-valley ratios of the stimuli.
In order to reduce spectral cues and resolvability, stimuli were high-pass
filtered. Pitch strength judgments of vocoded wHTCs were reduced following high-pass filtering. These findings suggest that spectral cues do contribute to pitch perception in human listeners.
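The periodicity strength contrasted with spectral peak-to-valley ratio above is commonly quantified as the height of the normalized autocorrelation at the lag of one period; the sketch below uses that particular definition, which is an assumption rather than a detail from the abstract:

```python
import math

def periodicity_strength(x, fs, f0):
    """Normalized autocorrelation at the one-period lag (1/f0): near 1 for
    periodic signals, near 0 for noise."""
    lag = round(fs / f0)
    num = sum(x[i] * x[i + lag] for i in range(len(x) - lag))
    den = sum(v * v for v in x)
    return num / den

fs, f0 = 44100, 500.0  # 500-Hz fundamental, as in the stimuli above
tone = [math.sin(2 * math.pi * f0 * k / fs) for k in range(4410)]
strength = periodicity_strength(tone, fs, f0)  # close to 1 for a pure tone
```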
10:45
5aPPb3. The role of onsets and envelope fluctuations in binaural cue
use. G. Christopher Stecker and Anna C. Diedesch (Hearing and Speech
Sci., Vanderbilt Univ. Medical Ctr., 1215 21st Ave. South, Rm. 8310, Nashville, TN 37232-8242, g.christopher.stecker@vanderbilt.edu)
Effective localization of real sound sources requires neural mechanisms
to accurately extract and represent binaural cues, including interaural time
and level differences (ITD and ILD) in the sound arriving at the ears. Many
studies have explored the relative effectiveness of these cues, and how that
effectiveness varies with the acoustical features of a sound such as spectral
frequency and modulation characteristics. In particular, several classic and
recent studies have demonstrated relatively greater sensitivity to ITD and
ILD present at sound onsets and other positive-going fluctuations of the
sound envelope. The results of those studies have clear implications for how
spatial cues are extracted from naturally fluctuating sounds such as human
speech, and how that process is altered by echoes, reverberation, and competing sources in real auditory scenes. Here, we review the results of several
recent studies to summarize and critique the evidence for envelope-triggered
11:00
5aPPb4. Loudness of a multi-tonal sound field, consisting of either one
two-component complex sound source or two simultaneous spatially distributed sound sources. Michaël Vannier and Étienne Parizet (Génie
Mécanique Conception, INSA-Lyon, Laboratoire Vibrations Acoustique,
13, Pl. Jean Macé, Lyon 69007, France, michael.vannier@insa-lyon.fr)
The aim of the present study is to provide new elements about the perceived loudness of stationary complex sound fields and test the validity of
current models under such conditions. The first part consisted of testing the
hypothesis according to which the directional loudness of a multi-component
sound source could be fully explained by the directional loudness of each of
its single components. In this way, the directional loudness sensitivities of a
two-component complex sound source (third-octave noise bands centered at
1 kHz and 5 kHz) have been measured in the horizontal plane. Despite a
previous equalization in loudness of each component to a frontal reference,
a small effect of the azimuth angle on loudness still remained, partly disproving the assumption. In a second part, the influence of the spatial distribution of two sound sources on the global loudness was investigated (with
the same two narrow-band noises). No effect has been found by Song
(2007) for small incidence angles (10° and 30°). The present experiment
extends this result for wide incidence angles and so, under highly dichotic
listening situations. Finally, all the subjective data have been compared with
the predictions from different models of loudness, and the results will be
discussed.
11:15
5aPPb5. Computing interaural differences using idealized head models.
Tingli Cai, Brad Rakerd, and William Hartmann (Phys. Astronomy, Michigan State Univ., 567 Wilson Rd., East Lansing, MI 48824, hartman2@msu.
edu)
The spherical model of the human head, attributable to Lord Rayleigh,
accounts for important features of observed interaural time differences
(ITD) and interaural level differences (ILD), but it also fails to capture
many details. To gain an intuitive understanding of the failures, we computed ITDs and ILDs for a succession of idealized shapes approximating the
human head: sphere, ellipsoid, ellipsoid plus cylindrical neck, ellipsoid plus
cylindrical neck plus disk torso. Calculations were done as a function of frequency (100–2500 Hz) and for source azimuths from 10 to 90 degrees using
finite-element models. The computations were compared to free-field measurements on a KEMAR manikin. The spherical head model approximated
many measured interaural features, but the frequency dependence tended to
be too flat in both ITD and ILD. The ellipsoidal head produced greater variation with frequency and therefore agreed better with the measurements,
reducing the RMS discrepancies in both ITD and ILD by 35%. Adding a
neck further increased the frequency variation. Adding the disk torso further
improved the agreement, especially below 1000 Hz, decreasing the ITD discrepancy by another 21%. The evolution of models enabled us to associate
details of interaural differences with overall anatomical features. [Work supported by the AFOSR grant 11NL002.]
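For reference, the spherical-head model discussed above admits a classic closed-form ITD approximation (Woodworth's formula, valid in the high-frequency limit); the study's finite-element computations are far more detailed, and the head radius and sound speed below are assumed values:

```python
import math

def woodworth_itd(azimuth_deg, radius=0.0875, c=343.0):
    """Woodworth ITD for a rigid sphere: (a/c) * (theta + sin(theta)),
    with head radius a (m), sound speed c (m/s), azimuth theta (rad)."""
    th = math.radians(azimuth_deg)
    return (radius / c) * (th + math.sin(th))

itd_90 = woodworth_itd(90.0)  # about 656 microseconds at 90 degrees
```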
11:30
5aPPb6. Acoustic reflex attenuation in phon loudness measurements. Julius L. Goldstein (Hearing Emulations LLC, Ariel Premium, 8825 Page Ave., Saint Louis, MO 63114-6105, goldstein.jl@
sbcglobal.net)
listeners as equally loud as a 1 kHz tone at L dB SPL. Loudness is defined
relatively as L phons (Fletcher & Munson, 1933). ELC measurements by
Lydolf and Møller (1997) included in the current ISO standard (Suzuki &
Takeshima, JASA vol. 116, 2004), show systematic increases in ELC
growth rate with loudness above 60 phons and below 1 kHz, which suggests
middle-ear attenuation by the acoustic reflex (AR). A steady-state ELC
model was assembled including known mechanisms: (1) middle-ear transmission modified by a head-related transfer function, (2) compressive cochlear amplification (CA) for signaling loudness, (3) a negative feedback
model for AR attenuation by CA inputs exceeding AR threshold, and (4)
attenuation of pressure-field stimuli by trans-eardrum static pressure. Model
parameters were calculated from ELC data using minimum-square-error
estimation. AR attenuation below 1 kHz depends on AR attenuation at the 1
kHz loudness reference frequency, but predicted ELCs are relatively insensitive to it. An earlier psychophysical study of AR attenuation, including 1
kHz, is consistent with subject-dependent model predictions (Rabinowitz &
Goldstein, JASA vol. 54, 1973; Rabinowitz, 1977). [NIH-Funded.]
11:45
5aPPb7. Effects of tinnitus and hearing loss on functional brain networks involved in auditory and visual short-term memory. Fatima T.
Husain, Kwaku Akrofi (Speech and Hearing Sci., Univ. of Illinois at
Urbana-Champaign, 901 S. Sixth St., Champaign, IL 61820, husainf@illinois.edu), and Jake Carpenter-Thompson (Neurosci., Univ. of Illinois at
Urbana-Champaign, Champaign, IL)
Brain imaging data were acquired from three subject groups—persons
with hearing loss and tinnitus (TIN), individuals with similar hearing loss
without tinnitus (HL) and those with normal hearing without tinnitus
(NH)—to test the hypothesis that TIN and control subjects use different
functional brain networks for short-term memory. Previous studies have
provided evidence of a link between hearing disorders such as tinnitus and
the reorganization of auditory and extra-auditory functional networks.
Greater knowledge of this reorganization could lead to the development of
more effective therapies. Data analysis was conducted on fMRI data
obtained while subjects performed short-term memory tasks with low or
high attentional loads, using both auditory and visual stimuli in separate
scanning sessions. Auditory stimuli were pure tones with frequencies
between 500 and 1000 Hz. Visual stimuli were Korean fonts, unfamiliar to
the subjects. We found similar behavioral response across the three groups
for both modalities and tasks. However, the groups differed in their brain
response, with these differences being more marked for the auditory tasks
than for the tasks involving visual stimuli.
12:00
5aPPb8. Preliminary results of a two-interval forced-choice method for
assessing infant hearing sensitivity. Lynne Werner (Speech & Hearing
Sci., Univ. of Washington, 1417 North East 42nd St., Seattle, WA 98105-6246, lawerner@u.washington.edu)
Current methods for assessing infants’ hearing are yes-no, single interval
procedures. Although bias-free statistics can be used to describe the results
of such procedures, with the limited number of trials typically available
from an individual infant, use of these statistics can be problematic. A two-interval forced-choice method based on infants’ anticipatory eye movements
toward an interesting visual event is currently under development. Preliminary results indicate that a high proportion of both 3- and 7-month-old
infants achieve over 80% correct in the detection of a 70 dB SPL 1000 Hz
tone presented through an insert earphone. Infants continue to perform better than expected by chance at levels as low as 25 dB SPL. Thus, a test
method based on infant eye movements holds potential as an efficient behavioral method for assessing infant hearing.
Equal Loudness-level Contours, ELC(f, L), represent the sound pressure
level in dB SPL of tones at frequency f that are perceived by normal-hearing
extraction of ITD and ILD across a wide range of spectral frequencies. A
number of competing models for cue extraction in fluctuating envelopes are
also considered in light of this evidence. [Work supported by NIH R01DC011548.]
FRIDAY MORNING, 31 OCTOBER 2014
MARRIOTT 5, 8:00 A.M. TO 12:00 NOON
Session 5aSC
Speech Communication: Speech Perception and Production in Challenging Conditions (Poster Session)
Alexander L. Francis, Chair
Purdue University, SLHS, Heavilon Hall, 500 Oval Dr., West Lafayette, IN 47907
All posters will be on display from 8:00 a.m. to 12:00 noon. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 8:00 a.m. to 10:00 a.m. and authors of even-numbered papers will be at their posters
from 10:00 a.m. to 12:00 noon.
Contributed Papers
5aSC1. A new dual-task paradigm to assess cognitive resources utilized
during speech recognition. Andrea R. Plotkowski and Joshua M.
Alexander (Speech, Lang., and Hearing Sci., Purdue Univ., 500 Oval Dr.,
West Lafayette, IN 47906, mitche99@purdue.edu)
Listening to ongoing conversations in challenging situations requires
explicit use of cognitive resources to decode and process spoken messages.
Traditional speech recognition tests are insensitive measures of this cognitive effort, which may differ greatly between individuals or listening conditions. Furthermore, most dual-task paradigms that have been devised for
this purpose generally rely on secondary tasks like reaction time and recall
that do not reflect real-world listening demands. A new task was designed to
capture changes in both speech recognition and verbal processing across different conditions. Listeners heard two sequential sentences spoken by opposite-gender talkers in speech-shaped noise. The primary task was a
traditional speech recognition test, in which listeners immediately repeated
aloud the second sentence in the pair. The secondary task was designed to
engage explicit cognitive processes by requiring listeners to write down the
first sentence after holding it in memory while listening to and repeating
back the second sentence. Test sentences consisted of lists from the
PRESTO test (Gilbert et al. 2013, J. Am. Acad. Audiol. vol. 24, pp. 26–36)
that were carefully modified to help ensure list-equivalency. Psychometric
results from the revised PRESTO sentence lists and from the new dual-sentence task will be reported.
5aSC2. Vocal effort, coordination, and balance. Robert A. Fuhrman (Linguist, U Br. Columbia, 2613 West Mall, Vancouver, BC, Canada, robert.a.
fuhrman@gmail.com), Adriano Barbosa (Electron. Eng., Federal Univ. of
Minas Gerais, Belo Horizonte, Brazil), and Eric Vatikiotis-Bateson (Linguist, U Br. Columbia, Vancouver, BC, Canada)
Manipulating speaking and discourse requirements allows us to assess
the time-varying correspondences between various subsystems within a
talker at different levels of vocal effort. These subsystems include fundamental frequency (F0) and acoustic amplitude, rigid body (6D) motion of
the head, motion (2D) of the body, and postural forces and torques measured
at the feet. Analysis of six speakers has confirmed our hypothesis that as
vocal effort increases, coordination among sub-systems simplifies, as shown
by greater correspondence (e.g., the instantaneous correlation) between the
various time-series measures. However, at the two highest levels of vocal
effort, elicited by having talkers shout to and yell at someone located appropriately far away, elements of the postural force, notably one or more torque
components, often show a reduction in correspondence with the other measures. We interpret this result as evidence that talkers become more rigidly
coordinated at the highest levels of vocal effort, which can interfere with
their balance. Furthermore, the discourse type—shouting at someone to
carry on a conversation vs. yelling at someone not expected to reply—can
be associated with differing amounts of imbalance.
5aSC3. The gradient effect of transitional magnitude: A source of the
vowel context effect. Sang-Im Lee-Kim (Linguist, New York Univ.,
45-35 44th St. 1i, Sunnyside, NY 11104, sangim119@gmail.com)
Previous studies have shown that vocalic transitions play an important
role in the identification of the consonantal places (e.g., Whalen 1981/1991,
Nowak 2006, Babel & McGuire 2013). While it has been intermittently
reported that the contribution of transitions may depend on vowel contexts,
the common methodology, i.e., C-V cross-splicing, is too coarse to precisely
identify the nature of this effect. In the present study, vocalic transitions are
systematically manipulated and used as a gradient variable by incrementally
removing the transitional period of the three vowels /u a e/ following the alveolopalatal sibilant /ɕ/ in Polish. In an identification task, native Polish speakers were given a choice between /ɕ/ and /ʂ/ for stimuli with varying levels
of palatal transitions. The results showed that participants’ perception is gradient: greater transitions overall elicit more palatal responses in all vowel
contexts. More importantly, it has been shown that the apparent vowel effect
can be largely reduced to the relative magnitude of transitions that are specific to each vowel. The low and back vowels elicit greater palatal transitions providing more robust transitional cues in perception, while the high
and front vowels elicit smaller or nearly zero palatal transitions providing
less robust cues to the sibilants’ place.
5aSC4. Adaptive compensation for reliable spectral characteristics of a
listening context in vowel perception. Paul Anderson and Christian Stilp
(Psychol. and Brain Sci., Univ. of Louisville, 2301 S 3rd St., Louisville,
KY, paul.anderson@louisville.edu)
When precursor sounds are filtered to emphasize frequencies matching
F2 of a subsequent target vowel, vowel perception decreases reliance on F2
(predictable cue) and increases reliance on spectral tilt (unpredictable cue)
and vice versa. Previously, initial cue weights and weight changes (i.e., perceptual calibration to reliable signal properties) were larger for F2 than tilt,
obscuring whether the magnitude of calibration reflects cue predictability or
F2’s status as a primary cue to vowel identity. Here, vowels varied from /u/
to /i/ in tilt (-12 to 0 dB/octave) and the full range of F2 values (1000–2200
Hz) or a reduced range (1300–1900 Hz) designed to decrease F2 cue
weights, making tilt the primary cue for vowel identification. Vowels were
presented in isolation, then following sentences filtered to match the target
vowel’s F2 or tilt. In isolation, cue weights for F2 were higher when identifying full-F2-range vowels and higher for tilt when identifying reduced-F2-range vowels. Weight changes (calibration) were comparable when the primary cue was predictable; this was also true for predictable secondary cues
(tilt for full-F2-range vowels, F2 for reduced-F2-range vowels). Perceptual
calibration to reliable signal properties is an adaptive process reflecting cue
predictability, not solely a priori cue use (e.g., F2 over tilt).
5aSC5. An approach to the analysis of relations between syllable and
sentence perception in quiet and noise in the Speech Perception Assessment and Training System: Preliminary results for ten hearing-aid
users. James D. Miller (Res., Commun. Disord. Technol., Inc., 3100 John
Hinkle Pl, Ste 107, Bloomington, IN 47408, jamdmill@indiana.edu)
5aSC8. Perceptual versus cognitive speed in a time-compressed speech
task. Michelle R. Molis, Frederick J. Gallun, and Nirmal Srinivasan
(National Ctr. for Rehabilitative Auditory Res., Portland VA Medical Ctr.,
3710 SW US Veterans Hospital Rd., Portland, OR 97239, michelle.molis@
va.gov)
Logistic functions relating abilities to identify syllable onsets, nuclei,
and codas in quiet and noise as a function of SNR are measured. Syllable perception is the product of these individual abilities. It is found that syllable
perception in noise is highly correlated with syllable perception in quiet. The
relation of sentence perception in the SPATS sentence task to SPATS syllable constituent perception is examined. As shown years ago at Bell
Labs, only modest levels of syllable identification are needed to support
nearly perfect levels of sentence perception. Here, it is found that sentence
perception in quiet and noise is correlated with syllable perception in quiet,
the use of inherent context provided by syllable perception (Boothroyd and
Nittrouer, 1988), and with the use of situational context, independent of
syllable perception. Finally, the effects of speech perception training on
these relations are examined for each of the ten hearing-aid users studied.
[Work supported by NIH/NIDCD Grant R21/R33DC011174 “Multi-site
Study of the Efficacy of Speech Perception Training for Hearing-Aid
Users,” C. S. Watson, PI. Data supplied by cooperating sites: Medical University of South Carolina, J. Dubno, Site PI; University of Memphis, D.
Wark, Site PI; and University of Maryland, S. Gordon-Salant, Site PI.]
Time-compression retains the information-bearing spectral change present uncompressed speech, although at a rate that may outstrip cognitive
processing speed. To compare the relative importance of perceptual and
cognitive processing speed, we compared the understanding of (1) timecompressed stimuli expanded in time via gaps with (2) uncompressed stimuli where spectral change information was removed. We hypothesized that,
despite the initial compression, the compressed and expanded stimuli would
be more intelligible as it would retain relatively more information-bearing
spectral change. Participants were somewhat older listeners (mid-1950s to
mid-1960s) with normal hearing or mild hearing loss. Stimuli were spoken
seven-digit strings time-compressed via pitch synchronous overlap and add
(PSOLA) at three uniform compression ratios (2:1, 3:1, and 5:1). In gap
insertion conditions, the total duration of the compressed stimuli was
restored via introduction of periodic gaps. This produced signal-to-gap
ratios of 1:1, 1:2, and 1:4. For comparison, segments of unaccelerated
strings, equal to the duration of the inserted gaps, were zeroed out resulting
in the same signal-to-gap ratios. Listeners identified the final four digits of
the strings presented in quiet and in a steady-state, speech-shaped background noise (SNR + 5). Our hypothesis was supported for the fastest compression rates. [Work supported by VA RR&D.]
Signal processing schemes used in hearing aids, such as nonlinear frequency compression (NFC) recode speech information by moving high-frequency information to lower frequency regions. Perceptual studies have
shown that depending on the dominant speech sound, compression occurs
and the amount of compression can have a significant effect on perception.
Very little is understood about how frequency-lowered information is
encoded by the auditory periphery. We have developed a measure that is
sensitive to information in the altered speech signal in an attempt to predict
optimal hearing aid settings for individual hearing losses. The NeuralScaled Entropy (NSE) model examines the effects of frequency-lowered
speech at the level of the inner hair cell synapse of an auditory nerve model
[Zilany et al. 2013, Assoc. Res. Otolaryngol.]. NSE quantifies the information available in speech by the degree to which the pattern of neural firing
across frequency changes relative to its past history (entropy). Nonsense syllables with different NFC parameters were processed in noise. Results are
compared with perceptual data across the NFC parameters as well as across
different vowel-defining parameters, consonant features, and talker gender.
NSE successfully captured the overall effects of varying NFC parameters
across the different sound classes.
5aSC7. Tempo-based segregation of spoken sentences. Gary R. Kidd and
Larry E. Humes (Speech and Hearing Sci., Indiana Univ., 200 S. Jordan
Ave., Bloomington, IN 47405, kidd@indiana.edu)
The ability to make use of differences in speech rhythms to selectively
attend to a single spoken message in a multi-talker background was examined
in a series of studies. Sentences from the coordinate response measure corpus
provided a set of stimuli with a common rhythmic framework spoken by several talkers at similar speaking rates. Subjects were asked to identify two key
words spoken in a “target” sentence identified by a word (call sign) near the
beginning of the sentence. The target talker was always in the same male voice
and either two or six background talkers were presented in different voices
(half male and half female). The rate of the background talkers was manipulated to create natural sounding speech that preserved the original pitch and
speech rhythms at faster and slower speaking rates. Unaltered target sentences
were presented in the presence of faster, unaltered, or slower competing sentences. Performance was poorest with matching target and background tempos,
with substantial increases in performance as the tempo differences increased.
Modification of the target-sentence rate confirmed that the effect is due to the
relative timing of target and background speech, rather than the properties of
rate-modified background speech. [Work supported by NIH-NIA.]
2311
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
5aSC9. Information-bearing acoustic changes are important for understanding vocoded speech in a simulation of cochlear implant processing
strategies. Christian Stilp (Dept. of Psychol. and Brain Sci., Univ. of Louisville, 308 Life Sci. Bldg., Louisville, KY 40292, christian.stilp@louisville.
edu)
Information-bearing acoustic changes (IBACs) in the speech signal are
important for understanding speech. This was demonstrated with cochleascaled entropy for cochlear implants (CSECI), which measures perceptually
significant intervals of noise-vocoded speech (Stilp et al., 2013 JASA; Stilp,
2014 JASA; Stilp & Goupell, 2014 ASA). However, vocoding does not necessarily mimic CI processing. Some CI processing strategies present acoustic information in all channels at all times (e.g., CIS) while others present
only the n-highest-amplitude channels out of m at any time (e.g., ACE).
Here, IBACs were explored in a simulation of ACE processing. Sentences
were divided into 22 channels spanning 188–7938 Hz and noise-vocoded. In
each 1-ms interval (simulating 1000 pulses/second stimulation rate), only
the eight highest-amplitude channels were retained. CSECI was calculated
between 1-ms or 16-ms sentence segments, then summed into 80-ms intervals. High-CSECI or low-CSECI intervals were replaced by speech-shaped
noise. Consistent with previous studies, replacing high-CSECI intervals
impaired sentence intelligibility more than replacing an equal number of
low-CSECI intervals. Importantly, performance was comparable when 1- or
16-ms IBACs were replaced by noise. Results reveal the perceptual importance of IBACs on rapid timescales after simulated ACE processing, indicating this information is likely available to CI users for understanding
speech.
5aSC10. Talker intelligibility across clear and sinewave vocoded speech.
Jeremy Loebach, Gina Scharenbroch, and Katelyn Berg (Psych., St. Olaf
College, 1520 St. Olaf Ave., Northfield, MN 55057, loebach@stolaf.edu)
Talker intelligibility was compared across clear and sinewave vocoded
speech. Ten talkers (5 female) from the Midwest and Western dialect
regions recorded samples of 210 meaningful IEEE sentences, 206 semantically anomalous sentences, and 300 MRT words. Ninety-three normal hearing participants provided open set transcriptions of the materials presented
in the clear over headphones. Forty-one different normal hearing participants provided open set transcriptions of the materials processed with an
eight-channel sinewave vocoder. Transcription accuracy was highest for
clear speech compared to vocoded speech, and for meaningful sentences,
followed by anomalous sentences and words for both conditions. Weak
talker effects were observed for the meaningful sentences in the clear (ranging from 97.7% to 98.2%), but were more pronounced for vocoded versions
168th Meeting: Acoustical Society of America
2311
5a FRI. AM
5aSC6. Neural-scaled entropy predicts the effects of nonlinear frequency compression on speech perception. Varsha Hariram and Joshua
Alexander (Speech Lang. and Hearing Sci., Purdue Univ., 500 Oval Dr.,
West Lafayette, IN 47907, vhariram@purdue.edu)
(68.5% to 85.5%). Weak talker effects were observed for semantically
anomalous sentences in the clear (89.4%-93.3%), but more variability was
observed across talkers in the vocoded condition (54.4%–73.7%). Finally,
stronger talker effects were observed for clear and vocoded MRT words
(83.8%–95.6%, 46.3%–59.0%, respectively). Talker rankings differed
across stimulus conditions, as well as across processing conditions, but significant positive correlations between conditions were observed for meaningful and anomalous sentences, but not MRT words. Acoustic and dialect
influences on intelligibility will be discussed.
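The gap-insertion manipulation described in 5aSC8 is simple to state in code. The sketch below (plain Python) shows how restoring the duration of an r:1 time-compressed signal with periodic zeroed gaps yields the stated 1:(r - 1) signal-to-gap ratios; the segment length and the list-of-floats representation are illustrative assumptions, and real stimuli would be compressed with PSOLA rather than taken as given.

```python
def insert_gaps(compressed, ratio, segment_len=80):
    """Expand a time-compressed signal back to roughly its original
    duration by inserting a zeroed gap after each block of samples.
    For an r:1 compression ratio this gives a signal-to-gap ratio of
    1:(r - 1), matching the 2:1, 3:1, and 5:1 conditions (ratios of
    1:1, 1:2, and 1:4). `segment_len` (samples) is an illustrative
    choice, not a value from the abstract."""
    gap = [0.0] * (segment_len * (ratio - 1))
    out = []
    for start in range(0, len(compressed), segment_len):
        out.extend(compressed[start:start + segment_len])  # keep signal block
        out.extend(gap)                                    # insert silence
    return out
```

For example, a 3:1-compressed signal of 240 samples expands back to 720 samples, two-thirds of which are silence.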
5aSC11. Vowels of four-year-old children with cerebral palsy in
Mandarin-learning environment. Li-mei Chen (Foreign Lang. and Lit.,
National Cheng Kung Univ., Tainan, Taiwan), Yu Ching Lin (Physical
Medicine and Rehabilitation, National Cheng Kung Univ., Tainan, Taiwan),
Wei Chen Hsu, and Meng-Hsin Yeh (Foreign Lang. and Lit., National
Cheng Kung Univ., 1 University Rd, Tainan 701, Taiwan, myonaa@gmail.com)
Characteristics of the vowel productions of children with cerebral palsy (CP) were investigated with data from two 4-year-old children with CP and two typically developing (TD) children in a Mandarin-learning environment. Clear vowel productions from picture naming and natural conversation in three 50-minute audio recordings of each child were transcribed and analyzed. Seven parameters were examined: duration of the vowel /a/, F2 slope in the transition of the CV sequence, cumulative change of F2 for the vowel /a/, degree of nasalization in oral vowels (A1-P1), percent jitter, percent shimmer, and the signal-to-noise ratio (SNR). The major findings are: (1) the CP group showed shorter durations for the vowel /a/; (2) the TD group had a larger F2 slope in the CV transition; (3) no obvious differences were found between the TD and CP groups in cumulative change of F2 for the vowel /a/, degree of nasalization (A1-P1), or voice perturbation (percent jitter, percent shimmer, and SNR). Further study with more participants and careful data selection can verify these findings in the search for valid parameters to characterize the vowel production of children with CP.
5aSC12. Effects of depression on speech. Saurabh Sahu and Carol Espy-Wilson (Elec. and Comput. Eng., Univ. of Maryland College Park, 8125 48 Ave., Apt 101, College Park, MD 20740, ssahu89@umd.edu)
In this paper, we are investigating the effects of depression on speech.
The motivation comes from the fact that neuro-physiological changes associated with depression affect motor coordination and can disrupt the articulatory precision in speech. We use the database collected by Mundt et al. (J.
Neurolinguist., vol. 20, no. 1, pp. 50–64, Jan. 2007), in which 35 subjects were treated over a 6-week period, and study how changes in mental state
are manifest in certain acoustic properties that correlate with the Hamilton
Depression Rating Scale (HAM-D), which is a clinical assessment score.
We look at features such as the modulation frequencies, aperiodic energy
during voiced speech, vocal fold jitter and shimmer, and other cues that are
related to articulatory precision. These measures will be discussed in detail.
5aSC13. Pitch production of a Mandarin-learning infant with cerebral
palsy. Meng-Hsin Yeh, Li-mei Chen (Foreign Lang. and Lit., National
Cheng Kung Univ., 1 University Rd., Tainan 701, Taiwan, myonaa@gmail.com), Chyi-Her Lin, Yuh-Jyh Lin, and Yung-Chieh Lin (Pediatrics,
National Cheng Kung Univ., Tainan, Taiwan)
In this study, pitch production was investigated in two Mandarin-learning infants at 6 months of age: an infant with cerebral palsy (CP) and a typically developing (TD) infant. Words with distinct tones in Mandarin differ in meaning, and producing a correct tone requires good control of the respiratory and laryngeal mechanisms. Thus, producing correct tones and reaching intelligibility is considered relatively difficult for children with CP. In previous studies, Kent and Murray (1982) pointed out that falling contours predominate in infant vocalizations at 3, 6, and 9 months. A study by Chen et al. (2013) with 4-year-old children indicated that the mean pitch duration of children with CP is 1.3–1.8 times longer than that of their TD counterparts. In adults, Jeng, Weismer, and Kent (2006) found that Mandarin pitch slopes are smaller in adults with CP than in healthy adults. Three measures were employed in the current study, and the major findings are:
(1) Both TD and CP infants produced more falling than rising pitch; (2) The
mean duration of pitch in CP is 2.3 times longer than that of TD; (3) The
pitch slope in CP is smaller than that of TD.
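The pitch-slope measure compared in 5aSC13 and the cited adult study can be illustrated with a least-squares fit over an f0 contour. This is a plausible sketch only; the abstract does not specify how slope was computed, and the function name and sampling are assumptions.

```python
def pitch_slope(times, f0_values):
    """Least-squares slope (Hz/s) of an f0 contour sampled at `times`
    (seconds): one plausible way to quantify the 'pitch slope' compared
    between the CP and TD infants. Smaller magnitude = flatter contour."""
    n = len(times)
    mt = sum(times) / n
    mf = sum(f0_values) / n
    num = sum((t - mt) * (f - mf) for t, f in zip(times, f0_values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den  # Hz per second
```

A falling contour such as 300 → 280 Hz over 200 ms yields a slope of about -100 Hz/s.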
5aSC14. Linear and non-linear acoustic voice analysis of Persian speaking Parkinson’s disease patients. Fatemeh Majdinasab (Speech Therapy,
Tehran Univ. of Medical Sci., Tehran, Iran), Maryam Mollashahi, Mansour
Vali (Medical Eng., k.N. toosi Univ. of Technol., Tehran, Iran), and Hedieh
Hashemi (Dept. of Commun. Sci. & Disord., Univ. of Cincinnati, Cincinnati, OH, hashemihedieh@yahoo.com)
Purpose: Many studies have analyzed the acoustic voice characteristics (AVC) of patients with Parkinson's disease (PDP) using linear or non-linear methods. The aim of this study is to compare linear and non-linear approaches to the acoustic voice analysis of Persian-speaking PDPs. Method: This cross-sectional, non-experimental study examined 27 PDPs (15 males, 12 females) and 21 healthy age- and sex-matched subjects (11 males, 10 females). Patients were recruited from attendees of a movement-disorders clinic using convenience sampling, and all were evaluated in the "on" medication period. The AVC consisted of average fundamental frequency (f0), standard deviation of f0, mean percent jitter, percent shimmer, and HNR in prolongations of the Persian vowels /a, e, i, o, u/. PRAAT 5.1.17 software (as the linear tool) and MATLAB (as the non-linear method) were used to evaluate the AVC. Results: There were no significant differences between PDPs and normal subjects except for jitter on /æ/ (p = 0.041) and /e/ (p = 0.021). According to the non-linear wavelet entropy coefficients, computed in MATLAB with the coif1 mother wavelet, all AVC of patients were differentiated from normal. Conclusion: Non-linear analysis appears to be the more sensitive method for discriminating dysarthric voice from normal voice. Keywords: acoustic voice analysis, Parkinson's disease, linear, nonlinear.
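For readers unfamiliar with the perturbation measures used in 5aSC14, the sketch below implements the standard "local" definitions of percent jitter and shimmer (as reported by tools such as PRAAT). It is illustrative code, not the study's implementation.

```python
def percent_jitter(periods):
    """Local jitter: mean absolute difference between consecutive
    glottal periods, as a percentage of the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def percent_shimmer(amplitudes):
    """Local shimmer: the same computation applied to cycle peak
    amplitudes instead of periods."""
    return percent_jitter(amplitudes)
```

Both measures grow as cycle-to-cycle variability grows, which is why they serve as markers of dysarthric or hoarse voice quality.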
5aSC15. Vowel development in children with Down and Williams syndromes. Ewa Jacewicz, Robert A. Fox (Dept. of Speech and Hearing Sci.,
The Ohio State Univ., 1070 Carmack Rd., 110 Pressey Hall, Columbus, OH
43210, jacewicz.1@osu.edu), Vesna Stojanovik, and Jane Setter (Dept. of
Clinical Lang. Sci., Univ. of Reading, Reading, United Kingdom)
Down syndrome (DS) and Williams syndrome (WS) are genetic disorders resulting from different types of genetic errors. While both disorders lead to phonological and speech motor deficits, relatively little is known about vowel production in DS and WS. Recent work suggests that impaired vowel articulation in DS likely contributes to the poor intelligibility of DS speech. Developmental delays in temporal vowel structure and pitch control have been found in children with WS when compared to their chronological matches. Here, we analyze spontaneous speech samples produced by British children with DS and WS and compare them with typically developing children from the same geographic area in Southern England. We focus on the
acquisition of fine-grained phonetic details, asking if children with DS and
WS are able to synchronize the phonetic and indexical domains while coping with articulatory challenges related to their respective syndromes. Phonetic details pertaining to the spectral (vowel-inherent spectral change) and
indexical (regional dialect) vowel features are examined and vowel spaces
are derived from formant values sampled at multiple temporal locations.
Variations in density patterns across the vowel space are also considered to
define the nature of the acoustic overlap in vowels related to each
syndrome.
5aSC16. Prosodic characteristics in young children with autism spectrum disorder. Laura Dilley, Sara Cook, Ida Stockman, and Brooke Ingersoll (Michigan State Univ., Dept. of Communicative Sci., East Lansing, MI
48824, ldilley@msu.edu)
The prosody of high-functioning adults and adolescents with autism
spectrum disorder (ASD) has been reported to differ from that of typically
developing individuals. The present study investigated whether young children under eight years old with ASD differ in prosodic characteristics compared with neurotypical children matched on expressive language ability.
Seven children with ASD (38–93 months) and seven neurotypical children
(20–30 months) were recorded during naturalistic interactions with a parent.
Naïve listeners (n = 18) were recruited to rate utterances for: (i) age, (ii) percentage of intelligible words, (iii) pitch, (iv) speech rate, (v) degree of animation, and (vi) certainty of diagnosis. An acoustic analysis of speech rate and fundamental frequency (F0) was also conducted. Results of the rating task showed no statistically significant difference on any measure except estimated age. However, children in the ASD group had a significantly lower mean, maximum, and minimum F0 than children in the control group; there was no significant difference between groups for speech rate. These findings may indicate that speech characteristics alone are unlikely to be a sufficient early sign of an ASD diagnosis.

5aSC17. Speech production changes and intelligibility with a real-time cochlear implant simulator. Lily Talesnick (Neurosci., Trinity College, 300 Summit St., Hartford, CT 06106, lily.talesnick@trincoll.edu) and Elizabeth D. Casserly (Psych., Trinity College, Hartford, CT)

Subjects hearing their speech through a real-time cochlear implant (CI) simulator alter their production in multiple ways, e.g., reducing speaking rate and constricting the F1/F2 vowel space. The motivations behind these alterations, however, are currently unknown. Two possibilities are that the changes in speech are due to the influence of a direct feedback loop, in which the subject adjusts speech production to minimize acoustic "error," or that the changes reflect the indirect influence of a high cognitive load (stemming from the challenge of hearing through the real-time CI simulator). We explored these two possibilities by conducting a playback experiment in which 35 naïve listeners assessed the intelligibility of speech produced under conditions of normal versus vocoded feedback. The intelligibility of vocoded isolated-word stimuli in each condition was tested in both a two-alternative forced-choice task ("Which recording is easier to understand?") and an open-set word recognition task. Listeners found normal-feedback speech significantly more intelligible in both tasks (p's < 0.0125), suggesting that speakers were not adjusting to correct acoustic error directly but were instead responding to an intervening factor, e.g., high cognitive load. Confusion matrix analyses further illuminate the perceptual consequences of CI-simulated speech feedback.

5aSC18. Hearing and hearing-impaired children's acoustic–phonetic adaptations to an interlocutor with a hearing impairment. Sonia Granlund, Valerie Hazan (Speech, Hearing & Phonetic Sci., Univ. College London (UCL), Rm. 326, Chandler House, 2 Wakefield St., London WC1N 1PF, United Kingdom, s.granlund@ucl.ac.uk), and Merle Mahon (Developmental Sci., Univ. College London (UCL), London, United Kingdom)

In England, the majority of children with a hearing impairment attend mainstream schools. However, little is known about the communication strategies used by children when interacting with a peer with hearing loss. This study examined how children with normal hearing (NH) and those with a hearing impairment (HI) adapt to the needs of a HI interlocutor, focusing on the acoustic–phonetic properties of their speech. Eighteen NH and 18 HI children between the ages of 9 and 15 years performed two problem-solving communicative tasks in pairs: one session was completed with a friend with normal hearing (NH-directed speech) and one with a friend with a hearing impairment (HI-directed speech). As expected, task difficulty increased in interactions involving a HI interlocutor. HI speakers had a slower speech rate, higher speech intensity, and greater F0 range than NH speakers. However, both HI and NH participants decreased their speech rate and increased their F0 range, mean F0, and speech intensity in HI-directed speech compared to NH-directed speech. This suggests that both NH and HI children are able to adapt to the needs of their interlocutor, even though speech production is more effortful for HI children than for their NH peers.

5aSC19. Objective speech intelligibility prediction in sensorineural hearing loss using acoustic simulations and perceptual speech quality measures. Emma Chiaramello, Stefano Moriconi, and Gabriella Tognola (Inst. of Electronics, Computers and TeleCommun. Eng., CNR Italian National Res. Council, Piazza Leonardo Da Vinci 32, Milan 20133, Italy, gabriella.tognola@ieiit.cnr.it)

A novel approach to objectively predicting speech intelligibility in sensorineural hearing loss, using acoustic simulations of impaired perception and objective measures of perceptual speech quality (PESQ), is proposed and validated. Acoustic simulations of impaired perception with different types and degrees of hearing loss were obtained by degrading the original speech waveforms through spectral smearing, expansive nonlinearity, and level scaling. The CUNY NST syllables were used as test material. PESQ was used to measure the perceptual quality of the acoustic simulations thus obtained. Finally, PESQ scores were transformed into predicted intelligibility scores using a logistic function. The proposed objective method was validated by comparing predicted intelligibility scores with subjective measures of intelligibility of the degraded speech in a group of ten subjects. Predicted intelligibility scores showed good correlation (R² = 0.7) with subjective intelligibility scores and a low prediction error (RMSE = 0.14). The proposed approach could be a valuable aid in clinical applications where speech intelligibility must be measured, and might help avoid time-consuming experimental measurements. In particular, this method might be valuable in characterizing the sensitivity of new speech tests for the screening and diagnosis of hearing loss, or in assessing the performance of novel speech-enhancement algorithms for a target hearing impairment.

5aSC20. Identification of dialect cues by dyslexic and non-dyslexic listeners. Robert A. Fox (Speech and Hearing Sci., The Ohio State Univ., 110 Pressey Hall, 1070 Carmack Rd., Columbus, OH 43210-1002, fox.2@osu.edu), Gayle Long, and Ewa Jacewicz (Speech and Hearing Sci., The Ohio State Univ., Columbus, OH)

Spoken language encodes two different forms of information: linguistic (related to the message) and indexical (e.g., speaker's age, gender, and regional dialect). However, some speech-language impairments (such as dyslexia) can reduce a listener's ability to process both linguistic and indexical speech cues. For example, Perrachione et al. (Science, 333, 2011) demonstrated that individuals with dyslexia were less able to identify new voices than were control listeners. This study examines the ability of listeners with and without dyslexia to identify speaker dialect. Eighty listeners—40 adults and 40 children (20 in each group were dyslexic, 20 were not; 40 were male and 40 were female)—listened to a set of 80 sentences produced by English speakers from Western North Carolina or central Ohio and were asked to identify which region each speaker came from. Results demonstrated that adult listeners were significantly better at dialect identification and that listeners with dyslexia were significantly poorer. More notably, there was a significant age-by-listener-group interaction—the improvement in dialect identification from childhood to adulthood was significantly smaller in listeners with dyslexia. This indicates that an initial limitation in language learning can inhibit the long-term development of speaker-specific phonetic representations.

5aSC21. Individual differences in the lexical processing of phonetically reduced speech. Rory Turnbull (Linguist, The Ohio State Univ., 222 Oxley Hall, 1712 Neil Ave., Columbus, OH 43210, turnbull@ling.osu.edu)

There is widespread evidence that phonetically reduced speech is processed more slowly and effortfully than unreduced speech. However, individual differences in the degree and strategies of reduction, and their effects on lexical access, are largely unexplored. This study explored the role of autistic traits in the production and perception of reduced pronunciation variants. Stimuli were recordings of words produced in either high-reduction (HR) or low-reduction (LR) contexts, extracted from sentences produced by talkers ranging in autism-spectrum quotient (AQ) scores. The reductions in these stimuli were generally small temporal differences, rather than segmental-level alterations such as /t/-flapping. Listeners completed a lexical decision task with these stimuli and the AQ questionnaire. Confirming previous research, the results demonstrate that response times (RTs) to reduced words were slower than to unreduced words. No other effects on RT were observed. In terms of response accuracy, LR words were responded to more accurately than HR words, but this pattern was only observed for temporally reduced words. This LR-word accuracy benefit was larger for listeners with more autistic personality traits. These results suggest that individuals differ in the extent to which unreduced speech provides a perceptual benefit.
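The final step of the prediction method in 5aSC19, mapping PESQ quality scores to predicted intelligibility through a logistic function, can be sketched as follows. The slope and offset values here are illustrative assumptions; in the study these parameters would be fitted to the subjective intelligibility data, which the abstract does not report.

```python
import math

def predicted_intelligibility(pesq_score, a=1.5, b=-4.5):
    """Map a PESQ quality score (roughly -0.5 to 4.5) to a predicted
    intelligibility proportion in (0, 1) via a logistic function.
    `a` (slope) and `b` (offset) are illustrative, not fitted values."""
    return 1.0 / (1.0 + math.exp(-(a * pesq_score + b)))
```

The logistic form guarantees predictions bounded between 0 and 1 that increase monotonically with perceptual quality, which is what allows a quality metric to stand in for an intelligibility test.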
5aSC22. Change of static characteristics of Japanese word utterances
with aging. Mitsunori Mizumachi and Kazuto Ogata (Dept. of Elec. Eng.
and Electronics, Kyushu Inst. of Technol., 1-1 Sensui-cho, Tobata-ku,
Kitakyushu, 805-8440, Japan, mizumach@ecs.kyutech.ac.jp)
The acoustical characteristics of elderly speech have been investigated from various viewpoints. Elderly speech can be subjectively characterized by roughness, breathiness, asthenia, and hoarseness, and these characteristics have been explained individually in both medical science and speech science. In particular, hoarseness, which is caused by a physiological problem with the aged vocal folds, is the best-known static property of elderly speech. Here, the change in hoarseness with aging is investigated quantitatively. A set of 543 phonetically balanced Japanese word utterances was collected with the cooperation of 153 speakers whose ages ranged from 20 to 89 years. The acoustical characteristics of the word utterances were examined from the viewpoints of age and auditory impression. In the static acoustical analysis of the Japanese vowels /a/, /e/, /i/, /o/, and /u/, it is confirmed that energy in the high-frequency region rises with aging. There is a remarkable energy lift above 4 kHz, and the amount of the lift is proportional to the degree of subjective hoarseness.
5aSC23. Effect of formant characteristics on older listeners’ dynamic
pitch perception. Jing Shen (Commun. Sci. and Disord., Northwestern
Univ., 2240 Campus Dr., Evanston, IL 60208, jing.shen@northwestern.edu), Richard Wright (Linguist, Univ. of Washington, Seattle, Washington),
and Pamela Souza (Commun. Sci. and Disord., Northwestern Univ., Evanston, IL)
Previous research suggests large inter-subject variability in dynamic pitch perception among older individuals (Souza et al., 2011). While data from younger listeners with normal hearing indicate that temporal and spectral variations in complex formant characteristics may influence dynamic pitch perception (Green et al., 2002), the present study examines this interaction in an aging population. The stimulus set includes two monophthongs
that have static formant patterns and two diphthongs that have dynamic
formant patterns. The fundamental frequency at the midpoint in time of
each vowel is kept consistent, while the ratio of start-to-end frequency
varies in equal logarithmic steps. Older adults with near-normal hearing are
tested using an identification task, in which they are required to identify the
pitch glide as either “rise” or “fall.” An experimental task of AX discrimination is also included to verify the identification data. Results to date show
inter-subject variability in dynamic pitch perception among listeners with
good static pitch perception. Better pitch-glide perception with monophthongs than with diphthongs is observed in those individuals who perform poorly overall. The findings suggest a connection between individual abilities to
perceive dynamic pitch and to extract the cues from fundamental and formant frequencies. [Work supported by NIH.]
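The stimulus construction in 5aSC23 varies the start-to-end f0 ratio of each glide in equal logarithmic steps. A sketch of that ratio ladder (the maximum ratio and number of steps are illustrative assumptions, not values from the study):

```python
def glide_ratios(max_ratio=2.0, steps_per_side=4):
    """Start-to-end f0 ratios spaced in equal logarithmic steps, running
    from a falling glide (ratio max_ratio:1) through flat (1:1) to a
    rising glide (1:max_ratio). Both parameters are illustrative."""
    n = steps_per_side
    return [max_ratio ** (i / n) for i in range(-n, n + 1)]
```

Equal log spacing ensures each step is the same musical interval, so the "rise"/"fall" identification task samples the continuum symmetrically around the flat glide.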
5aSC24. Sentence recognition in older adults. Kathleen F. Faulkner
(Dept. of Psychol. and Brain Sci., Indiana Univ., 1101 E 10th St., Bloomington, IN 47401, katieff@indiana.edu), Gary R. Kidd, Larry E. Humes
(Speech and Hearing Sci., Indiana Univ., Bloomington, IN), and David B.
Pisoni (Dept. of Psychol. and Brain Sci., Indiana Univ., Bloomington,
IN)
Many older adults report difficulty when listening to speech in background noise. These difficulties may arise from some combination of factors, including age-related hearing loss, auditory sensory processing
difficulties, and/or general cognitive decline. To perform well in everyday
noisy environments, listeners must quickly adapt, switch attention, and
adjust to multiple sources of variability in both the signal and listening environments. Sentence recognition tests in noise have been useful for assessing
speech understanding abilities because they require a combination of basic
sensory/perceptual abilities as well as cognitive resources and processing
operations. This study was designed to explore several factors underlying
individual differences in aided speech understanding in older adults. We
examined the relations between measures of speech perception, cognition,
and self-reported listening difficulties in a group of aging adults (N = 40, age
range 60–86) and a group of young normal hearing listeners (N = 28, age
range 18–30). All participants completed a comprehensive battery of tests, including cognitive, psychophysical, and speech-understanding measures, as well as the SSQ self-report scale. With audibility controlled, speech understanding declined with age and was strongly correlated with psychophysical measures, cognition, and self-reported speech-understanding difficulties.
[Work supported by NIH: NIDCD grant T32-DC00012 and NIA grant R01AG008293 to Indiana University.]
5aSC25. Individual differences in speech perception in noise: A neurocognitive genetic study. Zilong Xie (Dept. of Commun. Sci. & Disord.,
The Univ. of Texas at Austin, 2504A Whitis Ave. (A1100), Austin, TX
78712, xzilong@gmail.com), W. Todd Maddox (Dept. of Psych., The Univ.
of Texas at Austin, Austin, TX), Valerie S. Knopik (Div. of Behavioral
Genetics, Rhode Island Hospital, Brown Univ. Med. School, Providence,
RI), John E. McGeary (Providence Veterans Affairs Medical Ctr., Providence, RI), and Bharath Chandrasekaran (Dept. of Commun. Sci. & Disord.,
The Univ. of Texas at Austin, Austin, TX)
Previous work has demonstrated that individual listeners vary substantially in their ability to recognize speech in noisy environments. However,
little is known about the underlying sources of individual differences in
speech perception in noise. Noise varies in the levels of energetic masking
(EM) and informational masking (IM) imposed on target speech. Relative to
EM, release from IM places greater demand on selective attention. A polymorphism in exon III of the DRD4 gene has been shown to influence selective attention. Here we investigated whether this polymorphism contributes
to individual variation in speech recognition ability. We assessed sentence
recognition performance across a range of maskers (1-, 2-, and 8-talker babble, and speech-spectrum noise) among 104 young, normal-hearing adults.
We also measured their working memory capacity with the Operation Span Task, which relies on selective attention to update and maintain items in memory while performing a secondary task. Results showed that the long variant of the DRD4 gene was significantly associated with better recognition performance in the 1-talker babble condition only, and that this relation was
mediated by enhanced working memory capacity. These findings suggest
that the DRD4 polymorphism can explain some of the individual differences
in speech recognition ability, but is specific to IM conditions.
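The mediation logic in the abstract above (genotype influencing recognition through working memory) can be sketched with a simple regression-based indirect-effect estimate. Everything below is illustrative only: the data are simulated, and the variable names and effect sizes are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 104  # sample size matching the abstract; all values below are simulated

# Hypothetical variables: genotype (0 = short, 1 = long DRD4 variant),
# working-memory span, and recognition score in 1-talker babble.
genotype = rng.integers(0, 2, n).astype(float)
wm_span = 20.0 + 5.0 * genotype + rng.normal(0, 3, n)       # path a
recognition = 50.0 + 1.5 * wm_span + rng.normal(0, 5, n)    # path b

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

a = slope(genotype, wm_span)       # genotype -> working memory
b = slope(wm_span, recognition)    # working memory -> recognition
indirect = a * b                   # mediated (indirect) effect
total = slope(genotype, recognition)
print(f"indirect effect = {indirect:.2f}, total effect = {total:.2f}")
```

A full analysis would add bootstrap confidence intervals for the indirect effect; this sketch only shows where the mediated path comes from.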
5aSC26. Potential sports concussion identification using acoustic-phonetic analysis of vowel productions. Terry G. Horner (Indiana Univ. Methodist Sports Medicine, 201 Pennsylvania Parkway, Ste. 100, Indianapolis,
IN 46280, tghorner@hughes.net) and Michael A. Stokes (Waveform Commun., Indianapolis, IN)
Concussions impair cognitive function and muscle motor control; however, little is known about how this impairment affects speech production. In the present study, concussed athletes' speech is recorded at the initial office visit and subsequent visits. The last recording, when the brain is determined to have recovered using present criteria, becomes the baseline. The vocabulary consists of seven h-vowel-d (hVd) words (who'd, heed, hood, hid, had, hud, and heard) produced three times each for a total of 21 productions. The study focuses on vowel characteristics, and the limited coarticulatory effects of the hVd vocabulary make it ideal for this purpose. Duration
measurements are made by experimenter analysis and the formant measurements are made using the automatic speech recognition engine ELBOW.
The preliminary comparisons from the subjects completing the protocol
show formant drift for three or more of the seven vowels, and duration is
affected for each talker and each vowel. These results were anticipated, since the impairment would affect articulatory movement and timing. The results, as well as a discussion of the development of an automated real-time concussion-identification application, will be presented.
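The formant-drift comparison described in the abstract above can be sketched as a distance in the F1-F2 plane between baseline and visit measurements. The formant values and the 50-Hz threshold below are made up for illustration; they are not the study's data and do not involve the ELBOW engine.

```python
import math

# Hypothetical mean formant values (Hz) per vowel: (F1, F2)
baseline = {"heed": (270, 2290), "hid": (390, 1990), "had": (660, 1720)}
visit    = {"heed": (300, 2210), "hid": (395, 1985), "had": (700, 1650)}

def drift(v1, v2):
    """Euclidean distance in the F1-F2 plane (Hz)."""
    return math.hypot(v2[0] - v1[0], v2[1] - v1[1])

drifts = {v: drift(baseline[v], visit[v]) for v in baseline}
flagged = [v for v, d in drifts.items() if d > 50]  # 50 Hz: arbitrary cutoff
print(flagged)
```

An automated screen of the kind the authors describe would presumably compare such per-vowel drifts (and durations) against each talker's recovered baseline.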
168th Meeting: Acoustical Society of America
2314
5aSC27. Thai phonetically balanced word recognition test: Reliability
evaluations and bias and error analysis. Adirek Munthuli (Elec. and Comput. Eng., Thammasat Univ., Khlong Luang, Pathumthani, Thailand), Chutamanee Onsuwan (Linguist, Thammasat Univ., Dept. of Linguist, Faculty
of Liberal Arts, Thammasat University, Khlong Luang, Pathumthani 12120,
Thailand, consuwan@hotmail.com), Charturong Tantibundhit (Elec. and
Comput. Eng., Thammasat Univ., Khlong Luang, Pathumthani, Thailand),
and Krit Kosawat (Thailand National Electronics and Comput. Technol.
Ctr., Khlong Luang, Pathumthani, Thailand)
Word recognition score (WRS) is one of the measuring techniques used in speech audiometry, a part of a routine audiological examination. The test's accuracy is crucial and largely depends on the test materials. With emphasis on phonetic balance, test-retest reliability, inter-list equivalency, and symmetrical phoneme occurrence, the Thammasat University Phonetically Balanced Word Lists 2014 (TU PB'14) were created with five different lists, each with 25 Thai monosyllabic words. TU PB'14 reflects Thai phoneme distribution based on the large-scale written Thai corpus InterBEST [1]. To evaluate its validity and test-retest reliability, the lists were given at five intensity levels (15–55 dB HL) in test and retest sessions to 30 normal-hearing subjects. The differences in performance between the two sessions are not significantly large, and the correlation coefficients in the linear regions are all positive. Analysis of listeners' errors, including sequence recurrences, was carried out. Errors occurred predominantly for initials, followed by finals and lexical tones. Confusion patterns of initials, finals, and tones are in line with those found for Thai speech sounds in noise conditions. Interestingly, vowels are found to be most resistant to confusion. Finally, the possible effect of lexical frequency is examined and discussed.
5aSC28. Talker variability in spoken word recognition: Evidence from repetition priming. Yu Zhang and Chao-Yang Lee (Ohio Univ., W239 Grover Ctr., Ohio University, Athens, OH 45701, yz137808@ohio.edu)
The effect of talker variability on the processing of spoken words is investigated using short-term repetition priming experiments. Prime-target pairs, either repeated (e.g., queen-queen) or unrelated (e.g., bell-queen), were produced by the same or different male speakers. Two interstimulus intervals (ISI, 50 and 250 ms) were used to explore the time course of repetition priming and voice specificity effects. The auditory stimuli were presented to 40 listeners, who completed a lexical decision task followed by a talker voice discrimination task. Results from the lexical decision task showed that the magnitude of priming was attenuated in the different-talker condition, indicating a talker variability effect on spoken word recognition. In contrast, the talker variability effect on priming did not differ between the two ISIs. Talker voice discrimination was faster and more accurate for nonword targets, but not for word targets, indicating a lexical status effect on voice discrimination. Taken together, these results suggest that talker variability affects recognition of spoken words, and that the effect cannot be simply attributed to non-lexical voice discrimination.
FRIDAY MORNING, 31 OCTOBER 2014
INDIANA F, 8:00 A.M. TO 12:30 P.M.
Session 5aUW
Underwater Acoustics: Acoustics, Ocean Dynamics, and Geology of Canyons
John A. Colosi, Cochair
Department of Oceanography, Naval Postgraduate School, 833 Dyer Road, Monterey, CA 93943
James Lynch, Cochair
Woods Hole Oceanographic Institution, MS #11, Bigelow 203, Woods Hole, MA 02543
Chair’s Introduction—8:00
Invited Papers
8:05
5aUW1. What do we know and what do we need to know about submarine canyons for acoustics? James Lynch, Ying-Tsong Lin,
Timothy Duda, Arthur Newhall (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., MS # 11, Bigelow 203, Woods Hole
Oceanographic, Woods Hole, MA 02543, jlynch@whoi.edu), and Glen Gawarkiewicz (Physical Oceanogr., Woods Hole Oceanographic
Inst., Woods Hole, MA)
Acoustic propagation and scattering in marine canyons is an inherently 3-D problem, both for the environmental input (bottom topography and geology, biology, and physical oceanography) and for the acoustic field. In this talk, we broadly examine what is known of these environmental fields and what their salient effects on acoustics should be. Examples from recent experiments off the United States and Taiwan will be presented, along with other historical data. Three-dimensional acoustic modeling results will also be presented. Directions for future research will be discussed.
8:25
5aUW2. Ocean dynamics and numerical modeling of canyons and shelfbreaks. Pierre F. Lermusiaux (MechE, MIT, 77 Mass Ave., Cambridge, MA 02139, pierrel@mit.edu), Patrick Haley, Chris Mirabito (MIT, Cambridge, MA 02139), Timothy Duda, and Glen Gawarkiewicz (WHOI, Woods Hole, MA)
Multiscale ocean dynamics and multi-resolution numerical modeling of canyons and shelfbreaks are outlined. The dynamics focus is
on fronts, currents, tides, and internal tides/waves that occur in these regions. Due to the topographic gradients and strong internal field
gradients, nonlinear terms and non-hydrostatic dynamics can be significant. Computationally, a challenge is to achieve accurate simulations that resolve strong gradients over dynamically significant space and time scales. One component of doing so is high-order schemes, which are more accurate at the same computational cost than lower-order schemes. A second is multi-resolution grids that allow optimized refinements, such as reducing errors near steep topography. A third is methods that solve for multiple dynamics, e.g., hydrostatic and non-hydrostatic, seamlessly. To address these components, new hybridizable discontinuous Galerkin (HDG) finite-element schemes for (non-)hydrostatic physics, including a nonlinear free surface, are introduced. The results of data-assimilative multi-resolution simulations are then discussed, using the primitive-equation MSEAS system and implicitly two-way nested telescoping domains. They correspond to collaborative experiments: (i) Shallow Water 06 (SW06) and the Integrated Ocean Dynamics and Acoustics (IODA) research
in the Middle Atlantic Bight region; (ii) Quantifying, Predicting and Exploiting Uncertainty (QPE) in the Taiwan-Kuroshio region; and
(iii) Philippines Straits Dynamics Experiment (PhilEx).
8:45
5aUW3. Internal tides in canyons and their effect on acoustics. Timothy F. Duda, Weifeng G. Zhang, Ying-Tsong Lin (Appl. Ocean
Phys. and Eng. Dept., Woods Hole Oceanographic Inst., WHOI AOPE Dept. MS 11, Woods Hole, MA 02543, tduda@whoi.edu), and
Aurelien Ponte (Laboratoire de Physique des Oceans, IFREMER-CNRS-IRD-UBO, Plouzane, France)
Internal gravity waves of tidal frequency are generated as the ocean tides push water upward onto the continental shelf. Such waves
also arrive at the continental slope from deep water and are heavily modified by the change in water depth. The wave generation and
wave shoaling effects have an additional level of complexity where a canyon is sliced into the continental slope. Recently, steps have
been taken to simulate internal tides in canyons, to understand the physical processes of internal tides in canyons, and also to compute
the ramifications on sound propagation in and near the canyons. Internal tides generated in canyons can exhibit directionality, with the
directionality being consistent with an interesting multiple-scattering effect. The directionality imparts a pattern to the sound-speed
anomaly field affecting propagation. The directionality also means that short nonlinear internal waves, which have specific strong effects
on sound, can have interesting patterns near the canyons. In addition to the directionality of internal tides radiated from canyons, the internal tide energy within the canyons can be patchy and may unevenly affect sound.
9:05
5aUW4. An overview of internal wave observations and theory associated with canyons and slopes. John A. Colosi (Dept. of Oceanogr., Naval Postgrad. School, 833 Dyer Rd., Monterey, CA 93943, jacolosi@nps.edu)
Topographic environments such as canyons and slopes are known to be regions of complex internal-wave behavior associated with
wave generation, propagation, and dissipation. Much of this anomalous behavior stems from the kinematic constraint that internal waves
must maintain their angle of propagation with respect to the horizontal even after interaction with a sloping boundary. In canyons or on
slopes, waves propagating in from deep water or generated locally (mostly by tidal flows) either reflect back out to sea or intensify in
energy density as they propagate up slope. In particular, wave intensification can lead to nonlinear phenomena including steepening,
breaking, and dissipation. This talk will provide an overview of internal wave observations, modeling, and theory in canyons and on slopes, with a particular emphasis on acoustically relevant aspects of the wave field.
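The kinematic constraint described above (a fixed propagation angle with respect to the horizontal) follows from the internal-wave dispersion relation. The relation below is a standard textbook result, not taken from the abstract; here φ is the angle of the wavenumber vector from the horizontal, N the buoyancy frequency, and f the Coriolis frequency:

```latex
\omega^{2} = N^{2}\cos^{2}\varphi + f^{2}\sin^{2}\varphi
\quad\Longrightarrow\quad
\tan^{2}\varphi = \frac{N^{2}-\omega^{2}}{\omega^{2}-f^{2}}
```

Because φ depends only on the wave frequency ω (for given N and f), reflection from a sloping boundary cannot change it; upslope-propagating waves are therefore compressed and intensify in energy density, as the abstract notes.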
9:25
5aUW5. Fiery ice from the sea: Marine gas hydrates. Ross Chapman (Earth and Ocean Sci., Univ. of Victoria, 3800 Finnerty Rd.,
Victoria, BC V8P5C2, Canada, chapman@uvic.ca)
Marine gas hydrates are cage-like structures of water containing methane or some higher hydrocarbons that are stable under conditions of high pressures and low temperatures. The hydrate structures are formed in sediments of continental margins and are found
worldwide. The stability zone extends to about 200 m beneath the sea floor, and hydrates exist in several different forms within the
zone, from massive ice-like features at cold seeps on the sea floor to finely distributed deposits in sediment pores over extensive areas.
The base of the stability zone is characterized by a strong acoustic impedance change from high velocity hydrated sediments above to
low velocity gas below. This acoustic feature generates a strong signal in seismic surveys called the Bottom Simulating Reflector, and it
is widely used as an indicator of the presence of hydrates. This paper reviews the current knowledge of hydrate systems from research
carried out on the Cascadia Margin off the west coast of Vancouver Island, and in the Gulf of Mexico. The hydrate distributions are different in each of these areas, leading to different effects in acoustic reflectivity.
9:45
5aUW6. South China Sea upper-slope sand dunes acoustics experiment. Ching-Sang Chiu, Ben Reeder (Dept. of Oceanogr., Naval Postgrad. School, 833 Dyer Rd., Rm. 328, Monterey, CA 93943-5193, chiu@nps.edu), Linus Chiu (National Sun Yat-sen Univ., Kaohsiung, Taiwan), Yiing Jang Yang, and Chifang Chen (National Taiwan Univ., Taipei, Taiwan)
Very large subaqueous sand dunes were discovered on the upper continental slope of the Northeastern South China Sea. The spatial
distribution and scales of these large sand dunes were mapped by two multibeam echo sounding (MBES) surveys, one in 2012 and the
other in 2013. These two surveys represented two pilot cruises as part of a multiyear, US-Taiwan collaborative field study designed to
characterize these sand dunes, the associated physical processes and the associated acoustic scattering physics. The main experiment
will be carried out in 2014. The combination of MBES, coring, and acoustic transmission data obtained from the two pilot cruises has
provided vital initial knowledge of (1) the spatial and temporal scales of the sand dunes from objective analysis, (2) the geoacoustic properties of the dunes based on forward modeling to match the measured levels, and (3) the anisotropy and translational variability of the transmission loss, based on a signal-energy analysis of the repeated 1–2 kHz and 4–6 kHz FM signals transmitted by a calibrated sound source towed along two circular tracks, each surrounding a receiver. The results from the pilot cruises are presented and discussed.
[The research is sponsored by the US ONR and the Taiwan NSC.]
10:05–10:20 Break
10:20
5aUW7. Three dimensional underwater acoustic modeling on continental slopes and submarine canyons. Ying-Tsong Lin, David
Barclay, Timothy F. Duda, and Weifeng Gordon Zhang (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Bigelow 213,
MS#11, WHOI, Woods Hole, MA 02543, ytlin@whoi.edu)
Underwater sound propagation on slopes and canyons is influenced jointly and strongly by the complexity of topographic variability
and ocean dynamics. Some integrated ocean and acoustic models have been developed and implemented to investigate such joint acoustic effects. In this talk, an integrated numerical model employing a time-stepping three-dimensional (3D) parabolic-equation (PE) acoustic modeling method and the Regional Ocean Modeling System (ROMS) is presented. Numerical examples of sound propagation and ambient noise in the Mid-Atlantic Bight area with realistic environmental conditions are demonstrated. The sound propagation model reveals the focusing of sound due to the concave canyon seafloor and the different levels of temporal variability of focused and unfocused sound. The ambient noise model is constructed for surface wind-generated noise, and shows the azimuthal dependency of the noise field and its spatial coherence structure. Lastly, a simple sonar performance prediction is made to investigate the variability of the
probability of detection in these complex underwater environments. [Work supported by the ONR.]
10:40
5aUW8. Three-dimensional effects in sound propagation in the area of a coastal slope. Boris Katsnelson (Marine GeoSci., Univ. of Haifa, 1 Universitetskaya sq., Voronezh 394006, Russian Federation, katz@phys.vsu.ru) and Andrey Malykhin (Phys., Voronezh Univ., Voronezh, Russian Federation)
The coastal slope (wedge) in the ocean is a well-known "canonical" problem for analyzing the manifestation of horizontal refraction (3-D effects) in a shelf zone. In this paper, the following effects are reviewed: (1) spatial variability of the sound field in the given area, including areas of one-path and multipath propagation, shadow zones, and caustics in the horizontal plane, their dependence on frequency, and the influence of bottom parameters; (2) the interference structure of the sound field in the horizontal plane as a function of mode number and frequency; (3) the distribution of the sound field in the vicinity of a curved coastline, for example a gulf, bay, or peninsula (shadow zones, multipath areas, and whispering-gallery waveguides in the horizontal plane); (4) temporal variability of signals due to the frequency dependence of the horizontal refraction, and in turn pulse compression/decompression and time reversal in the multipath area; and (5) time-frequency diagrams. These and other effects can change the properties of bottom and surface reverberation, scattering, the noise field distribution, and attenuation in the area of a coastal wedge. The corresponding estimates are presented. [Work was supported by BSF Grant 2010471 and RFBR-NSFC Grant 14-05-91180.]
11:00
11:15
5aUW9. Analytic prediction of acoustic coherence time scales in continental-shelf environments with random internal waves. Zheng Gong,
Tianrun Chen (Mech. Eng., Massachusetts Inst. of Technol., 5-435, 77 Massachusetts Ave., Cambridge, MA 02139, zgong@mit.edu), Purnima Ratilal
(Elec. and Comput. Eng., Northeastern Univ., Boston, MA), and Nicholas
C. Makris (Mech. Eng., Massachusetts Inst. of Technol., Cambridge,
MA)
An analytical model derived from normal mode theory for the accumulated effects of range-dependent multiple forward scattering is applied to estimate the temporal coherence of the acoustic field forward propagated through a continental-shelf waveguide containing random three-dimensional internal waves. The modeled coherence time scale of narrow-band low-frequency acoustic field fluctuations after propagation through a continental-shelf waveguide is shown to decay with a power law of range to the 1/2 beyond roughly 1 km, to decrease with increasing internal wave energy, and to be consistent with measured acoustic coherence time scales. The model should provide a useful prediction of the acoustic coherence time scale as a function of internal wave energy in continental-shelf environments. The acoustic coherence time scale is an important parameter in remote sensing applications because it determines (i) the time window within which standard coherent processing such as matched filtering may be conducted, and (ii) the number of statistically independent fluctuations in a given measurement period, which determines the variance reduction possible by stationary averaging.
5aUW10. Modeling three dimensional environment and broadband acoustic propagation in Arctic shelf-basin region. Mohsen Badiey, Andreas Muenchow, Lin Wan (College of Earth, Ocean, and Environment, Univ. of Delaware, 261 S. College Ave., Robinson Hall, Newark, DE 19716, badiey@udel.edu), Megan S. Ballard (Appl. Res. Labs., Univ. of Texas, Austin, TX), David P. Knobles, and Jason D. Sagers (Appl. Res. Labs., Univ. of Texas, Austin, TX)
Rapid climate change over the last decade has created a renewed interest
in the nature of underwater sound propagation in the Arctic Ocean. Changes
in the oceanography and surface boundary conditions are expected to cause
measurable changes in the propagation and scattering of low frequency
sound. Recent measurements of a high-resolution three-dimensional (3-D) sound speed structure in a 50 km x 50 km region in an open-water shelf-basin
region of the Beaufort Sea offer a unique and rare opportunity to study the
effects of a complex oceanography on the acoustic field as it propagates
from the deep basin onto the continental shelf. The raw oceanography data
were analyzed and processed to create a 3-D sound speed field for the water
column in the basin-slope-shelf area. Recent advances in both 2-D and 3-D acoustic modeling capability allow one to study the effects of the range- and azimuth-dependent water column layers on the frequency-dependent acoustic modal structure. Of particular interest is the nature of the 3-D and mode-coupling effects on the frequency response induced by the oceanography. The results will likely be useful in designing acoustic experiments with serious logistical constraints in the rapidly changing Arctic Ocean.
Contributed Papers
11:30
5aUW11. Underwater jet noise simulation based on a Large Eddy Simulation/Lighthill hybrid method. GuoQing Liu (School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan 430074, China, liugq_2010@163.com), Tao Zhang, YongOu Zhang (School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, Hubei Province, China), Huajiang Ouyang (School of Eng., Univ. of Liverpool, Liverpool, United Kingdom), and Xu Li (School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, China)
In recent years, extensive research on numerical methods for aeroacoustic noise simulation has been carried out; research on hydrodynamic noise, however, has developed more slowly. In this paper, a hybrid method combining Large Eddy Simulation (LES) and Lighthill's acoustic analogy theory is established to compute hydrodynamic noise, building on a preliminary study of the method for aerodynamic noise prediction at low Mach number. First, the model of a three-dimensional underwater jet is determined from an experimental model, and the CFD mesh and the acoustic mesh are both prepared. Then, the flow field of the underwater jet is simulated with LES, and the characteristics of the turbulent flow are analyzed through the pressure difference and the uniformity coefficient of velocity. After that, the noise of the underwater jet is computed using Lighthill's acoustic analogy. Finally, the solutions obtained by the hybrid method are compared with experimental data available in the open literature: the sound pressure level at the observation point agrees well with the experimental data, indicating that the LES/Lighthill hybrid method is able to compute underwater jet noise and hydrodynamic noise.
11:45
5aUW12. Formation of sparse-aperture antenna arrays based on Costas sequences. Igor I. Anikin (Concern CSRI Elektropribor, JSC, 30 Malaya Posadskaya Ul., St. Petersburg 197045, Russian Federation, anikin1952@bk.ru)
To obtain high spatial resolution, sonar, ultrasonic imaging, radar, seismic, and radio astronomy systems use active antenna arrays that contain a large number of elements. To reduce the cost of such arrays, sparse apertures are used. In this approach, the antenna array is partitioned into several subarrays. Each subarray has the geometric size of an equidistant placement of Nc x Nc elements but is filled with only Nc elements, arranged according to a Costas sequence of order Nc; each filled subarray has its own Costas sequence. As a result, the number of elements in the array is reduced by a factor of Nc. The beam pattern formed in the main sections is close to the beam pattern of the plane equidistant antenna array, and the directivity factor is almost independent of the frequency band. At the upper frequency, the directivity factor is reduced (pNc)/2 times compared with the plane equidistant antenna array. Using a decaying amplitude distribution, the side-lobe level in the principal planes can be reduced. Thus, by setting the order of the Costas sequence, one can in each case optimize the degree of reduction of the number of elements in the antenna array at a predetermined directivity factor.
12:00
5aUW13. Ultra low frequency electromagnetic underwater sound source. Wei Lu and Yu Lan (College of Underwater Acoust. Eng., Harbin Eng. Univ., 145 Nantong St., Nangang District, Harbin 150001, China, luwei@hrbeu.edu.cn)
A detailed analysis is presented of an ultra-low-frequency sound source that is smaller and lighter than conventional piezoelectric ultra-low-frequency sources. The source uses a single vibrating piston driven on the electromagnetic principle. The radiation characteristics of the source are studied with a single-piston radiation model at low frequency. The dynamic characteristics, such as the resonant frequency and vibration displacement, are studied by an analytic method and by the finite element method. In the analytic method, the electricity-magnetism, magnetism-force, and force-vibration conversion models of the source are established by differential equations in the different coupled physical fields, and the dynamic characteristics based on these conversion models are simulated by combining and solving the differential equations in MATLAB/SIMULINK. In the finite element method, the dynamic characteristics are solved using the transient solver of the electromagnetic finite element software Ansoft. By optimizing the dynamic characteristics through adjustment of the magnetic circuit, drive coil, and elastic component parameters, the resonant frequency and radiated sound power of the source are determined. One prototype source, with a calibrated source level of 184 dB at 73 Hz, was fabricated and demonstrated proof of concept.
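The Costas-sequence subarray construction described in 5aUW12 above can be sketched in a few lines. The Welch construction below (powers of a primitive root modulo a prime) is one standard way to generate a Costas sequence; it is offered as a generic illustration, not the author's specific array design.

```python
def is_costas(seq):
    """True if the difference vectors at every lag are all distinct,
    the defining property of a Costas sequence."""
    n = len(seq)
    for lag in range(1, n):
        diffs = [seq[i + lag] - seq[i] for i in range(n - lag)]
        if len(diffs) != len(set(diffs)):
            return False
    return True

def welch_costas(p, g):
    """Welch construction: for prime p and primitive root g mod p,
    the sequence g^1, g^2, ..., g^(p-1) (mod p) is Costas of order p-1."""
    return [pow(g, i, p) for i in range(1, p)]

seq = welch_costas(5, 2)    # 2 is a primitive root mod 5
print(seq, is_costas(seq))  # -> [2, 4, 3, 1] True
```

In the abstract's scheme, a subarray of Nc x Nc candidate positions would be populated at positions (i, f(i)) given by such a sequence, so that the thinned array's difference (co-array) structure stays nearly uniform.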
12:15
5aUW14. Simplex underwater acoustic communications using passive time reversal. Lin Sun, Haisen Li, Bo Zou, and Ruo Li (College of Underwater Acoust. Eng., Harbin Eng. Univ., No. 145 Nantong St., Nangang District, Harbin 150001, China, sunlinhrb@sina.com)
The spatial-temporal compression achieved through a simple time-reversal (TR) process can reduce inter-symbol interference and increase signal strength. Active TR requires two-way propagation, so it cannot be used in simplex underwater acoustic communications. Based on the one-way propagation property of passive TR, a simplex underwater acoustic communication method using passive TR is proposed. The method is considered in two scenarios: uplink transmission from a single send-only element to an array, and downlink transmission from an array to a single receive-only element. The principle of the proposed method is analyzed theoretically, and its performance is verified experimentally. Results demonstrate that the passive TR process can improve the output signal-to-noise ratio and decrease the bit error rate, so the performance of the proposed method is superior to that of a simplex acoustic communication method without passive TR.
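The spatial-temporal compression that passive TR exploits can be illustrated numerically: correlating the received data with a previously received probe replaces the raw multipath channel with its autocorrelation, which is sharply peaked at zero lag. The channel taps below are invented for illustration and are not from the experiment in the abstract.

```python
import numpy as np

# A hypothetical sparse multipath channel impulse response.
h = np.zeros(64)
for tap, amp in [(0, 1.0), (7, 0.6), (19, -0.4), (33, 0.3)]:
    h[tap] = amp

# Passive TR: correlating received data with the received probe gives an
# effective channel equal to the autocorrelation of h.
q = np.correlate(h, h, mode="full")
center = len(h) - 1                       # zero-lag index of the full correlation
sidelobe = np.max(np.abs(np.delete(q, center)))

print(f"peak {q[center]:.2f} vs max sidelobe {sidelobe:.2f}")
```

The residual inter-symbol interference a receiver sees is governed by the ratio of that zero-lag peak (the sum of squared tap amplitudes) to the sidelobes, which is why the abstract reports improved output signal-to-noise ratio and bit error rate.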
This document is frequently updated; the current version can be found online at the Internet site:
<http://scitation.aip.org/content/asa/journal/jasa/info/authors>.
Information for contributors to the
Journal of the Acoustical Society of America (JASA)
Editorial Staff a)
Journal of the Acoustical Society of America, Acoustical Society of America, 1305 Walt Whitman Road,
Suite 300, Melville, NY 11747-4300
The procedures for submitting manuscripts to the Journal of the Acoustical Society of America are
described. The text manuscript, the individual figures, and an optional cover letter are each uploaded
as separate files to the Journal’s Manuscript Submission and Peer Review System. The required
format for the text manuscript is intended so that it will be easily interpreted and copy-edited during
the production editing process. Various detailed policies and rules that will produce the desired
format are described, and a general guide to the preferred style for the writing of papers for the
Journal is given. Criteria used by the editors in deciding whether or not a given paper should be
published are summarized.
PACS numbers: 43.05.Gv
TABLE OF CONTENTS
I. INTRODUCTION
II. ONLINE HANDLING OF MANUSCRIPTS
A. Registration
B. Overview of the editorial process
C. Preparation for online submission
D. Steps in online submission
E. Quality check by editorial office
III. PUBLICATION CHARGES
A. Mandatory charges
B. Optional charges
C. Payment of page charges—Rightslink
IV. FORMAT REQUIREMENTS FOR MANUSCRIPTS
A. Overview
B. Keyboarding instructions
C. Order of pages
D. Title page of manuscript
E. Abstract page
F. Section headings
V. STYLE REQUIREMENTS
A. Citations and footnotes
B. General requirements for references
C. Examples of reference formats
1. Textual footnote style
2. Alphabetical bibliographic list style
D. Figure captions
E. Acknowledgments
F. Mathematical equations
G. Phonetic symbols
H. Figures
I. Tables
VI. THE COVER LETTER
VII. EXPLANATIONS AND CATEGORIES
A. Subject classification, ASA-PACS
B. Suggestions for Associate Editors
C. Types of manuscripts
1. Regular research articles
2. Education in acoustics articles
3. Letters to the editor
4. Errata
5. Comments on published papers
6. Replies to comments
7. Forum letters
8. Tutorial and review papers
9. Book reviews
VIII. FACTORS RELEVANT TO PUBLICATION
DECISIONS
A. Peer review system
B. Selection criteria
C. Scope of the Journal
IX. POLICIES REGARDING PRIOR PUBLICATION
A. Speculative papers
B. Multiple submissions
X. SUGGESTIONS REGARDING CONTENT
A. Introductory section
B. Main body of text
C. Concluding section
D. Appendixes
E. Selection of references
XI. SUGGESTIONS REGARDING STYLE
A. Quality of writing and word usage
B. Grammatical pitfalls
C. Active voice and personal pronouns
D. Acronyms
E. Computer programs
F. Code words
REFERENCES
I. INTRODUCTION
The present document is intended to serve jointly as (i) a
set of directions that authors should follow when submitting
articles to the Journal of the Acoustical Society of America
and as (ii) a style manual that describes those stylistic
a) E-mail: jasa@aip.org
features that are desired for the submitted manuscript. This
document extracts many of the style suggestions found in the
AIP Style Manual,1 which is available online at the internet
site <http://www.aip.org/pubservs/style/4thed/toc.html>. The
AIP Style Manual, although now somewhat dated and not
specifically directed toward publication in the Journal of the
Acoustical Society of America (JASA), is a substantially more
comprehensive document, and authors must make use of it
also when preparing manuscripts. If conflicting instructions
are found in the two documents, those given here take precedence. (Authors should also look at recent issues of the
Journal for examples of how specific style issues are handled.)
Conscientious consideration of the instructions and advice
given in the two documents should considerably increase
the likelihood that a submitted manuscript will be rapidly
processed and accepted for publication.
II. ONLINE HANDLING OF MANUSCRIPTS
All new manuscripts intended for possible publication
in the Journal of the Acoustical Society of America should
be submitted by an online procedure. The steps involved
in the processing of manuscripts that lead from the initial
submission through the peer review process to the transmittal
of an accepted manuscript to the production editing office
are handled by a computerized system referred to here as
the Peer X-Press (PXP) system. The Acoustical Society
of America contracts with AIP Publishing LLC for the use
of this system. There is one implementation that is used
for most of the material that is submitted to the Journal of
the Acoustical Society of America (JASA) and a separate
implementation for the special section JASA Express Letters
(JASA-EL) of the Journal.
A. Registration

Everyone involved in the handling of manuscripts in the Journal's editorial process must first register with the Journal's implementation of the PXP system, and the undertaking of separate actions, such as the submission of a manuscript, requires that one first log in to the system at http://jasa.peerx-press.org/cgi-bin/main.plex.

If you have never logged into the system, you will need to get a user name and password. Many ASA members are already in the database, so if you are a member, you may in principle already have a user name and password, but you will have to find out what they are. On the login page, click on the item "Unknown/Forgotten Password." On the new page that comes up, give your first name and last name. After you have filled in this information, click on "mailit." You will then get an e-mail message with the subject line "FORGOTTEN PASSWORD." The system will issue you a new password even if you had used the system before. After you get this new password, you can change it to something easy to remember after you log in.

Once you have your user name and password, go to the log-in page again and give this information when you log in. You will first be asked to change your password. After you do this, a "task page" will appear. At the bottom of the page there will be an item Modify Profile/Password. Click on this. Then a page will appear with the heading Will you please take a minute to update the profile?

If you are satisfied with your profile and password, go to the top of the Task page and click on the item Submit Manuscript that appears under Author Tasks. Then you will see a page titled Manuscript Submission Instructions. Read what is there and then click continue at the bottom of the page.

B. Overview of the editorial process

(1) An author denoted as the corresponding author submits a manuscript for publication in the Journal.
(2) One of the Journal's Associate Editors is recruited to handle the peer-review process for the manuscript.
(3) The Associate Editor recruits reviewers for the manuscript via the online system.
(4) The reviewers critique the manuscript and submit their comments online via the Peer X-Press system.
(5) The Associate Editor makes a decision regarding the manuscript and then composes online an appropriate decision letter, which may include segments of the reviews and which may include attachments.
(6) The Journal's staff transmits the letter composed by the Associate Editor to the corresponding author. This letter describes the decision and further actions that can be taken.

If revisions to the manuscript are invited, the author may resubmit a revised manuscript, and the process cycle is repeated. To submit a revision, authors should use the link provided in the decision message.
C. Preparation for online submission
Before one begins the process of submitting a manuscript
online, one should first read the document Ethical Principles
of the Acoustical Society of America for Research Involving
Human and Non-Human Animals in Research and Publishing
and Presentations which is reached from the site <http://
scitation.aip.org/content/asa/journal/jasa/info/authors>. During
the submission, you will be asked if your research conformed
to the stated ethical principles and if your submission of
the manuscript is in accord with the ethical principles that
the Acoustical Society has set for its journals. If you cannot
confirm that your manuscript and the research reported are in
accord with these principles, then you should not submit your
manuscript.
Another document that you should read first is the Transfer of Copyright Agreement, which is downloadable from the same site. When you submit your manuscript
online you will be asked to certify that you and your
coauthors agree to the terms set forth in that document. The
document has been carefully worded with extensive
legal advice and was arrived at after extensive discussion within the various relevant administrative committees of the Acoustical Society of America. It is regarded
as a very liberal document in terms of the rights that are
allowed to the authors. One should also note the clause: The
author(s) agree that, insofar as they are permitted to transfer
copyright, all copies of the article or abstract shall include a
copyright notice in the ASA’s name. (The word “permitted”
means permitted by law at the time of the submission.) The
terms of the copyright agreement are non-negotiable. The
Acoustical Society does not have the resources or legal assistance to negotiate for exceptions for individual papers, so
please do not ask for such special considerations. Please read
the document carefully and decide whether you can provide
an electronic signature (clicking on an appropriate check box)
to this agreement. If you do not believe that you can in good
conscience give such an electronic signature, then you should
not submit your manuscript.
Given that one has met the ethical criteria and agreed to
the terms of the copyright transfer agreement, and that one
has decided to submit a manuscript, one should first gather
the various items of information that will be requested during the process, along with the various
files that one will have to upload.
Information that will be entered into the PeerX-Press
submission form and files to be uploaded include:
(1) Data for each of the authors:
    (a) First name, middle initial, and last name
    (b) E-mail address
    (c) Work telephone number
    (d) Work fax number
    (e) Postal address (required for the corresponding author, otherwise optional)
(2) Title and running title of the paper. The running title is used as the footline on each page of the article. (The title is limited to 17 words; the running title is limited to six words and up to 50 characters and spaces. Neither may include any acronyms or any words explicitly touting novelty.)
(3) Abstract of the paper. (This must be in the form of a single paragraph and is limited to 200 words for regular articles and to 100 words for letters to the editor.) Authors would ordinarily paste this electronically from a text file of their manuscript.
(4) Principal ASA-PACS number that characterizes the subject matter of the paper and that will be used to determine the section of the Journal in which the published paper will be placed. Note that if the PACS number you list first is too generic, e.g., 43.20, processing of your paper may be delayed.
(5) A short prioritized list of Associate Editors suggested for the handling of the manuscript.
(6) Contact information (name, e-mail address, and institution) of suggested reviewers (if any), and/or names of reviewers to exclude and the reasons why.
(7) Cover letter file (optional, with some exceptions). Material that would ordinarily have been in the cover letter is now supplied by answering online questions and by filling out the online form. However, if an author needs to supply additional information that should be brought to the attention of the editor(s) and/or reviewer(s), a cover letter should be written and put into the form of an electronic file.
(8) Properly prepared manuscript/article file in LaTeX,
Word, or WordPerfect format. (The requirements for a
properly prepared manuscript are given further below.)
It is also possible to submit your file in PDF but this
is not desirable since the entire manuscript must be
retyped. It must be a single stand-alone file. If the author
wishes to submit a LaTeX file, the references should
be included in the file, not in a separate BibTeX file.
Authors should take care to ensure that the submitted
manuscript/article file is of reasonable length, no more
than 2 MB.
(9) Properly prepared figure files in TIFF, PS, JPEG, or
EPS (see also, Section V. H); one file for each cited
figure number. The uploading of figures in PDF format
is not allowed. (Captions should be omitted from the figure files; they will instead appear as a list in the manuscript itself.) The
figures should not have the figure numbers included on
the figures in the files as such, and it is the responsibility of the corresponding author to see that the files are
uploaded in proper order. Authors may upload figures
in a zip file (figure files must be numbered in order
using 1, 2, etc. If figures have parts they must be numbered 1a, 1b, 1c, etc.). [In order to maintain online
color as a free service to authors, the Journal cannot
accept multiple versions of the same file. Authors may
not submit two versions of the same illustration (e.g.,
one for color and one for black & white). When preparing illustrations that will appear in color in the online Journal and in black & white in the printed Journal, authors must ensure that: (i) colors chosen will
reproduce well when printed in black & white and
(ii) descriptions of figures in text and captions will be
sufficiently clear for both print and online versions. For
example, captions should contain the statement “(Color
online).” If one desires color in both versions, these
considerations are irrelevant, although the authors
must guarantee that mandatory additional publication
charges will be paid.]
(10) Supplemental files (if any) that might help the reviewers in making their reviews. If the reading of the paper
requires prior reading of another paper that has been
accepted for publication, but has not yet appeared in print,
then a PDF file for that manuscript should be included as
a supplementary file. Also, if the work draws heavily on
previously published material which, while available to
the general public, would be time-consuming or possibly
expensive for the reviewers to obtain, then PDF files of
such relevant material should be included.
(11) Archival supplemental materials to be published with the
manuscript in AIP Publishing’s Supplemental Materials
electronic depository.
In regard to the decision as to what formats one should
use for the manuscript and the figures, a principal consideration is that the published article is considerably more likely
to be to one's satisfaction if AIP Publishing, during the production process,
can make full or partial use of the files you submit. There
are conversion programs, for example, that will convert
LaTeX and MS Word files to the typesetting system that AIP
Publishing uses. If your manuscript is not in either of these
formats, then it will be completely retyped. If the figures are
submitted in EPS, PS, JPEG, or TIFF format, then they will
probably be used directly, at least in part. The uploading of
figures in PDF format is not allowed.
D. Steps in online submission
After logging in, one is brought to the Peer X-Press Task
Page and can select the option of submitting a new manuscript. The resulting process leads the corresponding author
through a sequence of screens.
The first screen will display a series of tabs including:
Files, Manuscript Information, Confirm Manuscript, and
Submit. Clicking on these tabs displays the tasks that must be
completed for each step in the submission. Red arrows denote
steps that have not been completed. Green arrows are displayed
for each tab where the step has been successfully completed.
After submission, all of the individual files, text and
tables, plus figures, that make up the full paper will be
merged into a single PDF file. One reason for having such
a file is that it will generally require less computer memory
space. Another is that files in this format are easily read with
any computer system. However, if the paper is accepted for publication, the originally submitted set of files is what will be sent to the Production Editing office for final processing.
E. Quality check by editorial office

Upon receiving system notification of a submission, staff members in the Editorial Office check that the overall submission is complete and that the files are properly prepared and suitable for making them available to the Associate Editors and the reviewers. They also check the estimated length of the manuscript in the event that the author indicates that page charges will not be paid. If all is in order, the Manuscript Coordinator initiates the process, using the ASA-PACS numbers and suggested Associate Editor list supplied by the author, to recruit an Associate Editor who is willing to handle the manuscript. At this time the author also receives a "confirmation of receipt" e-mail message. If the staff members deem that there are submission defects that should be addressed, then the author receives a "quality check" e-mail message. If there are only a small number of defects, the e-mail message may give an explicit description of what is needed. In some cases, when the defects are very numerous and it is apparent that the author(s) are not aware that the Journal has a set of format requirements, the e-mail message may simply ask the authors to read the instructions (i.e., the present document) and to make a reasonable attempt to follow them.

III. PUBLICATION CHARGES

A. Mandatory charges

Papers of longer length, or with color figures desired for the print version of the Journal, will not be published unless it is first agreed that certain charges will be paid. If it is evident that there is a strong chance that a paper's published length will exceed 12 pages, the paper will not be processed unless the authors guarantee that the charges will be paid. If the paper's published length exceeds 12 pages, there is a mandatory charge of $80 per page for the entire article. (The mandatory charge for a 13-page article, for example, would be $1,040, although there would be no mandatory charge if the length were 12 pages.)

To estimate the extent of the page charges, count 3 manuscript pages (double-spaced lines, with wide margins) as equivalent to one printed page, and count 4 figures or tables as equivalent to one printed page. If this number exceeds 12 and your institution and/or sponsor will not pay the page charges, please shorten your paper before submitting it.

Color figures can be included in the online version of the Journal at no extra charge, provided that they reproduce suitably as black and white figures in the print version. The charge for inclusion of color figures in the print version of the Journal is $325 per figure file. If figures that contain parts are submitted in separate files for each part, the $325 charge applies to each file.

If an author's institution or research sponsor is unwilling to pay such charges, the author should make sure that all of the figures in the paper are suitable for black and white printing, and that the estimated length is manifestly such that it will not lead to a printed paper that exceeds 12 pages.

B. Optional charges

To encourage a large circulation of the Journal and to allow the inclusion of a large number of selected research articles within its volumes, the Journal seeks partial subsidization from the authors and their institutions. Ordinarily, it is the institutions and/or the sponsors of the research that undertake the subsidization. Individual authors must ask their institutions or whatever agencies sponsor their research to pay a page charge of $80 per printed page to help defray the publication costs of the Journal. (This is roughly 1/3 of the actual cost per page for the publication of the Journal.) The institutions and the sponsoring agencies have the option of declining, although a large fraction of those asked do pay them. The review and selection of manuscripts for publication proceeds without any consideration on the part of the Associate Editors as to whether such page charges will be honored. The publication decision results from consideration of the factors associated with peer review; the acceptance of the page charges is irrelevant.
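The length-estimation and charge rules above can be sketched as a small script. This is a hypothetical helper for authors, not part of the Journal's submission system; the function names are invented, and the rates are simply those quoted above (3 manuscript pages or 4 figures/tables per printed page, $80 per page for the whole article beyond 12 printed pages, $325 per color figure file in print).

```python
# Hypothetical estimator for JASA publication charges, based on the rules
# quoted above. Assumptions: 3 double-spaced manuscript pages ~ 1 printed
# page and 4 figures or tables ~ 1 printed page; the mandatory $80/page
# charge applies to the ENTIRE article once it exceeds 12 printed pages.
import math

def estimate_printed_pages(manuscript_pages: int, figures_and_tables: int) -> int:
    """Rough printed-page count from manuscript pages and figure/table count."""
    return math.ceil(manuscript_pages / 3 + figures_and_tables / 4)

def mandatory_charge(printed_pages: int, color_figure_files: int = 0) -> int:
    """Mandatory charges in dollars: $80/page for the whole article if it
    exceeds 12 printed pages, plus $325 per color figure file in print."""
    page_charge = 80 * printed_pages if printed_pages > 12 else 0
    return page_charge + 325 * color_figure_files

print(estimate_printed_pages(30, 8))  # 30/3 + 8/4 = 12 printed pages
print(mandatory_charge(13))           # 13 * 80 = 1040
print(mandatory_charge(12))           # no mandatory page charge
```

Note that the mandatory charge is discontinuous at the 12-page boundary: a 12-page article incurs no page charge, while a 13-page article is charged for all 13 pages.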
C. Payment of publication charges—Rightslink
When your page proofs are ready for your review, you
will receive an e-mail from AIP Publishing Production
Services. It will include a link to an online Rightslink site
where you can pay your voluntary or mandatory page charges,
color figure charges, or order reprints of your article. If you
are unable to remit payment online, you will find instructions
for requesting a printed invoice so that you may pay by check
or wire transfer.
IV. FORMAT REQUIREMENTS FOR MANUSCRIPTS
A. Overview
For a manuscript submitted by the online procedure to
pass the initial quality control, it is essential that it adhere
to a general set of formatting requirements. These requirements vary from
journal to journal, so one should not assume that a manuscript appropriate for another journal would
be satisfactory for the Journal of the Acoustical Society of
America. The reasons for the Journal's requirements are
partly to ensure a uniform style for publications in the Journal and partly to ensure that the copy-editing process will
be maximally effective in producing a quality publication.
For the latter reason, adequate white space throughout the
manuscript is desired to allow room for editorial corrections,
which will generally be hand-written on a printed hard copy.
While some submitted papers will need very few or no corrections, enough accepted papers of high technical merit require such editing that it is desirable
for all submissions to be in a format that amply
allows for it.
The following is a list of some of the more important
requirements. (More detailed requirements are given in the
sections that follow.)
(1) The manuscript must be paginated, starting with the first page.
(2) The entire manuscript must be double-spaced. This includes the author addresses, the abstract, the references, and the list of figure captions. It should contain no highlighting.
(3) The title and author list is on the first page. The abstract is ordinarily on a separate page (the second page) unless there is sufficient room for it on the title page, within the constraints of ample margins, 12 pt type, double spacing, and ample white space. The introduction begins on a separate page following the page that contains the abstract.
(4) The title must be in lower case, with the only capitalized
words being the first word and proper nouns.
(5) No acronyms should be in the title or the running title
unless they are so common that they can be found in
standard dictionaries or unless they are defined in the
title.
(6) No unsupported claims for novelty or significance
should appear in the title or abstract, such as the use
of the words new, original, novel, important, and
significant.
(7) The abstract should be one paragraph and should be
limited to 200 words (100 words for Letters to the
Editor).
(8) Major section headings should be numbered with capital Roman numerals, starting with the introduction. The text of such headings should be in capital letters.
(9) Reference citations should include the full titles and
page ranges of all cited papers.
(10) There should be no personal pronouns in the abstract.
(11) No more than one-half of the references should be to the
authors themselves.
(12) The total number of figures should not ordinarily be
more than 20 (See section V. H).
(13) Line numbers to assist reviewers in commenting
on the manuscript may be included but they are not
mandatory.
B. Keyboarding instructions
Each submitted paper, even though submitted online,
should correspond to a hard copy manuscript. The electronic
version has to be prepared so that whatever is printed out will
correspond to the following specifications:
(1) The print-out must be single sided.
(2) The print-out must be configured for standard US letter paper (8.5 in. by 11 in.).
(3) The text on any given page should be confined to an area not to exceed 6.5 in. by 9 in. (One inch equals 2.54 cm.) All of the margins, when printed on standard US letter paper, should be at least 1 in.
(4) The type font must be 12 pt, and the line spacing must correspond to double spacing (approximately 1/3 in. or 0.85 cm per line of print). The fonts used for the text must be of a commonly used, easily readable variety such as Times, Helvetica, New York, Courier, Palatino, or Computer Modern.
(5) The authors are requested to use computers with adequate
word-processing software in preparing their manuscripts.
Ideally, the software must be sufficiently complete that
all special symbols used in the manuscript are printed.
(The list of symbols available to AIP Publishing for the
publication of manuscripts includes virtually all symbols
that one can find in modern scientific literature. Authors
should refrain from inventing their own symbols.) Italics
are designated with a single straight underline
in black pencil. It is preferred that vectors be designated
by bold face symbols within a published paper rather
than by arrows over the symbols.
(6) Manuscript pages must be numbered consecutively, with
the title page being page 1.
C. Order of pages
The manuscript pages must appear in the following order:
(1) Title page. (This includes the title, the list of authors,
their affiliations, with one complete affiliation for each
author appearing immediately after the author’s name,
an abbreviated title for use as a running title in the published version, and any appropriate footlines to title or
authors.)
(2) Abstract page, which may possibly be merged with the
title page if there is sufficient room. (This includes the
abstract with a separate line giving a prioritized listing
of the ASA-PACS numbers that apply to the manuscript.
The selected PACS numbers should be taken only from
the appendix concerned with acoustics of the overall
PACS listing.) Please note that the Journal requires the
abstract to be typed double spaced, just as for all of the
remainder of the manuscript.
(3) Text of the article. This must start on a new page.
(4) Acknowledgments.
(5) Appendixes (if any).
(6) Textual footnotes. (Allowed only if the paper cites references by author name and year of publication.)
(7) References. (If the paper cites references by labeling them
with numbers according to the order in which they appear,
this section will also include textual footnotes.)
(8) Tables, each on a separate page and each with a caption
that is placed above the table.
(9) Collected figure captions.
Figures should ordinarily not be included in the “Article” file. Authors do, however, have the option of including
figures embedded in the text, provided there is no ambiguity
in distinguishing figure captions from the manuscript text
proper. This is understood to be done only for the convenience of the reviewers. Such embedded figures will be ignored in the production editing process. The figures that will
be used are those that were uploaded, one by one as separate
files, during the online submission process.
D. Title page of manuscript
The title page should include on separate lines, with appropriate intervening spacing: The article title, the name(s)
of author(s), one complete affiliation for each author, and the
date on which the manuscript is uploaded to the JASA manuscript submission system.
With a distinctive space intervening, the authors must
give, on a separate line, a suggested running title of six
words or less that contains a maximum of 50 characters. The
running title will be printed at the bottom of each printed
page, other than the first, when the paper appears in the Journal. Because the printing of running titles follows an abbreviated identification of the authors, the maximum permissible
length depends critically on the number of the authors and the
lengths of their names. The running title also appears on the
front cover of the Journal as part of an abbreviated table of
contents, and it is important that it give a nontrivial indication
of the article’s content, although some vagueness is to be
expected.
Titles should briefly convey the general subject matter of
the paper and should not serve as abstracts. The upper limit
is set at 17 words. They must be written using only words
and terminology that can be found in standard unabridged
US English dictionaries or in standard scientific/technical
dictionaries, and they must contain no acronyms other than
those that can be found in such dictionaries. (If authors
believe that the inclusion of a less common acronym in the
title will help in information retrieval and/or will help some
readers to better understand the subject matter of the
paper, then that acronym should be explicitly defined in the
title.) Ideally, titles should be such that one can easily identify the principal ASA-PACS numbers for the paper, and
consequently they should contain appropriate key words.
This will enable a reader doing a computer-assisted search to
determine whether the paper has any relevance to a given
research topic. Begin the first word of the title with a capital
letter; thereafter capitalize only proper nouns. The Journal
does not allow the use of subjective words such as “original,”
“new,” “novel,” “important,” and “significant” in the title. In
general, words whose sole purpose is to tout the importance
of a work are regarded as unnecessary; words that clarify the
nature of the accomplishment are preferred.
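The title and running-title constraints above (a title of at most 17 words with no words touting novelty or importance, and a running title of at most six words and 50 characters including spaces) lend themselves to a simple pre-submission check. The sketch below is a hypothetical helper, not part of any ASA or AIP tooling; the function name and the banned-word list are drawn only from the words this document itself forbids.

```python
# Hypothetical checker for the JASA title rules quoted above. The banned
# words are the "subjective words" the Journal disallows in titles.
BANNED = {"original", "new", "novel", "important", "significant"}

def check_title(title: str, running_title: str) -> list[str]:
    """Return a list of rule violations (empty if none are found)."""
    problems = []
    if len(title.split()) > 17:
        problems.append("title exceeds 17 words")
    if BANNED & {w.strip('.,;:').lower() for w in title.split()}:
        problems.append("title contains a word touting novelty/importance")
    if len(running_title.split()) > 6:
        problems.append("running title exceeds six words")
    if len(running_title) > 50:
        problems.append("running title exceeds 50 characters")
    return problems

print(check_title("A novel method for measuring reverberation time",
                  "Measuring reverberation time"))
# flags the word "novel" in the title
```

Such a check cannot, of course, verify the substantive requirements, such as whether the title contains suitable key words for indexing, but it catches the mechanical limits before submission.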
In the list of authors, to simplify later indexing, adopt one
form of each name to use on the title pages of all submissions
to the Journal. It is preferred that the first name be spelled
out, especially if the last name is a commonly encountered
last name. If an author normally uses the middle name instead
of the first name, then an appropriate construction would be
one such as J. John Doe. Names must be written with last
name (family name) given last. Omit titles such as Professor,
Doctor, Colonel, Ph.D., and so on.
Each author may include only one current affiliation in
the manuscript. Put the author’s name above the institutional
affiliation. When there is more than one author with the
same institutional affiliation, put all such names above the
stating of that affiliation. (See recent issues of the Journal for
examples.)
In the stating of affiliations, give sufficient (but as briefly
as possible) information so that each author may be contacted
by mail by interested readers; e-mail addresses are optional.
Do not give websites, telephone numbers, or FAX numbers.
Names of states and countries should be written out in full.
If a post office box should be indicated, append this to the
zip code (as in 02537-0339). Use no abbreviations other than
D.C. (for District of Columbia). If the address is in the United
States, omit the country name.
The preferred order of listing of authors is in accord
with the extent of their contributions to the research and
to the actual preparation of the manuscript. (Thus, the last
listed author is presumed to be the person who has done the
least.)
The stated affiliation of any given author should be that
of the institution that employed the author at the time the
work was done. In the event an author was employed simultaneously by several institutions, the stated affiliation should
be that through which the financial support for the research
was channeled. If the current (at the time of publication)
affiliation is different, then that should be stated in a footline.
If an author is deceased then that should be stated in a footline. (Footlines are discussed further below.)
There is no upper limit to the number of authors of any
given paper. If the number becomes so large that the appearance of the paper when in print could look excessively awkward, the authors will be given the option of not explicitly
printing the author affiliations in the heading of the paper.
Instead, these can be handled by use of footlines as described
below. The Journal does not want organizations or institutions to be listed as authors. If there are a very large number
of authors, those who made lesser contributions can be designated by a group name ending with the word
"group." A listing of the members of the group, possibly including their addresses, should be given in a footline.
Footlines to the title and to the authors’ names are consecutively ordered and flagged by lower case alphabetical
letters, as in Fletchera), Huntb), and Lindsayc). If there is any
history of the work’s being presented or published in part
earlier, then a footline flag should appear at the end of the
title, and the first footline should be of the form exemplified
below:2
a)
Portions of this work were presented in “A modal distribution study of
violin vibrato,” Proceedings of International Computer Music Conference,
Thessaloniki, Greece, September 1997, and “Modal distribution analysis of
vibrato in musical signals,” Proceedings of SPIE International Symposium
on Optical Science and Technology, San Diego, CA, July 1998.
Authors have the option of giving a footline stating the
e-mail address of one author only (usually the corresponding
author), with an appropriate footline flag after that name and
with each footline having the form:
b)
Author to whom correspondence should be addressed. Electronic mail:
name@servername.com
E. Abstract page
Abstracts are often published separately from actual articles, and thus are more accessible than the articles themselves to many readers. Authors consequently must write abstracts so that readers without immediate access to the entire
article can decide whether the article is worth obtaining. The
abstract is customarily written last; the choice of what should
be said depends critically on what is said in the body of the
paper itself.
The abstract should not be a summary of the paper. Instead, it should give an accurate statement of the subject of
the paper, and it should be written so that it is intelligible
to a broad category of readers. Explicit results need not be
stated, but the nature of the results obtained should be stated.
Bear in mind that the abstract of a journal article, unlike the
abstract of a talk for a meeting, is backed-up by a written
article that is readily (if not immediately) accessible to the
reader.
Limit abstracts to 200 words (100 words for Letters to the
Editor). Displayed equations that are set apart from the text
count as 40 words. Do not use footnotes. If the authors decide
that it is imperative to cite a prior publication in the abstract,
then the reference should be embedded within the text and
enclosed within square brackets. These should be in one of
the two standard JASA formats discussed further below, but
titles of articles need not be given. The abstract should contain
no acknowledgments. In some circumstances, abstracts of
longer than 200 words will be allowed. If an author believes
that a longer abstract is essential for the paper, they should
send an e-mail message to jasa@aip.org with the subject line
“Longer abstract requested.” The text of the desired abstract
should be included in the message, along with a statement of
why the author believes the longer abstract is essential. The
abstract will be reviewed by the editors, and possibly a revised
wording may be suggested.
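The counting rule above (a 200-word limit, or 100 words for Letters to the Editor, with each displayed equation counted as 40 words) can be applied mechanically before submission. The function below is a hypothetical author-side helper, not part of the Journal's systems, and it uses a simple whitespace split as an approximation of the editors' word count.

```python
# Hypothetical word-count check for a JASA abstract, using the rule quoted
# above: displayed equations that are set apart from the text each count
# as 40 words toward the 200-word (or 100-word) limit.
def abstract_word_count(text: str, displayed_equations: int = 0) -> int:
    """Effective word count of an abstract for the 200/100-word limits."""
    return len(text.split()) + 40 * displayed_equations

abstract = "The reverberation time of a small rectangular room is measured."
print(abstract_word_count(abstract, displayed_equations=1))  # 10 + 40 = 50
```

Because one displayed equation costs as much as 40 words of prose, it is usually more economical to describe a result in words within the abstract and reserve the equations for the body of the paper.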
Personal pronouns and explicit claims as to novelty
should be assiduously avoided. Do not repeat the title in
the abstract, and write the abstract with the recognition that
the reader has already read the title. Avoid use of acronyms
and unfamiliar abbreviations. If the initial writing leads to
the multiple use of a single lengthy phrase, avoid using an
author-created acronym to achieve a reduction in length of
the abstract. Instead, use impersonal pronouns such as it and
these and shorter terms to allude to that phrase. The shortness of the abstract reduces the possibility that the reader will
misinterpret the allusion.
On the same page of the abstract, but separated from
the abstract by several blank lines, the authors must give the
principal ASA-PACS number for the paper, followed by up to
three other ASA-PACS numbers that apply. This should be in
the format exemplified below:
PACS numbers: 43.30.Pc, 43.30.Sf
The principal ASA-PACS number must be the first in this
list. All of the selected PACS numbers must begin with the number 43, which corresponds to the appendix of the overall PACS listing that is concerned with acoustics. Authors are
requested not to adopt a principal PACS number in the category of General Linear Acoustics (one beginning with 43.20) unless there is no specific area of acoustics with which the subject matter can be associated. The more specific the principal PACS number, the greater the likelihood that an appropriate match will be made with an Associate Editor and that appropriate reviewers will be recruited. When the paper is printed, the list of ASA-PACS numbers will be immediately followed on the same line by the initials, enclosed in brackets, of the Associate Editor who handled the manuscript.
F. Section headings
The text of a manuscript, except for very short Letters to
the Editor, is customarily broken up into sections. Four types
of section headings are available: principal heading, first subheading, second subheading, and third subheading. The principal headings are typed boldface in all capital letters and
appear on separate lines from the text. These are numbered
by uppercase roman numerals (I, II, III, IV, etc.), with the
introductory section being principal section I. First subheadings are also typed on separate lines; these are numbered by
capital letters: A, B, C, etc. The typing of first subheadings is boldface, with only the first word and proper nouns being capitalized. Second subheadings are ordered by numbers (1, 2, 3, etc.) and are also typed on separate lines. The typing of second subheadings is italic boldface, also with only the first word and proper nouns capitalized. Third subheadings appear in the text at the beginning of paragraphs. These are numbered by lower-case letters (a, b, c, etc.) and are typed in italics (not boldface). Examples of these types of headings can be
found in recent issues of the Journal. (In earlier issues, the
introduction section was not numbered; it is now required to
be numbered as the first principal section.)
Headings to appendixes have the same form as principal
headings, but are numbered by upper-case letters, with an
optional brief title following the identification of the section
as an appendix, as exemplified below:
APPENDIX C: CALCULATION OF IMPEDANCES
If there is only one appendix, the letter designation can
be omitted.
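For authors preparing manuscripts in LaTeX, the four heading levels and the appendix headings described above correspond roughly to the standard sectioning commands; the sketch below is illustrative only (the exact numbering and typefaces are produced by the class file, and the mapping of third subheadings to \paragraph is an assumption):

```latex
\section{Introduction}             % I. INTRODUCTION (principal heading)
\subsection{Measurement setup}     % A. Measurement setup (first subheading)
\subsubsection{Source calibration} % 1. Source calibration (second subheading)
\paragraph{Free-field check}       % a. Free-field check (third subheading, run-in)

\appendix
\section{Calculation of impedances} % APPENDIX A: CALCULATION OF IMPEDANCES
```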
Information for Contributors
2325
V. STYLE REQUIREMENTS
A. Citations and footnotes
Regarding the format of citations made within the text,
authors have two options: (1) textual footnote style and
(2) alphabetical bibliographic list style.
In the textual footnote style, references and footnotes are
cited in the text by superscripted numerals, as in “the basic
equation was first derived by Rayleigh44 and was subsequently modified by Plesset45.” References and footnotes to
text material are intercalated and numbered consecutively in
order of first appearance. If a given reference must be cited at
different places in the text, and the citation is identical in all
details, then one must use the original number in the second
citation.
In the alphabetical bibliographic list style, footnotes as
such are handled as described above and are intended only
to explain or amplify remarks made in the text. Citations to
specific papers are flagged by parentheses that enclose either
the year of publication or the author’s name followed by the
year of publication, as in the phrases “some good theories
exist (Rayleigh, 1904)” and “a theory was advanced by
Rayleigh (1904).” In most of the papers where this style is
elected there are no footnotes, and only a bibliographic list
ordered alphabetically by the last name of the first author
appears at the end of the paper. In a few cases,3 there is a
list of footnotes followed by an alphabetized reference list.
Within a footnote, one has the option of referring to any
given reference in the same manner as is done in the text
proper.
Both styles are in common use in other journals, although the Journal of the Acoustical Society of America is
one of the few that allows authors a choice. Typically, the
textual footnote style is preferred for articles with a smaller
number of references, while the alphabetical bibliographic
list style is preferred for articles with a large number of references. The diversity of the articles published in the Journal
makes it infeasible to require just one style unilaterally.
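In LaTeX, both citation styles can be produced with standard bibliography tools; the natbib options shown here are one possible setup, not a Journal requirement, and the citation key is hypothetical:

```latex
% Textual footnote style: superscripted numerals in order of first appearance
\usepackage[super,sort&compress]{natbib}
% ... the basic equation was first derived by Rayleigh\cite{rayleigh1904} ...

% Alphabetical bibliographic list style: author-year citations
\usepackage[round,authoryear]{natbib}
% ... some good theories exist \citep{rayleigh1904}, and a theory was
% advanced by \citet{rayleigh1904} ...
```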
B. General requirements for references
Regardless of what reference style the manuscript uses,
the format of the references must include the titles of articles.
For articles written in a language other than English, and for
which the Latin alphabet is used, give the actual title first in
the form in which it appeared in the original reference, followed by the English translation enclosed within parentheses.
For titles in other languages, give only the English translation,
followed by a statement enclosed in parentheses identifying
the language of publication. Do not give Latin-alphabet
transliterations of the original title. For titles in English and for
English translations of titles, use the same format as specified
above for the typing of the title on the title page. Begin the first
word of the title with a capital letter; thereafter capitalize only
those words that are specified by standard dictionaries to be
capitalized in ordinary prose.
One must include only references that can be obtained
by the reader. In particular, do not include references that
merely state: “personal communication.” (Possibly, one can
give something analogous to this in a textual footnote, but
only as a means of crediting an idea or pinpointing a source.
In such a case an explanatory sentence or sentence fragment is preferred to the vague term “personal communication.”)
One should also not cite any paper that has only been submitted to a journal; if it has been accepted, then the citation
should include an estimated publication date. If one cites a
reference, then the listing must contain enough information
that the reader can obtain the paper. If theses, reports, or
proceedings are cited, then the listing must contain specific
addresses to which one can write to buy or borrow the reference. In general, write the paper in such a manner that its
understanding does not depend on the reader having access to
references that are not easily obtained.
Authors should avoid giving references to material that
is posted on the internet, unless the material is truly archival,
as is the case for most online journals. If referring to nonarchival material posted on the internet is necessary to give
proper credit for priority, the authors should give the date at
which they last viewed the material online. If authors have
supplementary material that would be of interest to the readers of the article, then a proper way to post it in archival form is to make use of AIP Publishing’s supplemental
material electronic depository. Instructions for how one posts
material can be found at the site <http://scitation.aip.org/
content/asa/journal/jasa/info/authors>. Appropriate items
for deposit include multimedia (e.g., movie files, audio files,
animated .gifs, 3D rendering files), color figures, data tables,
and text (e.g., appendices) that are too lengthy or of too
limited interest for inclusion in the printed journal. If authors
desire to make reference to materials posted by persons other
than by the authors, and if the posting is transitory, the authors should first seek to find alternate references of a more
archival form that they might cite instead. In all cases, the
reading of any material posted at a transitory site must not
be a prerequisite to the understanding of the material in the
paper itself, and when such material is cited, the authors
must take care to point out that the material will not necessarily be obtainable by future readers.
In the event that a reference may be found in several
places, as in the print version and the online version of a
journal, refer first to the version that is most apt to be archived.
In citing articles, give both the first and last pages that
include it. Including the last page will give the reader some
indication of the magnitude of the article. The copying in toto of a lengthy article, for example, may be too costly for
the reader’s current purposes, especially if the chief objective
is merely to obtain a better indication of the actual subject
matter of the paper than is provided by the title.
The use of the expression “et al.” in listing authors’
names is encouraged in the body of the paper, but must not
be used in the actual listing of references, as reference lists in papers are the primary sources of the large databases that persons use, among other purposes, to search by author. This
rule applies regardless of the number of authors of the cited
paper.
References to unpublished material in the standard format of other references must be avoided. Instead, append a
graceful footnote or embed within the text a statement that
you are making use of some material that you have acquired
from another person—whatever material you actually use
of this nature must be peripheral to the development of the
principal train of thought of the paper. A critical reader will
not accept its validity without at least seeing something in
print. If the material is, for example, an unpublished derivation, and if the derivation is important to the substance of the
present paper, then repeat the derivation in the manuscript
with the original author’s permission, possibly including that
person as a coauthor.
Journal titles must ordinarily be abbreviated, and each
abbreviation must be in a “standard” form. The AIP Style
Manual1 gives a lengthy list of standard abbreviations that
are used for journals that report physics research, but the
interdisciplinary nature of acoustics is such that the list omits
many journals that are routinely cited in the Journal of the
Acoustical Society of America. For determination of what
abbreviations to use for journals not on the list, one can skim
the reference lists that appear at the ends of recent articles in
the Journal. The general style for making such abbreviations
(e.g., Journal is always abbreviated by “J.,” Applied is always
abbreviated by “Appl.,” International is always abbreviated by
“Int.,” etc.) must in any event emerge from a study of such
lists, so the authors should be able to make a good guess as to
the standard form. Should the guess be in error, this will often
be corrected in the copy-editing process. Egregious errors
are often made when the author lifts a citation from another
source without actually looking up the original source. An
author might be tempted, for example, to abbreviate a journal
title as “Pogg. Ann.,” taking this from some citation in a
19th century work. The journal cited is Annalen der Physik,
sometimes published with the title Annalen der Physik und
Chemie, with the standard abbreviation being “Ann. Phys.
(Leipzig).” The fact that J. C. Poggendorff was at one time the
editor of this journal gives very little help in the present era in
distinguishing it among the astronomical number of journals
that have been published. For Poggendorff’s contemporaries,
however, “Pogg. Ann.” had a distinct meaning.
Include in references the names of publishers of books and standards and their locations. References to books and
proceedings must include chapter numbers and/or page
ranges.
C. Examples of reference formats
The number of possible nuances in the references that
one may desire to cite is very large, and the present document
cannot address all of them; a study of the reference lists at
the ends of articles in recent issues in the Journal will resolve
most questions. The following two lists, one for each of the
styles mentioned above, give some representative examples
for the more commonly encountered types of references. If
the authors do not find a definitive applicable format in the
examples below or in those they see in scanning past issues,
then it is suggested that they make their best effort to create
an applicable format that is consistent with the examples
that they have seen, following the general principles that the
information must be sufficiently complete that: (1) any present
or future reader can decide whether the work is worth looking
at in more detail; (2) such a reader, without great effort, can
look at, borrow, photocopy, or buy a copy of the material;
and (3) a citation search, based on the title, an author name,
a journal name, or a publication category, will result in the
present paper being matched with the cited reference.
1. Textual footnote style
1Y. Kawai, “Prediction of noise propagation from a depressed road by using boundary integral equations” (in Japanese), J. Acoust. Soc. Jpn. 56, 143–147 (2000).
2L. S. Eisenberg, R. V. Shannon, A. S. Martinez, J. Wygonski, and A. Boothroyd, “Speech recognition with reduced spectral cues as a function of age,” J. Acoust. Soc. Am. 107, 2704–2710 (2000).
3J. B. Pierrehumbert, The Phonology and Phonetics of English Intonation (Ph.D. dissertation, Mass. Inst. Tech., Cambridge, MA, 1980); as cited by D. R. Ladd, I. Mennen, and A. Schepman, J. Acoust. Soc. Am. 107, 2685–2696 (2000).
4F. A. McKiel, Jr., “Method and apparatus for sibilant classification in a speech recognition system,” U.S. Patent No. 5,897,614 (27 April 1999). A brief review by D. L. Rice appears in: J. Acoust. Soc. Am. 107, p. 2323 (2000).
5A. N. Norris, “Finite-amplitude waves in solids,” in Nonlinear Acoustics, edited by M. F. Hamilton and D. T. Blackstock (Academic Press, San Diego, 1998), Chap. 9, pp. 263–277.
6V. V. Muzychenko and S. A. Rybak, “Amplitude of resonance sound scattering by a finite cylindrical shell in a fluid” (in Russian), Akust. Zh. 32, 129–131 (1986); English transl.: Sov. Phys. Acoust. 32, 79–80 (1986).
7M. Stremel and T. Carolus, “Experimental determination of the fluctuating pressure on a rotating fan blade,” on the CD-ROM: Berlin, March 14–19, Collected Papers, 137th Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association (ISBN 3-9804458-5-1, available from Deutsche Gesellschaft fuer Akustik, Fachbereich Physik, Universitaet Oldenburg, 26111 Oldenburg, Germany), paper 1PNSB_7.
8ANSI S12.60-2002 (R2009) American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools (American National Standards Institute, New York, 2002).
2. Alphabetical bibliographic list style
American National Standards Inst. (2002). ANSI S12.60 (R2009) American National Standard Acoustical Performance Criteria, Design Requirements, and Guidelines for Schools (American National Standards Inst., New York).
Ando, Y. (1982). “Calculation of subjective preference in concert halls,” J. Acoust. Soc. Am. Suppl. 1 71, S4–S5.
Bacon, S. P. (2000). “Hot topics in psychological and physiological acoustics: Compression,” J. Acoust. Soc. Am. 107, 2864(A).
Bergeijk, W. A. van, Pierce, J. R., and David, E. E., Jr. (1960). Waves and the Ear (Doubleday, Garden City, NY), Chap. 5, pp. 104–143.
Flatté, S. M., Dashen, R., Munk, W. H., Watson, K. M., and Zachariasen, F. (1979). Sound Transmission through a Fluctuating Ocean (Cambridge University Press, London), pp. 31–47.
Hamilton, W. R. (1837). “Third supplement to an essay on the theory of systems of waves,” Trans. Roy. Irish Soc. 17 (part 1), 1–144; reprinted in: The Mathematical Papers of Sir William Rowan Hamilton, Vol. II: Dynamics, edited by A. W. Conway and A. J. McConnell (Cambridge University Press, London), pp. 162–211.
Helmholtz, H. (1859). “Theorie der Luftschwingungen in Röhren mit offenen Enden” (“Theory of air oscillations in tubes with open ends”), J. reine ang. Math. 57, 1–72.
Kim, H.-S., Hong, J.-S., Sohn, D.-G., and Oh, J.-E. (1999). “Development of an active muffler system for reducing exhaust noise and flow restriction in a heavy vehicle,” Noise Control Eng. J. 47, 57–63.
Simpson, H. J., and Houston, B. H. (2000). “Synthetic array measurements for waves propagating into a water-saturated sandy bottom ...,” J. Acoust. Soc. Am. 107, 2329–2337.
Other examples may be found in the reference lists of
papers recently published in the Journal.
D. Figure captions
The illustrations in the Journal have figure captions
rather than figure titles. Clarity, rather than brevity, is desired, so captions can extend over several lines. Ideally, a
caption must be worded so that a casual reader, on skimming
an article, can obtain some indication as to what an illustration is depicting, without actually reading the text of the
article. If an illustration is taken from another source, then
the caption must acknowledge and cite that source. Various
examples of captions can be found in the articles that appear
in recent issues of the Journal.
If the figure will appear in black and white in the printed
edition and in color online, the statement “(Color online)”
should be added to the figure caption. For color figures that will appear in black and white in the printed edition of the Journal, references to specific colors (e.g., red circles, blue lines) must not be included in the caption.
E. Acknowledgments
The section giving acknowledgments must not be numbered and must appear following the concluding section. It
is preferred that acknowledgments be limited to those who
helped with the research and with its formulation and to
agencies and institutions that provided financial support. Administrators, administrative assistants, associate editors,
and persons who assisted in the nontechnical aspects of the
manuscript preparation must not be acknowledged. In many
cases, sponsoring agencies require that articles give an acknowledgment and specify the format in which the acknowledgment must be stated—doing so is fully acceptable. Generally, the Journal expects that the page charges will be
honored for any paper that carries an acknowledgment to a
sponsoring organization.
F. Mathematical equations
Authors are expected to use computers with appropriate
software to typeset mathematical equations.
Authors are also urged to take the nature of the actual
layout of the journal pages into account when writing mathematical equations. A line in a column of text is typically 60
characters, but mathematical equations are often longer. To ensure that their papers look attractive when printed, authors must seek to write sequences of equations, each of which fits into a single column, some of which define symbols appearing in another equation, even if doing so results in a greater number
of equations. If an equation whose length will exceed that
of a single column is unavoidable, then the authors must
write the equation so that it is neatly breakable into distinct
segments, each of which fits into a single column. The casting
of equations in a manner that requires the typesetting to revert
to a single column per page (rather than two columns per
page) format must be assiduously avoided. To make sure that this does not occur, authors familiar with desk-top publishing software and techniques may find it convenient to recast manuscripts temporarily into a form where the column width corresponds to 60 text characters, so as to verify that none of the line breaks within equations will be awkward.
Equations are numbered consecutively in the text in the order in which they appear; the number designation is in parentheses and on the right side of the page. The numbering
of the equations is independent of the section in which they
appear for the main body of the text. However, for each
appendix, a fresh numbering begins, so that the equations in
Appendix B are labeled (B1), (B2), etc. If there is only one
appendix, it is treated as if it were Appendix A in the numbering of equations.
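For LaTeX manuscripts, REVTeX-style classes restart equation numbering in each appendix automatically after \appendix; with the plain article class the same effect can be produced manually, as in this sketch for Appendix B (the equation itself is only a placeholder):

```latex
\appendix
\section{Calculation of impedances}  % Appendix B in a two-appendix paper
% Manual renumbering (not needed with REVTeX-style classes):
\renewcommand{\theequation}{B\arabic{equation}}
\setcounter{equation}{0}
\begin{equation}
  Z = \rho c\,\frac{1 + R}{1 - R}    % typeset as Eq. (B1)
\end{equation}
```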
G. Phonetic symbols
The phonetic symbols included in a JASA manuscript
should be taken from the International Phonetic Alphabet
(IPA), which is maintained by the International Phonetic
Association, whose home page is http://www.langsci.ucl.
ac.uk/ipa/. The display of the most recent version of the
alphabet can be found at http://www.langsci.ucl.ac.uk/ipa/
ipachart.html.
The total set of phonetic symbols that can be used by
AIP Publishing during the typesetting process is the set
included among the Unicode characters. This includes most
of the symbols and diacritics of the IPA chart, plus a few
compiled combinations, additional tonal representations, and
separated diacritics. A list of all such symbols is given in
the file phonsymbol.pdf which can be downloaded by going
to the JASA website http://scitation.aip.org/content/asa/
journal/jasa/info/authors and then clicking on the item List of
Phonetic Symbols. This file gives, for each symbol (displayed
in 3 different Unicode fonts, DoulosSIL, GentiumPlus, and
CharisSILCompact): its Unicode hex ID number, the Unicode
character set it is part of, its Unicode character name, and
its IPA definition (taken from the IPA chart). Most of these
symbols and their Unicode numbers are also available from
Professor John Wells of University College London at http://
www.phon.ucl.ac.uk/home/wells/ipa-unicode.htm#alfa,
without the Unicode character names and character set names.
The method of including such symbols in a manuscript
is to use, in conjunction with a word processor, a Unicode-compliant font that includes all symbols required. Fonts that
are not Unicode-compliant should not be used. Most computers
come with Unicode fonts that give partial coverage of the IPA.
Some sources where one can obtain Unicode fonts for Windows,
MacOS, and Linux with full IPA coverage are http://www.
phon.ucl.ac.uk/home/wells/ipa-unicode.htm and http://scripts.
sil.org/cms/scripts/page.php?item_id=SILFontList. Further information about which fonts contain a desired symbol set can
be found at http://www.alanwood.net/unicode/fontsbyrange.
html#u0250 and adjacent pages at that site. While authors
may use any Unicode-compliant font in their manuscript, AIP
Publishing reserves the right to replace the author’s font with
a Unicode font of its choice (currently one of the SIL fonts
Doulos, Gentium, or Charis, but subject to change in the future).
For LaTeX manuscripts, PXP’s LaTeX-processing
environment (MikTeX) supports the use of TIPA fonts. TIPA
fonts are available through the Comprehensive TeX Archive
Network at http://www.ctan.org/ (download from http://www.
ctan.org/pkg/tipa).
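As a brief illustration of the TIPA route (the particular symbols are arbitrary examples of the TIPA shorthand, in which @ is schwa and N is eng):

```latex
\usepackage{tipa}   % in the preamble
% ...
The unstressed vowel reduces to \textipa{[@]}, and the final
nasal is \textipa{[N]} rather than \textipa{[n]}.
```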
H. Figures
Each figure should be manifestly legible when reduced
to one column of the printed journal page. Figures requiring
the full width of a journal page are discouraged, but exceptions can be made if the reasons for such are sufficiently
evident. The inclusion of figures in the manuscript should be
such that the manuscript, when published, should ordinarily
have no more than 30% of the space devoted to figures, and
such that the total number of figures should ordinarily not be
more than 20. In terms of the restriction of the total space for
figures, each figure part will be considered as occupying a
quarter page. Because of the advances in technology and the
increasingly wider use of computers in desk-top publishing,
it is strongly preferred that authors use computers exclusively in the preparation of illustrations. If any figures are
initially in the form of hard copy, they should be scanned
with a high quality scanner and converted to electronic form.
Each figure that is to be included in the paper should be cast
into one of several acceptable formats (TIFF, EPS, JPEG, or
PS) and put into a separate file.
The figures are numbered in the order in which they are
first referred to in the text. There must be one such referral
for every figure in the text. Each figure must have a caption,
and the captions are gathered together into a single list that
appears at the end of the manuscript. The numbering of the
figures, insofar as the online submission process is concerned, is achieved by uploading the individual figure files in
the appropriate sequence. The author should take care to
make sure that the sequence is correct, but the author will
also have the opportunity to view the merged manuscript and
to check on this sequencing.
For the most part, figures must be designed so that they will fit within one column (3-3/8 in.) of the page, and yet be
intelligible to the reader. In rare instances, figures requiring
full page width are allowed, but the choice for using such a
figure must not be capricious.
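In a LaTeX manuscript, a single-column figure is typically included as in the following sketch (the file name and caption are hypothetical):

```latex
\usepackage{graphicx}  % in the preamble
% ...
\begin{figure}
  \includegraphics[width=\columnwidth]{setup_sketch.eps}
  \caption{Sketch of the experimental arrangement, showing the source,
  the receiving hydrophone, and the water-filled tank.}
  \label{fig:setup}
\end{figure}
```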
A chief criticism of many contemporary papers is that
they contain far too many computer-generated graphical illustrations that present numerical results. An author develops
a certain general computational method (realized by software) and then uses it to exhaustively discuss a large number
of special cases. This practice must be avoided. Unless there
is an overwhelmingly important single point that the sequence of figures demonstrates as a whole, an applicable rule
of thumb is that the maximum number of figures of a given
type must be four.
The clarity of most papers is greatly improved if the
authors include one or more explanatory sketches. If, for
example, the mathematical development presumes a certain
geometrical arrangement, then a sketch of this arrangement
must be included in the manuscript. If the experiment is carried out with a certain setup of instrumentation and apparatuses, then a sketch is also appropriate. Various clichés,
such as Alice’s—“and what is the use of a book without
pictures?”—are strongly applicable to journal articles in
acoustics. The absence of any such figures in a manuscript,
even though they might have improved the clarity of the
paper, is often construed as an indication of a callous lack of
sympathy for the reader’s potential difficulties when attempting to understand a paper.
Color figures can be included in the online version of the
Journal with no extra charge provided that these appear suitably as black and white figures in the print edition.
I. Tables
Tables are numbered by capital roman numerals
(TABLE III, TABLE IV, etc.) and are collected at the end of
the manuscript, following the references and preceding the
figure captions, one table per page. There should be a descriptive caption (not a title) above each table in the manuscript.
Footnotes to individual items in a table are designated by raised lower-case letters (0.123a, Martinb, etc.). The footnotes as such are given below the table and should be as brief as practicable. If the footnotes are to references already cited in the text, then they should have forms such as “aReference 10” or “bFirestone (1935),” depending on
the citation style adopted in the text. If the reference is not
cited in the text, then the footnote has the same form as
a textual footnote when the alphabetical bibliographic list
style is used. One would cast the footnote as in the second
example above and then include a reference to a 1935 work
by Firestone in the paper’s overall bibliographic list. If,
however, the textual footnote style is used and the reference
is not given in the text itself, an explicit reference listing
must be given in the table footnote itself. This should contain
the bare minimum of information necessary for a reader to
retrieve the reference. In general, it is recommended that
no footnote refer to references that are not already cited in
the text.
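A table following these conventions might be marked up in LaTeX as sketched below; the data and footnotes are hypothetical, and the rendering of \footnote entries as raised letters below the table is a feature of REVTeX-style classes:

```latex
\begin{table}
\caption{Measured absorption coefficients for the two samples
(values are placeholders).}
\begin{tabular}{lcc}
Sample & $f$ (kHz) & $\alpha$ (dB/m) \\
\hline
A & 1.0 & 0.123\footnote{Reference 10.} \\
B & 1.0 & 0.456\footnote{Firestone (1935).} \\
\end{tabular}
\end{table}
```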
VI. THE COVER LETTER
The submission of an electronic file containing a cover
letter is now optional. Most of the Journal’s requirements
previously met by the submission of a signed cover letter are
now met during the detailed process of online submission.
The fact that the manuscript was transmitted by the corresponding author who was duly logged onto the system is
taken as prima facie proof that the de facto transmittal letter
has been signed by the corresponding author.
There are, however, some circumstances where a cover
letter file might be advisable or needed:
(1) If persons who would ordinarily have been included
as authors have given permission or requested that their names
not be included, then that must be so stated. (This requirement
is imposed because some awkward situations have arisen in the
past in which persons have complained that colleagues or former colleagues have deliberately omitted their names as authors from papers to which they have contributed.
The Journal also has the policy that a paper may still be
published, even if one of the persons who has contributed to the
work refuses to allow his or her name to be included among the
list of authors, providing there is no question of plagiarism.)
Unless a cover letter listing such exceptions is submitted, the
submittal process implies that the corresponding author is
attesting that the author list is complete.
(2) If there has been any prior presentation or any overlap
in concept with any other manuscripts that have been either
published or submitted for publication, this must be stated in
a cover letter. If the manuscript has been previously submitted
elsewhere for publication, and subsequently withdrawn, this
must also be disclosed. If none of these apply for the submitted
manuscript, then the submission process is construed to imply
that the corresponding author is attesting to such a fact.
(3) (Optional.) Reasons why the authors have selected to
submit their paper to JASA rather than some other journal.
These would ordinarily be supplied if the authors are concerned that there may be some questions as to the paper
meeting the “truly acoustics” criterion or of its being within
the scope of the Journal. If none of the references cited in
the submitted paper are to articles previously published in
the Journal, it is highly advisable that some strong reasons
be given for why the authors believe the paper falls within
the scope of the Journal.
(4) If the online submission includes the listing of one or more persons whom the authors prefer not be used as reviewers, an explanation in a cover letter would be desirable.
(5) If the authors wish to make statements which they
feel are appropriate to be read by editors, but are inappropriate to be included in the actual manuscript, then such
should be included in a cover letter.
Cover letters are treated by the Peer X-Press system as
being distinct from rebuttal letters.
Rebuttal letters should be submitted with revised manuscripts, and the contents are usually such that the authors
give, when appropriate, rebuttals to suggestions and criticisms of the reviewers, and give detailed discussion of how
and why the revised manuscript differs from what was originally submitted.
VII. EXPLANATIONS AND CATEGORIES

A. Subject classification, ASA-PACS

Authors are asked in their online submittal and in their
manuscript to identify the subject classification of their paper
using the ASA-PACS system. The subject index of the Journal
presently follows a specialized extension of the Physics
and Astronomy Classification Scheme4 (PACS) maintained
by AIP Publishing. Numbers in this scheme pertaining to
Acoustics have the general form: 43.nn.Aa, where n denotes
a digit, A denotes a capital alphabetical letter, and a denotes
a lower case letter. An amplified version of the section 43
listing appears as an appendix to AIP Publishing's document,
and this is here referred to as the ASA-PACS system. The
ASA-PACS listing for acoustics appears at the end of
each volume of the Journal preceding the index (June and
December issues). It can also be found by first going to the
Journal's site <http://scitation.aip.org/content/asa/journal/
jasa/info/authors> and then clicking the item: Physics and
Astronomy Classification Scheme (PACS), Section 43,
Acoustics. (On the CD distribution of the Journal, the
appropriate file for the index of each volume is jasin.pdf.
The listing of the ASA-PACS numbers is at the beginning
of this file.) It is the authors' responsibility to identify a
principal ASA-PACS number corresponding to the subject
matter of the manuscript and also to identify all other
ASA-PACS numbers (up to a total of four) that apply.

B. Suggestions for Associate Editors

When suggesting an Associate Editor to handle a specific
manuscript, authors should consult the document titled
"Associate Editors identified with PACS classification items"
obtainable at the JASA web site <http://scitation.aip.org/
content/asa/journal/jasa/info/about>. Here the Associate
Editors are identified by their initials, and the relation of the
initials to the names is easily discerned from the listing of
Associate Editors on the back cover of each issue, on the
title page of each volume, and at the online site
<http://scitation.aip.org/content/asa/journal/jasa/info/about>.
(On the CD distribution of the Journal, the appropriate file
is jasae.pdf.)

Authors are not constrained to select Associate Editors
specifically identified with their choice of principal ASA-PACS
number and should note that the Journal has special
Associate Editors for Mathematical Acoustics, Computational
Acoustics, and Education in Acoustics. Review and tutorial
articles are ordinarily invited; submission of unsolicited review
or tutorial articles (other than those which can be construed
as papers on education in acoustics) without prior discussion
with the Editor-in-Chief is discouraged. Authors should
suggest the Associate Editor for Education in Acoustics
for tutorial papers that contain material which might be used
in standard courses on acoustics or material that supplements
standard textbooks.

C. Types of manuscripts
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Categories of papers that are published in the Journal
include the following:
1. Regular research articles
These are papers which report original research. There is
neither a lower limit nor an upper limit on their length, although authors must pay page charges if the length results in
more than 12 printed pages. The prime requirement is that
such papers must contain a complete account of the reported
research.
2. Education in acoustics articles
Such papers should be of potential interest to acoustics
educators. Examples include descriptions of laboratory experiments and of classroom demonstrations. Papers that describe computer simulations of basic acoustical phenomena
also fall within this category. Tutorial discussions on how to
present acoustical concepts, including mathematical derivations that might give students additional insight, are possible
contributions.
Information for Contributors
3. Letters to the editor
These are shorter research contributions that can be
any of the following: (i) an announcement of a research
result, preliminary to a full report of the research; (ii) a scientific
or technical discussion of a topic that is timely; (iii) brief
alternate derivations or alternate experimental evidence concerning acoustical phenomena; (iv) provocative articles that
may stimulate further research. Brevity is an essential feature
of a letter, and the Journal suggests 3 printed journal pages
as an upper limit, although it will allow up to 4 printed pages
in exceptional cases.
The Journal’s current format has been chosen so as to
give letters greater prominence. Their brevity in conjunction
with the possible timeliness of their contents gives impetus to a
quicker processing and to a shorter time lag between submission
and appearance in printed form in the Journal. (The quickest
route to publication that the Acoustical Society currently offers
is submission to the special section JASA Express Letters
(JASA-EL) of the Journal. For information regarding JASA-EL, visit the site <http://scitation.aip.org/content/asa/journal/
jasael/info/authors>.)
Because the desire for brevity is regarded as important,
the author is not compelled to make a detailed attempt to
place the work within the context of current research; the
citations are relatively few and the review of related research
is limited. The author should have some reason for desiring
a more rapid publication than for a normal article, and the
editors and the reviewers should concur with this. The work
should have a modicum of completeness, to the extent that
the letter “tells a story” that is at least plausible to the reader,
and it should have some nontrivial support for what is being
related. Not all the loose ends need be tied together. Often
there is an implicit promise that the publication of the letter
will be followed up by a regular research article that fills in
the gaps and that does all the things that a regular research
article should do.
4. Errata
These must be corrections to what actually was printed.
Authors must explicitly identify the passages or equations
in the paper and then state what should replace them. Long
essays on why a mistake was made are not desired. A typical
line in an errata article would be of the form: Equation (23)
on page 6341 is incorrect. The correct version is ... . For
detailed examples, the authors should look at previously published errata articles in the Journal.
5. Comments on published papers
Occasionally, one or more readers, after reading a published paper, will decide to submit a paper giving comments
about that paper. The Journal welcomes submissions of
this type, although they are reviewed to make sure that the
comments are reasonable and that they are free of personal
slurs. The format of the title of a comments paper is rigidly
prescribed, and examples can be found in previous issues of
the Journal. The authors of the papers under criticism are
frequently consulted as reviewers, but their unsubstantiated
opinion as to whether the letter is publishable is usually not
given much weight.
6. Replies to comments
Authors whose previously published paper has stimulated
the submission of a comments paper that has subsequently
been accepted have the opportunity to reply to
the comments. They are usually (but not invariably) notified
of the acceptance of the comments paper, and the Journal
prefers that the comments and the reply be published in successive pages of the same issue, although this is not always
practicable. Replies are also reviewed using criteria similar
to those of comments papers. As in the case of comments
papers, the format of the title of a reply paper is rigidly
prescribed, and examples can be found in the previous issues
of the Journal.
7. Forum letters
Forum letters are analogous to the “letters to the editor”
that one finds in the editorial section of major newspapers.
They may express opinions or advocate actions. They may
also relate anecdotes or historical facts that may be of general
interest to the readers of the Journal. They need not have a title
and should not have an abstract; they also should be brief,
and they should not be of a highly technical nature. These
are also submitted using the Peer X-Press system, but are not
handled as research articles. The applicable Associate Editor
is presently the Editor-in-Chief. For examples of acceptable
letters and the format that is desired, prospective authors of
such letters should consult examples that have appeared in
recent issues of the Journal.
8. Tutorial and review papers
Review and tutorial papers are occasionally accepted for
publication, but are difficult to handle within the peer-review
process. All are handled directly by the Editor-in-Chief, but
usually with extensive discussion with the relevant Associate
Editors. Usually such papers are invited, based on recommendations
from the Associate Editors and the Technical Committees
of the Society, and the tentative acceptance is based on a
submitted outline and on the editors’ acquaintance with the
prospective author’s past work. The format of such papers
is similar to those of regular research articles, although
there should be a table of contents following the abstract
for longer research articles. Submission is handled by the
online system, but the cover letter should discuss the history
of prior discussions with the editors. Because of the large
expenditure of time required to write an authoritative review
article, authors are advised not to begin writing until they
have some assurance that there is a good likelihood of the
submission eventually being accepted.
9. Book reviews
All book reviews must first be invited by the Associate
Editor responsible for book reviews. The format for such
reviews is prescribed by the Associate Editor, and the PXP
submittal process is used primarily to facilitate the incorporation of the reviews into the Journal.
VIII. FACTORS RELEVANT TO PUBLICATION
DECISIONS
A. Peer review system
The Journal uses a peer review system in the determination of which submitted manuscripts should be published.
The Associate Editors make the actual decisions; each editor
has specialized understanding and prior distinguished accomplishments in the subfield of acoustics that encompasses
the contributed manuscript. They seek advice from reviewers
who are knowledgeable in the general subject of the paper,
and the reviewers give opinions on various aspects of the
work; primary questions are whether the work is original and
whether it is correct. The Associate Editor and the reviewers
who examine the manuscript are the authors’ peers: persons
with comparable standing in the same research field as the
authors themselves. (Individuals interested in reviewing for
JASA or for JASA-EL can convey that interest via an e-mail
message to the Editor-in-Chief at <jasa@aip.org>.)
B. Selection criteria
Many submitted manuscripts are not selected for publication. Selection is based on the following factors: adherence
to the stylistic requirements of the Journal, clarity and eloquence of exposition, originality of the contribution, demonstrated understanding of previously published literature pertaining to the subject matter, appropriate discussion of the
relationships of the reported research to other current research or applications, appropriateness of the subject matter
to the Journal, correctness of the content of the article,
completeness of the reporting of results, the reproducibility
of the results, and the significance of the contribution. The
Journal reserves the right to refuse publication of any
submitted article without giving extensively documented
reasons, although the editors usually give suggestions that
can help the authors in the writing and submission of future
papers. The Associate Editor also has the option, but not
an obligation, of giving authors an opportunity to submit a
revised manuscript addressing specific criticisms raised in
the peer review process. The selection process occasionally
results in mistakes, but the time limitations of the editors
and the reviewers preclude extraordinary steps being taken to
ensure that no mistakes are ever made. If an author feels that
the decision may have been affected by an a priori adverse
bias (such as a conflict of interest on the part of one of the
reviewers), the system allows authors to express the reasons
in writing and ask for an appeal review.
C. Scope of the Journal
Before one decides to submit a paper to the Journal of the
Acoustical Society, it is prudent to give some thought as to
whether the paper falls within the scope of the Journal. While
this can in principle be construed very broadly, it is often the
case that another journal would be a more appropriate choice.
As a practical matter, the Journal would find it difficult to
give an adequate peer review to a submitted manuscript that
does not fall within the broader areas of expertise of any of
its Associate Editors. In the Journal’s peer-review process,
extensive efforts are made to match a submitted manuscript
with an Associate Editor knowledgeable in the field, and
the Editors have the option of declining to take on the task.
It is a tacit understanding that no Associate Editor should
accept a paper unless he or she understands the gist of the
paper and is able to make a knowledgeable assessment of the
relevance of the advice of the selected reviewers. If no one
wishes to handle a manuscript, the matter is referred to the
Editor-in-Chief and a possible resulting decision is that the
manuscript is outside the de facto scope of the Journal. When
such happens, it is often the case that the article either cites
no previously published papers in the Journal or else cites no
recent papers in any of the other journals that are commonly
associated with acoustics. Given that the Journal has been
in existence for over 80 years and has published of the order
of 35,000 papers on a wide variety of acoustical topics over
its lifetime, the absence of any references to previously
published papers in the Journal raises a flag signaling the
possibility that the paper lies outside the de facto scope of
the Journal.
Authors concerned that their work may be construed by
the Editors as not being within the scope of the Journal can
strengthen their case by citing other papers published in the
Journal that address related topics.
The Journal ordinarily selects for publication only articles that have a clear identification with acoustics. It would,
for example, not ordinarily publish articles that report results
and techniques that are not specifically applicable to acoustics, even though they could be of interest to some persons
whose work is concerned with acoustics. An editorial5 published in the October 1999 issue gives examples that are not
clearly identifiable with acoustics.
IX. POLICIES REGARDING PRIOR PUBLICATION
The Journal adheres assiduously to all applicable copyright laws, and authors must not submit articles whose publication will result in a violation of such laws. Furthermore,
the Journal follows the tradition of providing an orderly
archive of scientific research in which authors take care that
results and ideas are fully attributed to their originators. Conscious plagiarism is a serious breach of ethics, if not illegal.
(Submission of an article that is plagiarized, in part or in full,
may have serious repercussions on the future careers of the
authors.) Occasionally, authors rediscover older results and
submit papers reporting these results as though they were
new. The desire to safeguard the Journal from publishing
any such paper requires that submitted articles have a sufficient discussion of prior related literature to demonstrate the
authors’ familiarity with the literature and to establish the
credibility of the assertion that the authors have carried out a
thorough literature search.
In many cases, the authors themselves may have either
previously circulated, published, or presented work that has
substantial similarities with what is contained within the
contributed manuscript. In general, JASA will not publish
work that has been previously published. (An exception
is when the previous publication is a letter to the editor,
and when pertinent details were omitted because of the
brief nature of the earlier reporting.) Presentations at
conferences are not construed as prior publication; neither
is the circulation of preprints or the posting of preprints
on any web site, provided the site does not have the
semblance of an archival online journal. Publication as such
implies that the work is currently, and for the indefinite
future, available, either for purchase or on loan, to a broad
segment of the research community. Often the Journal will
consider publishing manuscripts with tangible similarities
to other work previously published by the authors—
provided the following conditions are met: (1) the titles
are different; (2) the submitted manuscript contains no
extensive passages of text or figures that are the same as
in the previous publication; (3) the present manuscript is
a substantial update of the previous publication; (4) the
previous publication has substantially less availability than
would a publication in JASA; (5) the current manuscript
gives ample referencing to the prior publication and
explains how the current manuscript differs from the prior
publication. Decisions regarding such cases are made by the
Associate Editors, often in consultation with the Editor-in-Chief. (Inquiries prior to submission as to whether a given
manuscript with some prior history of publication may be
regarded as suitable for JASA should be addressed to the
Editor-in-Chief at <jasa@aip.org>.)
The Journal will not consider any manuscript for publication that is presently under consideration by another journal or which is substantially similar to another one under
consideration. If it should learn that such is the case, the
paper will be rejected and the editors of the other journal will
be notified.
Authors of an article previously published as a letter
to the editor, either as a regular letter or as a letter in
the JASA-EL (JASA Express Letters) section of the
Journal, where the original account was either abbreviated
or preliminary, are encouraged to submit a more
comprehensive and updated account of their research to
the Journal.
A. Speculative papers
In some cases, a paper may be largely speculative; a new
theory may be offered for an as yet imperfectly understood
phenomenon, without complete confirmation by experiment.
Although such papers may be controversial, they often become the most important papers in the long-term development of a scientific field. They also play an important role
in the stimulation of good research. Such papers are intrinsically publishable in JASA, although explicit guidelines for
their selection are difficult to formulate. Of major importance
are (i) that the logical development be as complete as practicable, (ii) that the principal ideas be plausible and consistent with what is currently known, (iii) that there be no known
counter-examples, and (iv) that the authors give some hints
as to how the ideas might be checked by future experiments
or numerical computations. In addition, the authors should
cite whatever prior literature exists that might indicate that
others have made similar speculations.
B. Multiple submissions
The current online submittal process requires that each
paper be submitted independently. Each received manuscript
will be separately reviewed and judged regarding its merits
for publication independently of the others. There is no formal mechanism for an author to request that two submissions, closely spaced in their times of submission, be regarded as a single submission.
In particular, the submission of two manuscripts, one
labeled “Part I” and the other labeled “Part II” is not allowed.
Submission of a single manuscript with the label “Part I” is
also not allowed. An author may submit a separate manuscript labeled “Part II,” if the text identifies which previously
accepted paper is to be regarded as “Part I.” Doing so may be
a convenient method for alerting potential readers to the fact
that the paper is a sequel to a previous paper by the author.
The author should not submit a paper so labeled, however,
unless the paper to be designated as “Part I” has already been
accepted, either for JASA or another journal.
The Associate Editors are instructed not to process any
manuscript that cannot be read without the help of as yet
unpublished papers that are still under review. Consequently,
authors are requested to hold back the submission of “sequels” to previously submitted papers until the disposition of
those papers is determined. Alternatively, authors should write
the “sequels” so that the reading and comprehension of those
manuscripts does not require prior reading and access of papers whose publication is still uncertain.
X. SUGGESTIONS REGARDING CONTENT
A. Introductory section
Every paper begins with introductory paragraphs. Except for short Letters to the Editor, these paragraphs appear
within a separate principal section, usually with the heading
“Introduction.”
Although some discussion of the background of the
work may be advisable, a statement of the precise subject
of the work must appear within the first two paragraphs. The
reader need not fully understand the subject the first time it is
stated; subsequent sentences and paragraphs should clarify
the statement and should supply further necessary background. The extent of the clarification must be such that a
nonspecialist will be able to obtain a reasonable idea of what
the paper is about. The introduction should also explain to
the nonspecialist just how the present work fits into the context of other current work done by persons other than the
authors themselves. Beyond meeting these obligations, the
writing should be as concise as practicable.
The introduction must give the authors’ best arguments
as to why the work is original and significant. This is customarily done via a knowledgeable discussion of current and
prior literature. The authors should envision typical readers or
typical reviewers, and this should be a set of people that is not
inordinately small, and the authors must write so as to convince
them. In some cases, both originality and significance will be
immediately evident to all such persons, and the arguments
can be brief. In other cases, the authors may have a daunting
task. It must not be assumed that readers and reviewers will
give the authors the benefit of the doubt.
B. Main body of text
The writing in the main body of the paper must follow a
consistent logical order. It should contain only material that
pertains to the main premise of the paper, and that premise
should have been stated in the introduction. While tutorial
discussions may in some places be appropriate, such should
be kept to a minimum and should be only to the extent necessary to keep the envisioned readers from becoming lost.
The writing throughout the text, including the introduction, must be in the present tense. It may be tempting to refer
to subsequent sections and passages in the manuscript in the
future tense, but the authors must assiduously avoid doing
so, using instead phrases such as “is discussed further below.”
Whenever pertinent results, primary or secondary, are
reached in the progress of the paper, the writing should point
out that these are pertinent results in such a manner that it
would get the attention of a reader who is rapidly scanning
the paper.
The requirement of a consistent logical order implies
that the logical steps appear in consecutive order. Readers
must not be referred to subsequent passages or to appendixes
to fill in key elements of the logical development. The fact
that any one such key element is lengthy or awkward is
insufficient reason to relegate it to an appendix. Authors can,
however, flag such passages giving the casual reader the option of skipping over them on first reading. The writing nevertheless must be directed toward the critical reader—a person who accepts no aspect of the paper on faith. (If the paper
has some elements that are primarily speculative, then that
should be explicitly stated, and the development should be
directed toward establishing the plausibility of the speculation for the critical reader.)
To achieve clarity and readability, the authors must explicitly state the purposes of lengthy descriptions or of
lengthy derivations at the beginning of the relevant passages.
There should be no mysteries throughout the manuscript as
to the direction in which the presentation is going.
Authors must take care that no reader becomes needlessly lost because of the use of lesser-known terminology.
All terms not in standard dictionaries must be defined when
they are first used. Acronyms should be avoided, but, when
they are necessary, they must be explicitly defined when first
used. The terminology must be consistent; different words
should not be used to represent the same concept.
Efforts must be taken to avoid insulting the reader with
the use of gratuitous terms or phrases such as obvious, well-known, evident, or trivial. If the adjectives are applicable,
then they are unnecessary. If not, then the authors risk incurring the ill-will of the readers.
If it becomes necessary to bring in externally obtained
results, then the reader must be apprised, preferably by an
explicit citation to accessible literature, of the source of
such results. There must be no vague allusions, such as “It
has been found that...” or “It can be shown that...” If the allusion is to a mathematical derivation that the authors have
themselves carried out, but which they feel is not worth describing in detail, then they should briefly outline how the
derivation can be carried out, with the implication that a
competent reader can fill in the necessary steps without difficulty.
For an archival journal such as JASA, reproducibility of
reported results is of prime importance. Consequently, authors must give a sufficiently detailed account, so that all
results, other than anecdotal, can be checked by a competent
reader with comparable research facilities. If the results are
numerical, then the authors must give estimates of the probable errors and state how they arrived at such estimates. (Anecdotal results are typically results of field experiments or
unique case studies; such are often worth publishing as they
can stimulate further work and can be used in conjunction
with other results to piece together a coherent understanding
of broader classes of phenomena.)
C. Concluding section
The last principal section of the article is customarily
labeled “Conclusions” or “Concluding Remarks.” This
should not repeat the abstract, and it should not restate the
subject of the paper. The wording should be directed toward
a person who has some, if not thorough, familiarity with the
main body of the text and who knows what the paper is all
about. The authors should review the principal results of the
paper and should point out just where these emerged in the
body of the text. There should be a frank discussion of the
limitations, if any, of the results, and there should be a broad
discussion of possible implications of these results.
Often the concluding section gracefully ends with speculations on what research might be done in the future to build
upon the results of the present paper. Here the authors must
write in a collegial tone. There should be no remarks stating
what the authors themselves intend to do next. They must be
careful not to imply that the future work in the subject matter
of the paper is the exclusive domain of the authors, and there
should be no allusions to work in progress or to work whose
publication is uncertain. It is conceivable that readers stimulated to do work along the lines suggested by the paper will
contact the authors directly to avoid a duplication of effort,
but that will be their choice. The spirit expressed in the paper
itself should be that anyone should be free to follow up on the
suggestions made in the concluding section. A successful paper
is one that does incite such interest on the part of the readers
and one which is extensively cited in future papers written by
persons other than the authors themselves.
D. Appendixes
The Journal prefers that articles not include appendixes
unless there are strong reasons for their being included.
Details of mathematical developments or of experimental
procedures that are critical to the understanding of the
substance of a paper must not be relegated to an appendix.
(Authors must bear in mind that readers can easily skim over
difficult passages in their first reading of a paper.) Lengthy
proofs of theorems may be placed in appendixes, provided
the statement of the theorem in the main body of the text
makes the result manifestly plausible. Short appendixes are generally
unnecessary and impede the comprehension of the paper.
Appendixes may be used for lengthy tabulations of data, of
explicit formulas for special cases, and of numerical results.
Editors and reviewers, however, may question whether their
inclusion is necessary.
result. Authors should assume that any reader has access to
some such textbook, and the authors should tacitly treat the
result as well-known and not requiring a reference citation.
Authors must not cite any reference that the authors
have not explicitly seen, unless the paper has a statement to
that effect, accompanied by a statement of how the authors
became aware of the reference. Such citations should be limited to crediting priority, and there must be no implied recommendations that readers should read literature which the
authors themselves have not read.
XI. SUGGESTIONS REGARDING STYLE
A. Quality of writing and word usage
E. Selection of references
References are typically cited extensively in the introduction, and the selection of such references can play an
important role in the potential usefulness of the paper to
future readers and in the opinions that readers and reviewers
form of the paper. No hard and fast rules can be set down as
to how authors can best select references and as to how they
should discuss them, but some suggestions can be found in
an editorial6 published in the May 2000 issue. If a paper falls
within the scope of the Journal, one would ordinarily expect to
find several references to papers previously published in JASA.
Demonstration of the relevance of the work is often accomplished via citations, with accompanying discussion, to
recent articles in JASA and analogous journals. The implied
claims to originality can be strengthened via citations, with
accompanying discussion, to prior work related to the subject
of the paper, sufficient to establish credibility that the authors
are familiar with the literature and are not duplicating previous published work. Unsupported assertions that the authors
are familiar with all applicable literature and that they have
carried out an exhaustive literature survey are generally unconvincing to the critical reader.
Authors must not make large block citations of many
references (e.g., four or more). There must be a stated reason
for the citation of each reference, although the same reason
can sometimes apply simultaneously to a small number of
references. The total number of references should be kept as
small a number as is consistent with the principal purposes
of the paper (45 references is a suggested upper limit for a
regular research article). Although nonspecialist readers may
find a given paper to be informative in regard to the general
state of a given field, the authors must not consciously write
a research paper so that it will fulfill a dual function of being
a review paper or of being a tutorial paper.
Less literate readers often form and propagate erroneous
opinions concerning priority of ideas and discoveries based
on the reading of recent papers, so authors must make a
conscious attempt to cite original sources. Secondary sources
can also be cited, if they are identified as such and especially
if they are more accessible or if they provide more readable
accounts. In such cases, reasons must be given as to why the
secondary sources are being cited. References to individual
textbooks for results that can be found in a large number of
analogous textbooks should not be given, unless the cited
textbook gives a uniquely clear or detailed discussion of the result in question.
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014

XI. SUGGESTIONS REGARDING STYLE

A. Quality of writing and word usage
The Journal publishes articles in the English language
only. There are very few differences of substance between
British English style (as codified in the Oxford English
Dictionary7) and US English style, but authors frequently
must make choices in this respect, such as between alternative spellings of words that end in either -or or -our, in either -ized or -ised, or in either -er or -re. Although now a de facto international journal, JASA, because of its historical origins, requires manuscripts to follow US English style conventions.
Articles published in JASA are expected to adhere to high
standards of scholarly writing. A formal writing style free of
slang is required. Good conversational skills do not necessarily
translate to good formal writing skills. Authors are expected
to make whatever use is necessary of standard authoritative
references in regard to English grammar and writing style in
preparing their manuscripts. Many good references exist—
among those frequently used by professional writers are
Webster’s Third New International Dictionary, Unabridged,8
Merriam-Webster’s Collegiate Dictionary, 11th Edition,9
Strunk and White's Elements of Style,10 and the Chicago
Manual of Style.11 (The Third New International is AIP
Publishing’s standard dictionary.) All authors are urged to
do their best to produce a high quality readable manuscript,
consistent with the best traditions of scholarly and erudite
writing. Occasional typographical errors and lapses of
grammar can be taken care of in the copy-editing phase of
the production process, and the instructions given here are intended to ensure that there is ample white space in the printed-out manuscript so that such copy-editing can be carried out. Receipt
of a paper whose grammatical and style errors are so excessive
that they cannot be easily fixed by copy-editing will generally
result in the authors being notified that the submission is not
acceptable. Receipt of such a notification should not be construed as a rejection of the manuscript—the authors should
take steps, possibly with external help, to revise the manuscript so that it overcomes these deficiencies. (Authors needing
help or advice on scientific writing in the English language are
encouraged to contact colleagues, both within and outside their
own institutions, to critique the writing in their manuscripts.
Unfortunately, the staff of the Journal does not have the time
to do this on a routine basis.)
There are some minor discrepancies in the stylistic rules
that are prescribed in various references—these generally arise
because of the differences in priorities that are set in different
publication categories. Newspapers, for example, put high
emphasis on the efficient use of limited space for conveying
the news and for catching the interest of their readers. For
scholarly journals, on the other hand, the overwhelming
priority is clarity. In the references cited above, this is the
basis for most of the stated rules. In following this tradition,
the Journal, for example, requires a rigorous adherence to the
serial comma rule (Strunk’s rule number 2): In a series of three
or more terms with a single conjunction, use a comma after
each term except the last. Thus a JASA manuscript would refer
to the “theory of Rayleigh, Helmholtz, and Kirchhoff” rather
than to the “theory of Rayleigh, Helmholtz and Kirchhoff.”
The priority of clarity requires that authors only use
words that are likely to be understood by a large majority of
potential readers. Usable words are those whose definitions
may be found either in a standard unabridged English dictionary (such as the Webster’s Third New International mentioned above), in a standard scientific dictionary such as the
Academic Press Dictionary of Science and Technology,12 or
in a dictionary specifically devoted to acoustics such as the
Dictionary of Acoustics13 by C. L. Morfey. In some cases,
words and phrases that are not in any dictionary may be
in vogue among some workers in a given field, especially
among the authors and their colleagues. Authors must give
careful consideration to whether use of such terms in their
manuscript is necessary; and if the authors decide to use
them, precise definitions must be stated within the manuscript. Unilateral coinage of new terms by the authors is
discouraged. In some cases, words with different meanings
and with different spellings are pronounced exactly the same,
and authors must be careful to choose the right spelling.
Common errors are to interchange principal and principle
and to interchange role and roll.
B. Grammatical pitfalls
There are only a relatively small number of categories
of errors that authors frequently make in the preparation of
manuscripts. Authors should be aware of these common pitfalls and double-check that their manuscripts contain no errors in these categories. Some errors will be evident when
the manuscript is read aloud; others, depending on the background of the writers, may not be. Common categories are
(1) dangling participles, (2) lack of agreement in number
(plural versus singular) of verbs with their subjects, (3) omission of necessary articles (such as a, an, and the) that precede
nouns, (4) the use of incorrect case forms (subjective, objective,
possessive) for pronouns (e.g., who versus whom), and (5) use
of the incorrect form (present, past, past participle, and future)
in regard to tense for a verb. Individual authors may have their
own peculiar pitfalls, and an independent casual reading of
the manuscript by another person will generally pinpoint
such pitfalls. Given the recognition that such pitfalls exist, a diligent
author should be able to go through the manuscript and find all
instances where errors of the identified types occur.
C. Active voice and personal pronouns
Many authorities on good writing emphasize that authors should use the active rather than the passive voice.
Doing so in scholarly writing, especially when mathematical
expressions are present, is often infeasible, but the advice has
merit. In mathematical derivations, for example, some authors use the tutorial we to avoid using the passive voice, so
that one writes: “We substitute the expression on the right
side of Eq. (5) into Eq. (2) and obtain ...,” rather than: “The
right side of Eq. (5) is substituted into Eq. (2), with the result
being ... .” A preferable construction is to avoid the use of
the tutorial we and to use transitive verbs such as yields,
generates, produces, and leads to. Thus one would write
the example above as: “Substitution of Eq. (5) into Eq. (2)
yields ... .” Good writers frequently go over an early draft of a
manuscript, examine each sentence and phrase written using
the passive voice, and consider whether they can improve the
sentence by rewriting it.
In general, personal pronouns, including the “tutorial we,”
are preferably avoided in scholarly writing, so that the tone is
impersonal and dispassionate. In a few cases, it is appropriate
that an opinion be given or that a unique personal experience
be related, and personal pronouns are unavoidable. What
should be assiduously avoided are any egotistical statements
using personal pronouns. If a personal opinion needs to be
expressed, a preferred construction is to refer to the author in
the third person, such as: “the present writer believes that ... .”
D. Acronyms
Acronyms have the inconvenient feature that, should the
reader be unfamiliar with them, the reader is clueless as to
their meaning. Articles in scholarly journals should ideally
be intelligible to many generations of future readers, and
formerly common acronyms such as RCA (Radio Corporation of America, recently merged into the General Electric
Corporation) and REA (Rural Electrification Administration) may
have no meaning to such readers. Consequently, authors are
requested to use acronyms sparingly and generally only
when not using them would result in exceedingly awkward
prose. Acronyms, such as SONAR and LASER (currently
written in lower case, sonar and laser, as ordinary words),
that have become standard terms in the English language and
that can be readily found in abridged dictionaries, are exceptions. If the authors use acronyms not in this category, then
the meaning of the individual letters should be spelled out at
the time such an acronym is first introduced. An article containing, say, three or more acronyms in every paragraph will
be regarded as pretentious and deliberately opaque.
E. Computer programs
In some cases the archival reporting of research suggests
that authors give the names of specific computer programs
used in the research. If the computation or data processing
could just as well have been carried out with the aid of any
one of a variety of such programs, then the name should be
omitted. If the program has unique features that are used in the
current research, then the stating of the program name must be
accompanied by a brief explanation of the principal premises
and functions on which the relevant features are based. One
overriding consideration is that the Journal wishes to avoid
implied endorsements of any commercial product.
F. Code words
Large research projects and large experiments that involve several research groups are frequently referred to by
code words. Research articles in the Journal must be intelligible to a much broader group of readers, both present and
future, than those individuals involved in the projects with
which such a code word is associated. If possible, such code
words should either not be used or else be referred to only in a
parenthetical sense. If attempting to do this leads to exceptionally awkward writing, then the authors must take special
care to explicitly explain the nature of the project early in the
paper. They must avoid any impression that the paper is specifically directed toward members of some in-group.
REFERENCES
1. AIP Publication Board (R. T. Beyer, chair), AIP Style Manual, 4th ed. (American Institute of Physics, 2 Huntington Quadrangle, Suite 1NO1, Melville, NY 11747, 1990). Available online at <http://www.aip.org/pubservs/style/4thed/toc.html>.
2. M. Mellody and G. H. Wakefield, "The time-frequency characteristics of violin vibrato: Modal distribution analysis and synthesis," J. Acoust. Soc. Am. 107, 598–611 (2000).
3. See, for example, the paper: B. Møhl, M. Wahlberg, P. Madsen, L. A. Miller, and A. Surlykke, "Sperm whale clicks: Directionality and source level revisited," J. Acoust. Soc. Am. 107, 638–648 (2000).
4. American Institute of Physics, Physics and Astronomy Classification Scheme 2003. A paper copy is available from AIP Publishing LLC, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300; it is also available online at <http://www.aip.org/pacs/index.html>.
5. A. D. Pierce, "Current criteria for selection of articles for publication," J. Acoust. Soc. Am. 106, 1613–1616 (1999).
6. A. D. Pierce, "Literate writing and collegial citing," J. Acoust. Soc. Am. 107, 2303–2311 (2000).
7. The Oxford English Dictionary, 2nd ed., edited by J. Simpson and E. Weiner (Oxford University Press, 1989), 20 volumes. Also published on CD-ROM as Oxford English Dictionary (Second Edition), version 2.0 (Oxford University Press, 1999); an online version is available by subscription at <http://www.oed.com/public/welcome>.
8. Webster's Third New International Dictionary of the English Language, Unabridged, Philip Babcock Gove, Editor-in-Chief (Merriam-Webster Inc., Springfield, MA, 1993; principal copyright 1961). This is the eighth in a series of dictionaries that has its beginning in Noah Webster's American Dictionary of the English Language (1828).
9. Merriam-Webster's Collegiate Dictionary, 11th ed. (Merriam-Webster, Springfield, MA, 2003; principal copyright 1993). (A freshly updated version is issued annually.)
10. W. Strunk, Jr. and E. B. White, The Elements of Style, 4th ed., with foreword by Roger Angell (Allyn and Bacon, 1999).
11. The Chicago Manual of Style: The Essential Guide for Writers, Editors, and Publishers, 14th ed., with preface by John Grossman (University of Chicago Press, 1993).
12. Academic Press Dictionary of Science and Technology, edited by Christopher Morris (Academic Press, 1992).
13. C. L. Morfey, Dictionary of Acoustics (Academic Press, 2000).
Information for Contributors
2337
ASSOCIATE EDITORS IDENTIFIED WITH PACS CLASSIFICATION ITEMS
The Classification listed here is based on the Appendix to Section 43, "Acoustics," of the current edition of the Physics and Astronomy Classification Scheme (PACS) of AIP Publishing LLC. The full and most current listing of PACS can be found at the internet site <http://www.aip.org/pubservs/pacs.html>. In the full PACS listing, all of the acoustics items are preceded by the primary classification number 43. The listing here omits the prefatory 43; a listing in the AIP Publishing document such as 43.10.Ce will appear here as 10.Ce.
The present version of the Classification scheme is intended as a guide to authors of manuscripts submitted to the Journal who are asked at the time of
submission to suggest an Associate Editor who might handle the processing of their manuscript. Authors should note that they can also have their manuscripts
processed from any of the special standpoints of (i) Applied Acoustics, (ii) Computational Acoustics, (iii) Mathematical Acoustics, or (iv) Education in
Acoustics, and that there are special Associate Editors who have the responsibility for processing manuscripts from each of these standpoints.
The initials that appear in brackets following most of the listings correspond to the names of persons on the Editorial Board, i.e., the Associate Editors who customarily edit material that falls within that classification. A listing of full names and institutional affiliations of members of the Editorial Board can be
found on the back cover of each issue of the Journal. A more detailed listing can be found at the internet site <http://asadl.org/jasa/for_authors_jasa>. The
most current version of the present document can also be found at that site.
[05] Acoustical Society of America
05.Bp Constitution and bylaws [EM]
05.Dr History [ADP]
05.Ft Honorary members [EM]
05.Gv Publications: ARLO, Echoes, ASA Web page, electronic archives and references [ADP]
05.Hw Meetings [EM]
05.Ky Members and membership lists, personal notes, fellows [EM]
05.Ma Administrative committee activities [EM]
05.Nb Technical committee activities; Technical Council [EM]
05.Pc Prizes, medals, and other awards [EM]
05.Re Regional chapters [EM]
05.Sf Obituaries
[10] General
10.Ce Conferences, lectures, and announcements (not of the Acoustical Society of America) [EM]
10.Df Other acoustical societies and their publications; online journals and other electronic publications [ADP]
10.Eg Biographical, historical, and personal notes (not of the Acoustical Society of America) [EM]
10.Gi Editorials, Forum [ADP], [NX]
10.Hj Books and book reviews [PLM]
10.Jk Bibliographies [EM], [ADP]
10.Km Patents [DLR], [SAF]
10.Ln Surveys and tutorial papers relating to acoustics research, tutorial papers on applied acoustics [ADP], [NX]
10.Mq Tutorial papers of historical and philosophical nature [ADP], [NX], [WA]
10.Nq News with relevance to acoustics; nonacoustical theories of interest to acoustics [EM], [ADP]
10.Pr Information technology, internet, nonacoustical devices of interest to acoustics [ADP], [NX]
10.Qs Notes relating to acoustics as a profession [ADP], [NX]
10.Sv Education in acoustics, tutorial papers of interest to acoustics educators [LLT], [WA], [BEA], [VWS], [PSW]
10.Vx Errata [ADP]
[15] Standards [SB], [PDS]
[20] General linear acoustics
20.Bi Mathematical theory of wave propagation [MD], [SFW], [ANN], [RM], [RKS], [KML], [CAS]
20.Dk Ray acoustics [JES], [SFW], [ANN], [JAC], [KML], [TFD], [TK]
20.El Reflection, refraction, diffraction of acoustic waves [JES], [OU], [SFW], [RM], [KML], [GH], [TFD], [TK]
20.Fn Scattering of acoustic waves [LLT], [JES], [OU], [SFW], [RM], [KML], [GH], [TK]
20.Gp Reflection, refraction, diffraction, interference, and scattering of elastic and poroelastic waves [OU], [RM], [DF], [RKS], [JAT], [DSB], [GH]
20.Hq Velocity and attenuation of acoustic waves [MD], [OU], [SFW], [TRH], [RAS], [NPC], [JAT], [GH]
20.Jr Velocity and attenuation of elastic and poroelastic waves [ANN], [NPC], [RKS], [GH]
20.Ks Standing waves, resonance, normal modes [LLT], [SFW], [RM], [JDM]
20.Mv Waveguides, wave propagation in tubes and ducts [OU], [LH], [RK], [JBL]
20.Px Transient radiation and scattering [LLT], [JES], [ANN], [MDV], [DDE]
20.Rz Steady-state radiation from sources, impedance, radiation patterns, boundary element methods [SFW], [RM], [FCS]
20.Tb Interaction of vibrating structures with surrounding medium [LLT], [RM], [FCS], [LH]
20.Wd Analogies [JDM]
20.Ye Measurement methods and instrumentation [SFW], [TRH], [JDM], [GH]
[25] Nonlinear acoustics
25.Ba Parameters of nonlinearity of the medium [MD], [OAS], [ROC]
25.Cb Macrosonic propagation, finite amplitude sound; shock waves [OU], [MDV], [PBB], [OAS], [ROC]
25.Dc Nonlinear acoustics of solids [MD], [ANN], [OAS]
25.Ed Effect of nonlinearity on velocity and attenuation [MD], [OAS], [ROC]
25.Fe Effect of nonlinearity on acoustic surface waves [MD], [MFH], [OAS]
25.Gf Standing waves; resonance [OAS], [MFH]
25.Hg Interaction of intense sound waves with noise [OAS], [PBB]
25.Jh Reflection, refraction, interference, scattering, and diffraction of intense sound waves [OU], [MDV], [PBB]
25.Lj Parametric arrays, interaction of sound with sound, virtual sources [TRH]
25.Nm Acoustic streaming [JDM], [OAS], [LH]
25.Qp Radiation pressure [ROC]
25.Rq Solitons, chaos [MFH]
25.Ts Nonlinear acoustical and dynamical systems [MFH], [ROC]
25.Uv Acoustic levitation [MFH]
25.Vt Intense sound sources [ROC], [TRH]
25.Yw Nonlinear acoustics of bubbly liquids [TGL], [SWY]
25.Zx Measurement methods and instrumentation for nonlinear acoustics [ROC]
[28] Aeroacoustics and atmospheric sound
28.Bj Mechanisms affecting sound propagation in air, sound speed in the air [DKW], [VEO], [KML]
28.Dm Infrasound and acoustic-gravity waves [DKW], [PBB]
28.En Interaction of sound with ground surfaces, ground cover and topography, acoustic impedance of outdoor surfaces [OU], [KVH], [VEO], [KML]
28.Fp Outdoor sound propagation through a stationary atmosphere, meteorological factors [DKW], [KML], [TK]
28.Gq Outdoor sound propagation and scattering in a turbulent atmosphere, and in non-uniform flow fields [VEO], [PBB], [KML]
28.Hr Outdoor sound sources [JWP], [PBB], [TK]
28.Js Numerical models for outdoor propagation [VEO], [NAG], [DKW]
28.Kt Aerothermoacoustics and combustion acoustics [AH], [JWP], [LH]
28.Lv Statistical characteristics of sound fields and propagation parameters [DKW], [VEO]
28.Mw Shock and blast waves, sonic boom [VWS], [ROC], [PBB]
28.Py Interaction of fluid motion and sound; Doppler effect and sound in flow ducts [JWP], [AH], [LH]
28.Ra Generation of sound by fluid flow, aerodynamic sound, and turbulence [JWP], [AH], [PBB], [TK], [LH]
28.Tc Sound-in-air measurements, methods and instrumentation for location, navigation, altimetry, and sound ranging [JWP], [KVH], [DKW]
28.Vd Measurement methods and instrumentation to determine or evaluate atmospheric parameters, winds, turbulence, temperatures, and pollutants in air [JWP], [DKW]
28.We Measurement methods and instrumentation for remote sensing and for inverse problems [DKW]
[30] Underwater sound
30.Bp Normal mode propagation of sound in water [BTH], [AMT], [MS], [NPC], [TFD]
30.Cq Ray propagation of sound in water [JES], [BTH], [JAC], [TFD]
30.Dr Hybrid and asymptotic propagation theories, related experiments [BTH], [JAC], [TFD]
30.Es Velocity, attenuation, refraction, and diffraction in water, Doppler effect [BTH], [DRD], [JAC], [TFD]
30.Ft Volume scattering [BTH], [APL]
30.Gv Backscattering, echoes, and reverberation in water due to combinations of boundaries [BTH], [APL]
30.Hw Rough interface scattering [BTH], [JES], [APL]
30.Jx Radiation from objects vibrating under water, acoustic and mechanical impedance [BTH], [DSB], [DF], [EGW], [DDE]
30.Ky Structures and materials for absorbing sound in water; propagation in fluid-filled permeable material [BTH], [NPC], [FCS], [TRH]
30.Lz Underwater applications of nonlinear acoustics; explosions [BTH], [NAG], [OAS], [SWY]
168th Meeting: Acoustical Society of America
30.Ma Acoustics of sediments; ice covers, viscoelastic media; seismic underwater acoustics [BTH], [NAG], [MS], [DSB]
30.Nb Noise in water; generation mechanisms and characteristics of the field [BTH], [KGS], [MS], [JAC], [SWY]
30.Pc Ocean parameter estimation by acoustical methods; remote sensing; imaging, inversion, acoustic tomography [BTH], [KGS], [AMT], [MS], [JAC], [ZHM], [HCS], [SED], [TFD], [APL]
30.Qd Global scale acoustics; ocean basin thermometry, transbasin acoustics [BTH], [JAC]
30.Re Signal coherence or fluctuation due to sound propagation/scattering in the ocean [BTH], [KGS], [HCS], [TFD]
30.Sf Acoustical detection of marine life; passive and active [BTH], [CF], [DKM], [AMT], [MS], [MCH], [APL]
30.Tg Navigational instruments using underwater sound [BTH], [HCS], [JAC]
30.Vh Active sonar systems [BTH], [JES], [TRH], [ZHM], [DDE]
30.Wi Passive sonar systems and algorithms, matched field processing in underwater acoustics [BTH], [KGS], [HCS], [AMT], [MS], [SED], [ZHM]
30.Xm Underwater measurement and calibration instrumentation and procedures [BTH], [JAC], [TRH], [DDE]
30.Yj Transducers and transducer arrays for underwater sound; transducer calibration [BTH], [TRH], [DDE]
30.Zk Experimental modeling [BTH], [JES], [MS], [TFD]
[35] Ultrasonics, quantum acoustics, and physical effects of sound
35.Ae Ultrasonic velocity, dispersion, scattering, diffraction, and attenuation in gases [VMK], [MRH], [AGP], [GH], [TK]
35.Bf Ultrasonic velocity, dispersion, scattering, diffraction, and attenuation in liquids, liquid crystals, suspensions, and emulsions [VMK], [MRH], [AGP], [DSB], [NAG], [JDM], [GH]
35.Cg Ultrasonic velocity, dispersion, scattering, diffraction, and attenuation in solids; elastic constants [VMK], [MRH], [AGP], [MD], [MFH], [JDM], [JAT], [RKS], [GH], [TK]
35.Dh Pretersonics (sound of frequency above 10 GHz); Brillouin scattering [VMK], [MRH], [AGP], [MFH], [RLW]
35.Ei Acoustic cavitation, vibration of gas bubbles in liquids [VMK], [MRH], [AGP], [TGL], [NAG], [DLM]
35.Fj Ultrasonic relaxation processes in gases, liquids, and solids [VMK], [MRH], [AGP], [NAG]
35.Gk Phonons in crystal lattices, quantum acoustics [VMK], [MRH], [AGP], [DF], [LPF], [JDM]
35.Hl Sonoluminescence [VMK], [MRH], [AGP], [NAG], [TGL]
35.Kp Plasma acoustics [VMK], [MRH], [AGP], [MFH], [JDM]
35.Lq Low-temperature acoustics, sound in liquid helium [VMK], [MRH], [AGP], [JDM]
35.Mr Acoustics of viscoelastic materials [VMK], [MRH], [AGP], [LLT], [MD], [OU], [FCS], [KVH], [GH]
35.Ns Acoustical properties of thin films [VMK], [MRH], [AGP], [ADP], [TK]
35.Pt Surface waves in solids and liquids [VMK], [MRH], [AGP], [MD], [ANN], [GH], [TK]
35.Rw Magnetoacoustic effect; oscillations and resonance [VMK], [MRH], [AGP], [DAB], [DF], [LPF]
35.Sx Acoustooptical effects, optoacoustics, acoustical visualization, acoustical microscopy, and acoustical holography [VMK], [MRH], [AGP], [JDM], [TK]
35.Ty Other physical effects of sound [VMK], [MRH], [AGP], [MFH], [NAG]
35.Ud Thermoacoustics, high temperature acoustics, photoacoustic effect [VMK], [MRH], [AGP], [JDM], [TB]
35.Vz Chemical effects of ultrasound [VMK], [MRH], [AGP], [TGL]
35.Wa Biological effects of ultrasound, ultrasonic tomography [VMK], [MRH], [AGP], [DLM], [MCH], [SWY]
35.Xd Nuclear acoustical resonance, acoustical magnetic resonance [VMK], [MRH], [AGP], [JDM]
35.Yb Ultrasonic instrumentation and measurement techniques [VMK], [MRH], [AGP], [ROC], [GH], [KAW], [TK]
35.Zc Use of ultrasonics in nondestructive testing, industrial processes, and industrial products [VMK], [MRH], [AGP], [MD], [JAT], [ANN], [BEA], [GH], [TK]
[38] Transduction; acoustical devices for the generation and reproduction of sound
38.Ar Transducing principles, materials, and structures: general [MS], [DAB], [TRH], [DDE]
38.Bs Electrostatic transducers [MS], [KG], [DAB], [TRH], [MRB], [DDE]
38.Ct Magnetostrictive transducers [DAB], [TRH], [DDE]
38.Dv Electromagnetic and electrodynamic transducers [MS], [DAB], [TRH], [DDE]
38.Ew Feedback transducers [MS]
38.Fx Piezoelectric and ferroelectric transducers [DAB], [KG], [TRH], [MRB], [DDE]
38.Gy Semiconductor transducers [MS], [MRB]
38.Hz Transducer arrays, acoustic interaction effects in arrays [DAB], [TRH], [MS], [BEA], [MRB], [DDE]
38.Ja Acoustooptic and photoacoustic transducers [DAB], [MS]
38.Kb Loudspeakers and horns, practical sound sources [MS], [MRB], [DDE]
38.Lc Microphones and their calibration [MS], [MRB]
38.Md Amplifiers, attenuators, and audio controls [MS]
38.Ne Sound recording and reproducing systems, general concepts [MAH], [MRB]
38.Pf Mechanical, optical, and photographic recording and reproducing systems [MS]
38.Qg Hydroacoustic and hydraulic transducers [DAB]
38.Rh Magnetic and electrostatic recording and reproducing systems [MS]
38.Si Surface acoustic wave transducers [MS], [TK]
38.Tj Telephones, earphones, sound power telephones, and intercommunication systems [MS]
38.Vk Public address systems, sound-reinforcement systems [ADP]
38.Wl Stereophonic reproduction [ADP], [MRB]
38.Yn Broadcasting (radio and television) [ADP]
38.Zp Impulse transducers [MS]
[40] Structural acoustics and vibration
40.At Experimental and theoretical studies of vibrating systems [AJH], [NJK], [EAM], [KML], [EGW], [DDE], [DF], [DAB], [FCS], [LC]
40.Cw Vibrations of strings, rods, and beams [AJH], [NJK], [EAM], [DDE], [EGW], [DAB], [LPF], [JAT], [LC], [BEA]
40.Dx Vibrations of membranes and plates [AJH], [NJK], [EAM], [LLT], [MD], [EGW], [DAB], [DF], [LPF], [LC], [JBL], [DDE]
40.Ey Vibrations of shells [AJH], [NJK], [EAM], [DAB], [DF], [LPF], [EGW], [LC], [DDE]
40.Fz Acoustic scattering by elastic structures [AJH], [NJK], [EAM], [LLT], [KML], [ANN], [DSB], [TK], [EGW], [DDE]
40.Ga Nonlinear vibration [AJH], [NJK], [EAM], [JAT]
40.Hb Random vibration [AJH], [NJK], [EAM]
40.Jc Shock and shock reduction and absorption [AJH], [NJK], [EAM], [OU]
40.Kd Impact and impact reduction, mechanical transients [AJH], [NJK], [EAM], [FCS]
40.Le Techniques for nondestructive evaluation and monitoring, acoustic emission [AJH], [NJK], [EAM], [JAT], [BEA], [TK]
40.Ng Effects of vibration and shock on biological systems, including man [AJH], [NJK], [EAM], [MCH]
40.Ph Seismology and geophysical prospecting; seismographs [AJH], [NJK], [EAM], [MFH], [RKS], [ANN]
40.Qi Effect of sound on structures, fatigue; spatial statistics of structural vibration [AJH], [NJK], [EAM], [JAT], [DDE]
40.Rj Radiation from vibrating structures into fluid media [AJH], [NJK], [EAM], [LLT], [KML], [FCS], [EGW], [LC], [LH], [DDE]
40.Sk Inverse problems in structural acoustics and vibration [AJH], [NJK], [EAM], [KML], [EGW], [LC], [DDE]
40.Tm Vibration isolators, attenuators, and dampers [AJH], [NJK], [EAM], [LC]
40.Vn Active vibration control [AJH], [NJK], [EAM], [BSC], [LC]
40.Yq Instrumentation and techniques for tests and measurement relating to shock and vibration, including vibration pickups, indicators, and generators, mechanical impedance [AJH], [NJK], [EAM], [LC]
[50] Noise: its effects and control
50.Ba Noisiness: rating methods and criteria [GB], [SF], [BSF]
50.Cb Noise spectra, determination of sound power [GB], [KVH]
50.Ed Noise generation [KVH], [RK]
50.Fe Noise masking systems [BSF]
50.Gf Noise control at source: redesign, application of absorptive materials and reactive elements, mufflers, noise silencers, noise barriers, and attenuators, etc. [OU], [SFW], [RK], [FCS], [AH], [LC], [JBL], [LH]
50.Hg Noise control at the ear [FCS], [BSF]
50.Jh Noise in buildings and general machinery noise [RK], [KVH], [KML]
50.Ki Active noise control [BSC], [LC]
50.Lj Transportation noise sources: air, road, rail, and marine vehicles [GB], [SFW], [SF], [JWP], [KVH], [KML]
50.Nm Aerodynamic and jet noise [SF], [JWP], [AH], [LH]
50.Pn Impulse noise and noise due to impact [GB], [KVH], [SF]
50.Qp Effects of noise on man and society [GB], [BSF], [SF]
50.Rq Environmental noise, measurement, analysis, statistical characteristics [GB], [BSF], [SF]
50.Sr Community noise, noise zoning, by-laws, and legislation [GB], [BSF], [SF]
50.Vt Topographical and meteorological factors in noise propagation [PBB], [VEO]
50.Yw Instrumentation and techniques for noise measurement and analysis [GB], [KVH], [RK]
[55] Architectural acoustics
55.Br Room acoustics: theory and experiment; reverberation, normal modes, diffusion, transient and steady-state response [NX], [MV], [JES], [FCS]
55.Cs Stationary response of rooms to noise; spatial statistics of room response; random testing [NX], [MV], [JES]
55.Dt Sound absorption in enclosures: theory and measurement; use of absorption in offices, commercial and domestic spaces [NX], [MV], [JES], [FCS]
55.Ev Sound absorption properties of materials: theory and measurement of sound absorption coefficients; acoustic impedance and admittance [NX], [MV], [OU], [FCS]
55.Fw Auditorium and enclosure design [NX], [MV], [JES]
55.Gx Studies of existing auditoria and enclosures [NX], [MV], [JES]
55.Hy Subjective effects in room acoustics, speech in rooms [NX], [MV], [JES], [MAH]
55.Jz Sound-reinforcement systems for rooms and enclosures [NX], [MV], [MAH]
55.Ka Computer simulation of acoustics in enclosures, modeling [NX], [LLT], [MV], [JES], [SFW], [NAG]
55.Lb Electrical simulation of reverberation [NX], [MV], [MAH]
55.Mc Room acoustics measuring instruments, computer measurement of room properties [NX], [MV], [JES]
55.Nd Reverberation room design: theory, applications to measurements of sound absorption, transmission loss, sound power [NX], [MV]
55.Pe Anechoic chamber design, wedges [NX], [ADP]
55.Rg Sound transmission through walls and through ducts: theory and measurement [NX], [LLT], [FCS], [LC], [BEA]
55.Ti Sound-isolating structures, values of transmission coefficients [NX], [LLT], [LC]
55.Vj Vibration-isolating supports in building acoustics [NX], [ADP]
55.Wk Damping of panels [NX], [LLT]
[58] Acoustical measurements and instrumentation
58.Bh Acoustic impedance measurement [DAB], [FCS]
58.Dj Sound velocity [DKW], [TB], [GH], [TK]
58.Fm Sound level meters, level recorders, sound pressure, particle velocity, and sound intensity measurements, meters, and controllers [MS], [DKW], [TB], [KAW]
58.Gn Acoustic impulse analyzers and measurements [ADP]
58.Hp Tuning forks, frequency standards; frequency measuring and recording instruments; time standards and chronographs [MS]
58.Jq Wave and tone synthesizers [MAH]
58.Kr Spectrum and frequency analyzers and filters; acoustical and electrical oscillographs; photoacoustic spectrometers; acoustical delay lines and resonators [ADP]
58.Ls Acoustical lenses and microscopes [ADP]
58.Mt Phase meters [ADP]
58.Pw Rayleigh disks [ADP]
58.Ry Distortion: frequency, nonlinear, phase, and transient; measurement of distortion [MS]
58.Ta Computers and computer programs in acoustics [FCS], [DSB], [VWS]
58.Vb Calibration of acoustical devices and systems [DAB]
58.Wc Electrical and mechanical oscillators [ADP]
[60] Acoustic signal processing
60.Ac Theory of acoustic signal processing [KGS], [MAH]
60.Bf Acoustic signal detection and classification, applications to control systems [JES], [MRB], [PJL], [ZHM], [MAH], [JAC]
60.Cg Statistical properties of signals and noise [KGS], [MAH], [TFD]
60.Dh Signal processing for communications: telephony and telemetry, sound pickup and reproduction, multimedia [MAH], [HCS], [MRB]
60.Ek Acoustic signal coding, morphology, and transformation [MAH]
60.Fg Acoustic array systems and processing, beam-forming [JES], [ZHM], [HCS], [AMT], [MRB], [BEA], [TFD]
60.Gk Space-time signal processing other than matched field processing [JES], [ZHM], [JAC], [MRB]
60.Hj Time-frequency signal processing, wavelets [KGS], [ZHM], [CAS], [PJL]
60.Jn Source localization and parameter estimation [JES], [KGS], [MAH], [ZHM], [MRB], [SED]
60.Kx Matched field processing [AIT], [AMT], [SED]
60.Lq Acoustic imaging, displays, pattern recognition, feature extraction [JES], [KGS], [BEA], [MRB]
60.Mn Adaptive processing [DKW], [MRB]
60.Np Acoustic signal processing techniques for neural nets and learning systems [MAH], [AMT]
60.Pt Signal processing techniques for acoustic
60.Qv
60.Rw
60.Sx
60.Tj
60.Uv
60.Vx
60.Wy
[64]
64.Bt
64.Dw
64.Fy
64.Gz
64.Ha
64.Jb
64.Kc
64.Ld
64.Me
64.Nf
64.Pg
64.Qh
64.Ri
64.Sj
64.Tk
64.Vm
64.Wn
64.Yp
inverse problems [ZHM], [MRB], [SED]
Signal processing instrumentation, integrated
systems, smart transducers, devices and
architectures, displays and interfaces for
acoustic systems [MAH], [MRB]
Remote sensing methods, acoustic tomography
[DKW], [JAC], [ZHM], [AMT]
Acoustic holography [JDM], [OAS], [EGW],
[MRB]
Wave front reconstruction, acoustic timereversal, and phase conjugation [OAS],
[HCS], [EGW], [BEA], [MRB]
Model-based signal processing [ZHM],
[MRB], [PJL]
Acoustic sensing and acquisition [MS],
[DKW]
Non-stationary signal analysis, non-linear
systems, and higher order statistics [PJL]
Physiological acoustics
Models and theories of the auditory system
[BLM], [ICB], [FCS], [CAS], [CA], [ELP]
Anatomy of the cochlea and auditory nerve
[BLM], [AMS], [ANP], [SFW], [RRF],
[CAS], [CA]
Anatomy of the auditory central nervous
system [BLM], [AMS], [ANP], [RRF],
[CAS], [CA]
Biochemistry and pharmacology of the
auditory system [BLM], [CAS], [CA]
Acoustical properties of the outer ear; middleear mechanics and reflex [BLM], [FCS],
[CAS], [CA], [ELP]
Otoacoustic emissions [BLM], [MAH],
[CAS], [CA], [ELP]
Cochlear mechanics [BLM], [KG], [CAS],
[CA], [ELP]
Physiology of hair cells [BLM], [KG], [CAS],
[CA], [ELP]
Effects of electrical stimulation, cochlear
implant [BLM], [ICB], [CAS], [CA], [ELP]
Cochlear electrophysiology [BLM], [ICB],
[KG], [CAS], [CA], [ELP]
Electrophysiology of the auditory nerve
[BLM], [AMS], [ICB], [CAS], [CA], [ELP]
Electrophysiology of the auditory central
nervous system [BLM], [AMS], [ICB],
[CAS], [ELP]
Evoked responses to sounds [BLM], [ICB],
[CAS], [CA], [ELP]
Neural responses to speech [BLM], [ICB],
[CAS], [ELP]
Physiology of sound generation and detection
by animals [BLM], [AMS], [MCH], [CAS]
Physiology of the somatosensory system
[BLM], [MCH]
Effects of noise and trauma on the auditory
system [BLM], [ICB], [CAS], [ELP]
Instruments and methods [BLM], [KG],
[MAH], [CAS]
[66] Psychological acoustics
66.Ba Models and theories of auditory processes [EB], [CAS], [ELP], [JFC]
66.Cb Loudness, absolute threshold [MAS], [ELP]
66.Dc Masking [VMR], [EAS], [FJG], [LRB], [EB], [ELP], [JFC]
66.Ed Auditory fatigue, temporary threshold shift [EAS], [MAS], [ELP], [EB]
66.Fe Discrimination: intensity and frequency [VMR], [FJG], [EB]
66.Gf Detection and discrimination of sound by animals [ADP]
66.Hg Pitch [ADP]
66.Jh Timbre, timbre in musical acoustics [DD]
66.Ki Subjective tones [JFC]
66.Lj Perceptual effects of sound [VMR], [VB], [DB], [EB], [JFC]
66.Mk Temporal and sequential aspects of hearing; auditory grouping in relation to music [EAS], [FJG], [DB], [EB], [DD]
66.Nm Phase effects [EB], [JFC]
66.Pn Binaural hearing [VB], [LRB], [EB], [ELP], [NAG], [JFC]
66.Qp Localization of sound sources [VB], [FJG], [LRB], [EB], [ELP], [JFC]
66.Rq Dichotic listening [FJG], [LRB], [EB], [DD], [ELP], [JFC]
66.Sr Deafness, audiometry, aging effects [DS], [FJG], [ICB], [MAS], [ELP], [JFC]
66.Ts Auditory prostheses, hearing aids [DB], [VB], [FJG], [ICB], [MAS], [JFC], [EB], [ELP]
66.Vt Hearing protection [FCS]
66.Wv Vibration and tactile senses [MCH]
66.Yw Instruments and methods related to hearing and its measurement [ADP]
[70] Speech production
70.Aj Anatomy and physiology of the vocal tract, speech aerodynamics, auditory kinetics [ZZ], [CYE], [CHS], [SSN], [LK]
70.Bk Models and theories of speech production [ZZ], [CYE], [CHS]
70.Dn Disordered speech [ZZ], [CYE], [LK], [CHS], [DAB]
70.Ep Development of speech production [CYE], [DAB], [CHS], [ZZ], [LK]
70.Fq Acoustical correlates of phonetic segments and suprasegmental properties: stress, timing, and intonation [CYE], [SSN], [DAB], [CGC]
70.Gr Larynx anatomy and function; voice production characteristics [CYE], [CHS], [LK], [ZZ]
70.Jt Instrumentation and methodology for speech production research [DAB], [CHS], [LK], [ZZ]
70.Kv Cross-linguistics speech production and acoustics [DAB], [LK]
70.Mn Relations between speech production and perception [CYE], [DAB], [CHS], [CGC], [ZZ]
[71] Speech perception
71.An Models and theories of speech perception [TCB], [MSS], [ICB], [MAH], [CGC]
71.Bp Perception of voice and talker characteristics [TCB], [MSS], [CGC], [JHM], [MSV], [MAH]
71.Es Vowel and consonant perception; perception of words, sentences, and fluent speech [TCB], [MSS], [DB], [CGC], [MAH]
71.Ft Development of speech perception [TCB], [MSS], [CA], [MAH], [DB]
71.Gv Measures of speech perception (intelligibility and quality) [TCB], [MSS], [VB], [ICB], [CGC], [MAH], [MAS]
71.Hw Cross-language perception of speech [TCB], [MSS], [MAH], [CGC]
71.Ky Speech perception by the hearing impaired [TCB], [MSS], [DB], [VB], [FJG], [ICB], [EB]
71.Lz Speech perception by the aging [TCB], [MSS], [DB], [MAH]
71.Qr Neurophysiology of speech perception [TCB], [MSS], [ICB], [MAH]
71.Rt Sensory mechanisms in speech perception [TCB], [MSS], [ICB], [MAH], [DB]
71.Sy Spoken language processing by humans [TCB], [MSS], [DB], [MSV], [MAH], [CGC]
[72] Speech processing and communication systems
72.Ar Speech analysis and analysis techniques; parametric representation of speech [CYE], [SSN]
72.Bs Neural networks for speech recognition [CYE], [SSN]
72.Ct Acoustical methods for determining vocal tract shapes [CYE], [SSN], [ZZ]
72.Dv Speech-noise interaction [CYE], [SSN]
72.Fx Talker identification and adaptation algorithms [CYE], [SSN]
72.Gy Narrow, medium, and wideband speech coding [CYE], [SSN]
72.Ja Speech synthesis and synthesis techniques [CYE], [SSN], [SAF]
72.Kb Speech communication systems and dialog systems [CYE]
72.Lc Time and frequency alignment procedures for speech [CYE], [SSN]
72.Ne Automatic speech recognition systems [CYE], [SSN]
72.Pf Automatic talker recognition systems [CYE], [SSN]
72.Qr Auditory synthesis and recognition [CYE], [SSN]
72.Uv Forensic acoustics [CYE]
[75] Music and musical instruments
75.Bc Scales, intonation, vibrato, composition [DD], [MAH]
75.Cd Music perception and cognition [DD], [MAH], [DB]
75.De Bowed stringed instruments [TRM], [JW]
75.Ef Woodwinds [TRM], [JW], [AH]
75.Fg Brass instruments and other lip vibrated instruments [TRM], [JW], [ZZ]
75.Gh Plucked stringed instruments [TRM], [JW]
75.Hi Drums [TRM]
75.Kk Bells, gongs, cymbals, mallet percussion and similar instruments [TRM]
75.Lm Free reed instruments [TRM], [JW], [AH], [ZZ]
75.Mn Pianos and other struck stringed instruments [TRM]
75.Np Pipe organs [TRM], [JW]
75.Pq Reed woodwind instruments [AH], [TRM], [JW], [ZZ]
75.Qr Flutes and similar instruments [AH], [TRM], [JW]
75.Rs Singing [DD], [TRM], [JW]
75.St Musical performance, training, and analysis [DD], [DB]
75.Tv Electroacoustic and electronic instruments [DD]
75.Wx Electronic and computer music [MAH]
75.Xz Automatic music recognition, classification and information retrieval [DD], [SSN]
75.Yy Instrumentation measurement methods for musical acoustics [TRM], [JW]
75.Zz Analysis, synthesis, and processing of musical sounds [DD], [MAH]
[80] Bioacoustics
80.Cs Acoustical characteristics of biological media: molecular species, cellular level tissues [MLD], [RRF], [DLM], [TK], [SWY], [GH], [KAW]
80.Ev Acoustical measurement methods in biological systems and media [CCC], [DLM], [MLD], [RRF], [SWY], [GH], [KAW]
80.Gx Mechanisms of action of acoustic energy on biological systems: physical processes, sites of action [MLD], [ANP], [RRF], [GH], [SWY], [KAW]
80.Jz Use of acoustic energy (with or without other forms) in studies of structure and function of biological systems [MLD], [TJR], [ANP], [RRF], [DLM], [GH], [SWY], [KAW]
80.Ka Sound production by animals: mechanisms, characteristics, populations, biosonar [MLD], [WWA], [CT], [AMS], [ANP], [DKM], [JJF], [AMT], [ZZ]
80.Lb Sound reception by animals: anatomy, physiology, auditory capacities, processing [MLD], [AMS], [ANP], [DKM], [JJF]
80.Nd Effects of noise on animals and associated behavior, protective mechanisms [MLD], [AMS], [ANP], [DKM], [JJF], [AMT]
80.Pe Agroacoustics [RRF], [WA], [MCH]
80.Qf Medical diagnosis with acoustics [MDV], [DLM], [GH], [SWY], [KAW]
80.Sh Medical use of ultrasonics for tissue modification (permanent and temporary) [DLM], [ROC], [MDV], [GH], [SWY], [KAW]
80.Vj Acoustical medical instrumentation and measurement techniques [DLM], [MCH], [MDV], [GH], [SWY], [KAW]
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
ETHICAL PRINCIPLES OF THE ACOUSTICAL SOCIETY OF AMERICA
FOR RESEARCH INVOLVING HUMAN AND NON-HUMAN
ANIMALS IN RESEARCH AND PUBLISHING AND PRESENTATIONS
The Acoustical Society of America (ASA) has endorsed the following ethical principles associated with the use of human and non-human
animals in research, and for publishing and presentations. The principles endorsed by the Society follow the form of those adopted by the American
Psychological Association (APA), along with excerpts borrowed from the Council for International Organizations of Medical Sciences (CIOMS). The
ASA acknowledges the difficulty in making ethical judgments, but the ASA wishes to set minimum socially accepted ethical standards for publishing
in its journals and presenting at its meetings. These Ethical Principles are based on the principle that the individual author or presenter bears the
responsibility for the ethical conduct of their research and its publication or presentation.
Authors of manuscripts submitted for publication in a journal of the Acoustical Society of America or presenting a paper at a meeting of the
Society are obligated to follow the ethical principles of the Society. Failure to accept the ethical principles of the ASA shall result in the immediate
rejection of manuscripts and/or proposals for publication or presentation. False indications of having followed the Ethical Principles of the ASA
may be brought to the Ethics and Grievances Committee of the ASA.
APPROVAL BY APPROPRIATE GOVERNING
AUTHORITY
The ASA requires all authors to abide by the principles of ethical
research as a prerequisite for participation in Society-wide activities (e.g.,
publication of papers, presentations at meetings, etc.). Furthermore, the Society endorses the view that all research involving human and non-human
vertebrate animals requires approval by the appropriate governing authority
(e.g., institutional review board [IRB], or institutional animal care and use
committee [IACUC], Health Insurance Portability and Accountability Act
[HIPAA], or by other governing authorities used in many countries) and
adopts the requirement that all research must be conducted in accordance
with an approved research protocol as a precondition for participation in
ASA programs. If no such governing authority exists, then the intent of the
ASA Ethical Principles described in this document must be met. All research
involving the use of human or non-human animals must have met the ASA
Ethical Principles prior to the materials being submitted to the ASA for
publication or presentation.
USE OF HUMAN SUBJECTS IN RESEARCH-Applicable
when human subjects are used in the research
Research involving the use of human subjects should have been approved by an existing appropriate governing authority (e.g., an institutional
review board [IRB]) whose policies are consistent with the Ethical Principles
of the ASA or the research should have met the following criteria:
Informed Consent
When obtaining informed consent from prospective participants in a research protocol that has been approved by the appropriate and responsible governing body, authors must have clearly and simply specified to the participants beforehand:
1. The purpose of the research, the expected duration of the study, and
all procedures that were to be used.
2. The right of participants to decline to participate and to withdraw
from the research in question after participation began.
3. The foreseeable consequences of declining or withdrawing from a
study.
4. Anticipated factors that may have influenced a prospective participant’s willingness to participate in a research project, such as potential risks,
discomfort, or adverse effects.
5. All prospective research benefits.
6. The limits of confidentiality.
7. Incentives for participation.
8. Whom to contact for questions about the research and the rights
of research participants. The office/person must have willingly provided an
atmosphere in which prospective participants were able to ask questions and
receive answers.
Authors conducting intervention research involving the use of experimental treatments must have clarified, for each prospective participant, the
following issues at the outset of the research:
1. The experimental nature of the treatment;
2. The services that were or were not to be available to the control
group(s) if appropriate;
3. The means by which assignment to treatment and control groups
were made;
4. Available treatment alternatives if an individual did not wish to
participate in the research or wished to withdraw once a study had begun;
and
5. Compensation for expenses incurred as a result of participating in a
study including, if appropriate, whether reimbursement from the participant
or a third-party payer was sought.
Informed Consent for Recording Voices and Images in
Research
Authors must have obtained informed consent from research participants prior to recording their voices or images for data collection unless:
1. The research consisted solely of naturalistic observations in public
places, and it was not anticipated that the recording would be used in a
manner that could have caused personal identification or harm, or
2. The research design included deception. If deceptive tactics
were a necessary component of the research design, consent for the use of
recordings was obtained during the debriefing session.
Client/Patient, Student, and Subordinate
Research Participants
When authors conduct research with clients/patients, students, or subordinates as participants, they must have taken steps to protect the prospective
participants from adverse consequences of declining or withdrawing from
participation.
Dispensing With Informed Consent for
Research
Authors may have dispensed with the requirement to obtain informed
consent when:
1. It was reasonable to assume that the research protocol in question did
not create distress or harm to the participant and involves:
a. The study of normal educational practices, curricula, or classroom
management methods that were conducted in educational settings
b. Anonymous questionnaires, naturalistic observations, or archival research for which disclosure of responses would not place participants at risk of criminal or civil liability or damage their financial standing, employability, or reputation, and for which confidentiality is protected
c. The study of factors related to job or organization effectiveness conducted in organizational settings for which there was no risk to participants' employability, and for which confidentiality is protected.
2. Dispensation is permitted by law.
3. The research involved the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these
sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through
identifiers linked to the subjects.
Offering Inducements for Research
Participation
(a) Authors must not have made excessive or inappropriate financial
or other inducements for research participation when such inducements are
likely to coerce participation.
(b) When offering professional services as an inducement for research
participation, authors must have clarified the nature of the services, as well as
the risks, obligations, and limitations.
Deception in Research
(a) Authors must not have conducted a study involving deception unless they had determined that the use of deceptive techniques was justified by
the study’s significant prospective scientific, educational, or applied value and
that effective non-deceptive alternative procedures were not feasible.
(b) Authors must not have deceived prospective participants about
research that is reasonably expected to cause physical pain or severe emotional distress.
(c) Authors must have explained any deception that was an integral
feature of the design and conduct of an experiment to participants as early as
was feasible, preferably at the conclusion of their participation, but no later
than at the conclusion of the data collection period, and participants were
freely permitted to withdraw their data.
Debriefing
(a) Authors must have provided a prompt opportunity for participants
to obtain appropriate information about the nature, results, and conclusions
of the research project for which they were a part, and they must have taken
reasonable steps to correct any misconceptions that participants may have had
of which the experimenters were aware.
(b) If scientific or humane values justified delaying or withholding
relevant information, authors must have taken reasonable measures to
reduce the risk of harm.
(c) If authors were aware that research procedures had harmed a participant, they must have taken reasonable steps to have minimized the harm.
HUMANE CARE AND USE OF NON-HUMAN
VERTEBRATE ANIMALS IN RESEARCH-Applicable when
non-human vertebrate animals are used in the
research
The advancement of science and the development of improved means
to protect the health and well being both of human and non-human vertebrate animals often require the use of intact individuals representing a wide
variety of species in experiments designed to address reasonable scientific
questions. Vertebrate animal experiments should have been undertaken only
after due consideration of the relevance for health, conservation, and the advancement of scientific knowledge. (Modified from the Council for International Organizations of Medical Sciences (CIOMS) document: "International Guiding Principles for Biomedical Research Involving Animals," 1985.)
Research involving the use of vertebrate animals should have been approved
by an existing appropriate governing authority (e.g., an institutional animal
care and use committee [IACUC]) whose policies are consistent with the
Ethical Principles of the ASA or the research should have met the following
criteria:
The proper and humane treatment of vertebrate animals in research
demands that investigators:
1. Acquired, cared for, used, interacted with, observed, and disposed
of animals in compliance with all current federal, state, and local laws and
regulations, and with professional standards.
2. Are knowledgeable of applicable research methods and are experienced in the care of laboratory animals, supervised all procedures involving
animals, and assumed responsibility for the comfort, health, and humane
treatment of experimental animals under all circumstances.
3. Have ensured that the current research is not repetitive of previously
published work.
4. Should have used alternatives (e.g., mathematical models, computer
simulations, etc.) when possible and reasonable.
5. Must have performed surgical procedures that were under appropriate anesthesia and followed techniques that avoided infection and minimized
pain during and after surgery.
6. Have ensured that all subordinates who use animals as a part of their
employment or education received instruction in research methods and in the
care, maintenance, and handling of the species that were used, commensurate
with the nature of their role as a member of the research team.
7. Must have made all reasonable efforts to minimize the number of
vertebrate animals used, the discomfort, the illness, and the pain of all animal
subjects.
8. Must have made all reasonable efforts to minimize any harm to the environment necessary for the safety and well-being of animals that were observed or may have been affected as part of a research study.
9. Must have made all reasonable efforts to have monitored and then mitigated any possible adverse effects to animals that were observed as a function of the experimental protocol.
10. Who have used a procedure subjecting animals to pain, stress, or
privation may have done so only when an alternative procedure was unavailable; the goal was justified by its prospective scientific, educational, or
applied value; and the protocol had been approved by an appropriate review
board.
11. Proceeded rapidly to humanely terminate an animal’s life when it
was necessary and appropriate, always minimizing pain and always in accordance with accepted procedures as determined by an appropriate review
board.
PUBLICATION and PRESENTATION ETHICS-For
publications in ASA journals and presentations at ASA
sponsored meetings
Plagiarism
Authors must not have presented portions of another’s work or data as
their own under any circumstances.
Publication Credit
Authors have taken responsibility and credit, including authorship
credit, only for work they have actually performed or to which they have
substantially contributed. Principal authorship and other publication credits
accurately reflect the relative scientific or professional contributions of the
individuals involved, regardless of their relative status. Mere possession of
an institutional position, such as a department chair, does not justify authorship credit. Minor contributions to the research or to the writing of the paper
should have been acknowledged appropriately, such as in footnotes or in an
introductory statement.
Duplicate Publication of Data
Authors did not publish, as original data, findings that have been previously published. This does not preclude the republication of data when they
are accompanied by proper acknowledgment as defined by the publication
policies of the ASA.
Reporting Research Results
If authors discover significant errors in published data, reasonable steps must be taken in as timely a manner as possible to rectify such errors. Errors
can be rectified by a correction, retraction, erratum, or other appropriate
publication means.
DISCLOSURE OF CONFLICTS OF INTEREST
If the publication or presentation of the work could directly benefit the
author(s), especially financially, then the author(s) must disclose the nature
of the conflict:
1) The complete affiliation(s) of each author and sources of funding for
the published or presented research should be clearly described in the paper
or publication abstract.
2) If the publication or presentation of the research would directly lead
to the financial gain of the author(s), then a statement to this effect must
appear in the acknowledgment section of the paper or presentation abstract or
in a footnote of a paper.
3) If the research that is to be published or presented is in a controversial area and the publication or presentation presents only one view in
regard to the controversy, then the existence of the controversy and this view
must be provided in the acknowledgment section of the paper or presentation abstract or in a footnote of a paper. It is the responsibility of the author
to determine if the paper or presentation is in a controversial area and if the
person is expressing a singular view regarding the controversy.
Sustaining Members of the Acoustical Society of America
The Acoustical Society is grateful for the financial assistance being given by the Sustaining Members listed below and invites applications
for sustaining membership from other individuals or corporations who are interested in the welfare of the Society.
Application for membership may be made to the Executive Director of the Society and is subject to the approval of the Executive Council.
Dues of $1000.00 for small businesses (annual gross below $100 million) and $2000.00 for large businesses (annual gross above $100
million or staff of commensurate size) include a subscription to the Journal as well as a yearly membership certificate suitable for
framing. Small businesses may choose not to receive a subscription to the Journal at reduced dues of $500/year.
Additional information and application forms may be obtained from Elaine Moran, Office Manager, Acoustical Society of America,
1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Telephone: (516) 576-2360; E-mail: asa@aip.org
Acentech Incorporated
www.acentech.com
Cambridge, Massachusetts
Consultants in Acoustics, Audiovisual and Vibration

ACO Pacific Inc.
www.acopacific.com
Belmont, California
Measurement Microphones, the ACOustic Interface™ System

Applied Physical Sciences Corp.
www.aphysci.com
Groton, Connecticut
Advanced R&D and Systems Solutions for Complex National Defense Needs

BBN Technologies
www.bbn.com
Cambridge, Massachusetts
R&D company providing custom advanced research based solutions

Boeing Commercial Airplane Group
www.boeing.com
Seattle, Washington
Producer of Aircraft and Aerospace Products

Bose Corporation
www.bose.com
Framingham, Massachusetts
Loudspeaker Systems for Sound Reinforcement and Reproduction

D'Addario & Company, Inc.
www.daddario.com
Farmingdale, New York
D'Addario strings for musical instruments, Evans drumheads, Rico woodwind reeds and Planet Waves accessories

G.R.A.S. Sound & Vibration ApS
www.gras.dk
Vedbaek, Denmark
Measurement microphones, Intensity probes, Calibrators

Industrial Acoustics Company
www.industrialacoustics.com
Bronx, New York
Research, Engineering and Manufacturing–Products and Services for Noise Control and Acoustically Conditioned Environments

InfoComm International Standards
www.infocomm.org
Fairfax, Virginia
Advancing Audiovisual Communications Globally

International Business Machines Corporation
www.ibm.com/us/
Yorktown Heights, New York
Manufacturer of Business Machines

JBL Professional
www.jblpro.com
Northridge, California
Loudspeakers and Transducers of All Types

Knowles Electronics, Inc.
www.knowlesinc.com
Itasca, Illinois
Manufacturing Engineers: Microphones, Recording, and Special Audio Products

Massa Products Corporation
www.massa.com
Hingham, Massachusetts
Design and Manufacture of Sonar and Ultrasonic Transducers; Computer-Controlled OEM Systems

Meyer Sound Laboratories, Inc.
www.meyersound.com
Berkeley, California
Manufacture Loudspeakers and Acoustical Test Equipment

National Council of Acoustical Consultants
www.ncac.com
Indianapolis, Indiana
An Association of Independent Firms Consulting in Acoustics

Raytheon Company, Integrated Defense Systems
www.raytheon.com
Portsmouth, Rhode Island
Sonar Systems and Oceanographic Instrumentation: R&D in Underwater Sound Propagation and Signal Processing

Science Applications International Corporation
Acoustic and Marine Systems Operation
Arlington, Virginia
Underwater Acoustics; Signal Processing; Physical Oceanography; Hydrographic Surveys; Seismology; Undersea and Seismic Systems

Shure Incorporated
www.shure.com
Niles, Illinois
Design, development, and manufacture of cabled and wireless microphones for broadcasting, professional recording, sound reinforcement, mobile communications, and voice input–output applications; audio circuitry equipment; high fidelity phonograph cartridges and styli; automatic mixing systems; and related audio components and accessories. The firm was founded in 1925.

Sperian Hearing Protection, LLC
www.howardleight.com
San Diego, California
Howard Leight hearing protection, intelligent protection for military environments, in-ear dosimetry, real-world verification of attenuation, and education supported by the NVLAP-accredited Howard Leight Acoustical Testing Laboratory

Thales Underwater Systems
www.tms-sonar.com
Somerset, United Kingdom
Prime contract management, customer support services, sonar design and production, masts and communications systems design and production

Wenger Corporation
www.wengercorp.com
Owatonna, Minnesota
Design and Manufacturing of Architectural Acoustical Products including Absorbers, Diffusers, Modular Sound Isolating Practice Rooms, Acoustical Shells and Clouds for Music Rehearsal and Performance Spaces

3M Occupational Health & Environmental Safety Division
www.3m.com/occsafety
Minneapolis, Minnesota
Products for personal and environmental safety, featuring E·A·R and Peltor brand hearing protection and fit testing, Quest measurement instrumentation, audiological devices, materials for control of noise, vibration, and mechanical energy, and the E·A·RCAL℠ laboratory for research, development, and education, NVLAP-accredited since 1992.
Hearing conservation resource center: www.e-a-r.com/hearingconservation

Wyle Laboratories
www.wyle.com/services/arc.html
Arlington, Virginia
The Wyle Acoustics Group provides a wide range of professional services focused on acoustics, vibration, and their allied technologies, including services to the aviation industry
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ACOUSTICAL SOCIETY OF AMERICA
APPLICATION FOR SUSTAINING MEMBERSHIP
The Bylaws provide that any person, corporation, or organization contributing annual dues as fixed by the Executive
Council shall be eligible for election to Sustaining Membership in the Society.
Dues have been fixed by the Executive Council as follows: $1000 for small businesses (annual gross below $100 million); $2000 for large businesses (annual gross above $100 million or staff of commensurate size). Dues include one year subscription to The Journal of the Acoustical Society of America and programs of Meetings of the Society. Please do not send dues with application. Small businesses may choose not to receive a subscription to the Journal at reduced dues of $500/year. If elected, you will be billed.
Name of Company
Address
Size of Business:
[ ] Small business
[ ] Small business—No Journal
[ ] Large business
Type of Business
Please enclose a copy of your organization’s brochure.
In listing of Sustaining Members in the Journal we should like to indicate our products or services as follows:
(please do not exceed fifty characters)
Name of company representative to whom journal should be sent:
It is understood that a Sustaining Member will not use the membership for promotional purposes.
Signature of company representative making application:
Please send completed applications to: Executive Director, Acoustical Society of America, 1305 Walt Whitman Road,
Suite 300, Melville, NY 11747-4300, (516) 576-2360
2345
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
168th Meeting: Acoustical Society of America
2345
MEMBERSHIP INFORMATION AND APPLICATION INSTRUCTIONS
Applicants may apply for one of four grades of membership, depending on their qualifications: Student Member, Associate Member,
Corresponding Electronic Associate Member or full Member. To apply for Student Membership, fill out Parts I and II of the application; to
apply for Associate, Corresponding Electronic Associate, or full Membership, or to transfer to these grades, fill out Parts I and III.
BENEFITS OF MEMBERSHIP
JASA Online–Vol. 1 (1929) to present
JASA tables of contents e-mail alerts
JASA, printed or CD ROM
JASA Express Letters–online
Acoustics Today–the quarterly magazine
Proceedings of Meetings on Acoustics
Noise Control and Sound: Its Uses and Control–
online archival magazines
Acoustics Research Letters Online (ARLO)–
online archive
Programs for Meetings
Meeting Calls for Papers
Reduced Meeting Registration Fees
5 free ASA standards per year (download only)
Standards Discounts
Society Membership Directory
Electronic Announcements
Physics Today
Eligibility to vote and hold office in ASA
Eligibility to be elected Fellow
Participation in ASA Committees
[Benefits table: entries for each membership grade (Full Member, Associate, Corresponding Electronic Associate, Student) indicate whether a benefit is included (*) or provided online only.]
QUALIFICATIONS FOR EACH GRADE OF MEMBERSHIP AND ANNUAL DUES
Student: Any student interested in acoustics who is enrolled in an accredited college or university for half time or more (at least eight
semester hours). Dues: $45 per year.
Associate: Any individual interested in acoustics. Dues: $95 per year. After five years, the dues of an Associate increase to those of a full
Member.
Corresponding Electronic Associate: Any individual residing in a developing country who wishes to have access to ASA’s online
publications only, including The Journal of the Acoustical Society of America and Meeting Programs [see http://acousticalsociety.org/
membership/membership_and_benefits]. Dues: $45 per year.
Member: Any person active in acoustics, who has an academic degree in acoustics or in a closely related field or who has had the
equivalent of an academic degree in scientific or professional experience in acoustics, shall be eligible for election to Membership in the
Society. A nonmember applying for full Membership will automatically be made an interim Associate Member, and must submit $95 with
the application for the first year’s dues. Election to full Membership may require six months or more for processing; dues as a full Member
will be billed for subsequent years.
JOURNAL OPTIONS AND COSTS FOR FULL MEMBERS AND ASSOCIATE MEMBERS ONLY
• ONLINE JOURNAL. All members will receive access to The Journal of the Acoustical Society of America (JASA) at no charge in
addition to dues.
• PRINT JOURNAL. Twelve monthly issues of The Journal of the Acoustical Society of America. Cost: $35 in addition to dues.
• CD-ROM. The CD ROM mailed bimonthly. This option includes all of the material published in the Journal on CD ROM. Cost: $35 in
addition to dues.
• COMBINATION OF THE CD-ROM AND PRINTED JOURNAL. The CD-ROM mailed bimonthly and the printed journal mailed
monthly. Cost: $70 in addition to dues.
• EFFECTIVE DATE OF MEMBERSHIP. If your application for membership and dues payment are received by 15 September, your
membership and Journal subscription will begin during the current year, and you will receive all back issues for the year if you select the
print journal option. If your application is received after 15 September, however, your dues payment will be applied to the following year and
your Journal subscription will begin the following year.
OVERSEAS AIR DELIVERY OF JOURNALS
Members outside North, South, and Central America can choose to have print journals sent by air freight at a cost of $165 in addition to dues.
JASA on CD-ROM is sent by air mail at no charge in addition to dues.
ACOUSTICAL SOCIETY OF AMERICA
1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300, asa@aip.org
APPLICATION FOR MEMBERSHIP
Applicants may apply for one of four grades of membership, depending on their qualifications: Student Member, Associate Member,
Corresponding Electronic Associate Member or full Member. To apply for Student Membership, fill out Parts I and II of this form; to apply
for Associate, Corresponding Electronic Associate, or full Membership, or to transfer to these grades, fill out Parts I and III.
PART I. TO BE COMPLETED BY ALL APPLICANTS (Please print or type all entries)
CHECK ONE BOX
IN EACH COLUMN
ON THE RIGHT
NON-MEMBER APPLYING FOR:
MEMBER REQUESTING TRANSFER TO:
STUDENT MEMBERSHIP
ASSOCIATE MEMBERSHIP
CORRESPONDING ELECTRONIC
ASSOCIATE MEMBERSHIP
FULL MEMBERSHIP
Note that your choice of
journal option may increase or decrease the
amount you must remit.
SELECT JOURNAL OPTION:
Student members will automatically receive access to The Journal of the Acoustical Society of America online at no charge in addition to
dues. Remit $45. (Note: Student members may also receive the Journal on CD ROM at an additional charge of $35.)
Corresponding Electronic Associate Members will automatically receive access to The Journal of the Acoustical Society of America and
Meeting Programs online at no charge in addition to dues. Remit $45.
Applicants for Associate or full Membership must select one Journal option from those listed below. Note that your selection of journal
option determines the amount you must remit.
[ ] Online access only—$95
[ ] Online access plus print Journal—$130
[ ] Online access plus CD ROM—$130
[ ] Online access plus print Journal and CD ROM combination—$165
Applications received after 15 September: Membership and Journal subscriptions begin the following year.
OPTIONAL AIR DELIVERY: Applicants from outside North, South, and Central America may choose air freight delivery of print journals
for an additional charge of $165. If you wish to receive journals by air, remit the additional amount owed with your dues. JASA on CD-ROM
is sent by air mail at no charge in addition to dues.
MOBILE PHONE: AREA CODE/NUMBER
CHECK PREFERRED ADDRESS FOR MAIL:
HOME
ORGANIZATION
Part I Continued
PART I CONTINUED: ACOUSTICAL AREAS OF INTEREST TO APPLICANT. Indicate your three main areas of interest below, using
1 for your main interest, 2 for your second, and 3 for your third interest. (DO NOT USE CHECK MARKS.)
ACOUSTICAL OCEANOGRAPHY M
ANIMAL BIOACOUSTICS L
ARCHITECTURAL ACOUSTICS A
BIOMEDICAL ACOUSTICS K
ENGINEERING ACOUSTICS B
MUSICAL ACOUSTICS C
NOISE & NOISE CONTROL D
PHYSICAL ACOUSTICS E
PSYCHOLOGICAL &
PHYSIOLOGICAL ACOUSTICS F
SIGNAL PROCESSING IN ACOUSTICS N
SPEECH COMMUNICATION H
STRUCTURAL ACOUSTICS
& VIBRATION G
UNDERWATER ACOUSTICS J
PART II: APPLICATION FOR STUDENT MEMBERSHIP
PART III: APPLICATION FOR ASSOCIATE MEMBERSHIP, CORRESPONDING ELECTRONIC ASSOCIATE
MEMBERSHIP OR FULL MEMBERSHIP (and interim Associate Membership)
SUMMARIZE YOUR MAJOR PROFESSIONAL EXPERIENCE on the lines below: list employers, duties and position titles, and dates,
beginning with your present position. Attach additional sheets if more space is required.
SPONSORS AND REFERENCES: An application for full Membership requires the names, addresses, and signatures of two references who
must be full Members or Fellows of the Acoustical Society. Names and signatures are NOT required for Associate Membership, Corresponding Electronic Associate Membership or Student Membership applications.
MAIL THIS COMPLETED APPLICATION, WITH APPROPRIATE PAYMENT TO: ACOUSTICAL SOCIETY OF AMERICA,
1305 WALT WHITMAN ROAD, SUITE 300, MELVILLE, NY 11747-4300.
METHOD OF PAYMENT
[ ] Check or money order enclosed for $__________ (U.S. funds/drawn on U.S. bank)
[ ] American Express  [ ] VISA  [ ] MasterCard
Account Number: ____________________
Signature: ____________________ (Credit card orders must be signed)
Expiration Date: Mo. ____ Yr. ____
Security Code: ____
Due to security risks and Payment Card Industry (PCI) data security standards e-mail is NOT an acceptable way to transmit credit card
information. Please return this form by Fax (631-923-2875) or by postal mail.
Regional Chapters and Student Chapters
Anyone interested in becoming a member of a regional chapter or in learning if a meeting of the chapter will be held while he/she is
in the local area of the chapter, either permanently or on travel, is welcome to contact the appropriate chapter representative. Contact
information is listed below for each chapter representative.
Anyone interested in organizing a regional chapter in an area not covered by any of the chapters below is invited to contact the
Cochairs of the Committee on Regional Chapters for information and assistance: Catherine Rogers, University of South Florida,
Tampa, FL, crogers@cas.usf.edu and Evelyn M. Hoglund, Ohio State University, Columbus, OH 43204, hoglund1@osu.edu
AUSTIN STUDENT CHAPTER
Benjamin C. Treweek
10000 Burnet Rd.
Austin, TX 78758
Email: btreweek@utexas.edu
GREATER BOSTON
Eric Reuter
Reuter Associates, LLC
10 Vaughan Mall, Ste. 201A
Portsmouth, NH 03801
Tel: 603-430-2081
Email: ereuter@reuterassociates.com
MID-SOUTH
Tiffany Gray
NCPA
Univ. of Mississippi
University, MS 38677
Tel: 662-915-5808
Email: midsouthASAchapter@gmail.com
BRIGHAM YOUNG UNIVERSITY STUDENT CHAPTER
Kent L. Gee
Dept. of Physics & Astronomy
Brigham Young Univ.
N283 ESC
Provo, UT 84602
Tel: 801-422-5144
Email: kentgee@byu.edu
www.acoustics.byu.edu
CENTRAL OHIO
Angelo Campanella
Campanella Associates
3201 Ridgewood Dr.
Hilliard, OH 43026-2453
Tel: 614-876-5108
Email: a.campanella@att.net
GEORGIA INSTITUTE OF TECHNOLOGY STUDENT CHAPTER
Charlise Lemons
Georgia Institute of Technology
Atlanta, GA 30332-0405
Tel: 404-822-4181
Email: clemons@gatech.edu
UNIVERSITY OF NEBRASKA STUDENT CHAPTER
Hyun Hong
Architectural Engineering
Univ. of Nebraska
Peter Kiewit Institute
1110 S. 67th St.
Omaha, NE 68182-0681
Tel: 402-305-7997
Email: unoasa@gmail.com
UNIVERSITY OF HARTFORD
STUDENT CHAPTER
Robert Celmer
Mechanical Engineering Dept., UT-205
Univ. of Hartford
200 Bloomfield Ave.
West Hartford, CT 06117
Tel: 860-768-4792
Email: celmer@hartford.edu
NARRAGANSETT
David A. Brown
Univ. of Massachusetts, Dartmouth
151 Martine St.
Fall River, MA 02723
Tel: 508-910-9852
Email: dbacoustics@cox.net
CHICAGO
Lauren Ronsse
Columbia College Chicago
33 E. Congress Pkwy., Ste. 601
Chicago, IL 60605
Email: lronsse@colum.edu
UNIVERSITY OF CINCINNATI
STUDENT CHAPTER
Kyle T. Rich
Biomedical Engineering
Univ. of Cincinnati
231 Albert Sabin Way
Cincinnati, OH 45267
Email: richkt@mail.uc.edu
UNIVERSITY OF KANSAS
STUDENT CHAPTER
Robert C. Coffeen
Univ. of Kansas
School of Architecture, Design, and Planning
Marvin Hall
1465 Jayhawk Blvd.
Lawrence, KS 66045
Tel: 785-864-4376
Email: coffeen@ku.edu
LOS ANGELES
Neil A. Shaw
www.asala.org
COLUMBIA COLLEGE CHICAGO STUDENT CHAPTER
Sandra Guzman
Dept. of Audio Arts and Acoustics
Columbia College Chicago
33 E. Congress Pkwy., Rm. 6010
Chicago, IL 60605
Email: sguzman@colum.edu
METROPOLITAN NEW YORK
Richard F. Riedel
Riedel Audio Acoustics
443 Potter Blvd.
Brightwaters, NY 11718
Tel: 631-968-2879
Email: riedelaudio@optonline.net
FLORIDA
Richard J. Morris
Communication Science and Disorders
Florida State Univ.
201 W. Bloxham
Tallahassee, FL 32306-1200
Email: richard.morris@cci.fsu.edu
MEXICO CITY
Sergio Beristain
Inst. Mexicano de Acustica AC
PO Box 12-1022
Mexico City 03001, Mexico
Tel: 52-55-682-2830
Email: sberista@hotmail.com
NORTH CAROLINA
Noral Stewart
Stewart Acoustical Consultants
7330 Chapel Hill Rd., Ste.101
Raleigh, NC
Email: noral@sacnc.com
NORTH TEXAS
Peter F. Assmann
School of Behavioral and Brain Sciences
Univ. of Texas-Dallas
Box 830688 GR 4.1
Richardson, TX 75083
Tel: 972-883-2435
Email: assmann@utdallas.edu
NORTHEASTERN UNIVERSITY
STUDENT CHAPTER
Victoria Suha
Email: suha.v@husky.neu.edu
ORANGE COUNTY
David Lubman
14301 Middletown Ln.
Westminster, CA 92683
Tel: 714-373-3050
Email: dlubman@dlacoustics.com
PENNSYLVANIA STATE
UNIVERSITY STUDENT CHAPTER
Anand Swaminathan
Pennsylvania State Univ.
201 Applied Science Bldg.
University Park, PA 16802
Tel: 848-448-5920
Email: azs563@psu.edu
www.psuasa.org
PHILADELPHIA
Kenneth W. Good, Jr.
Armstrong World Industries, Inc.
2500 Columbia Ave.
Lancaster, PA 17603
Tel: 717-396-6325
Email: kwgoodjr@armstrong.com
SAN DIEGO
Paul A. Baxley
SPAWAR Systems Center, Pacific
49575 Gate Road, Room 170
San Diego, CA 92152-6435
Tel: 619-553-5634
Email: paul.baxley@navy.mil
UPPER MIDWEST
David Braslau
David Braslau Associates, Inc.
6603 Queen Ave. South, Ste. N
Richfield, MN 55423
Tel: 612-331-4571
Email: david@braslau.com
SEATTLE STUDENT CHAPTER
Camilo Perez
Applied Physics Lab.
Univ. of Washington
1013 N.E. 40th St.
Seattle, WA 98105-6698
Email: camipiri@uw.edu
WASHINGTON, DC
Matthew V. Golden
Scantek, Inc.
6430 Dobbin Rd., Ste. C
Columbia, MD 21045
Tel: 410-290-7726
Email: m.golden@scantek.com
PURDUE UNIVERSITY
STUDENT CHAPTER
Kai Ming Li
Purdue Univ.
585 Purdue Mall
West Lafayette, IN 47907
Tel: 765-494-1099
Email: mmkmli@purdue.edu
Email: purdueASA@gmail.com
ACOUSTICAL SOCIETY OF AMERICA
BOOKS, CDS, DVD, VIDEOS ON ACOUSTICS
ACOUSTICAL DESIGN OF MUSIC EDUCATION
FACILITIES. Edward R. McCue and Richard H. Talaske,
Eds. Plans, photographs, and descriptions of 50 facilities with
explanatory text and essays on the design process. 236 pp, paper,
1990. Price: $23. Item # 0-88318-8104
ASA EDITION OF SPEECH AND HEARING IN
COMMUNICATION. Harvey Fletcher; Jont B. Allen, Ed. A
summary of Harvey Fletcher’s 33 years of acoustics work at Bell
Labs. A new introduction, index, and complete bibliography of
Fletcher’s work are important additions to this classic volume.
487 pp, hardcover 1995 (original published 1953). Price: $40.
Item # 1-56396-3930
ACOUSTICAL DESIGN OF THEATERS FOR DRAMA
PERFORMANCE: 1985–2010. David T. Bradley, Erica E.
Ryherd, & Michelle C. Vigeant, Eds. Descriptions, color images,
and technical and acoustical data of 130 drama theatres from
around the world, with an acoustics overview, glossary, and
essays reflecting on the theatre design process. 334 pp, hardcover
2010. Price: $45. Item # 978-0-9846084-5-4
AEROACOUSTICS OF FLIGHT VEHICLES: THEORY
AND PRACTICE. Harvey H. Hubbard, Ed. Two volumes
oriented toward flight vehicles emphasizing the underlying
concepts of noise generation, propagation, prediction, and control.
Vol. 1 589 pp/Vol. 2 426 pp, hardcover 1994 (original published
1991). Price per 2-vol. set: $58. Item # 1-56396-404X
ACOUSTICAL DESIGNING IN ARCHITECTURE. Vern O.
Knudsen and Cyril M. Harris. Comprehensive, non-mathematical
treatment of architectural acoustics; general principles of
acoustical designing. 408 pp, paper, 1980 (original published
1950). Price: $23. Item # 0-88318-267X
ACOUSTICAL MEASUREMENTS. Leo L. Beranek. Classic
text with more than half revised or rewritten. 841 pp, hardcover
1989 (original published 1948). Available on Amazon.com
ACOUSTICS. Leo L. Beranek. Source of practical acoustical
concepts and theory, with information on microphones,
loudspeakers and speaker enclosures, and room acoustics. 491
pp, hardcover 1986 (original published 1954). Available on
Amazon.com
ACOUSTICS—AN INTRODUCTION TO ITS PHYSICAL
PRINCIPLES AND APPLICATIONS. Allan D. Pierce.
Textbook introducing the physical principles and theoretical
basis of acoustics, concentrating on concepts and points of view
that have proven useful in applications such as noise control,
underwater sound, architectural acoustics, audio engineering,
nondestructive testing, remote sensing, and medical ultrasonics.
Includes problems and answers. 678 pp, hardcover 1989 (original
published 1981). Price: $33. Item # 0-88318-6128
ACOUSTICS, ELASTICITY AND THERMODYNAMICS
OF POROUS MEDIA: TWENTY-ONE PAPERS BY M. A.
BIOT. Ivan Tolstoy, Ed. Presents Biot’s theory of porous media
with applications to acoustic wave propagation, geophysics,
seismology, soil mechanics, strength of porous materials, and
viscoelasticity. 272 pp, hardcover 1991. Price: $28. Item #
1-56396-0141
ACOUSTICS OF AUDITORIUMS IN PUBLIC BUILDINGS.
Leonid I. Makrinenko, John S. Bradley, Ed. Presents developments
resulting from studies of building physics. 172 pp, hardcover
1994 (original published 1986). Price: $38. Item # 1-56396-3604
ACOUSTICS OF WORSHIP SPACES. David Lubman
and Ewart A. Wetherill, Eds. Drawings, photographs, and
accompanying data of worship houses provide information on the
acoustical design of chapels, churches, mosques, temples, and
synagogues. 91 pp, paper 1985. Price: $23. Item # 0-88318-4664
COLLECTED PAPERS ON ACOUSTICS. Wallace Clement
Sabine. Classic work on acoustics for architects and acousticians.
304 pp, hardcover 1993 (originally published 1921). Price: $28.
Item # 0-932146-600
CONCERT HALLS AND OPERA HOUSES. Leo L. Beranek.
Over 200 photos and architectural drawings of 100 concert halls
and opera houses in 31 countries with rank-ordering of 79 halls
and houses according to acoustical quality. 653 pp. hardcover
2003. Price: $50. Item # 0-387-95524-0
CRYSTAL ACOUSTICS. M.J.P. Musgrave. For physicists
and engineers who study stress-wave propagation in anisotropic
media and crystals. 406 pp. hardcover (originally published
1970). Price: $34. Item # 0-9744067-0-8
DEAF ARCHITECTS & BLIND ACOUSTICIANS? Robert
E. Apfel. A primer for the student, the architect and the planner.
105 pp. paper 1998. Price: $22. Item #0-9663331-0-1
THE EAR AS A COMMUNICATION RECEIVER. Eberhard
Zwicker & Richard Feldtkeller. Translated by Hannes Müsch,
Søren Buus, Mary Florentine. Translation of the classic Das Ohr
Als Nachrichtenempfänger. Aimed at communication engineers
and sensory psychologists. Comprehensive coverage of the
excitation pattern model and loudness calculation schemes. 297
pp, hardcover 1999 (original published 1967). Price: $50. Item
# 1-56396-881-9
ELECTROACOUSTICS: THE ANALYSIS OF TRANSDUCTION, AND ITS HISTORICAL BACKGROUND.
Frederick V. Hunt. Analysis of the conceptual development
of electroacoustics including origins of echo ranging, the
crystal oscillator, evolution of the dynamic loudspeaker, and
electromechanical coupling, 260 pp, paper 1982 (original
published 1954). Available on Amazon.com
ELEMENTS OF ACOUSTICS. Samuel Temkin. Treatment of
acoustics as a branch of fluid mechanics. Main topics include
propagation in uniform fluids at rest, transmission and reflection
phenomena, attenuation and dispersion, and emission. 515 pp.
hardcover 2001 (original published 1981). Price: $30. Item #
1-56396-997-1
EXPERIMENTS IN HEARING. Georg von Békésy. Classic
on hearing containing vital roots of contemporary auditory
knowledge. 760 pp, paper 1989 (original published 1960). Price:
$23. Item # 0-88318-6306
FOUNDATIONS OF ACOUSTICS. Eugen Skudrzyk. An
advanced treatment of the mathematical and physical foundations
of acoustics. Topics include integral transforms and Fourier
analysis, signal processing, probability and statistics, solutions
to the wave equation, radiation and diffraction of sound. 790 pp.
hardcover 2008 (originally published 1971). Price: $60. Item #
3-211-80988-0
HALLS FOR MUSIC PERFORMANCE: TWO DECADES
OF EXPERIENCE, 1962–1982. Richard H. Talaske, Ewart A.
Wetherill, and William J. Cavanaugh, Eds. Drawings, photos,
and technical and physical data on 80 halls; examines standards
of quality and technical capabilities of performing arts facilities.
192 pp, paper 1982. Price: $23. Item # 0-88318-4125
HALLS FOR MUSIC PERFORMANCE: ANOTHER TWO
DECADES OF EXPERIENCE 1982–2002. Ian Hoffman,
Christopher Storch, and Timothy Foulkes, Eds. Drawings,
color photos, technical and physical data on 142 halls. 301 pp,
hardcover 2003. Price: $56. Item # 0-9744067-2-4
HANDBOOK OF ACOUSTICAL MEASUREMENTS
AND NOISE CONTROL, THIRD EDITION. Cyril M.
Harris. Comprehensive coverage of noise control and measuring
instruments containing over 50 chapters written by top experts
in the field. 1024 pp, hardcover 1998 (original published 1991).
Price: $56. Item # 1-56396-774
HEARING: ITS PSYCHOLOGY AND PHYSIOLOGY.
Stanley Smith Stevens & Hallowell Davis. Volume leads readers
from the fundamentals of the psycho-physiology of hearing to a
complete understanding of the anatomy and physiology of the
ear. 512 pp, paper 1983 (originally published 1938). OUT-OF-PRINT
PROPAGATION OF SOUND IN THE OCEAN. Contains
papers on explosive sounds in shallow water and long-range
sound transmission by J. Lamar Worzel, C. L. Pekeris, and
Maurice Ewing. hardcover 2000 (original published 1948). Price:
$37. Item #1-56396-9688
RESEARCH PAPERS IN VIOLIN ACOUSTICS 1975–1993.
Carleen M. Hutchins, Ed., Virginia Benade, Assoc. Ed. Contains
120 research papers with an annotated bibliography of over 400
references. Introductory essay relates the development of the
violin to the scientific advances from the early 15th Century to
the present. Vol. 1, 656 pp; Vol. 2, 656 pp. hardcover 1996. Price:
$120 for the two-volume set. Item # 1-56396-6093
NONLINEAR ACOUSTICS. Mark F. Hamilton and David T.
Blackstock. Research monograph and reference for scientists
and engineers, and textbook for a graduate course in nonlinear
acoustics. 15 chapters written by leading experts in the field. 455
pp, hardcover, 2008 (originally published in 1996). Price: $45.
Item # 0-97440-6759
NONLINEAR ACOUSTICS. Robert T. Beyer. A concise
overview of the depth and breadth of nonlinear acoustics with
an appendix containing references to new developments. 452 pp,
hardcover 1997 (originally published 1974). Price: $45. Item #
1-56396-724-3
NONLINEAR UNDERWATER ACOUSTICS. B. K. Novikov,
O. V. Rudenko, V. I. Timoshenko. Translated by Robert T. Beyer.
Applies the basic theory of nonlinear acoustic propagation
to directional sound sources and receivers, including design
nomographs and construction details of parametric arrays. 272
pp., paper 1987. Price: $34. Item # 0-88318-5229
OCEAN ACOUSTICS. Ivan Tolstoy and Clarence S. Clay.
Presents the theory of sound propagation in the ocean and
compares the theoretical predictions with experimental data.
Updated with reprints of papers by the authors supplementing
and clarifying the material in the original edition. 381 pp, paper
1987 (original published 1966). Available on Amazon.com
ORIGINS IN ACOUSTICS. Frederick V. Hunt. History of
acoustics from antiquity to the time of Isaac Newton. 224 pp,
hardcover 1992. Price: $19. Item # 0-300-022204
PAPERS IN SPEECH COMMUNICATION. Papers charting
four decades of progress in understanding the nature of human
speech production, and in applying this knowledge to problems of
speech processing. Contains papers from a wide range of journals
from such fields as engineering, physics, psychology, and speech
and hearing science. 1991, hardcover.
Speech Production. Raymond D. Kent, Bishnu S. Atal, Joanne
L. Miller, Eds. 880 pp. Item # 0-88318-9585
Speech Processing. Bishnu S. Atal, Raymond D. Kent, Joanne
L. Miller, Eds. 672 pp. Item # 0-88318-9607
Price: $38 ea.
RIDING THE WAVES. Leo L. Beranek. A life in sound, science,
and industry. 312 pp. hardcover 2008. Price: $20. Item # 978-0-262-02629-1
THE SABINES AT RIVERBANK. John W. Kopec. History
of Riverbank Laboratories and the role of the Sabines (Wallace
Clement, Paul Earls, and Hale Johnson) in the science of
architectural acoustics. 210 pp. hardcover 1997. Price: $19. Item
# 0-932146-61-9
SONICS, TECHNIQUES FOR THE USE OF SOUND
AND ULTRASOUND IN ENGINEERING AND SCIENCE.
Theodor F. Hueter and Richard H. Bolt. Work encompassing the
analysis, testing, and processing of materials and products by
the use of mechanical vibratory energy. 456 pp, hardcover 2000
(original published 1954). Price: $30. Item # 1-56396-9556
SOUND IDEAS. Deborah Melone and Eric W. Wood. Early
days of Bolt Beranek and Newman Inc. to the rise of Acentech
Inc. 363 pp. hardcover 2005. Price: $25. Item # 200-692-0681
SOUND, STRUCTURES, AND THEIR INTERACTION.
Miguel C. Junger and David Feit. Theoretical acoustics, structural
vibrations, and interaction of elastic structures with an ambient
acoustic medium. 451 pp, hardcover 1993 (original published
1972). Price: $23. Item # 0-262-100347
THEATRES FOR DRAMA PERFORMANCE: RECENT
EXPERIENCE IN ACOUSTICAL DESIGN. Richard H.
Talaske and Richard E. Boner, Eds. Plans, photos, and descriptions
of theatre designs, supplemented by essays on theatre design and
an extensive bibliography. 167 pp, paper 1987. Price: $23. Item
# 0-88318-5164
THERMOACOUSTICS. Gregory W. Swift. A unifying
thermoacoustic perspective to heat engines and refrigerators.
Includes a CD ROM with animations and DELTAE and its User’s
Guide. 300 pp, paper, 2002. Price: $50. Item # 0-7354-0065-2
VIBRATION AND SOUND. Philip M. Morse. Covers the broad
spectrum of acoustics theory, including wave motion, radiation
problems, propagation of sound waves, and transient phenomena.
468 pp, hardcover 1981 (originally published 1936). Price: $28.
Item # 0-88318-2874
VIBRATION OF PLATES. Arthur W. Leissa. 353 pp, hardcover
1993 (original published 1969). Item # 1-56396-2942
VIBRATION OF SHELLS. Arthur W. Leissa. 428 pp, hardcover
1993 (original published 1973). Item # 1-56396-2934
SET ITEM # 1-56396-KIT. Monographs dedicated to the
organization and summarization of knowledge existing in the
field of continuum vibrations. Price: $28 ea.; $50 for 2-volume
set.
CDs, DVD, VIDEOS, STANDARDS
Auditory Demonstrations (CD). Teaching adjunct for lectures or courses on hearing and auditory effects. Provides signals for teaching
laboratories. Contains 39 sections demonstrating various characteristics of hearing. Includes booklet containing introductions and
narrations of each topic and bibliographies for additional information. Issued in 1989. Price: $23. Item # AD-CD-BK
Measuring Speech Production (DVD). Demonstrations for use in teaching courses on speech acoustics, physiology, and instrumentation.
Includes booklet describing the demonstrations and bibliographies for more information. Issued 1993. Price: $52. Item # MS-DVD
Scientific Papers of Lord Rayleigh (CD ROM). Over 440 papers covering topics on sounds, mathematics, general mechanics,
hydrodynamics, optics, and properties of gases by Lord Rayleigh (John William Strutt), the author of The Theory of Sound. Price: $40.
Item # 0-9744067-4-0
Proceedings of the Sabine Centennial Symposium (CD ROM). Held June 1994. Price: $50. Item # INCE25-CD
Fifty Years of Speech Communication (VHS). Lectures presented by distinguished researchers at the ASA/ICA meeting in June 1998
covering development of the field of Speech Communication. Lecturers: G. Fant, K.N. Stevens, J.L. Flanagan, A.M. Liberman, L.A.
Chistovich—presented by R.J. Porter, Jr., K.S. Harris, P. Ladefoged, and V. Fromkin. Issued in 2000. Price: $30. Item # VID-Halfcent
Speech Perception (VHS). Presented by Patricia K. Kuhl. Segments include: I. General introduction to speech/language processing;
Spoken language processing; II. Classic issues in speech perception; III. Phonetic perception; IV. Model of developmental speech
perception; V. Cross-modal speech perception: Links to production; VI. Biology and neuroscience connections. Issued 1997. Price:
$30. Item # SP-VID
Standards on Acoustics. Visit http://scitation.aip.org/content/asa/standards to purchase for download National (ANSI) and International
(ISO) Standards on topics ranging from measuring environmental sound to standards for calibrating microphones.
Order the following from ASA, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300; asa@aip.org; Fax: 631-923-2875. Telephone orders are not accepted. Prepayment is required by check (drawn on a U.S. bank) or by VISA, MasterCard, or American
Express.
Study of Speech and Hearing at Bell Telephone Laboratories (CD). Nearly 10,000 pages of internal documents from AT&T archives
including historical documents, correspondence files, and laboratory notebooks on topics from equipment requisitions to discussions of
project plans, and experimental results. Price: $20.
Collected Works of Distinguished Acousticians: Isadore Rudnick (CD + DVD). Three-disc set includes reprints of papers by
Isadore Rudnick from scientific journals, a montage of photographs with colleagues and family, and video recordings of the Memorial
Session held at the 135th meeting of the ASA. Price: $50.
Technical Memoranda issued by Acoustics Research Laboratory-Harvard University (CD). The Harvard Research Laboratory
was established in 1946 to support basic research in acoustics. Includes 61 reports issued between 1946 and 1971 on topics such as
radiation, propagation, scattering, bubbles, cavitation, and properties of solids, liquids, and gases. Price: $25.
ORDER FORM FOR ASA BOOKS, CDS, DVD, VIDEOS
1. Payment must accompany order. Payment may be made by check or
international money order in U.S. funds drawn on U.S. bank or by VISA,
MasterCard, or American Express credit card.
2. Send orders to: Acoustical Society of America, Publications, P.O. Box 1020,
Sewickley, PA 15143-9998; Tel.: 412-741-1979; Fax: 412-741-0609.
3. All orders must include shipping costs (see below).
4. A 10% discount applies on orders of 5 or more copies of the same title only.
5. Returns are not accepted.
Item #    Quantity    Title    Price    Total
Subtotal
Shipping costs for all orders are based on weight and distance.
For quote visit http://www.abdi-ecommerce10.com/asa,
email: asapubs@abdintl.com, or call 412-741-1979
10% discount on orders of 5 or more of the same title
Total
Name _______________________________________________________________________________________________________
Address _____________________________________________________________________________________________________
_____________________________________________________________________________________________________________
City ________________________ State ______________________ ZIP/Postal _________________ Country ____________________
Tel.: ________________________ Fax: ________________________ Email: ____________________________________________
Method of Payment
[ ] Check or money order enclosed for $__________ (U.S. funds/drawn on U.S. bank made payable to the Acoustical Society of
America)
[ ] VISA
[ ] MasterCard
[ ] American Express
Cardholders signature_________________________________________________________________
(Credit card orders must be signed)
Card # ____________________________________________________________ Expires Mo. __________________ Yr._______________
THANK YOU FOR YOUR ORDER!
Due to security risks and Payment Card Industry (PCI) data security standards e-mail is NOT an acceptable way to transmit credit card
information. Please use our secure web page to process your credit card payment (http://www.abdi-ecommerce10.com/asa) or securely fax
this form to (516-576-2377).
The Scientific Papers of Lord Rayleigh are now available on CD ROM from the Acoustical Society
of America. The CD contains over 440 papers covering topics on sound, mathematics, general
mechanics, hydrodynamics, optics, and properties of gases. Files are in PDF format and readable
with Adobe Acrobat® reader.
Lord Rayleigh was indisputably the single most significant contributor to the world’s literature in
acoustics. In addition to his epochal two volume treatise, The Theory of Sound, he wrote some 440
articles on acoustics and related subjects during the fifty years of his distinguished research career. He
is generally regarded as one of the best and clearest writers of scientific articles of his generation, and
his papers continue to be read and extensively cited by modern researchers in acoustics.
ISBN 0-9744067-4-0
Price: $40.00 (ASA members); $70.00 (nonmembers)
AUTHOR INDEX
to papers presented at
168th Meeting: Acoustical Society of America
Abadi, Shima H.–2092
Abawi, Ahmad T.–2086
Abbasi, Mustafa Z.–2166, 2219
Abdelaziz, Mohammed–2243
Abel, Markus–2163
Abel, Markus W.–2163
Abell, Alexandra–2127
Abercrombie, Clemeth–2090
Abkowitz, Paul M.–2268
Abraham, Douglas A.–2225
Abuhabshah, Rami–2125
Acquaviva, Andrew A.–2252
Adelman-Larsen, Niels W.–2116
Adibi, Yasaman–2280
Agarwal, Amal–2084
Agnew, Zarinah–2243
Aguirre, Sergio L.–2097, 2282
Ahmad, Syed A.–2125
Ahn, SangKeun–2209
Ahnert, Wolfgang–2089
Aho, Katherine–2256
Ahroon, William A.–2165
Ahuja, K.–2169
Ainslie, Michael A.–2217, 2247,
2297, Cochair Session 3aUW
(2216)
Akamatsu, Tomonari–2152, 2155
Akrofi, Kwaku–2309
Albert, Donald G.–2139
Alberts, W. C. Kirkpatrick–2139,
2169
Albin, Aaron L.–2082
Alexander, Jennifer–2106
Alexander, Joshua–2311
Alexander, Joshua M.–2310
Ali, Hussnain–2083
Alizad, Azra–2159
Alkayed, Nabil J.–2280
Allen, Jont B.–2251
Allgood, Daniel C.–2136
Almekkawy, Mohamed Khaled–2280
Alù, Andrea–2099, 2281
Alvarez, Alberto–2155
Alvord, David–2169
Alwan, Abeer–2259, 2295, Cochair
Session 4aSCa (2259)
Alzqhoul, Esam A.–2083
Amador, Carolina–2124
Amano, Shigeaki–2175
Ammi, Azzdine Y.–2280
Amon, Dan–2112
Amundin, Mats–2248
Anderson, Brian E.–2252, 2265,
Cochair Session 4aSPb (2265)
Anderson, Paul–2308, 2310
Anderson, Paul W.–2242
Andrews, Mark–2226
Andrews, Russel D.–2091
Andriolo, Artur–2073, 2277
Anikin, Igor I.–2318
Antoni, Jérôme–2171
Antoniak, Maria–2175
Archangeli, Diana–2082, 2104
Archangeli, Diana B.–2105
Arena, David A.–2219
Argo, Theodore F.–2165
Aristizabal, Sara–2124
Arnhold, Anja–2173
Aronov, Boris–2131
Arora, Manish–2256
Arrieta, Rodolf–2268
Ashida, Hiroki–2168
Assous, Said–2255, Cochair Session
4aPAa (2254)
Astolfi, Arianna–2294
Atagi, Eriko–2109
Athanasopoulou, Angeliki–2176
Attenborough, Keith–2078, Cochair
Session 1aNS (2076), Cochair
Session 1pNS (2098)
Au, Jenny–2256
Au, Whitlow–2075
Au, Whitlow W.–2246
Au, Whitlow W. L.–2154
Aubert, Allan–2079
Auchere, Jean-Christophe–2253
August, Tanya–2212
Aumann, Aric R.–2286
Aunsri, Nattapol–2085
Avendano, Alex–2279
Awuor, Ivy–2302
Azad, Hassan–2218
Azbaid El Ouahabi, Abdelhalim–
2076
Azusawa, Aki–2168
Baars, Woutijn J.–2101
Babaniyi, Olalekan A.–2159
Bader, Kenneth B.–2095, 2199
Bader, Rolf–2132, 2163, Chair Session 2pMU (2163)
Badiey, Mohsen–2119, 2148, 2317
Baelde, Maxime–2284
Baese-Berk, Melissa M.–2146
Baggeroer, Arthur–2148
Baggeroer, Arthur B.–2187, Cochair Session 3aAO (2187)
Bai, Mingsian R.–2084
Bailakanavar, Mahesh–2195
Bailey, Michael–2192, 2193, 2278, 2301
Bailey, Michael R.–2191, 2193, 2249, 2250, 2251
Balestriero, Randall–2217
Ballard, Megan S.–2120, 2178, 2252, 2317, Chair Session 2pUW (2178), Cochair Session 2aAO (2119)
Balletto, Emilio–2184
Bang, Hye-Young–2262
Banks, Russell–2294
Barbar, Steve–2115, 2151
Barbero, Francesca–2074, 2184
Barbieri, Nilson–2282, 2305
Barbieri, Renato–2282, 2305
Barbone, Paul E.–2141, 2159, Chair Session 3pID (2222)
Barbosa, Adriano–2310
Barbosa, Adriano V.–2105
Barcenas, Teresa–2212
Barclay, David–2317
Barkley, Yvonne M.–2154
Barlow, Jay–2117, 2245
Barthe, Peter G.–2125
Bartram, Nina–2261
Bash, Rachel E.–2307
Basile, David–2192
Bassuet, Alban–2218
Batchelor, Heidi A.–2277
Battaglia, Paul–2218
Baumann-Pickering, Simone–2073, Cochair Session 3aAB (2184)
Baumgartner, Mark F.–2093, 2116
Baxter, Christopher D. P.–2156
Beauchamp, James–2202
Beauchamp, James W.–2150
Becker, Kara–2295
Beckman, Mary E.–2174
Belding, Heather–2291
Bell, Joel–2246
Belmonte, Andrew–2207
Benech, Nicolas–2196
Benke, Harald–2091, 2248
Benoit-Bird, Kelly J.–2186
Bent, Tessa–2109, 2199, 2212, 2273, Chair Session 1pSCb (2106)
Beranek, Leo L.–2130, 2162
Berg, Katelyn–2311
Berger, Elliott H.–2134, 2135, 2165, Cochair Session 2aNSa (2133), Cochair Session 2pNSa (2165)
Bergeson-Dana, Tonya R.–2262
Bergler, Kevin–2279
Beristain, Sergio–2118, 2182
Bernadin, Shonda–2293
Bernal, Ximena–2184
Berry, David–2259
Berry, Matthew G.–2100
Bharadwaj, Hari–2258
Bhatta, Ambika–2140
Bhojani, Naeem–2191
Bigelow, Timothy–2096, 2157, 2279
Bigelow, Timothy A.–2279, 2280
Binder, Alexander–2129
Binder, Carolyn–2074
Birkett, Stephen–2132
Blaeser, Susan B.–Cochair Session 3aUW (2216)
Blanc-Benon, Philippe–2289
Blanchard, Nathan–2215
Blanco, Cynthia P.–2109
Blasingame, Michael–2263
Bleifnick, Jay–2128
Blevins, Matthew G.–2126, 2200
Blomgren, Philip M.–2191
Blotter, Jonathan D.–2199
Blumsack, Judith–2307
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Bocko, Mark–2202
Boebinger, Dana–2243
Bohn, Alexander C.–2093
Boisvert, Jeffrey–2195
Bolshakova, Ekaterina S.–2290
Bolton, J. S.–2141, 2183
Bolton, J. Stuart–2197
Bomberger Brown, Mary–2073
Bonadies, Marguerite–2144
Bonelli, Simona–2074, 2184
Boning, Willem–2218
Bonnel, Julien–2119
Bonomo, Anthony L.–2179
Borchers, David–2245
Bottalico, Pasquale–2294
Boubriak, Olga–2281
Bouchard, Kristofer E.–2104
Bouchoux, Guillaume–2095
Boutin, Claude–2077
Boutoussov, Dmitri–2279
Boyce, Suzanne–2261
Boyce, Suzanne E.–2082, 2105
Boyd, Brenna N.–2126, 2274
Boyle, John K.–2112
Braasch, Jonas–2150, 2198
Bradley, David–2188
Bradley, David T.–2243, Chair
Session 4aAAb (2243)
Bradlow, Ann–2201, 2241
Bradlow, Ann R.–2263
Brady, Michael C.–2215
Brady, Steven–2277
Brand, Thomas–2273
Brandão, Eric–2141
Brandao, Alexandre–2282
Brandewie, Eugene–2242
Bridal, Lori–2123
Bridgewater, Ben–2089, 2090
Brigham, John C.–2124
Brill, Laura C.–2126
Britton, Deb–2115
Broda, Andrew L.–2128
Brooks, Todd–2198
Brouard, Bruno–2077
Brown, David A.–2131, 2189
Brown, Michael–2156
Brown, Michael G.–2156
Brown, Stephen–2254
Brule, Stephane–2077
Brum, Ricardo–2097, 2282, 2305
Brundiers, Katharina–2248
Brungart, Timothy A.–2208
B T Nair, Balamurali–2083
Bucaro, Joseph–2086, 2112
Bucaro, Joseph A.–2111, 2112, 2194
Buck, John–2189
Buck, John R.–2093, 2147, 2154
Buckingham, Michael J.–2276
Bueno, Odair C.–2074
Bui, Thanh Minh–2123
Bunting, Gregory–2141
Burdin, Rachel S.–2172
Burgess, Alison–2300
Burnett, David–2086, 2140
Burns, Dan–2254
Burov, Valentin–2220
Bush, Dane R.–2214
Buss, Emily–2242
Bustamante, Omar A.–2118
Butko, Daniel–2127, 2151
Butler, Kevin–Cochair Session
1pAA (2088)
Byrne, David C.–2134
Byun, Gi Hoon–2148
Cacace, Anthony T.–2258
Cade, David–2186
Cai, Tingli–2309
Cain, Charles–2250
Cain, Charles A.–2193, 2248, 2250,
2251, 2280
Cain, Jericho E.–2139
Calandruccio, Lauren–2242
Çalışkan, Mehmet–2219
Calvo, David C.–2252
Campanella, Angelo J.–2131, 2207
Campbell, Richard L.–2120
Canchero, Andres–2167
Canney, Michael–2220, 2301
Cao, Rui–2141
Capone, Dean E.–2208
Carbotte, Suzanne M.–2092
Cardinale, Matthew R.–2306
Cariani, Peter–2164
Carignan, Christopher–2104
Carlén, Ida–2248
Carlisle, Robert–2300, 2302
Carlos, Amanda A.–2074
Carlson, Lindsey C.–2124
Carlström, Julia–2248
Carpenter-Thompson, Jake–2309
Carter, J. Parkman–2090
Carthel, Craig–2075
Casacci, Luca P.–2184
Casali, John–2166
Case, Alexander U.–2130, 2151,
2271, Cochair Session 2aAA
(2114), Cochair Session 2pAA
(2150), Cochair Session 4pAAa
(2270)
Cash, Brandon J.–2307
Cassaci, Luca P.–2074
Casserly, Elizabeth D.–2313
Cataldo, Edson–2282
Catheline, Stefan–2196, 2279
Cavanaugh, William J.–2162,
Cochair Session 2pID (2161)
Cechetto, Clement–2185
Celano, Joseph W.–2204
Celis Murillo, Antonio–2276
Celmer, Robert–2219
Cesar, Lima–2243
Chéenne, Dominique J.–2128
Cha, Yongwon–2244
Chabassier, Juliette–2133
Chan, Julian–2175
Chan, Weiwei–2256
Chandra, Kavitha–2140, 2256, 2289
Chandrasekaran, Bharath–2263,
2264, 2314
Chandrika, Unnikrishnan K.–2268
Chang, Andrea Y.–2178, 2179
Chang, Edward F.–2104
Chang, Yueh-chin–2145, 2173
Chang, Yung-hsiang Shawn–2175
Chapelon, Jean-Yves–2220, 2279
Chapin, William L.–2286
Chapman, Ross–2188, 2316
Chavali, Vaibhav–2147
Che, Xiaohua–2254
Cheinet, Sylvain–2138
Chelliah, Kanthasamy–2172
Chen, Chi-Fang–2074, 2178
Chen, Chifang–2316
Chen, Ching-Cheng–2084
Chen, Gang–2295
Chen, Hsin-Hung–2179
Chen, Jessica–2154
Chen, Jun–2205
Chen, Li-mei–2312
Chen, Shigao–2159
Chen, Sinead H.–2243
Chen, Tianrun–2317
Chen, Weirong–2145
Chen, Wei-rong–2145
Chen, Yi-Tong–2266
Chen, Yongjue–2215
Chesnais, Céline–2077
Chevillet, John R.–2278
Cheyne, Harold A.–2117
Chhetri, Dinesh–2294
Chiaramello, Emma–2313
Chien, Yu-Fu–2177
Chirala, Mohan–2280
Chiu, Chen–2185
Chiu, Ching-Sang–2178, 2316
Chiu, Linus–2179, 2316
Chiu, Linus Y.–2178
Cho, Sungho–2298
Cho, Sunghye–2108
Cho, Tongjun–2209
Choi, Inyong–2258
Choi, James–2300
Choi, Jee W.–2149
Choi, Jee Woong–2298
Choi, Jeung-Yoon–2174
Choi, Wongyu–2281
Cholewiak, Danielle–2277
Choo, Andre–2289
Choo, Youngmin–2180
Chotiros, Nicholas–2268
Chotiros, Nicholas P.–2179, 2268,
2269
Chou, Lien-Siang–2074, 2186
Christensen, Benjamin Y.–2081
Christian, Andrew–2285, 2287
Chu, Chung-Ray–2179
Chuen, Lorraine–2307
Church, Charles C.–2249
Cipolla, Jeffrey–2195
Civale, John–2301
Clark, Brad–2129
Clark, Cathy Ann–2178
Clark, Grace A.–2084, Chair Session
4aSPa (2264)
Clayards, Meghan–2262
Clement, Gregory T.–2159, 2160
Cleveland, Robin–2281
Clopper, Cynthia G.–2172
Coburn, Michael–2192
Coffeen, Robert C.–2090, Cochair
Session 1pAA (2088)
Coiado, Olivia C.–2096
Colbert, Sadie B.–2096
Colin, Mathieu E.–2297
Collier, Sandra–2138
Collier, Sandra L.–2139
Collin, Jamie–2300
Collin, Samantha–2132
Colonius, Tim–2080, 2081, 2192,
Cochair Session 3aBA (2191)
Colosi, John A.–Cochair Session
5aUW (2315)
Colosi, John A.–2149, 2155, 2316,
Chair Session 2pAO (2155)
Colson, Brendan–2261
Conant, David F.–2104
Connick, Robert–2089
Connors, Bret–2192
Connors, Bret A.–2191
Cook, Sara–2312
Coralic, Vedran–2192
Coraluppi, Stefano–2075
Corke, Thomas C.–2200
Corkeron, Peter–2277
Coron, Alain–2123
Corrêa, Fernando–2282
Costa, Marcia–2301
Costley, R. Daniel–2252
Costley, Richard D.–2178, Chair
Session 4aEA (2251)
Cottingham, James P.–2201, 2202,
2283
Coulouvrat, François–2279
Coussios, Constantin–2281, 2302
Coussios, Constantin C.–2300
Coviello, Christian–2300, 2302
Coyle, Whitney L.–2283, Cochair
Session 3aID (2197)
Craig, Adam–2305
Cray, Benjamin A.–2196
Cremaldi, Lucien–2252
Cremer, Marta J.–2277
Crone, Timothy J.–2092
Crowley, Alex–2308
Crum, Lawrence–2301
Crum, Lawrence A.–2249, Cochair
Session 3pBA (2219)
Csapó, Tamás G.–2128
Culver, R. L.–2213
Culver, R. Lee–2222, Chair Session
3aSPa (2213), Chair Session
3aSPb (2214), Cochair Session
4aSPb (2265)
Cummins, Phil R.–2085
Cunitz, Bryan–2193
Cunitz, Bryan W.–2192, 2193
Cuppoletti, Dan–2101
Curley, Devyn P.–2285
Czarnota, Gregory–2123
Czech, Joseph J.–2079
Dahl, Peter H.–2187, 2206, 2216, 2226, 2227, 2297
Dalby, Jonathan–2212
Dall’Osto, David R.–2226, 2227, 2297
Danielson, D. Kyle–2263
Danilewicz, Daniel–2277
Darcy, Isabelle–2109
da Silva, Andrey R.–2141, 2305
David, Bonnett E.–2217
Davidson, Lisa–2103
Davies, Patricia–2197, 2287, Cochair Session 4pNS (2285)
Davis, Andrea K.–2261
Davis, Catherine M.–2280
Davis, Gabriel–2279
Davis, Genevieve–2277
Dayeh, Maher A.–2223
de Graaff, Boris–2097
De Jesus Diaz, Luis–2178
de Jong, Kenneth–2106
Dele-Oni, Purity–2256
de Moustier, Christian–2267
Denis, Max–2159, 2256, 2289
Deppe, Jill–2276
DeRuiter, Stacy L.–2247
Desa, Keith–2285
De Saedeleer, Jessica–2284
Deshpande, Shruti B.–2306
de Souza, Olmiro C.–2304
Dettmer, Jan–2085, 2268, 2269, 2298
Dey, Saikat–2086, 2194
D’Hondt, Steven–2156
Diaz-Alvarez, Henry–2266
Dichter, Ben–2104
Diedesch, Anna C.–Chair Session 5aPPb (2308)
Diedesch, Anna C.–2198, 2308
Dighe, Manjiri–2193
Dilley, Laura–2176, 2312
Dimitrijevic, Andrew–2306
D’Mello, Sydney–2215
Dmitrieva, Olga–2174, Chair Session 2pSC (2172)
Doc, Jean-Baptiste–2283
Dodsworth, Robin–2104
Doedens, Ric–2183
Doerschuk, Peter–2144
Dong, David W.–2181
Dong, Qi–2106
Dong, Weijia–2148
Dooley, Wesley L.–2130
Dosso, Stan–2268, 2269
Dosso, Stan E.–2085, 2298
Dostal, Jack–2284, Chair Session 3aMU (2201)
Dou, Chunyan–2301
Dowling, David R.–2148, 2158, 2188
Downing, Micah–2079
Doyley, Marvin M.–2302
D’Spain, Gerald–2092
D’Spain, Gerald L.–2118, 2277
Dubno, President, Judy R.–Chair Session (2228)
Duda, Timothy–2315, 2316
Duda, Timothy F.–2316, 2317, Cochair Session 2aAO (2119)
Dudley, Christopher–2088
Dudley Ward, Nicholas F.–2289
Dumont, Alain–2255
Dunmire, Barbrina–2192, 2193
Dunn, Floyd–2219
Duryea, Alex–2301
Duryea, Alexander–2302
Duryea, Alexander P.–2193, 2280
Duvanenko, Natalie E.–2260
Dziak, Robert P.–2154
Dzieciuch, Matthew A.–2149
Eastland, Grant C.–2088
Ebbini, Emad S.–2124, 2280
Eccles, David–2255, Cochair Session 4aPAa (2254)
Eddins, Ann C.–2291
Eddins, David A.–2291, 2293,
2295
Edelmann, Geoffrey F.–2214
Elam, W. T.–2297
Elbes, Delphine–2281
Eligator, Ronald–2088
Elkington, Peter–2255
Elko, Gary W.–2130
Eller, Anthony I.–2296
Ellis, Dale D.–2297
Ellis, Donna A.–2182
Enoch, Stefan–2077
Ensberg, David–2225
Erdol, Nurgun–2073
Esfahanian, Mahdi–2073
Espana, Aubrey–2087, 2110
Espana, Aubrey L.–2087, 2111,
Chair Session 1pUW (2110)
Espy-Wilson, Carol–2082, 2312
Etchenique, Nikki–2132
Evan, Andrew–2192
Evan, Andrew P.–2191
Evans, Neal–2223
Evans, Samuel–2243
Ezekoye, Ofodike A.–2166, 2219
Fackler, Cameron J.–2084, 2162,
Cochair Session 1aSP (2084)
Falvey, Dan–2185
Fan, Lina–2148
Farahani, Mehrdad H.–2224
Farmer, Casey–2219
Farmer, Casey M.–2166
Farr, Navid–2249, 2278
Farrell, Daniel–2160
Farrell, Dara M.–2206
Fatemi, Mostafa–2124, 2159
Faulkner, Kathleen F.–2314
Fazzio, Robert–2159
Fehler, Michael–2254
Feistel, Stefan–2089
Feleppa, Ernest J.–2123, 2157
Feltovich, Helen–2124
Ferguson, Elizabeth–2246
Ferguson, Sarah H.–2210
Ferracane, Elisa–2109
Ferrier, John–2127
Fink, Mathias–2282
Fischell, Erin M.–2110
Fischer, Jost–2163
Fischer, Jost L.–2163
Fisher, Daniel–2129
Fleury, Romain–2099, 2281
Florêncio, Dinei A.–2265
Fogerty, Daniel–2211
Folmer, Robert–2291
Foote, Kenneth G.–2217
Forssén, Jens–2286
Forsythe, Hannah–2176
Fosnight, Tyler R.–2096, 2125
Fournet, Michelle–2153, Cochair
Session 1pAB (2091)
Fowlkes, J. B.–2251
Fowlkes, Jeffrey B.–Cochair Session
4aBA (2248), Cochair Session
4pBA (2278)
Fox, Robert A.–2312, 2313
Foye, Michelle–2143
Francis, Alexander L.–Chair Session
5aSC (2310)
Francis, Alexander L.–2107, 2145
Frankford, Saul–2176
Franklin, Thomas D.–2220
Frazer, Brittany–2296
Frederickson, Carl–2126, 2127
Frederickson, Nicholas L.–2126
Freeman, Lauren A.–2276
Freeman, Robin–2276
Freeman, Simon E.–2276
Freeman, Valerie–2175
Fregosi, Selene–2119
Freiheit, Ronald–2115
Frisk, George V.–Cochair Session
3aUW (2216)
Frush Holt, Rachael–2263
Fu, Pei-Chuan–2256
Fu, Yanqing–2075
Fuhrman, Robert A.–2310
Fujita, Kiyotaka–2168
Fukushima, Takeshi–2255
Fullan, Ryan–2129
Gaffney, Rebecca G.–2306
Galatius, Anders–2248
Gallagher, Hilary–2133
Gallagher, Hilary L.–2079, 2134
Gallot, Thomas–2254
Gallun, Frederick–2291
Gallun, Frederick J.–2242, 2311,
Cochair Session 4aPP (2257),
Cochair Session 4pPP (2291)
Gao, Shunji–2300
Gao, Ximing–2136
García-Chocano, Victor M.–2076
Garcia, Paula B.–2211
Garcia, Tiffany S.–2074
Gardner, Michael–2129
Garellek, Marc–2295
Garello, René–2266
Garrett, Steven L.–Chair Session
2aID (2129)
Gassmann, Martin–2092
Gaudette, Jason E.–2093
Gauthier, Marianne–2095
Gavrilov, Leonid–2220
Gawarkiewicz, Glen–2315, 2316
Gee, Kent–2079
Gee, Kent L.–2079, 2081, 2100,
2101, 2102, 2128, 2135, 2167,
2169, 2171, 2199, Cochair
Session 1aPA (2079), Cochair
Session 1pPA (2100), Cochair
Session 2aNSb (2135)
Gendron, Paul–2189
Gerard, Odile–2075
Gerges-Naisef, Haidy–2122
Gerken, LouAnn–2261
Gerratt, Bruce–2295
Gerratt, Bruce R.–2295
Gerstoft, Peter–2304
Ghassemi, Marzyeh–2260
Ghoshal, Goutam–2125
Giacomoni, Clothilde–2136
Giammarinaro, Bruno–2279
Giard, Jennifer–2197
Giard, Jennifer L.–2156
Giegold, Carl–2114, 2244, 2274
Giguere, Christian–2165
Gilbert, Keith–2265
Gillani, Uzair–2075
Gillespie, Doug–2093
Gillespie, Douglas–2277
Gillespie, Douglas M.–2117
Giordano, Nicholas–2284, Chair
Session 2aMU (2132)
Giorli, Giacomo–2246
Gipson, Karen–2202
Giraldez, Maria D.–2278
Gjebic, Julia–2202
Gkikopoulou, Kalliopi–2117
Gladden, Joseph R.–2290
Gladden, Josh R.–2207, Cochair
Session 4aPAb (2256), Cochair
Session 4pPA (2288)
Glauser, Mark N.–2100
Glean, Aldo A.–2194
Glosemeyer Petrone, Robin S.–2089
Glotin, Hervé–2217
Goad, Heather–2262
Godin, Oleg–2156
Godin, Oleg A.–2156
Goerlitz, Holger R.–2185
Gogineni, Sivaram–2100
Goldberg, Hannah–2258
Goldhor, Richard–2265
Goldman, Geoffrey H.–2213, 2266
Goldsberry, Benjamin M.–2156
Goldstein, Julius L.–2309
Goldstein, Louis–2143
Golubev, V.–2168
Gomez, Antonio–2281
Gong, Zheng–2093, 2147, 2226,
2317
Gopala, Anumanchipalli K.–2104
Gordon, Jonathan–2093
Gordon, Samuel–2242
Götze, Simone–2091
Graetzer, Simone–2294
Graham, Susan–2302
Granlund, Sonia–2262, 2313
Grass, Kotoko N.–2177
Gray, Michael D.–2159
Greenleaf, James F.–2124
Greenwood, L. Ashleigh–2306
Greuel, Alison J.–2263
Griesinger, David H.–2242, Cochair
Session 4aAAa (2241), Cochair
Session 4pAAb (2273)
Griffiths, Emily–2117
Grigorieva, Natalie S.–2180
Groby, Jean-Philippe–2077
Grogirev, Valery–2155
Guan, Shane–2186
Guarino, Joe–2209
Guazzo, Regina A.–2153
Guenneau, Sebastien R.–2077
Guerrero, Quinton–2124
Guild, Matthew D.–2076, 2099
Guillemain, Philippe–2283
Guillemin, Bernard J.–2083
Guilloteau, Alexis–2283
Guiu, Pierre–2135
Gunderson, Aaron M.–2087, 2088
Guo, Mingfei–2113
Guo, Yuanming–2215
Gupta, Anupam K.–2075
Guri, Dominic–2127
Gutiérrez-Jagüey, Joaquín–2118
Gutmark, Ephraim–2101, 2126,
2144
Guttag, John V.–2260
Gyongy, Miklos–2300
Haberman, Michael R.–2098, 2099,
2200
Hackert, Chris–2223
Hahn-Powell, Gustave V.–2082,
2104
Haley, Patrick–2316
Hall, Hubert S.–2209
Hall, Neal A.–2200
Hall, Timothy–2122
Hall, Timothy J.–2124, 2159
Hall, Timothy L.–2193, 2250, 2251,
2280, 2301, 2302
Halvorsen, Michele B.–2205, 2217
Hambric, Stephen A.–2142
Hamilton, Mark F.–2099, 2158,
2188, 2200
Hamilton, Robert–2209
Hamilton, Sarah–2261
Hamilton, Sarah M.–2261
Han, Aiguo–2158
Han, Jeong-Im–2106
Han, Sungwoo–2145
Hanan, Zachary A.–2285
Handa, Rajash–2192
Handa, Rajash K.–2191
Handzy, Nestor–2207
Hanna, Kristin E.–2274
Hans, Stéphane–2077
Hansen, Uwe J.–Cochair Session
5aED (2303)
Hansen, Colin–2221
Hansen, Colin H.–2136
Hansen, John H.–2083
Hansen, Uwe J.–2113, Chair Session
1eID (2113), Chair Session
2aED (2126), Chair Session
2pEDa (2160), Chair Session
2pEDb (2161), Chair Session
3pED (2221), Cochair Session
2pPA (2170)
Hanson, Helen–2260
Hao, Yen-Chen–2106
Harada, Tetsuo–2108
Hardage, Haven–2127
Hardwick, Jonathan R.–2285, 2287
Hariram, Varsha–2311
Harker, Blaine–2101
Harker, Blaine M.–2100, 2102
Harms, Andrew–2214
Harne, Ryan L.–2196
Harper, Jonathan–2193
Harper, Jonathan D.–2192, 2193
Harris, Catriona M.–2247
Harris, Danielle–2117, 2245, 2275,
2277, Cochair Session 4aAB
(2245), Cochair Session 4pAB
(2275)
Hartmann, Lenz–2164
Hartmann, William–2309
Hashemi, Hedieh–2312
Hasjim, Bima–2302
Haslam, Mara–2108
Hastings, Mardi C.–2206
Hathaway, Kent K.–2178
Hawkins, Anthony D.–2205
Haworth, Kevin J.–Cochair Session
5aBA (2300)
Haworth, Kevin J.–2199, 2303
Hazan, Valerie–2262, 2313
He, Ruoying–2117
He, Xiao–2254
Headrick, Robert H.–2188
Heald, Shannon L.–2202, 2261
Heaney, Kevin D.–2120, 2296
Heeb, Nicholas–2101
Hefner, Brian T.–2225, 2267, Chair
Session 4aUW (2267)
Hegland, Erica L.–2306
Heitmann, Kristof–2206
Helble, Tyler A.–2092, 2277
Hellweg, Robert D.–Cochair Session
3aNS (2203)
Henderson, Brenda S.–2080
Henessee, Spencer–2201
Henke, Christian–2288
Henyey, Frank S.–2297
Herbert, Sean T.–2153
Hermand, Jean-Pierre–2284
Hessler, George–2204
Heutschi, Kurt–2286
H Farahani, Mehrdad–2294
Hickey, Craig J.–2139
Hicks, Ashley J.–2098, 2099
Hicks, Keaton T.–2226
Hildebrand, John–2092, 2118
Hildebrand, John A.–2148, 2153
Hildebrand, Matthew S.–2115
Hill, James–2129
Hillman, Robert E.–2260
Hines, Paul–2268, 2269
Hines, Paul C.–2074, 2226
Hirayama, Makoto J.–2143
Hitchcock, Elaine R.–2262
Hobbs, Christopher M.–2079
Hoch, Matthew–2307
Hochmuth, Sabine–2273
Hodgkiss, William–2091
Hodgkiss, William S.–2225
Hodgson, Murray–2151
Holden, Andrew–2217
Holderied, Marc W.–2185
Holland, Charles W.–2085, 2121,
2268, 2269, 2296, 2298
Holland, Christy K.–2095, 2199,
2303
Holland, Mark R.–2122
Holliday, Jeffrey J.–2108
Holliday, Nicole–2173
Holt, R. Glynn–2256
Holthoff, Ellen L.–2256
Holz, Annelise C.–2277
Hong, Hyun–2200, 2274
Hong, Suk-Yoon–2141
Hooi, Fong Ming–2096, 2125
Hoover, K. Anthony–2114, Cochair
Session 2aAA (2114), Cochair
Session 2pAA (2150)
Hord, Samuel–2128
Hori, Hiroshi–2213
Horie, Seichi–2167
Horn, Andrew G.–2184
Horner, Terry G.–2314
Hossen, Jakir–2085
Høst-Madsen, Anders–2094
Houpt, Joseph W.–2307
Houston, Brian–2086, 2112
Houston, Brian H.–2111, 2112, 2194
Houston, Janice–2136
Howarth, Thomas R.–2131
Howe, Bruce–2118
Howe, Thomas–2110
Howell, Mark–2159
Howson, Phil–2144, 2145
Hsi, Ryan–2192
Hsieh, Feng-fan–2173
Hsu, Timothy Y.–2133
Hsu, Wei Chen–2312
Huang, Bin–2124
Huang, Ming-Jer–2247
Huang, Ting–2173
Huang, Wei–2093, 2246, 2298
Hughes, Michael–2095, 2264, 2282
Hu, Huijing–2294
Hull, Andrew J.–2196, Cochair
Session 3aEA (2194)
Hulva, Andrew M.–2128
Humes, Larry E.–2213, 2257, 2311,
2314
Hunter, Eric J.–2294, Cochair
Session 4pAAa (2270)
Huntzicker, Steven–2302
Husain, Fatima T.–2309
Hutter, Michele–2291
Huttunen, Tomi–2289
Hwang, Joo Ha–2249, 2278
Hwang, Joo-Ha–2301
Hynynen, Kullervo–2300
Hu, Zhong–2279, 2280
Ierley, Glenn–2092
Ignisca, Anamaria–2226
Ilinskii, Yurii A.–2158
Imaizumi, Tomohito–2152, 2155
Imran, Muhammad–2151, 2244
Ing, Ros Kiri–2282
Ingersoll, Brooke–2312
Inoue, Jinro–2167
Isakson, Marcia–2268
Isakson, Marcia J.–2156, 2179,
2188, 2200, 2268, 2269
Ishida, Yoshihisa–2214, 2215
Ishii, Tatsuya–2137
Islam, Upol–2304
Ito, Masanori–2155
Itoh, Miho–2074
Izuhara, Wataru–2255
Jacewicz, Ewa–2312, 2313
Jain, Ankita D.–2226
Jakien, Kasey M.–2242
Jakits, Thomas–2218
James, Michael–2079
James, Michael M.–2079, 2081,
2100, 2101, 2102, 2167, 2169
Jang, Hyung Suk–2274
Janssen, Sarah–2128
Jardin, Paula P.–2097
Järvikivi, Juhani–2173
Jasinski, Christopher–2200
Jeger, Nathan–2282
Jeon, Eunbeom–2209
Jeon, Jin Yong–2151, 2244, 2274
Jeske, Andrew–2107
Jia, Kun–2283
Jiang, Yong-Min–2155
Jiang, Yue–2082
Jig, Kyrie–2112
Jig, Kyrie K.–2111
Joaj, Dilip S.–2285
Joh, Cheeyoung–2253
Johnsen, Eric–2158, 2280
Johnson, Chip–2246
Johnson, Cynthia–2192
Johnson, Cynthia D.–2191
Johnson, Keith–2173, Cochair
Session 1pSCa (2103)
Johnson, Mark–2117
Johnston, William–2203
Jones, Chris–2118
Jones, Gareth–2185
Jones, Kate E.–2276
Jones, Ryan M.–2300
Jones, Zack–2174
Jongman, Allard–2107, 2173, 2174
Joseph, John E.–2247
Ju, Xiaodong–2254
Judge, John–2209
Judge, John A.–2186, 2194
Jüssi, Ivar–2248
Kaipio, Jari P.–2289
Kaliski, Kenneth–2203, Cochair
Session 3aNS (2203)
Kallay, Jeffrey–2263
Kampel, Sean D.–2242
Kamrath, Matthew–2304, Cochair
Session 2aSAa (2140)
Kan, Weiwei–2076
Kanada, Sunao–2143
Kandhadai, Padmapriya–2263
Kang, Jian–2164
Kang, Yoonjnung–2145
Kanter, Shane J.–2090, 2244
Kaplan, Maxwell B.–2153
Kapusta, Matthew–2101
Karami, Mohsen–2152
Kargl, Steven G.–2087, 2110, 2111
Karlin, Robin–2144
Karlin, Robin P.–2176
Karunakaran, Chandra–2125
Karzova, Maria M.–2193, 2289
Katayama, Makito–2255
Katsnelson, Boris–2155, 2156, 2317
Kaul, Sanjiv–2280
Kausel, Wilfried–2284
Kawai, Shin–2152
Ke, Fangyu–2202
Keck, Casey–2261
Kedrinskiy, Valeriy–2290
Keen, Sara–2275
Keil, Martin–2164
Keil, Ryan D.–2096, 2125
Keith, Robert W.–2306
Kellison, Todd–2117
Kelly, Jamie R.–2095
Kemmerer, Jeremy P.–2125
Kemp, John N.–2149
Kenny, R. Jeremy–Cochair Session
2pNSb (2167)
Key, Charles R.–2117
Khan, Sameer ud Dowla–2295
Kho, Hyo-in–2209
Khokhlova, Tatiana–2251, 2278,
2301
Khokhlova, Tatiana D.–2249,
2250
Khokhlova, Vera–2220, 2251, 2278,
2301
Khokhlova, Vera A.–2191, 2193,
2249, 2250, 2289, Cochair
Session 4aBA (2248), Cochair
Session 4pBA (2278)
Khosla, Sid–2126, 2144
Kidd, Gary R.–2308, 2311, 2314
Kiefte, Michael–2081, Chair Session
1aSC (2081)
Kiel, Barry V.–2100
Kieper, Ronald W.–2134
Kil, Hyun-Gwon–2141
Kim, Hak-sung–2209
Kim, Hui-Kwan–2206
Kim, Jea Soo–2148
Kim, Junghun–2149
Kim, Kang–2302
Kim, Kyung-Ho–2106
Kim, Nicholas–2183
Kim, Noori–2251
Kim, Yong-Joe–2095
Kim, Yong Tae–2094
Kim, Yousok–2209
King, Eoin A.–2219
Kinjo, Atsushi–2155
Kinnick, Randall R.–2124
Kitahara, Mafuyu–2146
Kitterman, Susan–2113
Klaseboer, Evert–2289
Klegerman, Melvin E.–2095
Klinck, Holger–2074, 2118, 2119,
2154, Cochair Session 2aAB
(2116)
Klinck, Karolin–2154
Kloepper, Laura–2156
Kloepper, Laura N.–2093, 2154,
2160
Klos, Jacob–2223
Kluender, Keith R.–2082
Kniffin, Gabriel P.–2112
Knobles, David P.–2297, 2317
Knopik, Valerie S.–2264, 2314
Knorr, Hannah D.–2128
Knox, Don–2305
Koblitz, Jens C.–2091, 2248
Koch, Rachelle–2202
Koch, Robert A.–2297
Koch, Robert M.–Cochair Session
2aSAa (2140)
Kochetov, Alexei–2145
Koenig, Laura L.–2173, 2262
Kolar, Miriam A.–2270
Kolios, Michael C.–2096
Kollmeier, Birger–2273
Komova, Ekaterina–2144
Konarski, Stephanie G.–2099
Kondaurova, Maria V.–2262, Chair
Session 4aSCb (2261)
Kong, Eunjong–2145
Kong, Eun Jong–2174
Kopechek, Jonathan A.–2302
Kopf, Lisa M.–2293
Korakas, Alexios–2121
Korkowski, Kristi R.–2096
Korman, Murray S.–2128, 2129,
2170, Cochair Session 2pPA
(2170)
Korzyukov, Oleg–2176
Kosawat, Krit–2315
Kosecka, Monika–2248
Kowalok, Ian–2243
Koza, Radek–2248
Kozlov, Alexander I.–2196
Kraft, Barbara J.–2267
Kreider, Wayne–2157, 2191, 2193,
2249, 2250, 2278, 2301, Cochair
Session 3aBA (2191)
Kreiman, Jody–2295
Kripfgans, Oliver D.–Cochair
Session 5aBA (2300)
Kripfgans, Oliver–2301
Krolik, Jeffrey–2214
Krylov, Victor V.–2076
Krysl, Petr–2086
Kujawa, Sharon G.–2258
Kumar, Anu–2246
Kumar, Viksit–2157
Kuperman, William–2091
Kuperman, William A.–2189
Kurbatski, K.–2168
Küsel, Elizabeth T.–2275
Kwak, Yunsang–2210
Kwan, James–2302
Kwon, Bomjun J.–2271
Kyhn, Line–2248
La Follett, Jon R.–2110
Lafon, Cyril–2220, 2279
Lagarrigue, Clément–2077
Lahiri, Aditi–2175
Lähivaara, Timo–2289
Laidre, Kristin–2091
Lalonde, Kaylah–2263
Lam, Boji–2263
Lambaré, Hadrien–2168
Lammers, Marc O.–2276
Lan, Yu–2318
Laney, Jonathan–2114
Lang, William W.–2161
Langer, Matthew D.–2094
Lapotre, Céline–2134
La Rivière, Patrick J.–2157
Larson, Charles R.–2176, 2294
Lavery, Andone C.–2190, 2200,
Cochair Session 3aAO (2187)
Law, Wai Ling–2145
Lawless, Martin S.–2272
Layman, Jr., Christopher N.–2252
Le Bas, Pierre-Yves–2252, 2265
Le Cocq, Cecile–2135
Lee, Adrian KC–Cochair Session
4aPP (2257), Cochair Session
4pPP (2291)
Lee, Chan–2141
Lee, Chao-Yang–2315
Lee, Dohyung–2137
Lee, Franklin–2192, 2193
Lee, Franklin C.–2193
Lee, Goun–2173
Lee, Greg–2143
Lee, Hunki–2137
Lee, Hyunjung–2108
Lee, Jaewook–2083
Lee, Joonhee–2183, 2200
Lee, Kevin M.–2207, 2252, Cochair
Session 3aPA (2205)
Lee, Kwang H.–2112
Lee, Sunwoong–2147
Leek, Marjorie–2291
Lee-Kim, Sang-Im–2310
Lehrman, Paul D.–2285
Leib, Stewart J.–2080
Leibold, Lori J.–2242
Leishman, Timothy W.–2199
Le Magueresse, Thibaut–2171
Lembke, Chad–2117
Lendvay, Thomas S.–2193
Lengeris, Angelos–2107
Leonard, Martha L.–2184
Lermusiaux, Pierre F.–2316
Lester, Rosemary A.–2293
Leta, Fabiana R.–2282
Levow, Gina-Anne–2175
Levy, Roger–2107
Lewis, George K.–2094
Lewis, M. Samantha–2291
Li, Chunxiao–2113
Li, Fenfang–2289
Li, Fenghua–2148
Li, Guangyan–2191
Li, Haisen–2318
Li, Kai Ming–2138, 2139, 2197,
2205, Cochair Session 2aPA
(2138)
Li, Mingxing–2175
Li, Ruo–2318
Li, Shihuai–2288
Li, TianYun–2224
Li, Wei–2215
Li, Xinyan–2288
Li, Xiukun–2189
Li, Xu–2224, 2318
Li, Xy–2288
Li, Yang–2189
Lim, Hansol–2151
Lim, Raymond–2087, 2112
Lima, Key F.–2282, 2305
Lin, Chyi-Her–2312
Lin, Kuang-Wei–2250
Lin, Shen-Jer–2266
Lin, Susan–Cochair Session 1pSCa
(2103)
Lin, Tzu-Hao–2074, 2186
Lin, Ying-Tsong–2093, 2121, 2315,
2316, 2317
Lin, Yu Ching–2312
Lin, Yuh-Jyh–2312
Lin, Yung-Chieh–2312
Lindemuth, Michael–2117
Lindsey, Stephen–2182
Lingeman, James–2192
Lingeman, James E.–2191, 2192
Lippert, Stephan–2206
Lippert, Tristan–2206
Liu, Chang–2106, 2211
Liu, Dalong–2124
Liu, Emily–2178
Liu, GuoQing–2224, 2318
Liu, Hanjun–2294
Liu, Peng–2254
Liu, Tengxiao–2159
Liu, Ying–2294
Liu, Zhongzheng–2095
Liu, Ziyue–2192, 2193
Llanos, Fernando–2082, 2107
Lodhavia, Anjli–2176
Loebach, Jeremy–2307, 2311
Lof, John–2300
Logan, Roger M.–2160
Logawa, Banda–2151
Loisa, Olli–2248
Lomotey, Charlotte F.–2177
Long, Gayle–2313
Lopes, Joseph L.–2268
López Arzate, Diana C.–2247
Lopez Prego, Beatriz–2107
Lotto, Andrew–2108
Lotto, Andrew J.–2293, 2307
Loubeau, Alexandra–2223, Cochair
Session 3pNS (2223)
LoVerde, John J.–2181, 2219
Lowenstein, Joanna H.–2262
Lowrie, Allen–2269
Lozupone, David–2203
Lu, Huancai–2113, 2142
Lu, Jia–2148
Lu, Junqiang–2254
Lu, Wei–2318
Luan, Yi–2175
Lubert, Caroline P.–2126, 2137
Lucas, Tim C. D.–2276
Luchies, Adam–2158
Luczkovich, Joseph J.–2276
Luegmair, Georg–2294
Luh, Wenming–2144
Lulich, Meredith D.–2144
Lulich, Steven–2199, 2259
Lulich, Steven M.–2104, 2127,
2128, 2144, 2260, Cochair
Session 4aSCa (2259)
Lunsford, Chris–2091
Luo, Dan–2285
Lynch, James–Cochair Session
5aUW (2315)
Lynch, James–2155, 2315
Lynch, James F.–2093
Lyons, Gregory W.–2139
Lyrintzis, A.–2168
Lysoivanov, Yuri–2150
MacAulay, Jamie–2091
Macaulay, Jamie D.–2093
MacAuslan, Joel–2265
MacConaghy, Brian–2095
MacGillivray, Alexander O.–2206
Machi, Junji–2123, 2157
Mack, Gregory–2168
Maddox, Alexandra–2126
Maddox, W. T.–2264
Maddox, W. Todd–2314
Magliula, Elizabeth A.–2194
Magstadt, Andrew S.–2100
Mahdavi Mazdeh, Mohsen–2105
Mahon, Merle–2313
Majdinasab, Fatemeh–2312
Maki, C. T.–2256
Maki, Daniel P.–2308
Makris, Nicholas C.–2093, 2147,
2226, 2246, 2317
Malcolm, Alison–2254
Maling, George C.–2161
Malla, Bhupatindra–2101
Malphurs, David E.–2112
Malykhin, Andrey–2317
Mamou, Jonathan–2123, 2157,
Cochair Session 2aBA (2122)
Mamou-Mani, Adrien–2132, 2284
Mankbadi, Reda–2168
Manley, David–2089
Mann, David–2117
Maraghechi, Borna–2096
Marcus, Logan S.–2256
Mareze, Paulo–2141
Margolina, Tetyana–2247
Market, Jennifer–2255
Markham, Benjamin–2089, 2162
Marques, Tiago A.–2245
Marsh, Christopher A.–2153
Marsh, Jon–2095, 2264, 2282
Marshall, Andrew–2223
Marsteller, Marisa–2308
Marston, Philip L.–2087, 2088,
2110, 2111, 2172, 2298
Marston, Timothy M.–2110, 2172
Martin, James S.–2159
Martin, Stephen–2092
Mast, T. Douglas–2096, 2125, 2199,
2302
Masud, Salwa–2258
Mathias, Delphine–2091
Matias, Luis–2275
Matsumoto, Haru–2118, 2119
Matsuo, Ikuo–2152, 2155
Mattson, Steve–2286
Matula, Thomas J.–2095, 2279
Maussang, Frédéric–2266
Maxwell, Adam–2193, 2278, 2301
Maxwell, Adam D.–2157, 2249,
2250, 2251
Mayell, Marcus–2089
Maynard, Julian D.–2131
Mayoral, Salvador–2080
Mazzocco, Elizabeth–2128
McAteer, James A.–2191
McCammon, Diana–2226
McCarthy, John–2095, 2264, 2282
McComas, Sarah–2266
McCullough, Elizabeth A.–2109
McDaniel, J. Gregory–2194, Cochair
Session 3aEA (2194)
McDannold, Nathan–2221
McDonald, Mark A.–2148
McDougal, Forrest–2127
McFarland, Dennis J.–2258
McGeary, John E.–2264, 2314
McGee, JoAnn–2073
McGettigan, Carolyn–2243
McGough, Robert–2096, 2125, 2128,
2159, Chair Session 1pBA
(2094)
McKay, Scotty–2127
McKenna, Elizabeth A.–2134
McKenna, Mihan–2266
McKinley, Richard–2079
McKinley, Richard L.–2079, 2133,
2134, 2166, Cochair Session
1aPA (2079), Cochair Session
1pPA (2100)
McKinnon, Daniel–2203
McLaughlin, Dennis K.–2101
McMullen, Andrew–2287
McNeese, Andrew–2178
McNeese, Andrew R.–2207, 2252
McPhee, Peter–2203
McPherson, David D.–2095
Means, Steve L.–2214
Meegan, G. Douglas–2165
Meekings, Sophie–2243
Mehmohammadi, Mohammad–2159
Mehraei, Golbarg–2258
Mehta, Daryush–2260
Meixner, Duane–2159
Mellinger, David K.–2118, 2119,
2153, 2154, 2275, Cochair
Session 1pAB (2091), Cochair
Session 2aAB (2116)
Melodelima, David–2220
Menard, Lucie–2105
Meng, Qingxin–2265
Mental, Rebecca–2143
Merkens, Karlina–2153
Mi, Lin–2106
168th Meeting: Acoustical Society of America
2361
Michalopoulou, Zoi-Heleni–2085
Mielke, Jeff–2104
Mikhalevsky, Peter–2148
Mikhaylova, Daria A.–2180
Mikkelsen, Lonnie–2248
Miksis-Olds, Jennifer L.–2186
Miller, Amanda L.–2103
Miller, Douglas–2301
Miller, Douglas L.–2158
Miller, Greg–2114
Miller, Gregory A.–2244, 2274
Miller, James D.–2212, 2308, 2311
Miller, James G.–2122
Miller, James H.–2156, 2178, 2190,
2197, 2206
Miller, Rita J.–2125
Miller, Taylor L.–2176
Mirabito, Chris–2316
Mishima, Yuka–2074
Mitra, Vikramjit–2082
Mitran, Sorin M.–2191, 2192
Miyamoto, Yoshinori–2074
Miyashita, Takuya–2303
Mizoguchi, Ai–2103
Mizumachi, Mitsunori–2314
Moeller, Niklas–2183
Molinari, Michael–2281
Molis, Michelle R.–2311
Mollashahi, Maryam–2312
Monson, Brian B.–2272, 2307
Moon, Wonkyu–2253
Mooney, T. A.–2153
Moorcroft, Elizabeth–2276
Moore, David–2305
Moore, David R.–2257
Moore, Keith A.–2203
Moore, Thomas R.–2132, 2284
Moquin, Philippe–2265
Mora, Pablo–2101
Moran, John–2091
Morasutti, Jon–2129
Morgan, Andrew–2090
Morgan, Mallory–2243
Moriconi, Stefano–2313
Morisaka, Tadamichi–2074
Morlet, Thierry–2306
Moron, Juliana R.–2073
Morrill, Tuuli–2146, 2176
Morris, Philip–2101
Morris, Richard J.–2293, Chair
Session 4pSC (2293)
Morrison, Andrew C.–2170
Morrison, Andrew C. H.–Chair
Session 4pMU (2283)
Morrison, Andrew C. H.–Cochair
Session 5aED (2303)
Morshed, Mir Md M.–2136
Moss, Cynthia F.–2185, Chair
Session 2pAB (2152)
Moss, Geoffrey R.–2131
Mott, Brian–2280
Mousel, John–2224
Moyal, Olivier–2253
Muehleisen, Ralph T.–2172
Muellner, Herbert–2218
Muenchow, Andreas–2317
Muenster, Malte–2164
Muhlestein, Michael B.–2098
Muir, Thomas G.–2178, 2252
Mukae, Junpei–2214
Müller, Rolf–2075
Mullins, Lindsay–2261
Munthuli, Adirek–2315
Munyazikwiye, Gerard–2127
Murakami, Takahiro–2214, 2215
Murata, Taichi–2134
Murphy, Stefan M.–2226
Murphy, William J.–2134, 2165,
Cochair Session 2aNSa (2133),
Cochair Session 2pNSa (2165)
Murray, Alastair R.–2077
Murray, Nathan E.–2101, 2139,
2167
Murray, Patrick–2195
Murta, Bernardo H.–2097
Muzi, Lanfranco–2155
Myers, Kyle R.–2209
Myers, Rachel–2302
Nachtigall, Paul E.–2093
Naderyan, Vahid–2139
Nagao, Kyoko–2306
Naghshineh, Koorosh–2209
Nagle, Anna S.–2125
Nakamura, Aya–2167
Nam, Hosung–2082
Namjoshi, Jui–2176
Nandamudi, Srihimaja–2295
Narayanan, Shrikanth S.–2143
Nariyoshi, Pedro–2125
Nault, Isaac–2192
Neal, Matthew T.–2091
Nearey, Terrance M.–2081
Neel, Amy T.–2210, Cochair
Session 3aSC (2210)
Neely, Stephen T.–2211
Neilsen, Tracianne B.–2079, 2081,
2100, 2101, 2102, 2128, 2135,
2167, 2169, 2171, 2199, Cochair
Session 2pNSb (2167)
Nelson, Danielle V.–2074
Nennig, Benoit–2077
Netchitailo, Vladimir–2279
Neubauer, Juergen–2259
Newhall, Arthur–2315
Newhall, Arthur E.–2093
Nguon, Chrisna–2256, 2289
Nguyen, Man M.–2302
Nguyen, Vincent–2140
Nicholas, Michael–2252
Nicolaidis, Katerina–2107
Nielsen, Peter L.–2155
Nieukirk, Sharon L.–2154
Nightingale, Kathryn–2249
Nijhof, Marten J.–2111
Nishi, Kanae–2211
Nishimiya, Kojiro–2213
Nissen, Jene–2246
Nittrouer, Susan–2262
Noble, John M.–2139, 2169
Nohara, Timothy–2129
Norris, Andrew–2078
Norris, Andrew N.–2098
Norris, Thomas F.–2246
Northridge, Simon–2093
Nosal, Eva-Marie–2094
Nottoli, Chris S.–2142
Nozawa, Takeshi–2108
Nusbaum, Howard C.–2202,
2261
Nusierat, Ola–2252
Nystrand, Martin–2215
J. Acoust. Soc. Am., Vol. 136, No. 4, Pt. 2, October 2014
Oberai, Assad A.–2159
O’Boy, Daniel J.–2076
O’Brien, William–2123, 2219
O’Brien, William D.–2095, 2158
O’Brien Jr., William D.–2096
O’Connell, Victoria–2185
Odom, Jonathan–2214
Odom, Robert I.–2187, 2199
Oelschlaeger, Karl–2223
Oelze, Michael–2158, Chair Session
2pBA (2157), Cochair Session
2aBA (2122)
Oelze, Michael L.–2123, 2125
Ogata, Kazuto–2314
Oh, Byung Kwan–2209
Oh, Seongmin–2274
Ohl, Claus-Dieter–2256, 2289
Ohl, Siew-Wan–2289
Ohm, Won-Suk–2137
Okerlund, David–2293
Okutsu, Kenji–2074
Oleson, Erin–2118, 2153, 2154
Oliphant, Michelle–2127
Olivier, Dazel–2077
Ollivier, Benjamin–2266
Ollivier, Sebastien–2289
Olney, Andrew–2215
Olson, Bruce C.–2090
O’Neal, Robert–2203
Onsuwan, Chutamanee–2315
O’Reilly, Meaghan A.–2300
Oren, Liran–2126, 2144
Orr, Marshall H.–2121
Osman, E.–2168
Ostarek, Markus–2243
Ostashev, Vladimir E.–2138, 2139
Ostendorf, Mari–2175
Ostrovskii, Igor–2252
Ostrovskiy, Dmitriy B.–2180
Oswald, Julie N.–2154
Ota, Anri–2215
Otero, Sebastian–2150
Ounadjela, Abderrhamane–2253
Ouyang, Huajiang–2224, 2318
Ow, Dave–2289
Owen, Kylie–2185
Owens, Gabe E.–2301
Ozmeral, Erol J.–2291
Pace, Mike–2266
Pack, Adam A.–2154
Page, Juliet A.–2079
Paillasseur, Sébastien–2171
Pajak, Bozena–2107
Pallayil, Venugopalan–2268
Palmer, William K.–2204
Palumbo, Daniel L.–2285
Papamoschou, Dimitri–2080
Papesh, Melissa–2291
Parizet, Etienne–2309
Park, Hanyong–2174
Park, Hyo Seon–2209
Park, Junhong–2209, 2210
Park, Taeyoung–2137
Parks, Susan E.–2185
Partan, Jim–2153
Partanen, Ari–2249, 2278
Pate, Michael B.–2281
Patel, Sona–2176
Patterson, Brandon–2158
Paul, Adam L.–2219
Paul, Stephan–2097, 2253, 2282,
2304, 2305
Paustian, Iris–2268
Pavese, Lorenzo–2294
Pawliczka, Iwona–2248
Payton, Karen–2189, 2265
Pearson, Heidi–2185
Pearson, Michael F.–2169
Pecknold, Sean–2226, 2297, 2298
Peddinti, Vijay Kumar–2164
Pedro, Rebecca–2128
Pedrycz, Adam–2213
Pellegrino, Paul M.–2256
Peng, Tao–2095
Peng, Yuan–2205
Peng, Zhao–2126, 2200, 2274,
Cochair Session 3aID (2197)
Penny, Christopher W.–2285
Penrod, Clark S.–2188
Perez, Camilo–2095, 2279
Pestorius, Frederick M.–2188
Petchpong, Patchariya–2094
Petillo, Stephanie–2110
Pettersen, Michael S.–2129
Pettinato, Michèle–2262
Pettit, Chris L.–2138
Pettyjohn, Steve–2182, 2208
Pfeiffer, Scott–2114, 2244
Pfeiffer, Scott D.–2089, 2244
Pfeifle, Florian–2132, 2164
Philipp, Norman H.–Chair Session
3pAA (2218)
Phillips, James E.–2208
Piao, Shengchun–2265
Piccinini, Page–2107
Pichora-Fuller, Margaret K.–2292
Pierce, Allan D.–2179
Pineda, Nick–2268
Pinson, Samuel–2121, 2268, 2296
Pinton, Gianmarco–2279
Piovesan, Tenile–2305
Piperkova, Rossitza–2202
Pisoni, David B.–2314
Plath, Niko–2132
Plotkin, Kenneth–2286
Plotkowski, Andrea R.–2310
Plotnick, Daniel–2087, 2172
Plotnick, Daniel S.–2088, 2110,
2111
Plsek, Thomas J.–2115
Pol, Graland-Mongrain–2279
Poncot, Remi–2135
Ponte, Aurelien–2316
Pope, Hsin-Ping C.–2101
Popper, Arthur N.–2205
Porta-Gándara, Miguel A.–2118
Porter, Thomas R.–2300
Possing, Miles–2215
Potty, Gopu–2178
Potty, Gopu R.–2156, 2190, 2197,
2206
Powell, Larkin A.–2073
Powers, Jeffry–2300
Powers, Russell–2101
Pozzer, Talita–2253
Prakash, Arun–2141
Prater, James L.–2112
Preisig, James C.–2266
Preminger, Jill E.–2198, 2308
Preston, John R.–2225, 2297
Price, John C.–2203
Pritz, Tamas–2256
Probert Smith, Penny–2302
Qiang, Bo–2124
Qiao, Shan–2281
Qiao, Wenxiao–2254
Qin, Jixing–2156
Qin, Zhen–2107
Quick, Nicola J.–2247
Quijano, Jorge E.–2147, Chair
Session 2aUW (2147)
Radhakrishnan, Kirthi–2199, 2303
Rafferty, Tom–2151
Raghukumar, Kaustubha–2155
Raghunathan, Shreyas B.–2097
Rakerd, Brad–2309
Raman, Ganesh–2172
Ramanarayanan, Vikram–2143
Ramdas, Kumaresan–2164
Ranft, Richard–2182
Rankin, Shannon–2117, 2245
Rankinen, Wil A.–2082
Rao, Marepalli B.–2125
Rasmussen, Per–2080, 2102
Raspet, Richard–2139
Rathsam, Jonathan–2224, Cochair
Session 3pNS (2223)
Ratilal, Purnima–2093, 2147, 2226,
2246, 2298, 2317
Rawlings, Samantha–2181
Read, Andrew J.–2277
Reba, Ramons A.–2080
Redford, Melissa A.–2263
Reed, Heather–2195
Reeder, Ben–2316
Reeder, D. Benjamin–2120
Reeder, Davis B.–2178
Reese, Marc C.–2281
Reetz, Henning–2175
Reetzke, Rachel–2263
Reganti, Namratha–2159
Regier, Kirsten T.–2082
Reichman, Brent–2079
Reichman, Brent O.–2102, 2169
Reidy, Patrick–2262
Reiter, Sebastian–2202
Remillieux, Marcel C.–2252, 2265
Ren, Gang–2202
Ren, Xiaoping–2125
Rennies, Jan–2273
Reuter, Eric L.–2271
Riahi, Nima–2304
Rich, Kyle T.–2302
Richards, Angela–2243
Richards, Roger T.–Chair Session
4pEA (2281)
Richie, Carolyn–2212
Riddle, Jason–2276
Rideout, Brendan P.–2094
Riegel, Kimberly A.–2243
Rietdijk, Frederik–2286
Rimington, Dennis–2277
Riquimaroux, Hiroshi–2152
Rivens, Ian–2250, 2301
Rivera-Campos, Ahmed–2105
Rivers, Julie–2246
Rizzi, Stephen A.–2285, 2286, 2287,
Cochair Session 4pNS (2285)
Roberts, Bethany L.–2277
Roberts, Joshua J.–2129
Roberts, Philip J.–2175
Roberts, William W.–2193, 2251,
2280
Robinette, Martin–2166
Robinson, Stephen P.–2216, 2217
Roch, Marie A.–2073, 2153
Rodriguez, Christopher F.–2285
Rodriguez, Peter–2256
Rogers, Catherine L.–2198, 2211,
2273, Cochair Session 3aSC
(2210)
Rogers, Chris B.–2285
Rogers, Jeffrey S.–2214
Rogers, Lydia R.–2210
Rogers, Peter H.–2159
Rohrbach, Daniel–2123, 2157
Romero-Vivas, Eduardo–2118
Rone, Brenda K.–2246
Ronsse, Lauren M.–2129
Rosado-Mendez, Ivan–2122
Rosado-Mendez, Ivan M.–2124
Rosado Rogers, Lydia–2211
Rosen, Stuart–2243
Rosenberg, Carl–2162, Cochair
Session 2pID (2161)
Rosenfield, Jonathan R.–2157
Ross, Susan–2192
Rossing, Thomas D.–2170
Rossi-Santos, Marcos–2073
Roth, Ethan–2118
Rourke, Christopher S.–2174
Rouse, Jerry W.–2223
Rowan-West, Carol–2203
Rowcliffe, Marcus J.–2276
Rowland, Elizabeth–2275
Roy, Kenneth–2181
Roy, Kenneth P.–Chair Session
3aAA (2181)
Rudisill, Chase J.–2128
Ruf, Joseph–2168
Ruf, Joseph H.–2167
Ruhnau, Marcel–2206
Rupp, Martin–2202
Ruscher, Christopher J.–2100
Russell, Daniel A.–2197, 2200
Ryerson, Erik J.–2151
Sabra, Karim G.–2111, 2149, 2190
Sacks, Jonah–2089
Sadeghi-Naini, Ali–2123
Sadykova, Dina–2247
Saegusa-Beecroft, Emi–2123, 2157
Sagers, Jason D.–2178, 2317
Sahu, Saurabh–2312
Sakaguchi, Aiko–2074
Sakamoto, Nicholas–2141
Sakata, Yoshino–2213
Sakiyama, Naoki–2255
Salter, Ethan–2181
Salton, Alexandria R.–2079, 2167,
2169
Saltzman, Elliot–2082
Sambles, Roy–2077
Sammelmann, Gary S.–2112
Samson, David J.–2150
Sanchez-Dehesa, Jose–2076
Sandhu, Jaswinder S.–2157
Sanghvi, Narendra T.–2220, Cochair
Session 3pBA (2219)
Sankin, Georgy–2191, 2192
Sannachi, Lakshmanan–2123
Sapozhnikov, Oleg–2193, 2251, 2301
Sapozhnikov, Oleg A.–2191, 2193,
2249, 2250
Sarkar, Jit–2091, 2185
Sarkissian, Angie–2086, 2112, 2194
Satter, Michael J.–2117
Scanlon, Michael V.–2252
Scanlon, Patricia–2182
Scarborough, Rebecca–2083
Scarbrough, Paul–2115, 2151
Schade, George–2278
Schade, George R.–2249, 2251,
2278
Scharenbroch, Gina–2311
Scherer, Ronald–2296
Scherer, Ronald C.–2295, 2296
Schertz, Jessamyn L.–2108
Schlinker, Robert H.–2080
Schmid, Charles E.–2161
Schmidt, Anna M.–2146
Schmidt, Henrik–2110
Schnitzler, Hans-Ulrich–2091
Schomer, Paul D.–2204, Cochair
Session 3aNS (2203)
Schrader, Matthew K.–2128
Schreiber, Nolan–2143
Schulte-Fortkamp, Brigitte–2204
Schutz, Michael–2307
Schwan, Logan–2098
Scott, E. K. Ellington–2202
Scott, Michael P.–2306
Scott, Sophie K.–2243
Scott-Hayward, Lindesay A.–2247
Segala, David–2195
Seger, Kerri–2247
Seibert, Anna-Maria–2091
Selep, Andrew–2158
Seo, Seonghoon–2141
Seong, Woojae–2180
Sepulveda, Frank–2305
Sereno, Joan–2177
Setter, Jane–2312
Shade, Neil T.–2115
Shafer, Benjamin–Chair Session
3aSAb (2208)
Shafer, Benjamin M.–Chair Session
3aSAa (2207)
Shah, Apurva–2302
Shannon, Dan–2080
Sharma, Ariana–2243
Shattuck-Hufnagel, Stefanie–2174,
2260
Shekhar, Himanshu–2199, 2302
Shen, Jing–2314
Shen, Junyuan–2215
Sheng, Li–2263
Sheng, Xueli–2148
Shepherd, Micah R.–2142, 2284
Sherren, Richard S.–2209
Sheth, Raj C.–2306
Shi, Lu-Feng–2173
Shi, William T.–2300
Shih, Chilin–2145
Shin, Ho-Chul–2078
Shin, Kumjae–2253
Shinn-Cunningham, Barbara–2258,
2271
Shiu, Yu–2275
Shofner, William–2199, 2308
Shrivastav, Rahul–2293, 2295
Shrotriya, Pranav–2279
Siderius, Martin–2155, 2189
Sieck, Caleb F.–2099
Siegmann, William L.–2179, 2190
Signorello, Rosario–2295
Sikarwar, Nidhi–2101
Silbert, Noah H.–Chair Session
5aPPa (2306)
Silbert, Noah H.–2174, 2307
Siliceo, Oscar E.–2268
Sillings, Roy–2212
Simmen, Jeffrey A.–2187
Simmons, James A.–2154, 2272,
Chair Session 1aAB (2073)
Simon, Julianna–2301
Simon, Julianna C.–2249
Simonis, Anne–2153
Simons, Theodore–2276
Simpson, Brian D.–2166
Simpson, Harry–2112
Simpson, Harry J.–2111, 2112
Sirovic, Ana–2148, Cochair Session
3aAB (2184)
Sivaraman, Ganesh–2082
Sivriver, Alina–2279
Skordilis, Zisis Iason–2143
Skowronski, Mark D.–2293
Slaton, William–2127, 2160, 2288,
Cochair Session 4aPAb (2256),
Cochair Session 4pPA (2288)
Smaragdakis, Costas–2120
Smiljanic, Rajka–2109, 2241
Smirnov, Dmitry–2078
Smith, Adam B.–2093
Smith, Anthony R.–2087, 2088
Smith, Chad–2268, 2269
Smith, Cory J.–2208
Smith, Eric–2178
Smith, Jennifer A.–2073
Smith, John D.–2077
Smith, Nathan D.–2128
Smith, Sherri L.–2292
Smith, Silas–2145
Smith, Valerie–2181
Snell, Colton D.–2091
Soles, Lindsey–2307
Sommerfeldt, Scott D.–2199
Sommers, Mitchell–2259, Cochair
Session 4aSCa (2259)
Son, Su-Uk–2298
Song, Aijun–2148
Song, H. C.–2148
Song, Hee-Chun–2148
Song, Heechun–2180
Song, Zhongchang–2075
Sorensen, Mathew–2192
Sorensen, Mathew D.–2192, 2193
Sorenson, Matthew–2193
Souchon, Remi–2279
Soule, Dax C.–2092
Sounas, Dimitrios–2099, 2281
Southall, Brandon L.–2247
Souza, Pamela–2314
Sparrow, Victor–2188, 2197
Sparrow, Victor W.–2200
Speights, Marisha–2082
Spincemaille, Pascal–2144
Spivack, Arthur J.–2156
Sponheim, Nils–2097
Sprague, Mark W.–2276
Srinivasan, Nirmal–2311
Srinivasan, Nirmal K.–2242
Srinivasan, Nirmal Kumar–2242
Stansell, Megan–2242
Stanton, Timothy K.–2187, 2222
Stauffer, Stauffer A.–2186
Stecker, G. Christopher–2198, 2308
Steininger, Gavin–2085, 2269, 2298
Sterling, John–2194
Stewart, Kenneth–2128
Stiles, Timothy–2158
Stilp, Christian–2308, 2310, 2311
Stilp, Christian E.–2198
Stilz, Peter–2091
Stimpert, Alison K.–2247
Stockman, Ida–2312
Stojanovik, Vesna–2312
Stokes, Michael A.–2083, 2314
Story, Brad H.–2272, 2293, 2307
Stott, Alex–2129
Stotts, Steven A.–2297
Stout, Trevor A.–2081, 2100
Straley, Janice–2091, 2185
Stratton, Kelly–2094
Strickland, Elizabeth A.–2306
Strong, John–2244
Strong, William J.–2199
Sturm, Frédéric–2119, 2121
Styler, Will–2083
Subramanian, Swetha–2125
Sucunza, Federico–2277
Sugiyama, Hitoshi–2213
Sü Gül, Zühre–2219
Suits, Joelle I.–2166
Sullivan, Edmund–2213
Summers, Jason E.–2214
Sun, Lin–2318
Sung, Min–2253
Surve, Ratnaprabha F.–2285
Suzuki, Ryota–2074
Svegaard, Signe–2248
Swaim, Zach–2277
Swalwell, Jarred–2095
Swearingen, Michelle E.–2139
Sweeney, James F.–2281
Swift, Hales S.–2081
Szabo, Andrew R.–2153
Szabo, Thomas L.–2249, 2254
Szymczak, William G.–2086
Tabata, Kyohei–2215
Tabatabai, Ameen–2157
Tadayyon, Hadi–2123
Taft, Benjamin N.–2152
Taggart, Rebecca–2094
Taguchi, Kei–2303
Taherzadeh, Shahram–2078, Cochair
Session 2aPA (2138)
Tajima, Keiichi–2146
Takada, Mieko–2174
Takeyama, Yousuke–2168
Talbert, Coretta M.–2127
Talesnick, Lily–2313
Tamaddoni, Hedieh–2301
Tamaddoni, Hedieh A.–2302
Tanaka, Ryo–2215
Tandiono, Tandiono–2289
Tang, Dajun–2225, 2226, 2267,
Chair Session 3pUW (2225)
Tang, Sai Chun–2160
Tanizawa, Yumi–2167
Tanji, Hiroki–2215
Tantibundhit, Charturong–2315
Tao, Sha–2106
Taroudakis, Michael–2120
Tarr, Eric–2262
Tatara, Eric–2172
Tavakkoli, Jahan–2096
Tavossi, Hasson M.–2208
Taylor, Chris–2117
Tebout, Michelle–2128
Teilmann, Jonas–2248
Tennakoon, Sumudu P.–2290
Tenney, Stephen M.–2169
ter Haar, Gail–2220, 2250, 2301
ter Hofstede, Hannah M.–2185
Tewari, Muneesh–2278
Thaden, Joseph J.–2102
Thangawng, Abel L.–2252
Theis, Melissa A.–2133, 2134
Themann, Christa L.–2134
Theobald, Pete D.–2216, 2217
Thiel, Jeff–2192
Thode, Aaron–2091, 2185, 2216
Thode, Aaron M.–2247
Thomas, Derek C.–2081
Thomas, Jean-Hugh–2171
Thomas, Len–2245, 2247, 2248, 2275
Thompson, Charles–2140, 2256,
2289
Thompson, Eric R.–2166
Thompson, Stephen C.–2131, 2252
Thorsos, Eric I.–2297
Tiberi Ljungqvist, Cinthia–2248
Tilsen, Sam–2144, 2176, Chair
Session 2aSC (2143)
Timmerman, Nancy S.–Cochair
Session 3aNS (2203)
Tinney, Charles E.–2101, 2167,
2168
Titovich, Alexey–2078
Titovich, Alexey S.–2098
Titze, Ingo R.–2163, 2259
Tognola, Gabriella–2313
Tohid, Usama–2305
Tokudome, Shinichiro–2137
Tollefsen, Cristina–2298
Tolstoy, Maya–2092
Tong, Bao N.–2138
Too, Gee-Pinn J.–2266
Tougaard, Jakob–2248
Tournat, Vincent–2077
Towne, Aaron–2080, 2081
Tracy, Erik C.–2173
Tran, Duong D.–2093
Tran, Trang–2175
Tregenza, Nick–2248
Trevino, Andrea C.–2211
Treweek, Benjamin C.–2158
Trickey, Jennifer S.–2073
Trone, Marie–2217
Troyes, Julien–2168
Tsutsumi, Seiji–2137, Cochair
Session 2aNSb (2135)
Tsysar, Sergey A.–2191, 2250
Tu, Juan–2095
Tune, Johnathan–2192
Tuomainen, Outi–2262
Turgeon, Christine–2105
Turgut, Altan–2121, 2122
Turnbull, Rory–2172, 2313
Turner, Cathleen–2156
Turo, Diego–2209
Tuttle, Brian C.–2287
Tyack, Peter L.–2247
Tyson, Cassandra–2300
Tyson, Thomas–2089
Ueberfuhr, Margarete A.–2306
Ui, Kyoichi–2137
Ulrich, Timothy J.–2252, 2265
Umemura, Shin-ichiro–2303
Umnova, Olga–2077, 2078, 2098,
Cochair Session 1aNS (2076),
Cochair Session 1pNS (2098)
Urbán, Jorge–2247
Urban, Jocelyn–2281
Urban, Matthew–2159
Urban, Matthew W.–2124
Valero, Henri Pierre–2253
Valero, Henri-Pierre–2213
Vali, Mansour–2312
Van Engen, Kristin–2241
Van Hedger, Stephen C.–2202
Vannier, Michaël–2309
Van Parijs, Sofie–2277
Van Stan, Jarrad H.–2260
Van Uffelen, Lora J.–2118
van Vossen, Robbert–2297
Vasilyeva, Lena–2173
Vatikiotis-Bateson, Eric–2105, 2310
Vavrikova, Marlen–2202
Vecherin, Sergey N.–2138
Venalainen, Kevin–2265
Vergez, Christophe–2283
Verlinden, Chris–2091
Verweij, Martin D.–2097
Vick, Jennell–2143
Vigeant, Michelle C.–2091, 2272,
2304
Vigmostad, Sarah–2224
Vignola, Joseph–2209
Vignola, Joseph F.–2186, 2194
Vignon, Francois–2300
Villa Médina, Francisco–2118
Villanueva, Flordeliza S.–2302
Villermaux, E.–2207
Visser, Fleur–2247
Vitorino, Clebe T.–2305
Vlaisavljevich, Eli–2250
Vogel, Irene–2176
Voix, Jeremie–2134, 2135
Volk, Roger–2112
Volk, Roger R.–2111
von Benda-Beckmann, Alexander
M.–2247
Von Borstel-Luna, Fernando D.–
2118
von Estorff, Otto–2206
Vuillot, François–2168
Wada, Kei–2137
Wage, Kathleen E.–2147
Wahlberg, Magnus–2091
Walden, David–2162
Walker, Bruce E.–2204
Wall, Alan T.–Chair Session 5aNS
(2304)
Wall, Alan T.–2079, 2100, 2102,
2171, Cochair Session 1aPA
(2079), Cochair Session 1pPA
(2100)
Wall, Carrie–2117
Waller, Steven J.–2116, 2270
Wallin, Brenton–2129, 2197
Walsh, Edward J.–2073
Walsh, Timothy–2141
Walton, Joseph P.–2258
Wan, Lin–2317
Wang, Chau-Chang–2179
Wang, Chenghui–2095
Wang, Chunhui–2148
Wang, Delin–2093, 2246, 2298
Wang, Ding–2075
Wang, Jingyan–2148
Wang, Kon-Well–2196
Wang, Lily–2126
Wang, Lily M.–2126, 2183, 2200,
2274, Cochair Session 4aAAa
(2241), Cochair Session 4pAAb
(2273)
Wang, Qi–2159
Wang, Ruijia–2254
Wang, Wenjing–2106
Wang, Xiuming–2254, 2255
Wang, Yak-Nam–2157, 2249, 2251,
2278, 2279
Wang, Yang–2125
Wang, Yen-Chih–2084
Wang, Yi–2144
Wang, Yijie–2133
Wang, Yiming–2139
Wang, Yue–2106
Wang, Zhitao–2075
Ward, Gareth P.–2077
Ward, Michael P.–2276
Warnecke, Michaela–2185
Warnez, Matthew–2280
Warren, Joseph–2185
Warzybok, Anna–2273
Washington, Jonathan N.–2105
Waters, Zachary J.–2111, 2112
Watson, Charles S.–2212, 2308
Webster, Jeremy–2139
Wei, Chong–2075
Weinrich, Till–2164
Weirathmueller, Michelle–2092
Welton, Patrick J.–2179
Wennerberg, Daniel–2248
Werker, Janet F.–2263
Werner, Lynne–2309
Wessells, Hunter–2192
West, James E.–2130
Whalen, Cara–2073
Whalen, Douglas H.–2103
White, Charles E.–2178, 2197
White, Ean–2271
White, Robert D.–2127, 2285
Whiting, Jonathon–2270
Wickline, Samuel–2095, 2264,
2282
Wiggins, Sean M.–2073, 2092,
2118, 2148, 2153
Wilcock, Tom–2090
Wilcock, William S. D.–2092
Wild, Lauren–2185
Williams, James C.–2191
Williams, Kevin–2111, 2268
Williams, Kevin L.–2087, 2110,
2111, 2225, Chair Session
1aUW (2086), Chair Session
4pUW (2296)
Williams, Michael–2286
Williams, Neil–2156
Williams, Neil J.–2156
Wilson, D. Keith–2138
Wilson, David K.–2139
Wilson, Ian–2143
Wilson, Kieth–2203
Wilson, Michael B.–2222
Wilson, Preston S.–2098, 2099,
2166, 2188, 2200, 2207, 2219,
2305, Cochair Session 3aAO
(2187), Cochair Session 3aID
(2197)
Wiseman, Suzi–2305
Withnell, Robert–2199
Withnell, Robert H.–2144, 2306
Wittum, Gabriel–2202
Wixom, Andrew–2194
Wochner, Mark S.–2207, Cochair
Session 3aPA (2205)
Wolff, Daniel M.–2201
Woodstock, Zev C.–2126
Woodworth, Michael–2195
Woolfe, Katherine F.–2149
Woolworth, David S.–2090, 2114,
2271
Worcester, Peter F.–2149, 2155
Worthmann, Brian–2148
Wrege, Peter H.–2275
Wright, Andrew–2248
Wright, Beverly A.–2292
Wright, Lindsay B.–2293
Wright, Neil A.–2262
Wright, Richard–2175, 2314
Wu, Chenhuei–2145
Wu, Juefei–2300
Wu, Kuangcheng–2140
Wu, Sean F.–2171, Chair Session
2aSAb (2142), Chair Session
2pSA (2171)
Wylie, Jennifer–2122
Wylie, Jennifer L.–2121
Xian, Wei–2185
Xiang, Ning–2084, 2162, 2198,
2214, 2219, 2222, Cochair
Session 1aSP (2084)
Xie, Feng–2300
Xie, Zilong–2263, 2314
Xin, Penglai–2255
Xu, Bo–2144
Xu, Jin–2279, 2280
Xu, Zhen–2250, 2251
Xue, Yutong–2183
Yack, Tina M.–2246, Cochair
Session 4aAB (2245), Cochair
Session 4pAB (2275)
Yamaguchi, Tadashi–2123
Yamakawa, Kimiko–2175
Yamamoto, Hiroaki–2255
Yan, Hanbo–2174
Yan, Qingyang–2109
Yanagihara, Eugene–2123, 2157
Yang, Byunggon–2146
Yang, Chung-Lin–2109
Yang, Desen–2189
Yang, Jie–2225, 2297
Yang, Ming–2164
Yang, Shie–2265
Yang, Tsih C.–2147
Yang, Yiing Jang–2316
Yang, Yiqun–2159
Yang, Yuanxiang–2256
Yasuda, Jun–2303
Yeh, Meng-Hsin–2312
Yellepeddi, Atulya–2266
Yi, Dong Hoon–2226
Yi, Han-Gyol–2264
Yi, Hao–2144
Yi, Hoyoung–2109
Yoder, Timothy–2112
Yoder, Timothy J.–2111, 2112
Yoneyama, Kiyoko–2146, 2174
Yonovitz, Al–2145
Yoshioka, Yutoku–2159
Yoshizawa, Shin–2303
Younk, Darrel–2286
Yu, Hsin-Yi–2074
Yuldashev, Petr–2249
Yuldashev, Petr V.–2193, 2250,
2289
Zabolotskaya, Evgenia A.–2158
Zabotin, Nikolai–2156
Zabotin, Nikolay A.–2156
Zabotina, Liudmila–2156
Zagzebski, James–2122
Zaher, Eesha A.–2294
Zahorik, Pavel–2198, 2242
Zanartu, Matias–2260
Zander, Anthony C.–2136
Zang, Xiaoqin–2156
Zartman, David J.–2172
Zayats, Victoria–2175
Zeale, Matt–2185
Zerbini, Alexandre N.–2246,
2277
Zhang, Fawen–2306
Zhang, Mingfeng–2202
Zhang, Tao–2224, 2318
Zhang, Weifeng G.–2316
Zhang, Weifeng Gordon–2317
Zhang, Xiaoming–2124
Zhang, Xiumei–2254
Zhang, Ying–2191
Zhang, YongOu–2224, 2318
Zhang, Yu–2075, 2315
Zhang, Zhaoyan–2259, 2293, 2294,
2295
Zhao, Dan–2288
Zhao, Xiaofeng–2096
Zheng, Fei–2280
Zhong, Pei–2191, 2192, 2221
Zhou, Nina–2205
Zhou, Yinqiu–2255
Zhu, Hongxiao–2075
Zhuang, Hanqi–2073
Ziaei, Ali–2083
Zimman, Lal–2295
Zimmerman, John–2203
Zorgani, Ali–2196, 2279
Zou, Bo–2318
Zurk, Lisa M.–2112, 2147
I N D E X TO A DV E RT I S E R S
Acoustics First Corporation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Cover 2
www.acousticsfirst.com
AFMG Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A1
www.AFMG.eu
Brüel & Kjær . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Cover 4
www.bksv.com
G.R.A.S. Sound & Vibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A3
www.gras.dk
Meyer Sound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A9
meyersound.com
PCB Piezotronics Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Cover 3
www.pcb.com
Scantek, Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A7
www.Scantekinc.com
A DV E RT I S I N G S A L E S O F F I C E
JOURNAL ADVERTISING SALES
Robert G. Finnegan, Director, Journal Advertising
AIP Publishing, LLC
1305 Walt Whitman Road, Suite 300
Melville, NY 11747-4300
Telephone: 516-576-2433
Fax: 516-576-2481
Email: rfinnegan@aip.org
SR. ADVERTISING PRODUCTION MANAGER
Christine DiPasca
Telephone: 516-576-2434
Fax: 516-576-2481
Email: cdipasca@aip.org
THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA
Postmaster: If undeliverable, send notice on Form 3579 to:
ACOUSTICAL SOCIETY OF AMERICA
1305 Walt Whitman Road, Suite 300,
Melville, NY 11747-4300
ISSN: 0001-4966
CODEN: JASMAN
Periodicals Postage Paid at
Huntington Station, NY and
Additional Mailing Offices