This study investigated how auditory and visual information are integrated when presented through an augmented reality device. Subjects were shown a visual cue, a green square displayed at different angles on a Sony SmartEyeGlass, while white noise was played through headphones at angles from +45 to -45 degrees. Subjects reported whether the sound appeared to come from the direction of the visual cue. The results showed that the two audio rendering methods, amplitude-based panning and HRTF-based rendering, did not produce significant differences for the AR device. This suggests that simpler panning methods can be used to reduce computational cost in AR applications with simple visual displays. Future work will explore audio-visual integration in other planes and with more complex visual cues on different AR devices.
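For reference, a minimal sketch of constant-power amplitude panning, the simpler of the two rendering methods compared here, assuming a mono source signal, stereo headphone output, and azimuths restricted to the +45 to -45 degree range used in the study; the function name and parameters are illustrative and not taken from the study's implementation.

```python
import numpy as np

def amplitude_pan(mono, azimuth_deg, max_angle=45.0):
    """Constant-power amplitude panning of a mono signal to stereo.

    azimuth_deg: source angle in degrees, -max_angle (left) to +max_angle (right).
    Returns an (N, 2) array with left and right channels.
    """
    # Map azimuth to a pan position in [0, 1]: 0 = full left, 1 = full right.
    pos = (np.clip(azimuth_deg, -max_angle, max_angle) + max_angle) / (2.0 * max_angle)
    theta = pos * np.pi / 2.0          # angle for the sin/cos (constant-power) pan law
    left_gain = np.cos(theta)          # gains sum to unit power: cos^2 + sin^2 = 1
    right_gain = np.sin(theta)
    return np.stack([mono * left_gain, mono * right_gain], axis=-1)

# Example: one second of white noise panned 30 degrees to the right at 44.1 kHz.
fs = 44100
noise = np.random.default_rng(0).standard_normal(fs).astype(np.float32)
stereo = amplitude_pan(noise, azimuth_deg=30.0)
```

Because this pan law only scales channel gains, it avoids the per-sample filtering that HRTF-based rendering requires, which is the computational saving referred to above.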