To achieve adaptive user interfaces (UIs) for smartphones, researchers have been developing sensing methods to detect how a user is holding a smartphone. A variety of promising adaptive UIs have been demonstrated, such as those that automatically switch the displayed content and reposition interactive components according to how the phone is being held. In this paper, we present a follow-up study on ReflecTouch, a state-of-the-art grasping posture detection method proposed by Zhang et al. that uses corneal reflection images captured by the front camera of a smartphone. We extend the previous work by investigating the method's performance and potential challenges in actual use through a crowdsourced experiment with a large number of participants.