The document proposes a hierarchical approach to planning in partially observable Markov decision processes (POMDPs), addressing the exponential growth of planning complexity with problem size. The full problem is decomposed into a set of smaller, related POMDPs, each restricted to a subset of the full action space, and a structural constraint is imposed on the policy. This decomposition exploits structure in the problem domain to find near-optimal policies at far lower computational cost. As an example, a medical-assistance POMDP is decomposed into controllers responsible for different areas and different classes of actions. Each abstract action available to the top level is modeled by the local policy of the corresponding low-level controller, and the value function of the top-level controller provides an upper bound on the value of the resulting approximation.
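The structure described above can be sketched in code. This is a minimal, hypothetical illustration, not the paper's implementation: each subcontroller owns a subset of the action space and a local policy represented here by alpha-vectors (a common point-based POMDP representation), and the top level treats each subcontroller as one abstract action, taking the maximum of the local values as its (upper-bounding) value. All names, states, and numbers are invented for the example.

```python
import numpy as np

class SubController:
    """A low-level controller over a subset of the full action space.

    Its local policy is represented by alpha-vectors (one hyperplane
    over belief space per local action), as in point-based POMDP solvers.
    """
    def __init__(self, name, actions, alpha_vectors):
        self.name = name
        self.actions = actions        # subset of the full action space
        self.alphas = alpha_vectors   # {action: np.ndarray over states}

    def value_and_action(self, belief):
        # Local policy: pick the action whose alpha-vector gives the
        # highest expected value under the current belief.
        best = max(self.actions, key=lambda a: belief @ self.alphas[a])
        return belief @ self.alphas[best], best

class TopLevelController:
    """Treats each subcontroller's local policy as one abstract action.

    Its value is the max over local values, which upper-bounds the value
    of committing to any single subcontroller from the current belief.
    """
    def __init__(self, subcontrollers):
        self.subs = subcontrollers

    def act(self, belief):
        values = {s.name: s.value_and_action(belief)[0] for s in self.subs}
        chosen = max(self.subs, key=lambda s: values[s.name])
        _, primitive = chosen.value_and_action(belief)
        return chosen.name, primitive, max(values.values())

# Toy medical-assistance example (hypothetical numbers): two states
# ("patient ok", "patient needs help") and two subcontrollers.
navigate = SubController("navigate", ["move", "wait"],
                         {"move": np.array([0.2, 0.9]),
                          "wait": np.array([0.5, 0.1])})
remind   = SubController("remind", ["speak", "listen"],
                         {"speak": np.array([0.1, 0.8]),
                          "listen": np.array([0.6, 0.3])})

top = TopLevelController([navigate, remind])
belief = np.array([0.3, 0.7])  # belief leans toward "needs help"
abstract, primitive, upper = top.act(belief)
print(abstract, primitive, round(upper, 2))  # → navigate move 0.69
```

Executing the top-level policy here means delegating to the chosen subcontroller, which then emits a primitive action from its own restricted action set; the top-level value never falls below any subcontroller's local value, which is the sense in which it upper-bounds the approximation.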