The document presents a methodology for crowdsourcing the assessment of Linked Data quality. The methodology is a two-stage process: a Find stage, in which Linked Data experts identify potential quality issues, and a Verify stage, in which microtask workers on Amazon Mechanical Turk validate those issues. The study applies this methodology to three types of quality problems in DBpedia and analyzes the results in terms of precision. The findings indicate that crowdsourcing is effective for detecting certain quality issues: Linked Data experts are better suited to tasks requiring domain expertise, while microtask workers perform well on data comparison tasks. The conclusions discuss integrating crowdsourcing into Linked Data curation processes and conducting further experiments.
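To make the Verify stage and the precision analysis more concrete, the following minimal Python sketch (not taken from the paper) aggregates microtask workers' answers for each flagged triple by simple majority voting, an assumed aggregation rule, and computes precision against a gold standard of known judgments. The triple identifiers and sample answers are illustrative placeholders, not data from the study.

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate several workers' answers for one flagged triple.

    `answers` is a list of booleans: True if the worker confirmed the
    quality issue, False otherwise. Ties count as unconfirmed.
    """
    counts = Counter(answers)
    return counts[True] > counts[False]

def precision(crowd_verdicts, gold_standard):
    """Precision of crowd-confirmed issues against a gold standard.

    Both arguments map a triple identifier to a boolean indicating
    whether the triple is judged (crowd) or known (gold) to be erroneous.
    Precision is the fraction of crowd-confirmed issues that are
    true issues in the gold standard.
    """
    confirmed = [t for t, flagged in crowd_verdicts.items() if flagged]
    if not confirmed:
        return 0.0
    true_positives = sum(1 for t in confirmed if gold_standard.get(t, False))
    return true_positives / len(confirmed)

# Hypothetical example: three triples flagged in the Find stage,
# each judged by five microtask workers in the Verify stage.
worker_answers = {
    "dbpedia:Berlin/populationTotal": [True, True, True, False, True],
    "dbpedia:Paris/birthPlace":       [True, False, False, False, True],
    "dbpedia:London/areaCode":        [True, True, False, True, True],
}
crowd_verdicts = {t: majority_vote(a) for t, a in worker_answers.items()}

# Hypothetical gold-standard labels for the same triples.
gold = {
    "dbpedia:Berlin/populationTotal": True,
    "dbpedia:Paris/birthPlace":       False,
    "dbpedia:London/areaCode":        True,
}

print(precision(crowd_verdicts, gold))  # 1.0 for this toy sample
```

In this toy run, the Paris triple fails the majority vote and is discarded, so only the two confirmed triples enter the precision calculation; both appear as true issues in the gold standard, giving a precision of 1.0.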