# Sample Size Calculation In Cross Sectional Studies Pdf Download

I receive a lot of queries about sample size calculation on this article. Recently, someone asked how to calculate a sample size with Epi Info 7. Since many more readers would benefit from a public response, I am answering it here.


Epi Info provides easy data-entry form and database construction, a customized data entry experience, and data analysis with epidemiologic statistics, maps, and graphs for public health professionals who may lack an information technology background. It also includes a tool for sample size calculation.

Dear Dr Roopesh, first of all thank you for your immediate and polite response. I am confused about the calculation in this scenario: when I enter the outcome in the unexposed group and the other required values, without entering the outcome in the exposed group, I get a sample size I want to proceed with; but if I enter the outcome in the exposed group, the sample size is extremely small. What shall I do?

So if the researcher is interested in knowing the average systolic blood pressure in the pediatric age group of that city, with a 5% type I error, a precision of 5 mmHg on either side (more or less than the mean systolic BP), and a standard deviation of 25 mmHg based on previously done studies, then the formula for sample size calculation will be n = Z²σ²/d² = (1.96)² × (25)² / (5)² ≈ 96, rounded up to 97 children.
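As a quick check, this calculation can be scripted. The sketch below uses only the figures from the example above (Z = 1.96 for a 5% type I error, SD = 25 mmHg, precision d = 5 mmHg); the function name is mine, not from any particular package.

```python
import math

def sample_size_mean(z: float, sd: float, d: float) -> int:
    """Sample size for estimating a mean: n = Z^2 * sd^2 / d^2, rounded up."""
    return math.ceil((z ** 2) * (sd ** 2) / (d ** 2))

# Values from the blood-pressure example: 95% confidence, SD 25 mmHg, precision 5 mmHg
n = sample_size_mean(z=1.96, sd=25, d=5)
print(n)  # 97
```

Rounding up rather than to the nearest integer is deliberate: a fractional requirement of 96.04 participants can only be met by enrolling 97.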

To calculate this adequate sample size there is a simple formula; however, selecting values for the assumptions it requires raises practical issues, and in some situations the decision to select appropriate values for these assumptions is not simple (3). The following simple formula is used for calculating the adequate sample size in a prevalence study (4): n = Z²P(1−P)/d², where n is the sample size, Z is the statistic corresponding to the level of confidence, P is the expected prevalence (which can be obtained from similar studies or a pilot study conducted by the researchers), and d is the precision (corresponding to effect size).
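The prevalence formula can be applied the same way. In the sketch below, the expected prevalence of 20% and precision of ±5 percentage points are illustrative assumptions of mine, not values from the text; the worst-case check at P = 0.5 shows why that value is sometimes used when no prior estimate exists.

```python
import math

def sample_size_prevalence(z: float, p: float, d: float) -> int:
    """Sample size for a prevalence study: n = Z^2 * P * (1 - P) / d^2, rounded up."""
    return math.ceil((z ** 2) * p * (1 - p) / (d ** 2))

# Illustrative values: 95% confidence (Z = 1.96), expected prevalence 20%, precision 5%
n = sample_size_prevalence(z=1.96, p=0.20, d=0.05)
print(n)  # 246

# P(1 - P) is largest at P = 0.5, so that choice gives the most conservative n
n_worst = sample_size_prevalence(z=1.96, p=0.50, d=0.05)
print(n_worst)  # 385
```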

Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research. Nevertheless, research shows that sample size sufficiency reporting is often poor, if not absent, across a range of disciplinary fields.

We recommend, firstly, that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy. Secondly, we invite researchers critically to consider how saturation parameters found in prior methodological studies and sample size community norms might best inform, and apply to, their own project and encourage that data adequacy is best appraised with reference to features that are intrinsic to the study at hand. Finally, those reviewing papers have a vital role in supporting and encouraging transparent study-specific reporting.

Other work has sought to examine practices of sample size reporting and sufficiency assessment across a range of disciplinary fields and research domains, from nutrition [34] and health education [32], to education and the health sciences [22, 27], information systems [30], organisation and workplace studies [33], human computer interaction [21], and accounting studies [24]. Others investigated PhD qualitative studies [31] and grounded theory studies [35]. Incomplete and imprecise sample size reporting is commonly pinpointed by these investigations whilst assessment and justifications of sample size sufficiency are even more sporadic.

Similarly, fewer than 10% of articles in organisation and workplace studies provided a sample size justification relating to existing recommendations by methodologists, prior relevant work, or saturation [33], whilst only 17% of focus groups studies in health-related journals provided an explanation of sample size (i.e. number of focus groups), with saturation being the most frequently invoked argument, followed by published sample size recommendations and practical reasons [22]. The notion of saturation was also invoked by 11 out of the 51 most highly cited studies that Guetterman [27] reviewed in the fields of education and health sciences, of which six were grounded theory studies, four phenomenological and one a narrative inquiry. Finally, analysing 641 interview-based articles in accounting, Dai et al. [24] called for more rigor since a significant minority of studies did not report precise sample size.

Despite increasing attention to rigor in qualitative research (e.g. [52]) and more extensive methodological and analytical disclosures that seek to validate qualitative work [24], sample size reporting and sufficiency assessment remain inconsistent and partial, if not absent, across a range of research domains.

A structured search for articles reporting cross-sectional, interview-based qualitative studies was carried out and eligible reports were systematically reviewed and analysed employing both quantitative and qualitative analytic techniques.

Ten (47.6%) of the 21 BMJ studies, 26 (49.1%) of the 53 BJHP papers and 24 (17.1%) of the 140 SHI articles provided some sort of sample size justification. As shown in Table 2, the majority of articles which justified their sample size provided one justification (70% of articles); fourteen studies (25%) provided two distinct justifications; one study (1.7%) gave three justifications and two studies (3.3%) expressed four distinct justifications.

The qualitative content analysis of the scientific narratives identified eleven different sample size justifications. These are described below and illustrated with excerpts from relevant articles. By way of a summary, the frequency with which these were deployed across the three journals is indicated in Table 3.

It has previously been recommended that qualitative studies require a minimum sample size of at least 12 to reach data saturation (Clarke & Braun, 2013; Fugard & Potts, 2014; Guest, Bunce, & Johnson, 2006). Therefore, a sample of 13 was deemed sufficient for the qualitative analysis and scale of this study. (BJHP50)

The present study sought to examine how qualitative sample sizes in health-related research are characterised and justified. In line with previous studies [22, 30, 33, 34] the findings demonstrate that reporting of sample size sufficiency is limited; just over 50% of articles in the BMJ and BJHP and 82% in the SHI did not provide any sample size justification. Providing a sample size justification was not related to the number of interviews conducted, but it was associated with the journal that the article was published in, indicating the influence of disciplinary or publishing norms, also reported in prior research [30]. This lack of transparency about sample size sufficiency is problematic given that most qualitative researchers would agree that it is an important marker of quality [56, 57]. Moreover, and with the rise of qualitative research in social sciences, efforts to synthesise existing evidence and assess its quality are obstructed by poor reporting [58, 59].

Cox or Poisson regression with robust variance and log-binomial regression provide correct estimates and are a better alternative for the analysis of cross-sectional studies with binary outcomes than logistic regression, since the prevalence ratio is more interpretable and easier to communicate to non-specialists than the odds ratio. However, precautions are needed to avoid estimation problems in specific situations.

Epidemiologic studies found in the literature are frequently cross-sectional, as this is a simple, fast and inexpensive design alternative. Often the outcomes are binary, and logistic regression is used for the analysis. This results in the odds ratio being frequently reported in situations where incidence or prevalence ratios are estimable, despite the fact that it is "biologically interpretable only insofar as it estimates the incidence-proportion or incidence-density ratio" [1].
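The divergence between the two effect measures is easy to demonstrate from a 2×2 table. The counts below are invented for illustration, with a deliberately common outcome, since that is exactly the situation where the odds ratio drifts away from the prevalence ratio.

```python
# Hypothetical 2x2 table with a common outcome:
#              outcome+   outcome-   total
# exposed          80         20       100
# unexposed        40         60       100
a, b = 80, 20   # exposed: with / without outcome
c, d = 40, 60   # unexposed: with / without outcome

# Prevalence ratio: ratio of outcome proportions in exposed vs unexposed
prevalence_ratio = (a / (a + b)) / (c / (c + d))   # 0.80 / 0.40 = 2.0

# Odds ratio: cross-product ratio of the table
odds_ratio = (a * d) / (b * c)                     # (80*60) / (20*40) = 6.0

print(prevalence_ratio, odds_ratio)
```

With an 80% vs 40% prevalence, the prevalence ratio of 2.0 says the outcome is twice as frequent among the exposed, while the odds ratio of 6.0, if read naively as a risk ratio, would overstate the association threefold.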

In a survey done by the authors of original articles published in 1998 in the International Journal of Epidemiology and in the Revista de Saúde Pública (São Paulo, Brazil), 221 articles were found. Among these, 110 (50%) were based on cross-sectional studies and 45 (20%) on longitudinal studies. Logistic regression was used for the analysis of 37 (34%) and 10 (22%) of these studies, respectively. An important proportion of such studies therefore end up reporting odds ratios, the effect measure yielded by logistic regression, rather than prevalence or incidence ratios.

When a constant risk period is assigned to everyone in the cohort, the hazard rate ratio estimated by Cox regression equals the cumulative incidence ratio in longitudinal studies, or the prevalence ratio in cross-sectional studies [17, 18]. Although this model can produce correct point estimates, the underlying distribution of the response is Poisson. As prevalence data in a cross-sectional study follow a binomial distribution, the variance of the coefficients tends to be overestimated, resulting in wider confidence intervals than those based on the binomial distribution. This is easily seen by comparing the binomial variance, p(1−p), which reaches its maximum of 0.25 at p = 0.5, with the Poisson variance, λ, which grows steadily with the intensity of the process. That is, the variance estimated by the Poisson model will be very close to the binomial variance when the outcome is rare, but will be increasingly greater as the outcome becomes more frequent. In such a situation we have underdispersion, the opposite of the more commonly observed overdispersion, where the data are more dispersed than the model predicts.
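The variance comparison above can be made concrete. For a single binary observation with mean p, the ratio of the Poisson variance (λ = p) to the binomial variance (p(1 − p)) is 1/(1 − p), which is close to 1 for rare outcomes and grows as the outcome becomes frequent; the grid of p values below is just for illustration.

```python
# Compare binomial and Poisson variances for a single observation with mean p.
# The Poisson model's variance excess, 1 / (1 - p), is the underdispersion factor.
for p in (0.01, 0.10, 0.50, 0.80):
    binom_var = p * (1 - p)
    poisson_var = p
    print(f"p={p:.2f}  binomial={binom_var:.4f}  poisson={poisson_var:.2f}  "
          f"ratio={poisson_var / binom_var:.2f}")
```

At p = 0.01 the two variances are nearly identical, while at p = 0.80 the Poisson variance is five times the binomial one, which is why the robust ("sandwich") variance correction mentioned earlier is needed for common outcomes.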

A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial, and the funding groups were compared on the above parameters.