
Lexical Development in Students Writing While Progressing from B1 to B2 Level of CEFR




  1. Abstract
  2. Introduction
  3. Methods
  4. Results
  5. Discussion
  6. Conclusions
  7. Bibliography

Abstract

The present study aims to describe the characteristic features of lexical development in students of English taking the B2-level course. Five lexical indices were analyzed and compared with published benchmark indicators of lexical development. The methodology followed the model proposed by Creswell and Plano Clark, in which qualitative and quantitative data strands are collected concurrently and analyzed independently; the data are then combined at the interpretation stage. The results highlight lexical diversity and fundamentality as the crucial variables for proposing new pedagogical interventions in the students' learning process, especially for lexical development. As students progress from the B1 to the B2 level of the Common European Framework of Reference, the index of fundamentality decreases, which indicates the importance of taking pedagogical control of this variable. Lexical diversity benchmark indicators for the B1 and B2 levels help to monitor the students' progress during the course and the quality of the criterion-referenced assessment of the writing skill.

Key words

Lexical development, diversity, fundamentality, complexity, density, CEFR

Introduction

The English Language major (with French as a second foreign language) has decided to align all English language courses with the assessment system proposed by the Council of Europe in its Common European Framework of Reference (CEFR). In particular, first-year students, who require a B1 level to enroll, take the subject English Language I. This subject is divided into four components corresponding to the four basic skills. For the speaking skill, the students take the Face2Face intermediate and upper-intermediate courses. For listening and reading, they take First Certificate in English; for writing, they take a course called Practical Writer, in which they are trained in writing essays and reports; reviews have also been added. Moreover, the learning management system, Moodle, has allowed teachers to collect the students' pieces of writing. Thus, the overall aim of this study is to measure lexical variables that describe the students' progress towards the B2 level of the CEFR. To achieve this aim, we asked the following research questions:

To what extent do the indices of lexical sophistication describe the progress towards the B2 level of the CEFR for first-year students of English at Universidad Central Marta Abreu de Las Villas?

Are the values for each lexical index of the dataset close to other published values of lexical sophistication for the B1 and B2 levels of the CEFR?

Which lexical indices best describe the educational intervention required for the students to improve their writing and achieve a B2 level of the CEFR?

Methods

To fulfill the proposed aims, we combined qualitative and quantitative methods following the model proposed by Creswell and Plano Clark, in which qualitative and quantitative data strands are collected concurrently and analyzed independently; the data are then combined at the interpretation stage (Creswell & Plano Clark, 2011). Thus, researchers can build up a thorough description of learning, teaching, and assessment to make appropriate decisions in the educational scenario (Docherty, Gratacós Casacubieta, Rodriguez Pazos, & Canosa, 2014). See Figure 1 below.

Figure 1. Adaptation of the model proposed by Creswell and Plano Clark


The study included 52 students. Their essays were chosen from three small groups of students who had taken the first English language course. The 52 collected essays were converted to .txt format and uploaded into the UAM Corpus Tool for POS parsing (O'Donnell, n.d.). The researchers decided to analyze the following lexical indices: lexical diversity, density, fundamentality, complexity, and noun orientation.

First, the .txt files were parsed using the UAM Corpus Tool so that lexical diversity, density, and complexity could be calculated. Afterwards, the files were processed using Anthony's tools, especially AntConc and AntWordProfiler (Anthony, 2014). The different word lists considered illustrated lexical sophistication in terms of the index of fundamentality for the dataset. Finally, the parsing results from the UAM Corpus Tool were used to calculate noun orientation (nouniness).

Text size negatively influences the indices of lexical diversity, especially the TTR. To reduce this effect, all texts were trimmed to an equal size (300 words). They were processed using Gramulator v6.2 to obtain the Measure of Text Lexical Diversity (MTLD), which has been identified as the most appropriate measure for reducing the text-size effect (Fergadiotis, Wright, & Green, 2015). For comparison with another dataset, Herdan's C was also calculated, since this was the measure used by Ishikawa (2015).
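Conceptually, MTLD measures how many words a text can sustain, on average, before its running type-token ratio falls to a fixed threshold (0.72 by default). The following is a minimal Python sketch of that logic; the study itself used Gramulator, not this code, and the function names here are our own illustration.

```python
def _mtld_pass(tokens, threshold=0.72):
    """One directional pass of MTLD: count 'factors', i.e. stretches of
    text over which the running type-token ratio stays above the threshold."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:
            factors += 1              # segment complete: TTR fell to the threshold
            types, count = set(), 0   # start a new segment
    if count:                         # partial credit for the unfinished segment
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors else 0.0

def mtld(tokens, threshold=0.72):
    """MTLD is the mean of a forward and a backward pass over the text."""
    return (_mtld_pass(tokens, threshold) +
            _mtld_pass(tokens[::-1], threshold)) / 2
```

A maximally repetitive text (one word repeated) yields the floor value, while a text that keeps introducing new types scores much higher, which is why MTLD is far less sensitive to text length than the raw TTR.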

Descriptive statistics allowed the researchers to subject unexpected results to a qualitative analysis. These qualitative procedures are referred to below.

The size effect generates construct-irrelevant variance and misleading results (Koizumi, 2012). Through a literature review, the researchers of this study decided to use MTLD indices so that texts diverging significantly from the MTLD means (both in the global sample and in the end-of-course texts) could be analyzed qualitatively one by one.

This text-by-text analysis led the researchers to crucial decisions for improvements in the learning context.

Results

The dataset was processed using AntWordProfiler. For the first word list, the 52 files contain 22183 tokens (80.21%), 1745 types (41.12%), and 845 families. The second word list has 2205 tokens (7.97%), 844 types (19.89%), and 547 families. The third word list consists of 916 tokens (3.31%) and 384 types (9.05%). It is also important to highlight that 2352 tokens (8.5% of the total) were not in the BNC word lists; these comprise 1271 types (29.95%).

Table 1


Lexical diversity

All texts were trimmed to 300 words using Gramulator in a pre-assessment phase. The calculation of the Measure of Text Lexical Diversity (MTLD) was carried out through the Evaluator module. The dataset averaged a raw MTLD of 89.86275 with a standard deviation of 17.195. For comparison with Ishikawa's dataset, Herdan's C was also calculated, yielding average values of 0.87 for the dataset (N=52) and 0.83 for the end-of-course texts (N=21). We also calculated Herdan's C (0.8164) for the Philippine B2-level texts (N=20) chosen from Ishikawa's corpus. Ishikawa's dataset consists of texts within the range of 250 to 300 words; this text length depended on the prompts used to elicit two specific themes, which greatly reduces the size effect for Herdan's C. However, the average Herdan's C for our dataset is affected by the size effect, since the prompts for the writing tasks were diverse and covered different periods within the second semester. For instance, the Herdan's C average for the 52 files was 0.87.
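Herdan's C itself is straightforward to compute: the logarithm of the number of types divided by the logarithm of the number of tokens. A minimal sketch, assuming the text is already tokenized and case-normalized (the study obtained these values with corpus tools, not this code):

```python
import math

def herdans_c(tokens):
    """Herdan's C: log(types) / log(tokens), a TTR variant that is
    less sensitive to text length than the raw type-token ratio."""
    n, v = len(tokens), len(set(tokens))
    if n < 2:
        raise ValueError("need at least two tokens")
    return math.log(v) / math.log(n)
```

A text in which every token is a new type yields C = 1.0; heavy repetition pushes the value down, which is why the B1-to-B2 benchmark values cluster in the 0.77-0.87 range reported above.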

Table 2

             N     Minimum   Maximum   Mean     STDEV
Herdan's C   52    0.83      0.90      0.8726   0.01727
N valid      52

Lexical density

This measure describes the ratio of lexical, open-class words (i.e., content words such as nouns, verbs, adjectives, and some adverbs) to the whole vocabulary. We interpreted it in two ways: as an index of information orientation and as an index of lexical easiness/immaturity (Ishikawa, 2015, pp. 203-204). The average lexical density index for the dataset (49.53) approaches that of native speakers' writing (48.90). Ishikawa obtained values for B2-level students from China, Japan, Taiwan, and India averaging 50.35.
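The computation reduces to counting open-class words over all tokens in POS-tagged text. The sketch below uses the Universal POS labels (NOUN, VERB, ADJ, ADV) purely for illustration; the study relied on the UAM Corpus Tool's own tagging, and the function name is our assumption.

```python
CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}  # open-class categories

def lexical_density(tagged_tokens):
    """Percentage of open-class (content) words among all tokens.
    tagged_tokens: iterable of (word, pos) pairs from a POS tagger."""
    tagged = list(tagged_tokens)
    if not tagged:
        return 0.0
    content = sum(1 for _, pos in tagged if pos in CONTENT_TAGS)
    return 100.0 * content / len(tagged)
```

Whether adverbs count as content words varies between studies; any replication should state which tag set and which open-class categories it assumes.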

Lexical fundamentality

As expressed by Ishikawa, "lexical fundamentality is a measure based on the ratio of high frequency fundamental words to the whole vocabulary. It is generally used as an index of dependence on basic vocabulary, and consequently non-native-learnerness as well. Hasselgren (1994) maintains that L2 learners, unlike English native speakers, often rely on a relatively limited set of familiar and comfortable words in order to convey various ideas" (Ishikawa, 2015).

High-frequency fundamental words can be identified in many different ways, such as by extracting the top 1000-3000 words from existing word lists or by selecting them directly from corpora. Nation (2012) chose the top 2000 words from a one-million-word corpus composed of speeches and essays in American and British English. The Range software package calculates the ratios of the top 1000, 2000, and 3000 words from several sources. For word list one, the results averaged 80.21. Table 3 shows the results for each word list.
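In essence, fundamentality is the coverage of the text by a high-frequency word list. A minimal sketch, assuming the basic vocabulary is supplied as a plain set of lowercase words (real tools such as Range and AntWordProfiler match whole word families, headword plus inflections, which this surface-form version does not attempt):

```python
def fundamentality(tokens, basic_vocabulary):
    """Percentage of running words (tokens) covered by a high-frequency
    word list, e.g. the top 1000-2000 words of a general service list."""
    if not tokens:
        return 0.0
    covered = sum(1 for t in tokens if t.lower() in basic_vocabulary)
    return 100.0 * covered / len(tokens)
```

Note that coverage is computed over tokens, so every repetition of a covered word counts again, exactly as in the "running words" definition quoted in the Discussion section.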

Table 3. Lexical Fundamentality

Word list          Tokens    %       Types   %       Families
One                22183     80.21   1745    41.12   845
Two                2205      7.97    844     19.89   547
Three              916       3.31    384     9.05
Not in the lists   2352      8.50    1271    29.95
Total              27656             4244            1693

During the text-by-text analysis, word list four was examined; it revealed the students' problems with word collocation and colligation. For example, one student wrote, "…cartoon movies are endless fountains of imagination…" In another file, a student wrote, "Satellites are playing an amazing role in the globe." In file 52 we can read, "Almost every day people deal with irritating pests that constantly stalk them."

The following examples, taken from three different texts, show how a type ("huge") with high frequency in the dataset and found in word list four is incorrectly collocated: "...a huge spectrum of emotions"; "Another side in which these two musical genres are hugely different is the singer's voice"; "…hurricanes can produce huge storms with strong winds and heavy rains."

Lexical complexity

Lexical complexity often entails lexical difficulty, since longer words are generally more difficult to use and understand than shorter ones. Lexical complexity, expressed here as the number of characters per word, averaged 4.6 with a standard deviation of 0.21.
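As defined here, the index is simply the mean word length in characters. A minimal sketch (tokenization and the treatment of punctuation are left to the surrounding pipeline, as in the study's tools):

```python
def lexical_complexity(tokens):
    """Mean word length in characters, the operationalization of
    lexical complexity used in this study."""
    if not tokens:
        return 0.0
    return sum(len(t) for t in tokens) / len(tokens)
```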

Noun orientation or nouniness

This index is usually calculated by dividing the number of nouns by the number of verbs (Biber & Reppen, 1998). Although this noun/verb ratio can be interpreted in different ways, in this study we take it as the degree of formality that characterizes the written texts. The noun/verb ratio for the dataset was 1.54.
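Given POS-tagged text, the ratio is a two-line computation. The sketch below again assumes Universal POS labels for illustration; the study derived its counts from the UAM Corpus Tool parse.

```python
def noun_verb_ratio(tagged_tokens):
    """Nouns divided by verbs; higher values suggest a more formal,
    nominal written style."""
    nouns = sum(1 for _, pos in tagged_tokens if pos == "NOUN")
    verbs = sum(1 for _, pos in tagged_tokens if pos == "VERB")
    return nouns / verbs if verbs else float("inf")
```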

Discussion

Lexical sophistication measures in this study include lexical diversity, lexical density, fundamentality, lexical complexity, and noun orientation. The first of these, lexical diversity, relates to text difficulty and text-base cohesion. As stated by Graesser and McNamara, greater lexical diversity adds difficulty because each unique word needs to be encoded and integrated into the discourse context. Lexical diversity also provides a global measure of the cohesiveness of the text: the lower the lexical diversity, the greater the repetition of terms in the text (Graesser & McNamara, in press).

The lexical diversity measured using Gramulator v6.0 showed descriptive statistics varying from a minimum of 53.242 to a maximum of 123.458. For N=20, the mean value was 83.2245. Although the sample is very small, only 6 of the 20 texts measured above 90 MTLD (raw). These results indicate that the MTLD indices for the final period of the course are similar to measures obtained by other researchers; for example, Shin'ichiro Ishikawa obtained Herdan's C indices of around 0.81 for Chinese learners, 0.85 for Indian learners, 0.77 for Japanese learners, and 0.79 for Taiwanese learners, all at the B2 level of the CEFR (Ishikawa, 2015).

Thomas noted the trend of decreasing type-token ratio from B1 to C2, indicating a 0.47 TTR average for B2, compared with 0.50 for B1 and 0.38 for C1 (Thomas, 2014). However, the high lexical diversity indices obtained in this study indicate an added level of difficulty for each text; moreover, text cohesion is compromised. Fortunately, 38.4% of the sampled texts showed average lexical diversity indices that matched the benchmark indicators for the B2 level of the CEFR. In this study, MTLD averaged 83.70 with a standard deviation of 17.51.

Lexical density is another index of lexical sophistication. Johansson confirms the results of Ure (1971), in which both measures, lexical diversity and density, show a modality effect: they are significantly higher in written discourse. However, genre only affects lexical diversity; lexical density seems indifferent to genre (Johansson, 2008). She also reflects on the developmental patterns of both measures. The result obtained in this study (49.53) approximates Ishikawa's benchmark indicator for B2 (50.35).

Lexical fundamentality illustrates the learners' dependence on basic vocabulary. The average result in this study (81.36) is smaller than that of Ishikawa's study (86.10) for the B2 level of the CEFR (Ishikawa, 2015). Paul Nation insisted on the research evidence supporting the division of vocabulary into a general service list and special-purpose vocabulary. He also notes that the coverage of the General Service List is typically around 75% of the running words in non-fiction texts (Hwang, 1989) and around 90% of the running words in fiction (Hirsh, 1993). "Coverage" refers to the percentage of the running words in a text or corpus that are also in, or covered by, a particular word list. So, if we examined a page of a novel, we might find 300 running words on the page (each repetition of a word already counted is counted as a new running word) and that around 90% of these words were in the General Service List. However, Nation cites Hwang and emphasizes coverage figures of 8.5% for academic texts, 3.9% for newspapers, and 1.7% for fiction (Nation & Kyongho, 1995). Thus, the index of fundamentality deserves close attention from teachers.

The average lexical complexity index in this study (4.6) was slightly higher than the index obtained by Ishikawa (4.48) for the B2 level of the CEFR and than the average of 4.44 for native writers' sampled texts (Ishikawa, 2015). Ji-young Kim affirms that lexical complexity has been an indicator of L2 learners' lexical development and overall writing proficiency (Kim, 2014).

Noun orientation (nouniness) showed positive results, with an average ratio of 1.54, which indicates the level of formality attained by the learners in their essays. However, Moxley, cited by Ishikawa, proposed that in essay writing the ratio functions as an index of dynamic and effective description. Indeed, many style guides advise writers to maintain a high verb/noun ratio in order to imbue their language with "a sense of vigor by eliminating unnecessary nouns and choosing powerful verbs" (Ishikawa, 2015).

Conclusions

The results obtained in this study confirm the validity of the CEFR assessment criteria for writing and the need to enhance the expertise of testers so that greater consistency can be obtained in the assessment results. In this case, 38.4% of the sample texts met the standards for the B2+ level, and the rest fell within the overlapping B1+/B2- band. Of course, this overlap was an expected result given the way the sample was collected.

Lexical diversity and fundamentality were the measures that made the difference in this study. In the case of diversity, the size effect of the sample texts was addressed by first grouping texts according to size to calculate the TTR and Herdan's C indices for each group. MTLD, computed with Gramulator, definitively solved the problem of the text-size effect, and we obtained the desired descriptive statistics for 300-word texts. Fundamentality also helped to describe the students' difficulties with word collocation and colligation when the dataset was subjected to qualitative analysis. Thus, lexical diversity and fundamentality made the difference in evaluating the quality of criterion-referenced assessment of the writing skill in terms of lexical development at different levels of the CEFR, especially B1 and B2.

We propose that teachers involve their students in using tools such as AntWordProfiler to address the detected problems with word collocation and colligation. We also recommend designing feedback exercises with the same aim.

Bibliography

Anthony, L. (2014, September 17). AntConc (Windows, Macintosh OS X, and Linux). Tokyo, Japan: Waseda University.

Biber, D., Conrad, S., & Reppen, R. (1998). Corpus linguistics: Investigating language structure and use. Cambridge: Cambridge University Press.

Creswell, J. W., & Plano Clark, V. L. (2011). Designing and Conducting Mixed Methods Research.

Docherty, C., Gratacós Casacubieta, G., Rodriguez Pazos, G., & Canosa, P. (2014). Investigating the impact of assessment in a single-sex educational setting in Spain. Research Notes, 3-14.

Fergadiotis, G., Wright, H. H., & Green, S. B. (2015). Psychometric evaluation of lexical diversity indices: Assessing length effects. Journal of Speech, Language, and Hearing Research, 1-13.

Graesser, A. C., & McNamara, D. S. (in press). Computational analyses of multilevel discourse comprehension. Topics in Cognitive Science. Memphis: University of Memphis, Psychology Department.

Ishikawa, S. (2015). Lexical development in L2 English learners' speeches and writings. 7th International Conference on Corpus Linguistics: Current Work in Corpus Linguistics (p. 206). Kobe: Elsevier ScienceDirect.

Johansson, V. (2008). Lexical diversity and lexical density in speech and writing: a developmental perspective. Working papers, Lund University, Dept. of Linguistics and Phonetics, Lund.

Kim, J.-y. (2014). Predicting L2 Writing Proficiency Using Linguistic Complexity Measures: A Corpus-Based Study. English teaching , 69, 1-32.

Koizumi, R. (2012). Relationships between text length and lexical diversity measures: Can we use short texts of less than 100 tokens? 1(1), 60-69.

O'Donnell, M. (n.d.). The UAM Corpus Tool: Software for corpus annotation and exploration. Retrieved October 28, 2017, from Wagsoft.com: http://www.wagsoft.com/Papers/AESLA08.pdf

Thomas, R. P. (2014). Academic writing in English: A corpus-based inquiry into the linguistic characteristics of levels B1-C2. Retrieved December 23, 2016, from EALTA.


Author:

Humberto Miñoso Machado, M.A.
Associate Professor
Universidad Central Marta Abreu de Las Villas (UCLV)

Tania Machado Armas, M.A.
Associate Professor
Universidad Central Marta Abreu de Las Villas (UCLV)

