Science 1 (P1-P2-New)
Quiz by Trần Thị Hùynh Như
Questions 1–6 refer to the following passage.

Q1 (300s)
In 1971 researchers hoping to predict earthquakes in the short term by identifying precursory phenomena (those that occur a few days before large quakes but not otherwise) turned their attention to changes in seismic waves that had been detected prior to earthquakes. An explanation for such changes was offered by “dilatancy theory,” based on a well-known phenomenon observed in rocks in the laboratory: as stress builds, microfractures in rock close, decreasing the rock’s volume. But as stress continues to increase, the rock begins to crack and expand in volume, allowing groundwater to seep in, weakening the rock. According to this theory, such effects could lead to several precursory phenomena in the field, including a change in the velocity of seismic waves, and an increase in small, nearby tremors.
Researchers initially reported success in identifying these possible precursors, but subsequent analyses of their data proved disheartening. Seismic waves with unusual velocities were recorded before some earthquakes, but while the historical record confirms that most large earthquakes are preceded by minor tremors, these foreshocks indicate nothing about the magnitude of an impending quake and are indistinguishable from other minor tremors that occur without large earthquakes.
In the 1980s, some researchers turned their efforts from short-term to long-term prediction. Noting that earthquakes tend to occur repeatedly in certain regions, Lindh and Baker attempted to identify patterns of recurrence, or earthquake cycles, on which to base predictions. In a study of earthquake-prone sites along the San Andreas Fault, they determined that quakes occurred at intervals of approximately 22 years near one site and concluded that there was a 95 percent probability of an earthquake in that area by 1992. The earthquake did not occur within the time frame predicted, however.
Evidence against the kind of regular earthquake cycles that Lindh and Baker tried to establish has come from a relatively new field, paleoseismology. Paleoseismologists have unearthed and dated geological features such as fault scarps that were caused by earthquakes thousands of years ago. They have determined that the average interval between ten earthquakes that took place at one site along the San Andreas Fault in the past two millennia was 132 years, but individual intervals ranged greatly, from 44 to 332 years.
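A quick arithmetic check of the paleoseismologists’ figures (a study note added here, not part of the original passage): ten earthquakes give nine intervals, so the stated 132-year average implies a total span of

$$9 \times 132 \text{ yr} = 1188 \text{ yr} < 2000 \text{ yr},$$

which fits within the stated two millennia, while the spread of individual intervals, from 44 to 332 years around that mean, shows why a fixed-cycle prediction of the kind Lindh and Baker attempted can miss its window by decades.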
The passage is primarily concerned with
describing the development of methods for establishing patterns in the occurrence of past earthquakes
suggesting that accurate earthquake forecasting must combine elements of long-term and short-term prediction
challenging the usefulness of dilatancy theory for explaining the occurrence of precursory phenomena
explaining why one method of earthquake prediction has proven more practicable than an alternative method
discussing the deficiency of two methods by which researchers have attempted to predict the occurrence of earthquakes
Q2 (45s)
According to the passage, laboratory evidence concerning the effects of stress on rocks might help account for
differences in magnitude among earthquakes
differences in the frequency with which earthquakes occur in various areas
variations in the intervals between earthquakes in a particular area
the unreliability of short-term earthquake predictions
certain phenomena that occur prior to earthquakes
Q3 (45s)
It can be inferred from the passage that one problem with using precursory phenomena to predict earthquakes is that minor tremors
are not always followed by large earthquakes
are directly linked to the mechanisms that cause earthquakes
are difficult to distinguish from major tremors
have proven difficult to measure accurately
typically occur some distance from the sites of the large earthquakes that follow them
Q4 (45s)
According to the passage, some researchers based their research about long-term earthquake prediction on which of the following facts?
Changes in the volume of rock can occur as a result of building stress and can lead to the weakening of rock.
The historical record confirms that most earthquakes have been preceded by minor tremors.
Some regions tend to be the site of numerous earthquakes over the course of many years.
The average interval between earthquakes in one region of the San Andreas Fault is 132 years.
Paleoseismologists have been able to unearth and date geological features caused by past earthquakes.
Q5 (45s)
The passage suggests which of the following about the paleoseismologists’ findings described in the final paragraph?
They indicate that researchers attempting to develop long-term methods of earthquake prediction have overlooked important evidence concerning the causes of earthquakes.
They suggest that researchers may someday be able to determine which past occurrences of minor tremors were actually followed by large earthquakes.
They suggest that paleoseismologists may someday be able to make reasonably accurate long-term earthquake predictions.
They suggest that the recurrence of earthquakes in earthquake-prone sites is too irregular to serve as a basis for earthquake prediction.
They suggest that the frequency with which earthquakes occurred at a particular site decreased significantly over the past two millennia.
Q6 (45s)
The author implies which of the following about the ability of the researchers mentioned at the beginning of the third paragraph to predict earthquakes?
They can determine the regions where earthquakes have occurred in the past but not the regions where they are likely to occur in the future.
They can identify the regions where earthquakes are likely to occur but not when they will occur.
They can identify when an earthquake is likely to occur but not how large it will be.
They are unable to determine either the time or the place that earthquakes are likely to occur.
They are likely to be more accurate at short-term earthquake prediction than at long-term earthquake prediction.
Questions 7–11 refer to the following passage.

Q7 (120s)
Suppose we were in a spaceship in free fall, where objects are weightless, and wanted to know a small solid object’s mass. We could not simply balance that object against another of known weight, as we would on Earth. The unknown mass could be determined, however, by placing the object on a spring scale and swinging the scale in a circle at the end of a string. The scale would measure the tension in the string, which would depend on both the speed of revolution and the mass of the object. The tension would be greater, the greater the mass or the greater the speed of revolution. From the measured tension and speed of whirling, we could determine the object’s mass.
Astronomers use an analogous procedure to “weigh” double-star systems. The speed with which the two stars in a double-star system circle one another depends on the gravitational force between them, which holds the system together. This attractive force, analogous to the tension in the string, is proportional to the stars’ combined mass, according to Newton’s law of gravitation. By observing the time required for the stars to circle each other (the period) and measuring the distance between them, we can deduce the restraining force, and hence the masses.
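To make the analogy concrete, here is a worked sketch (our notation, assuming circular orbits and Newtonian gravity; not part of the original passage). On the whirled scale, the measured tension supplies the centripetal force; in the binary, the gravitational pull plays that role:

$$T = \frac{m v^2}{r}, \qquad F = \frac{G m_1 m_2}{d^2}.$$

Setting the gravitational force equal to the centripetal force required for one revolution in period $P$ at separation $d$ yields the combined mass the astronomers deduce:

$$m_1 + m_2 = \frac{4 \pi^2 d^3}{G P^2}.$$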
It can be inferred from the passage that the two procedures described in the passage have which of the following in common?
They rely on the use of a device that measures tension.
They can only be applied to small solid objects.
Their purpose is to determine an unknown mass.
They involve attraction between objects of similar mass.
They have been applied in practice.
Q8 (30s)
According to the passage, the tension in the string mentioned in the first paragraph is analogous to which of the following aspects of a double-star system?
The gravitational attraction between the stars
The combined mass of the two stars
The amount of time it takes for the stars to circle one another
The distance between the two stars
The speed with which one star orbits the other
Q9 (30s)
Which of the following best describes the relationship between the first and the second paragraph of the passage?
The second paragraph provides evidence to support a claim made in the first paragraph.
The first paragraph describes a hypothetical situation whose plausibility is tested in the second paragraph.
The second paragraph analyzes the practical implications of a methodology proposed in the first paragraph.
The first paragraph provides an illustration useful for understanding a procedure described in the second paragraph.
The first paragraph evaluates the usefulness of a procedure whose application is described further in the second paragraph.
Q10 (30s)
The author of the passage mentions observations regarding the period of a double-star system as being useful for determining
the degree of gravitational attraction between the system’s stars
the time it takes for each star to rotate on its axis
the size of the orbit the system’s two stars occupy
the distance between the two stars in the system
the speed at which the star system moves through space
Q11 (30s)
The primary purpose of the passage is to
point out the conditions under which a scientific procedure is most useful
describe the steps by which a scientific measurement is carried out
contrast two different uses of a methodological approach in science
analyze a natural phenomenon in terms of its behavior under special conditions
explain a method by which scientists determine an unknown quantity
Questions 12–14 refer to the following passage.

Q12 (300s)
Globally, about a third of the food produced for human consumption goes to waste, implying that a third of the water, land use, energy and financial resources that go into producing it are also squandered. Yet people often think of food as environmentally benign because it is biodegradable, while labeling food packaging as a wasteful use of resources leading to nothing but more pollution, despite the reality that the energy that goes into packaging makes up a mere 10% of the total energy that goes into producing, transporting, storing and preparing food. Needless to say, their view ignores the negative impact of food production, supply, and consumption, and the benefits possible from the right kind of food packaging.
Indeed, the dislike for food packaging is not all baseless. There is a lot of bad and wasteful packaging out there. But any assessment of its impact on the environment must take into account the benefits, in the form of reduced food waste, that can be realized by protecting and dispensing food properly. For instance, two percent of the milk produced in the US goes bad on supermarket shelves before it can be purchased. This dairy waste can be avoided with packaging technology such as Tetra Pak, which keeps milk from spoiling even without refrigeration. However, environmentally aware consumers tend to dislike Tetra Pak material because they think it cannot be recycled. The truth, however, is that it can be recycled, but the process is rather complicated. Irrespective of the recycling aspect, Tetra Pak is a good environmental bet because it can extend the shelf life of milk up to nine months, reducing the need for refrigeration and the amount of milk that goes bad on retail shelves. Clearly, the environmental benefit of the food-protection technology outweighs the negative impact of the packaging itself.
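One way to formalize the author’s closing trade-off (an illustrative reading with invented symbols, not figures from the passage): let $E$ be the total energy embodied in a unit of food, of which packaging contributes roughly $0.1E$. If that packaging prevents a fraction $s$ of the food from spoiling, it saves about $sE$, so the packaging is a net environmental gain whenever

$$sE > 0.1E \quad\Longleftrightarrow\quad s > 10\%,$$

a threshold well below the roughly one-third of food the first paragraph says currently goes to waste.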
The author is primarily concerned with
citing an example of a belief that is not entirely baseless
presenting a more complete picture of a situation and suggesting a radical solution to the problem
attacking a mindset that has no empirical basis
summarizing the negative impacts of an industry, the effects of which people tend to ignore
arguing against a popular belief
Q13 (30s)
Which of the following statements can be derived from the passage?
No biodegradable substance can be labelled as completely benign for the environment.
The complexity involved in the process of recycling Tetra Pak is the reason behind the material’s limited popularity with environmentally aware consumers.
The popularity of Tetra Pak in the packaging industry would increase manifold if environmentally aware customers changed their opinion about it.
It is likely that developed countries, which use a lot more food packaging material than developing countries, have lower rates of food wastage than developing countries.
In some cases, the recyclability of a material is not the overriding factor in determining its impact on the environment.
Q14 (30s)
Which of the following is the function of the first paragraph in the passage?
To introduce a view that is responsible for a significant proportion of wastage in an industry
To contrast two views on a highly debated topic
To state a situation that has severe damaging effects on the environment
To raise a few considerations against a popular belief
To highlight that a popular belief, although credible sometimes, does not take into account the full situation