Tuesday, August 31, 2010

Algebra 2 Trigonometry Regents


FRANKFORT, N.Y. (WKTV) - The Senior Vice-President of Texas Instruments paid a visit to Frankfort-Schuyler High School on Tuesday to see firsthand how the company's new advanced calculators are being used.

Texas Instruments calculators have long been among the most important tools for students in 11th and 12th grade mathematics. As technology has changed, however, the company has created newer, more advanced devices.

Mrs. Audrey Cucci's math class at Frankfort-Schuyler is just one of thousands of classrooms across the state that use the new devices.

"It's a special-purpose, handheld device that is aimed specifically at helping students learn math and science," said Melendy Lovett, Senior Vice-President of Texas Instruments.

Cucci's class is preparing for the upcoming Regents exams. From calculus, to algebra, to trigonometry, the 11th graders are interacting with their learning thanks to a new device - the TI Nspire calculator.

Cucci said she just started using the Nspire calculators this year, and that the students are already comfortable with the new devices because they are so used to new technology. She says their ability to adapt to ever-changing technology has shown in their work.

"I have found from not using them last year to using them this year...I have seen a great boost in grades," Cucci said. "My kids are probably, on average, ten points higher than my kids this year."

"What she does and how she incorporates the new technology really enhances the ! classroom," said Dana Morse, the New York State Education Tech! nology C onsultant for Texas Instruments. "And I thought it would be great to show people from Dallas how the technology is being used to reach out to the students and raise the level of the curriculum."


Related Rates Exam Review

SUMMARY
  • Exam Review Intro
  • Related Rates: Area, Perimeter, and Diagonal of a Rectangle
  • Related Rates: Shadows Including Similar Triangles
  • Related Rates: Distance and Velocity and Spongebob!

Exam Review Intro

We do our exam review in three ways:
  1. Doing mini-exams pre-test style
  2. Doing questions in class related to rusty topics
  3. Doing old exam free-response questions, studying how each question evolved over the years, and doing 3 questions a night
In today's class we decided to do #2, reviewing related rates.


Related Rates: Area, Perimeter, and Diagonal of a Rectangle



We are given the length (L), the width (W), and the rates at which the length (dL/dt) and the width (dW/dt) are changing.

By convention, we're going to designate an increasing rate as a positive rate and a decreasing rate as a negative rate.

a) We know the formula for the area of a rectangle as A = L * W, where A is area, L is length, and W is width. Since we're looking for the rates of change (that's the definition of a derivative!), we differentiate the formula with respect to time.

Since L * W is a product, we use The Product Rule to differentiate.

Since area, length, and width are all with respect to time--meaning that area, length, and width are functions of time--we must use The Chain Rule to differentiate.

Answer: dA/dt = 32 cm^2/s; increasing

b) We know the formula for the perimeter of a rectangle as P = 2L + 2W. Differentiate the formula using The Chain Rule, since P, L, and W are all functions of time--a function within a function! Then plug and chug.

Answer: dP/dt = -2 cm/s; decreasing

c) We know that the diagonal (D), the length (L), and the width (W) are related by The Pythagorean Theorem: the sum of the squares of the two legs (in this case, L and W) equals the square of the hypotenuse (D), so D^2 = L^2 + W^2. Differentiate the formula using The Power Rule and The Chain Rule.

Answer: dD/dt = -33/13 cm/s; decreasing
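
For anyone who wants to check the differentiation steps, here is a small sympy sketch. The given numbers for this problem appear in a figure that is not reproduced here, so the sketch only derives the general rate formulas; substituting the original values gives the answers quoted above.

```python
import sympy as sp

t = sp.symbols('t')
L = sp.Function('L')(t)        # length as a function of time
W = sp.Function('W')(t)        # width as a function of time

A = L * W                      # area
P = 2 * L + 2 * W              # perimeter
D = sp.sqrt(L**2 + W**2)       # diagonal, from D^2 = L^2 + W^2

# Differentiating with respect to t applies the product, chain, and power rules.
print(sp.diff(A, t))   # dA/dt = L*dW/dt + W*dL/dt
print(sp.diff(P, t))   # dP/dt = 2*dL/dt + 2*dW/dt
print(sp.diff(D, t))   # dD/dt = (L*dL/dt + W*dW/dt) / sqrt(L^2 + W^2)
```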


Related Rates: Shadows Including Similar Triangles



We know the height of the lamppost (L = 16), the height of the man (M = 6), the rate at which the man walks toward the streetlight (db/dt = -5), and the distance from the man to the base of the lamppost (b = 10).

Since the man is walking toward the lamppost, by convention, the rate is negative.

b) Using similar triangles, we can see that the ratio of L to M equals the ratio of b+s to s. We simplify our proportion and differentiate to determine the rates of change (that's the definition of a derivative!). To differentiate, we use The Product Rule and The Chain Rule--refer to Related Rates: Area, Perimeter, and Diagonal of a Rectangle for reference. We plug in the numbers to obtain ds/dt.

Answer: -3 ft/sec

a) Note this question is underlined in blue. The exam would never ask you part b) on its own, because you need to do part b) anyway to answer part a). The rate of the tip of the man's shadow is dP/dt. To obtain dP/dt, we differentiate P = b + s.

Answer: -8 ft/sec
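
Since all the numbers for this problem are stated above (L = 16, M = 6, db/dt = -5, b = 10), here is a quick sympy check of both parts; the variable names are my own.

```python
import sympy as sp

t = sp.symbols('t')
b = sp.Function('b')(t)      # man's distance from the base of the lamppost
s = sp.Function('s')(t)      # length of the man's shadow

L_post, M_man = 16, 6

# Similar triangles: L / M = (b + s) / s, i.e. L*s = M*(b + s)
relation = sp.Eq(L_post * s, M_man * (b + s))

# Differentiate both sides with respect to time and solve for ds/dt.
rates = sp.Eq(sp.diff(relation.lhs, t), sp.diff(relation.rhs, t))
ds_dt = sp.solve(rates, sp.diff(s, t))[0]        # (3/5) * db/dt

db_dt = -5
print(ds_dt.subs(sp.diff(b, t), db_dt))          # -3 ft/sec, part b)
print(ds_dt.subs(sp.diff(b, t), db_dt) + db_dt)  # -8 ft/sec, part a): dP/dt
```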


Related Rates: Distance and Velocity and Spongebob!

HOMEWORK!



HOUSEKEEPING
  • Next scribe is bench.
  • Wiki constructive modification due Sunday midnight.
  • Developing Expert Voices projects due soon.
  • AP calculus exam is in two weeks.
  • Three AP calculus exam free-response questions per night.
  • Homework: Olympic Spongebob!
  • We will be reviewing applications of derivatives (related rates and optimization), applications of integrals (density and volume), and techniques of antidifferentiation (integration by parts).


How to Solve Problems of Ratio and Proportions

Ratio and Proportions


Problems on ratio and proportion can be solved on the basis of the unitary method. For example: A can do a piece of work in 12 days, and B is 20% more efficient than A. Find the number of days it takes B to do the same piece of work. This means that if A's efficiency is 100, B's is 120. Then we do the calculation like this. We want to find the number of days, so the days which are given are taken as the starting quantity. So we write

12

Then the two efficiencies are 100 and 120. Now if A is 100 and B is 120, what will be the impact on the quantity we want to find, i.e. the number of days? B will certainly take fewer days. So we take 100 in the numerator and 120 in the denominator and write

12 x (100/120) = 10 days.

Taking one more example: if 30 men working 7 hours a day can do a piece of work in 18 days, in how many days will 21 men working 8 hours a day do the same piece of work? Again we think like this: for the quantity (days) that we want to find, we take the corresponding given quantity (18 days) in the numerator. Thus:

18

Now we find the impact of each individual element on the number of days. Taking the men first: there were 30 men, now there are 21. The number of men has decreased, so the number of days will increase, and we take the greater quantity (30) in the numerator and 21 in the denominator:

18 x (30/21)

Taking hours per day: first there were 7 hours per day, now there are 8 hours per day. The number of days to do a particular piece of work will become less as the number of hours per day increases. So we take 7 (the smaller quantity) in the numerator and 8 in the denominator:

18 x (30/21) x (7/8) = 22.5 days.
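
As a sanity check, the bookkeeping described above can be written as a tiny Python helper (the function name is my own); it reproduces both worked examples.

```python
def unitary(days, *factors):
    """Multiply the given number of days by each correction factor (num/den)."""
    result = days
    for numerator, denominator in factors:
        result *= numerator / denominator
    return result

# A takes 12 days; B is 20% more efficient (100 -> 120), so B takes fewer days.
print(unitary(12, (100, 120)))          # 10.0 days

# 30 men at 7 hours/day finish in 18 days; how long for 21 men at 8 hours/day?
print(unitary(18, (30, 21), (7, 8)))    # 22.5 days
```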

Questions can also be asked about specific ratios, e.g.: divide 581 into three parts such that 4 times the first equals 5 times the second and 7 times the third.

In such cases the ratio will be 1/4 : 1/5 : 1/7 = 35 : 28 : 20; then divide the amount in this ratio.
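
A quick check of this example in Python (the helper is mine): the parts come out to 245, 196 and 140, and 4, 5 and 7 times those parts are all equal.

```python
from fractions import Fraction

def divide_in_ratio(total, ratio):
    """Split total into parts proportional to the given ratio terms."""
    s = sum(ratio)
    return [total * r / s for r in ratio]

parts = divide_in_ratio(581, [Fraction(1, 4), Fraction(1, 5), Fraction(1, 7)])
print(parts)                                     # 245, 196, 140
print(4 * parts[0], 5 * parts[1], 7 * parts[2])  # all equal 980
```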

In any two-dimensional figure, if the corresponding sides are in the ratio a : b, then the areas are in the ratio a^2 : b^2. E.g., the sides of a hexagon become three times as long; find the ratio of the areas of the new and the old hexagons. The ratio will be 9 : 1.

Questions are also asked about mixtures, e.g.: a mixture contains milk and water in the ratio 8 : 3. On adding 3 liters of water, the ratio of milk to water becomes 2 : 1. Find the quantity of milk and water in the mixture.

Here we proceed like this:

M : W
8 : 3 <-- Initial ratio
2 : 1 <-- After adding 3 liters of water

Now, since there is no change in the milk, we make the milk term of both ratios equal by multiplying the second ratio by 4. Then it looks like:

M : W
8 : 3 <-- Initial ratio
8 : 4 <-- After adding 3 liters of water
-----
0 : 1 <-- Difference in ratios
-----

Thus an increase of 1 part of water in the ratio corresponds to the 3 liters actually added, so the initial quantity of milk must be (3/1) x 8 = 24 liters and the initial quantity of water (3/1) x 3 = 9 liters.
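
The same mixture problem can be checked directly with sympy, setting milk = 8k and water = 3k (the variable names are mine):

```python
import sympy as sp

k = sp.symbols('k', positive=True)
milk, water = 8 * k, 3 * k                 # initial ratio 8 : 3

# After adding 3 liters of water the ratio becomes 2 : 1.
k_val = sp.solve(sp.Eq(milk / (water + 3), sp.Rational(2, 1)), k)[0]

print(milk.subs(k, k_val), water.subs(k, k_val))   # 24 liters milk, 9 liters water
```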

This method is very powerful and can be used in such situations under varying exam conditions.

Or a question can be asked like this: one-rupee coins, fifty-paise coins and twenty-five-paise coins, whose numbers are proportional to 2½, 3 and 4, are together worth Rs. 210; how many of each are there? Here the ratio of the numbers is 2½ : 3 : 4 = 5 : 6 : 8. Their proportional values are 5 x 1 : 6/2 : 8/4 = 5 : 3 : 2. Now divide Rs. 210 in this ratio: the value of the one-rupee coins is (5/10) x 210 = Rs. 105, so there are 105 one-rupee coins. Similarly, the value of the 50-paise coins is (3/10) x 210 = Rs. 63, so there are 126 fifty-paise coins, and so on.
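
Here is the coin question worked through in Python, following the same 5 : 6 : 8 and 5 : 3 : 2 ratios (the variable names are mine); it also fills in the twenty-five-paise count left above as "and so on".

```python
from fractions import Fraction

numbers = [5, 6, 8]                                           # 2.5 : 3 : 4, scaled up
rupee_values = [Fraction(1), Fraction(1, 2), Fraction(1, 4)]  # value of each coin type

# Total value = t * (5*1 + 6*1/2 + 8*1/4) = 10*t = 210, so t = 21.
t = Fraction(210) / sum(n * v for n, v in zip(numbers, rupee_values))

for n, v in zip(numbers, rupee_values):
    print(n * t, "coins worth Rs.", n * t * v)
# 105 coins worth Rs. 105, 126 coins worth Rs. 63, 168 coins worth Rs. 42
```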

Or it can be: find the number which, when added to both terms of the ratio 11 : 23, makes it equal to the ratio 4 : 7. Here we find a number that we can add to the terms so that the first term becomes a multiple of 4, and then check. Adding 1 to 11 and 23 gives 12 : 24, but that ratio is 1 : 2, not 4 : 7. Then we add 5 to 11 : 23 and find that the numbers become 16 : 28, i.e. the ratio becomes 4 : 7, so the number that should be added is 5.
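
The same question can also be solved directly, rather than by trial, by setting up the proportion in sympy:

```python
import sympy as sp

x = sp.symbols('x')
# (11 + x) / (23 + x) = 4 / 7
print(sp.solve(sp.Eq((11 + x) / (23 + x), sp.Rational(4, 7)), x))   # [5]
```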

Most of the questions in ratio and proportions can be solved by this method.

You can evaluate yourself by taking this test.


Week of May 24-28, 2010

Monday, May 24 - all assignments are due on Tuesday
Hon. Geom. - Review Ch. 10.6 - 10.7 **Study for short test - tomorrow

Hon. Alg. 8 - Solving systems by elimination using multiplication. p.391 #9-14, 17-22

Hon. Alg. II - Review for final exam

PSSA Math 7 - Writing proportions for word problems. Worksheet. Continue to work on corrections


Tuesday, May 25 - all assignments are due on Wednesday
Hon. Geom. - Had a test. **Study for final exam on June 1
Hon. Alg. 8 - Reviewed for final. **Study for final exam - pd. 2 on June 1, pd. 4 on June 3

Hon. Alg. II - Reviewed for final. **Study for final exam - pd. 5 on June 4, pd. 6 on June 2

PSSA Math 7 - Cumulative review. Worksheet. Continue to work on corrections


Wednesday, May 26 - all assignments are due on Thursday
Hon. Geom. - Reviewed last test, reviewed for final. **Final - June 1

Hon. Alg. 8 - Reviewed for final. Worksheet. **Final - pd. 2 on June 1, pd. 4 on June 3

Hon. Alg. II - Reviewed for final. **Final - pd. 5 on June 4, pd. 6 on June 2

PSSA Math 7 - Worked on corrections. Continue to work on corrections


Thursday, May 27 - all assignments are due on Friday
Hon. Geom. - Study for final exam on June 1

Hon. Alg. 8 - Review worksheet. **Final - pd. 2 on June 1, pd. 4 on June 3

Hon. Alg. II - Study for final exam. **Final - pd. 5 on June 4, pd. 6 on June 2

PSSA Math 7 - Worked on corrections. Continue to work on corrections.


Friday, May 28
Hon. Geom. - Study for final on Tuesday

Hon. Alg. 8 - Final - pd. 2 on June 1; pd. 4 on June 3

Hon. Alg. II - Final - pd. 5 on June 4; pd. 6 on June 2

PSSA Math 7 - Worked on corrections. Continue to work on corrections



Complexity Everywhere

I know that whenever I write about TCS politics on this blog, it ends up bad. For instance, I get a comment such as the following one (left by an anonymous to my last post):

What makes it tough for some of the papers you cite is the view that shaving off log factors is often viewed as much less interesting than larger improvements.
This, of course, makes my latin blood run even hotter, and I cannot help writing follow-up posts (this is the first). If only I could keep my promise of not writing about politics, my life would be so much simpler. (If only I could learn from history... I got to observe my father become a leading figure in Romanian Dermatology a decade before he could get a faculty position — mainly due to his latin blood. He got a faculty position well into his 50s, essentially going straight to department chair after the previous chair retired.)

So, let's talk about shaving off log factors (a long overdue topic on this blog). As one of my friends once said:
All this talk about shaving off log factors from complexity people, who aren't even capable of shaving on a log factor into those circuit lower bounds...
There is something very deep in this quote. Complexity theorists have gone way too long without making progress on proving hardness, their raison d'être. During this time, drawing targets around the few accidental arrows that hit walls became the accepted methodology. For instance, this led to an obsession about the polynomial / non-polynomial difference, where at least we had an accepted conjecture and some techniques for proving something.

Complexity theory is not about polynomial versus non-polynomial running times. Complexity theory is about looking at computational problems and classifying them "structurally" by their hardness. There are beautiful structures in data structures:
  • dictionaries take constant time, randomized. (But if we could prove that, deterministically, dynamic dictionaries need superconstant time per operation, it would be a very powerful message about the power of randomness — one that computer scientists could understand better than "any randomized algorithm in time n^c can be simulated deterministically in time n^(10c) if E requires exponential size circuits.")

  • predecessor search requires log-log time. The lower bound uses direct sum arguments for round elimination in communication complexity, a very "complexity topic." A large class of problems are equivalent to predecessor search, by reductions.

  • the hardness of many problems is related to the structure of a binary hierarchy. These have bounds of Θ(lg n) or Θ(lg n / lglg n), depending on interesting information-theoretic issues (roughly, can you sketch a subproblem with low entropy?). There are many nonobvious reductions between such problems.

  • we have a less sharp understanding of problems above the logarithmic barrier, but knowledge is slowly developing. For instance, I have a conjecture about 3-player number-on-forehead games that would imply n^Ω(1) for a large class of problems (reductions, again!). [This was in my Dagstuhl 2008 talk; I guess I should write it down at some point.]

  • the last class of problems are the "really hard" ones: high-dimensional problems for which there is a sharp transition between "exponential space and really fast query time" and "linear space and really slow query time." Whether or not there are reductions among these is a question that has preoccupied people for quite a while (you need some gap amplification, a la PCP). Right now, we can only prove optimal bounds for decision trees (via communication complexity), and some weak connections to NP (if SAT requires strongly exponential time, partial match requires weakly exponential space).
Ok, perhaps you simply do not care about data structures. That would be short-sighted (faster data structures imply faster algorithms; so you cannot hope for lower bounds for algorithms before proving lower bounds for data structures) — but it is a mistake that I can tolerate.

Let's look at algorithms:
  • Some problems take linear time (often in very non-obvious ways).

  • Sorting seems to take super-linear time, and some problems seem to be as fast as sorting. My favorite example: undirected shortest paths takes linear time, but for directed graphs it seems you need sorting. Why?

  • FFT seems to require Θ(n lg n) time. I cannot over-emphasize how powerful an interdisciplinary message it would be if we could prove this. There are related problems: if you can beat the permutation bound in external memory, you can solve FFT in o(n lg n). The permutation bound in external memory is, to me, the most promising attack on circuit lower bounds.

  • some problems circle around the Θ(n sqrt(n)) bound, for reasons unclear. Examples: flow, shortest paths with negative lengths, min convolution with a mask. But we do have some reductions (bipartite matching is as hard as flow, bidirectionally).

  • some problems circle around the n^2 bound. Here we do have the beginning of a classification: 3SUM-hard problems. But there are many more things that we cannot classify: edit distance and many other dynamic programs, min convolution (signal processing people thought hard about it), etc.

  • some problems have an n*sort(n) upper bound, and are shown to be X+Y-hard. Though the time distinction between n^2 and n*sort(n) is tiny, the X+Y question is as tantalizing as they get.

  • some problems can be solved in n^ω by fast matrix multiplication, while others seem to be stuck at n^3 (all pairs shortest paths, given-weight triangle). But interestingly, this class is related to the n^2 problems: if 3SUM needs quadratic time, given-weight triangle requires cubic time; and if min-convolution requires quadratic time, APSP requires cubic time.

  • what can we say about all those dynamic programs that run in time n^5 or something like that? To this party, TCS comes empty-handed.

  • how about problems in super-polynomial, sub-exponential running time? Ignoring this regime is why the misguided "polynomial / non-polynomial" distinction is often confused with the very meaningful "exponential hardness." There is much recent work here in fixed-parameter tractability. One can show, for instance, that k-clique requires n^Ω(k) time, or that some problems require 2^Ω(tree-width) time.

    And what can we say about k-SUM and all the k-SUM-hard problems (computational geometry in k dimensions)? This is an important illustration of the "curse of dimensionality" in geometry. I can show that if 3SAT takes exponential time, k-SUM takes n^Ω(k) time.

    Finally, what can we say about PTAS running times? In my paper with Piotr and Alex, we showed that some geometric problems require n^Ω~(1/ε^2) running time. This has a powerful structural message: the best thing to do is exhaustive search after a Johnson-Lindenstrauss projection.

  • inside exponential running time, there is the little-known work of [Impagliazzo-Paturi] showing, for instance, that sparse-3SAT is as hard as general 3SAT. Much more can be done here.
Lest we forget, I should add that we have no idea what the hard distributions might look like for these problems... Average case complexity cannot even talk about superpolynomial running times (a la hidden clique, noisy parity etc). 


This is what complexity theory is about. Sometimes, it needs to understand log factors in the running time. Sometimes, it needs to understand log factors in the exponent. Wherever there is some fascinating structure related to computational hardness, there is computational complexity.

While we construct exotic objects based on additive combinatorics and analyze the bias of polynomials, we should not forget that we are engaging in a temporary exercise of drawing a target around an arrow — a great exploration strategy, as long as it doesn't make us forget where we wanted to shoot the arrow in the first place.

And while complexity theory is too impotent right now to say anything about log factors, it should not spend its time poking fun at more potent disciplines.



Gchem Lecture 5: Nuclear Structure

Protons and neutrons in a nucleus are held together by the strong nuclear force. It's the strongest of the four fundamental forces because it must overcome the electrical repulsion between the protons.

Unstable nuclei are said to be radioactive, and they undergo a transformation to make themselves more stable--they do this by altering the number and ratio of protons and neutrons. This is called radioactive decay. There are 3 types: alpha, beta, and gamma. The nucleus that undergoes radioactive decay is called the parent, and the resulting (more stable) nucleus is called the daughter.

Alpha: When a large nucleus wants to become more stable by reducing the number of protons and neutrons, it emits an alpha particle--it contains 2 protons and 2 neutrons. This reduces the parent's atomic number by 2 and the mass number by 4.

Beta: there are 3 types: Beta (-), Beta (+), and electron capture. Each type involves the transmutation of a neutron into a proton (or vice versa) through the action of the weak nuclear force; beta particles are less massive than alpha particles and therefore less dangerous.

Beta (-): The unstable nucleus contains too many neutrons--> it converts a neutron into a proton and an electron (the Beta (-) particle), which is ejected; the resulting atomic number is increased by 1 but the mass number remains the same. This is the most common type of beta decay, so when the MCAT mentions beta decay without qualification, it means this.

Beta (+): The unstable nucleus contains too few neutrons--> it converts a proton into a neutron and a positron, which is ejected. The positron is like an electron, only positive. The resulting atomic number is 1 less than the parent's, but the mass number remains the same.

Electron Capture: The unstable nucleus captures an electron from the closest electron shell (n=1) and uses it to convert a proton into a neutron--> this causes the atomic number to be reduced by 1 while the mass number remains the same.

Gamma Decay: This is simply an expulsion of energy; a nucleus in an excited energy state (which is usually the case after a nucleus has undergone alpha or any type of beta decay) can "relax" to its ground state by emitting energy in the form of one or more photons. These photons are called gamma photons. They have neither mass nor charge. Their ejection from a radioactive atom changes neither the atomic number nor the mass number of the nucleus (i.e., it does not change the identity of the nucleus, unlike alpha or beta decay).
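
As a compact way to remember these rules, here is a small sketch that applies each decay mode to an (atomic number, mass number) pair; the function and the example nuclides are my own additions, not from the lecture.

```python
def decay(Z, A, mode):
    """Return (Z, A) of the daughter nucleus for the given decay mode."""
    if mode == "alpha":                          # lose 2 protons and 2 neutrons
        return Z - 2, A - 4
    if mode == "beta-":                          # neutron -> proton + electron
        return Z + 1, A
    if mode in ("beta+", "electron capture"):    # proton -> neutron
        return Z - 1, A
    if mode == "gamma":                          # photon only: no change
        return Z, A
    raise ValueError(f"unknown mode: {mode}")

print(decay(92, 238, "alpha"))   # (90, 234): U-238 -> Th-234
print(decay(6, 14, "beta-"))     # (7, 14):   C-14  -> N-14
```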

Quick note on nuclear binding energy: every nucleus that contains protons and neutrons has this. It is the energy that was released when the individual nucleons were bound together by the strong force to form the nucleus. It is also equal to the energy that would be required to break the intact nucleus up into its individual nucleons. In short, the greater the binding energy per nucleon, the more stable the nucleus.

Mass defect: when nucleons bind together to form a nucleus, some mass is converted to energy, so the mass of the combined nucleus is less than the sum of the masses of all its nucleons individually. The difference, ΔM, is the mass defect and will always be positive.
ΔM = (total mass of separate nucleons) - (mass of nucleus)
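
As a worked example of the mass defect (using standard approximate nucleon and helium-4 masses and the 931.5 MeV per u conversion, which are not from the lecture itself):

```python
m_proton, m_neutron = 1.007276, 1.008665   # masses in atomic mass units (u)
m_he4_nucleus = 4.001506                   # approximate mass of the He-4 nucleus (u)

delta_m = 2 * m_proton + 2 * m_neutron - m_he4_nucleus   # mass defect (u)
binding_energy = delta_m * 931.5                         # E = mc^2, in MeV

print(round(delta_m, 6))          # ~0.030376 u
print(round(binding_energy, 1))   # ~28.3 MeV total, about 7.1 MeV per nucleon
```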


Slope calculator

In this blog post we are going to learn about the mathematics concept of slope.
Slope: it is a measure of how a line is inclined relative to the horizontal.
In geometry, the slope of a line is the ratio of the vertical to the horizontal distance between any two points on it.
In differential calculus, the slope of a line tangent to a graph is given by the function's derivative and represents the rate of change of the function with respect to change in the independent variable.
In a graph of a position function, the slope of the tangent signifies an object's instantaneous velocity.
Point Slope Form:
Point slope form is a technique for representing or plotting an equation on graph paper on an x-y axis. It is used to take a graph and find the equation of a specific curve.
The equation for point slope form is given below:
y - y1 = m(x - x1)
Here is an example of how we can find the slope.

Example:

Find the slope from the given equation, y = 2x + 4

Solution:

Here, the equation is given in slope intercept form,

y = mx + c

where m = slope

y = 2x + 4

m = 2.

The answer is 2.
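
A minimal slope-calculator sketch along these lines (the function names are my own):

```python
def slope_from_points(x1, y1, x2, y2):
    """Slope = rise / run between two points (x1 and x2 must differ)."""
    return (y2 - y1) / (x2 - x1)

def point_slope_form(m, x1, y1):
    """Return the point-slope equation y - y1 = m(x - x1) as a string."""
    return f"y - {y1} = {m}(x - {x1})"

# For y = 2x + 4 the slope m is 2; two points on the line confirm it.
print(slope_from_points(0, 4, 1, 6))   # 2.0
print(point_slope_form(2, 0, 4))       # y - 4 = 2(x - 0)
```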

This is just an example of finding slope. Next time we will look at the slope calculator. Before learning about the slope calculator, you should work through a slope worksheet so that you understand slope.
You should also know the concept of an ogive, as it is an equally important concept in math.
