
Kerala University B.Tech S6 Question Answers of PPL MAY 2012

Kerala University B.Tech S6 Question Answers of Principles of Programming Languages               

SIXTH SEMESTER B.TECH DEGREE EXAMINATION, MAY 2012
(2008 SCHEME)
08.602: PRINCIPLES OF PROGRAMMING LANGUAGES
Time: 3 Hours                                                Max. Marks: 100

PART-A
Answer all questions. Each carries 4 marks.

1. What determines whether an object is allocated statically, on the stack, or in the heap?
2. What do you mean by a tail recursive function? What is its advantage?
3. Differentiate between enumeration and subrange data types.
4. What do you mean by bit vector representation of a set? What are the other implementation options?
5. What is an inline subroutine? What are its advantages and disadvantages?
6. What is the use of the scope resolution operator in C++?
7. Differentiate between normal order and applicative order evaluation.
8. What do you mean by a spin-then-yield lock?
9. Briefly explain pipes and redirection in scripting languages.
10. What do you mean by reflection?


PART-B


 

11. (a) How is a case statement internally implemented? What are the different search strategies that can be used? (10)
(b) Explain with examples the different types of logically controlled loops. (10)

                                                                      OR

12. (a) What is a type clash? Differentiate between type coercion and type casts. (5)
(b) Write short notes on arrays.

13. (a) What is a static chain? How is it maintained during a subroutine call? (10)

(b) Write short notes on special purpose parameters of subroutines. (10)

 

                                                                     OR

14. (a) Write short notes on
    i) Streams and Monads
    ii) Higher-order functions
    iii) Equality testing in Scheme
    iv) S-expressions

15. (a) Write short notes on thread implementation. (10)
(b) Explain scheduler-based synchronization. (10)

                                                                     OR

16.(a) Briefly explain the features of any one scripting language. (10)

(b) What do you mean by late binding of machine code? What are its advantages and disadvantages?

 

 

Solutions for Kerala University B.Tech S6 Question Answers of Principles of Programming Languages.

                                                             ANSWERS
PART-A

 

  1. What determines whether an object is allocated statically, on the stack or in the heap?

Object lifetimes generally correspond to one of three principal storage allocation mechanisms, used to manage the object's space:
(i) Static objects are given an absolute address that is retained throughout the program's execution.
(ii) Stack objects are allocated and de-allocated in last-in, first-out order, usually in conjunction with subroutine calls and returns.
(iii) Heap objects may be allocated and de-allocated at arbitrary times. They require a more general (and expensive) storage management algorithm.
Global variables are the obvious example of static objects. Numeric and string-valued constant literals are also statically allocated.

2. What do you mean by a tail recursive function? What is its advantage?

A tail recursive function is one in which additional computation never follows a recursive call: the return value is simply whatever the recursive call returns. For such functions, dynamically allocated stack space is unnecessary: the compiler can reuse the space belonging to the current iteration when it makes the recursive call.

Eg:

    int gcd(int a, int b)
    {
        if (a == b) return a;
        else if (a > b) return gcd(a - b, b);
        else return gcd(a, b - a);
    }

Because it is tail recursive, the function can be translated into machine code that does not allocate stack space for recursive calls.
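As an illustration (not part of the original answer), here is a C sketch of the loop a compiler can effectively produce for the tail-recursive gcd above: each recursive call becomes an update of the parameters followed by a jump back to the top of the loop.

    int gcd_iterative(int a, int b)
    {
        /* the tail calls gcd(a-b, b) and gcd(a, b-a) become parameter
           updates plus a branch back to the loop test */
        while (a != b) {
            if (a > b) a = a - b;
            else       b = b - a;
        }
        return a;
    }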

3. Differentiate between enumeration and subrange data types.

Enumerations were first introduced in the design of Pascal. They facilitate the creation of readable programs and allow the compiler to catch certain kinds of programming errors. An enumeration type consists of a set of named elements. In Pascal, one can write

    type weekday = (sun, mon, tue, wed, thu, fri, sat);

The values of an enumeration type are ordered, so comparisons are generally valid (mon < tue), and there is usually a mechanism to determine the predecessor or successor of an enumeration value. Values of an enumeration type are typically represented by small integers, usually a consecutive range of small integers starting at zero.

Like enumerations, subranges were first introduced in Pascal and are found in many subsequent Algol-family languages. A subrange type consists of a contiguous subset of the values of some discrete base type; in Pascal one can write, for example, type test_score = 0..100;. Subrange values are usually stored using the representation of the base type, so subranges whose values are large require large storage locations, even if the number of distinct values is small.
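For comparison, a small illustrative C analogue of the Pascal enumeration above: the named elements are represented by consecutive small integers starting at zero, so ordering comparisons behave as described.

    enum weekday { sun, mon, tue, wed, thu, fri, sat };

    int is_weekend(enum weekday d)
    {
        /* mon < tue holds because mon == 1 and tue == 2 */
        return d == sun || d == sat;
    }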

4. What do you mean by bit vector representation of a set? What are the other implementation options?

The most common implementation of a set employs a bit vector whose length is the number of distinct values of the base type. A set of characters, for example (in a language that uses ASCII), would be 128 bits, or 16 bytes, in length. A one in the kth position in the bit vector indicates that the kth element of the base type is a member of the set; a zero indicates that it is not. Operations on bit-vector sets can make use of fast logical instructions on most machines: union is bitwise or, intersection is bitwise and, and difference is bitwise not followed by bitwise and. There are many other ways to implement sets, including arrays, hash tables, and various forms of trees.
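As a sketch (assuming an ASCII character set and 32-bit words; the names here are hypothetical), the set operations map directly onto bitwise instructions in C:

    #include <stdint.h>

    typedef struct { uint32_t bits[4]; } charset;   /* 4 x 32 = 128 bits */

    void cs_add(charset *s, unsigned char c) {      /* set the c-th bit */
        s->bits[c / 32] |= 1u << (c % 32);
    }
    int cs_member(const charset *s, unsigned char c) {
        return (s->bits[c / 32] >> (c % 32)) & 1u;
    }
    charset cs_union(charset a, charset b) {        /* bitwise or */
        for (int i = 0; i < 4; i++) a.bits[i] |= b.bits[i];
        return a;
    }
    charset cs_intersect(charset a, charset b) {    /* bitwise and */
        for (int i = 0; i < 4; i++) a.bits[i] &= b.bits[i];
        return a;
    }
    charset cs_difference(charset a, charset b) {   /* bitwise not, then and */
        for (int i = 0; i < 4; i++) a.bits[i] &= ~b.bits[i];
        return a;
    }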

5. What is an inline subroutine? What are its advantages and disadvantages?

As an alternative to stack-based calling conventions, many language implementations allow certain subroutines to be expanded in-line at the point of call. In-line expansion avoids a variety of overheads, including space allocation, branch delays from the call and return, maintaining the static chain or display, and saving and restoring registers. It also allows the compiler to perform code improvements such as global register allocation and common subexpression elimination across the boundaries between subroutines, something that most compilers cannot do otherwise.

In many implementations the compiler chooses which subroutines to expand in-line and which to compile conventionally. In some languages, the programmer can suggest that particular routines be in-lined. In C++ and C99, the keyword inline can be prefixed to a function declaration:

    inline int max(int a, int b) { return a > b ? a : b; }

In Ada the programmer can request in-line expansion with a significant comment, or pragma. Probably the most important argument for in-line expansion is that it allows programmers to adopt a very modular programming style, with lots of tiny subroutines, without sacrificing performance. Its main disadvantage is that every expanded call site gets its own copy of the routine's code, which can increase overall code size.

 

6. What is the use of the scope resolution operator in C++?

The scope resolution operator is ::. It is used to identify and specify the context (class or namespace) to which an identifier refers, for example to define a class member outside the class body:

Eg:

    class stu
    {
    public:
        static int x;    // declared inside the class
    };

    int stu::x = 2;      // defined outside the class, using ::

7. Differentiate between normal order and applicative order evaluation.

Expressions can be evaluated in more than one order. In particular, one can choose to evaluate function arguments before passing them to a function, or to pass them unevaluated. The former option is called applicative-order evaluation; the latter is called normal-order evaluation. Like most imperative languages, Scheme uses applicative order in most cases; normal order is available in special cases.

Suppose, for example, that we have defined the following function:

    (define double (lambda (x) (+ x x)))

Evaluating the expression (double (* 3 4)) in applicative order (as Scheme does), we have

    (double (* 3 4))
    => (double 12)
    => (+ 12 12)
    => 24

Under normal-order evaluation we would have

    (double (* 3 4))
    => (+ (* 3 4) (* 3 4))
    => (+ 12 (* 3 4))
    => (+ 12 12)
    => 24

Here we end up doing extra work: normal order causes us to evaluate (* 3 4) twice.

8. What do you mean by a spin-then-yield lock?

A spin lock will suffice for the "low-level" lock that protects the ready list and condition queues, as long as every process runs on a different processor. However, it makes little sense to spin for a condition that can only be made true by some other process using the processor on which we are spinning. If we know that we are running on a uniprocessor, then we do not need a lock on the scheduler. If we might be running on a uniprocessor, however, or on a multiprocessor with fewer processors than processes, then we must be prepared to give up the processor if we are unable to obtain a lock. The easiest way to do this is with a "spin-then-yield" lock, first suggested by Ousterhout: spin for a bounded amount of time, and if the lock is still unavailable, yield the processor. On a multiprogrammed machine, it might also be desirable to relinquish the processor inside reschedule when the ready list is empty: though no other process of the current application will be able to do anything, overall system throughput may improve if we allow the operating system to give the processor to a process from another application.

Eg:

    type lock = Boolean := false

    procedure acquire_lock(ref L : lock)
        while not test_and_set(L)
            count := TIMEOUT
            while L
                count -:= 1
                if count = 0
                    os_yield()           -- relinquish processor
                    count := TIMEOUT

    procedure release_lock(ref L : lock)
        L := false
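A minimal C11 sketch of the same idea (a simplified variant of the pseudocode above; TIMEOUT and the function names are illustrative), using an atomic test-and-set flag and the POSIX sched_yield call to relinquish the processor:

    #include <stdatomic.h>
    #include <sched.h>      /* sched_yield */

    #define TIMEOUT 1000    /* spin budget before yielding (a tuning parameter) */

    typedef atomic_flag spin_then_yield_lock;   /* initialize with ATOMIC_FLAG_INIT */

    void acquire_lock(spin_then_yield_lock *L)
    {
        int count = TIMEOUT;
        while (atomic_flag_test_and_set_explicit(L, memory_order_acquire)) {
            if (--count == 0) {     /* spun long enough: give up the processor */
                sched_yield();
                count = TIMEOUT;
            }
        }
    }

    void release_lock(spin_then_yield_lock *L)
    {
        atomic_flag_clear_explicit(L, memory_order_release);
    }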

9. Briefly explain pipes and redirection in scripting languages.

One of the principal innovations of Unix was the ability to chain commands together, "piping" the output of one to the input of the next. Like most shells, bash uses the vertical bar character (|) to indicate a pipe. To count the number of figures in our directory, without distinguishing between EPS and PDF versions, we might type

    for fig in *; do echo ${fig%.*}; done | sort -u | wc -l

Here the first command, a for loop, prints the names of all files with their extensions (dot-suffixes) removed; sort -u removes duplicates and wc -l counts the lines. Like most shells, bash also allows output to be redirected to a file, or input to be read from a file. To create a list of figures, we might type

    for fig in *; do echo ${fig%.*}; done | sort -u > all_figs

The "greater than" sign indicates output redirection.

10. What do you mean by reflection?

In computer science, reflection is the ability of a program to examine and modify the structure and behavior (specifically the values, metadata, properties, and functions) of an object at runtime. Reflection is most commonly used in high-level virtual machine programming languages like Smalltalk and in scripting languages, and also in statically typed programming languages like Java, C#, and ML.

In object-oriented programming languages such as Java, reflection allows inspection of classes, interfaces, fields, and methods at runtime without knowing their names at compile time. It also allows instantiation of new objects and invocation of methods. Reflection can also be used to adapt a given program to different situations dynamically.

 

 

PART-B

Module - I

11. (a) How is a case statement internally implemented? What are the different search strategies that can be used? (10)

The case statements of Algol W and its descendants provide alternative syntax for a special case of nested if...then...else in which each condition compares the same integer expression to a different compile-time constant. The following code (written here in Modula-2)

    i := ...   (* potentially complicated expression *)
    IF i = 1 THEN
        clause_A
    ELSIF i IN {2, 7} THEN
        clause_B
    ELSIF i IN {3..5} THEN
        clause_C
    ELSIF i = 10 THEN
        clause_D
    ELSE
        clause_E
    END

is equivalent to the following, more structured, case statement:

    CASE ...   (* potentially complicated expression *) OF
        1:      clause_A
    |   2, 7:   clause_B
    |   3..5:   clause_C
    |   10:     clause_D
    ELSE        clause_E
    END

The if...then...else version is most naturally translated as follows:

        r1 := ...               -- calculate tested expression
        if r1 != 1 goto L1
        clause_A
        goto L6
    L1: if r1 = 2 goto L2
        if r1 != 7 goto L3
    L2: clause_B
        goto L6
    L3: if r1 < 3 goto L4
        if r1 > 5 goto L4
        clause_C
        goto L6
    L4: if r1 != 10 goto L5
        clause_D
        goto L6
    L5: clause_E
    L6:

The case statement, by contrast, is translated so that every arm ends by jumping to a common exit, and a single computed branch (whose target is calculated at L6) selects the proper arm:

        goto L6                 -- jump to code to compute address
    L1: clause_A
        goto L7
    L2: clause_B
        goto L7
    L3: clause_C
        goto L7
    L4: clause_D
        goto L7
    L5: clause_E
        goto L7
    L6: r1 := ...               -- computed target of branch
        goto *r1
    L7:

Rather than test its expression sequentially against a series of possible values, the case statement is meant to compute an address to which it jumps in a single instruction. The code at label L6 can take any of several forms. The most common of these simply indexes into an array:

    T:  &L1         -- tested expression = 1
        &L2
        &L3
        &L3
        &L3
        &L5
        &L2
        &L5
        &L5
        &L4         -- tested expression = 10

    L6: r1 := ...               -- calculate tested expression
        if r1 < 1 goto L5
        if r1 > 10 goto L5      -- L5 is the "else" arm
        r1 -:= 1                -- subtract off lower bound
        r2 := T[r1]
        goto *r2
    L7:

Here the "code" at label T is actually a table of addresses, known as a jump table. It contains one entry for each integer between the lowest and highest values, inclusive, found among the case statement labels. The code at L6 checks to make sure that the tested expression is within the bounds of the table (if not, it branches to the else arm at L5), indexes into the table, and branches to the address it finds there.

Alternative implementations:

A linear jump table is fast. It is also space-efficient when the overall set of case statement labels is dense and does not contain large ranges. It can consume an extraordinarily large amount of space, however, if the label set is non-dense or includes large value ranges. Alternative methods to compute the address to which to branch include sequential testing, hashing, and binary search. Sequential testing (as in an if...then...else statement) is the method of choice if the total number of case statement labels is small; it runs in time O(n), where n is the number of labels. A hash table is attractive if the range of label values is large but has many missing values and no large ranges; with an appropriate hash function it will run in time O(1). Unfortunately, a hash table requires a separate entry for each possible value of the tested expression, making it unsuitable for statements with large value ranges. Binary search can accommodate ranges easily; it runs in time O(log n), with a relatively low constant factor.
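To make the jump-table idea concrete, here is a hedged C sketch (purely illustrative; a real compiler branches to labels rather than calling through function pointers) that dispatches the example's values 1..10 through a ten-entry table, with an explicit bounds check falling back to the else arm:

    #include <stdio.h>

    static void clause_A(void)    { puts("clause A"); }
    static void clause_B(void)    { puts("clause B"); }
    static void clause_C(void)    { puts("clause C"); }
    static void clause_D(void)    { puts("clause D"); }
    static void clause_else(void) { puts("else clause"); }

    /* one entry per value from the lower bound (1) to the upper bound (10) */
    static void (*const jump_table[10])(void) = {
        clause_A,                             /* 1        */
        clause_B,                             /* 2        */
        clause_C, clause_C, clause_C,         /* 3, 4, 5  */
        clause_else,                          /* 6        */
        clause_B,                             /* 7        */
        clause_else, clause_else,             /* 8, 9     */
        clause_D                              /* 10       */
    };

    void dispatch(int i)
    {
        if (i < 1 || i > 10)        /* out of bounds: take the else arm */
            clause_else();
        else
            jump_table[i - 1]();    /* subtract lower bound, index, branch */
    }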

 

(b) Explain with examples the different types of logically controlled loops. (10)

In comparison to enumeration-controlled loops, logically controlled loops have many fewer semantic subtleties. The only real question to be answered is where within the body of the loop the terminating condition is tested. By far the most common approach is to test the condition before each iteration; the familiar while loop syntax to do this was introduced in Algol-W and retained in Pascal: while condition do statement. To obtain the effect of a while loop in Fortran 77, one must resort to gotos:

    10  if negated condition goto 20
        ...
        goto 10
    20

Post-test Loops

Occasionally it is handy to be able to test the terminating condition at the bottom of a loop. Pascal introduced special syntax for this case, which was retained in Modula but dropped in Ada. A post-test loop allows us, for example, to write

    repeat
        readln(line)
    until line[1] = '$';

instead of

    readln(line);
    while line[1] <> '$' do
        readln(line);

The difference between these constructs is particularly important when the body of the loop is longer. Note that the body of a post-test loop is always executed at least once. C provides a post-test loop whose condition works "the other direction" (i.e., "while" instead of "until"):

    do {
        line = read_line(stdin);
    } while (line[0] != '$');

Mid-test Loops

Sometimes it is useful to test the terminating condition in the middle of a loop. This "mid-test" can be accomplished with an if and a goto in most languages, but a more structured alternative is preferable. Modula (1) introduced a mid-test, or one-and-a-half, loop that allows a terminating condition to be tested as many times as desired within the loop:

    loop
        statement list
        when condition exit
        statement list
        when condition exit
        ...
    end

Using this notation, the Pascal construct

    while true do begin
        readln(line);
        if all_blanks(line) then goto 100;
        consume_line(line)
    end;
    100:

can be written as follows in Modula (1):

    loop
        line := ReadLine;
        when AllBlanks(line) exit;
        ConsumeLine(line)
    end
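For comparison, a hedged C sketch (illustrative only; the fgets-based line handling and blank-line test are assumptions of this example) showing the three shapes in one language: a pre-test while loop, a post-test do...while loop, and a mid-test loop built from an infinite loop plus break:

    #include <stdio.h>
    #include <string.h>

    void process_lines(FILE *in)
    {
        char line[256];

        /* pre-test (while) loop: condition checked before each iteration,
           so the body may execute zero times */
        while (fgets(line, sizeof line, in) != NULL && line[0] != '$')
            ;   /* consume lines up to (but not including) a '$' line */

        /* post-test (do ... while) loop: body always executes at least once */
        do {
            if (fgets(line, sizeof line, in) == NULL) break;
        } while (line[0] != '$');

        /* mid-test ("one-and-a-half") loop: exit from the middle with break */
        for (;;) {
            if (fgets(line, sizeof line, in) == NULL) break;
            if (strspn(line, " \t\n") == strlen(line)) break;   /* all blanks */
            /* consume the line here */
        }
    }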

12. (a) What is a type clash? Differentiate between type coercion and type casts. (5)

In a language with static typing, there are many contexts in which values of a specific type are expected. In the assignment a := expression, we expect the right-hand side to have the same type as a. In the expression a + b, the overloaded + symbol designates either integer or floating-point addition; we therefore expect either that a and b will both be integers, or that they will both be reals. In a call to a subroutine, foo(arg1, arg2, ..., argN), we expect the types of the arguments to match those of the formal parameters, as declared in the subroutine's header. When a program attempts to use a value of one type in a context that expects an incompatible type, a type clash occurs.

Suppose for the moment that we require in each of these cases that the types (expected and provided) be exactly the same. Then if the programmer wishes to use a value of one type in a context that expects another, he or she will need to specify an explicit type conversion (also sometimes called a type cast). Depending on the types involved, the conversion may or may not require code to be executed at run time. There are three principal cases:

1. The types would be considered structurally equivalent, but the language uses name equivalence. In this case the types employ the same low-level representation and have the same set of values. The conversion is therefore a purely conceptual operation; no code will need to be executed at run time.

2. The types have different sets of values, but the intersecting values are represented in the same way. One type may be a subrange of the other, for example, or one may consist of two's complement signed integers while the other is unsigned. If the provided type has some values that the expected type does not, then code must be executed at run time to ensure that the current value is among those that are valid in the expected type. If the check fails, a dynamic semantic error results. If the check succeeds, the underlying representation of the value can be used unchanged. Some language implementations may allow the check to be disabled, resulting in faster but potentially unsafe code.

3. The types have different low-level representations, but we can nonetheless define some sort of correspondence among their values. A 32-bit integer, for example, can be converted to a double-precision IEEE floating-point number with no loss of precision, and most processors provide a machine instruction to effect this conversion. A floating-point number can be converted to an integer by rounding or truncating, but fractional digits will be lost, and the conversion will overflow for many exponent values; again, most processors provide a machine instruction for this. Conversions between different lengths of integers can be effected by discarding or sign-extending high-order bytes.

Whenever a language allows a value of one type to be used in a context that expects another, the language implementation must perform an automatic, implicit conversion to the expected type. This conversion is called a type coercion. Like an explicit conversion, a coercion may require run-time code to perform a dynamic semantic check or to convert between low-level representations. Ada coercions sometimes need the former, though never the latter. Coercions are a controversial subject in language design: because they allow types to be mixed without an explicit indication of intent on the part of the programmer, they represent a significant weakening of type security. Fortran and C, which have relatively weak type systems, perform quite a bit of coercion; they allow values of most numeric types to be intermixed in expressions, and will coerce types back and forth "as necessary".
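A small illustrative C sketch of the difference: assigning an int to a double is an implicit coercion inserted by the compiler, while a cast such as (double)i is an explicit conversion written by the programmer.

    #include <stdio.h>

    int main(void)
    {
        int i = 3;
        double d = i;                  /* implicit coercion: int -> double */

        double q1 = i / 2;             /* integer division happens first: q1 == 1.0 */
        double q2 = (double)i / 2;     /* explicit cast forces real division: q2 == 1.5 */

        printf("%f %f %f\n", d, q1, q2);
        return 0;
    }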

(b) Write short notes on arrays.

Arrays are the most common and important composite data types. They have been a fundamental part of almost every high-level language, beginning with Fortran I. Unlike records, which group related fields of disparate types, arrays are usually homogeneous. Semantically, they can be thought of as a mapping from an index type to a component or element type. Some languages require that the index type be integer; many languages allow it to be any discrete type. Some languages place restrictions on the element type; most allow it to be any type at all.

Some languages allow non-discrete index types. The resulting associative arrays must generally be implemented with hash tables rather than with the more efficient contiguous allocation. Associative arrays in C++ are known as maps; they are supported by a standard library template. Java and C# have similar library classes.

In some languages one declares an array by appending subscript notation to the syntax that would be used to declare a scalar. In C:

    char upper[26];

In Fortran:

    character, dimension (1:26) :: upper
    character (26) upper     ! shorthand notation

In C the lower bound of an index range is always zero: the indices of an n-element array are 0 .. n-1. In Fortran, the lower bound of the index range is one by default. Fortran 90 allows a different lower bound to be specified if desired, using the notation shown in the first of the two declarations above. In other languages arrays are declared with an array constructor. In Pascal:

    var upper : array ['a'..'z'] of char;

In Ada:

    upper : array (character range 'a'..'z') of character;

Most languages make it easy to declare multidimensional arrays:

    matrix : array (1..10, 1..10) of real;     -- Ada
    real, dimension (10,10) :: matrix          ! Fortran

In some languages (e.g., Pascal, Ada, and Modula-3), one can also declare a multidimensional array by using the array constructor more than once in the same declaration. In Modula-3,

    VAR matrix : ARRAY [1..10], [1..10] OF REAL;

is syntactic sugar for

    VAR matrix : ARRAY [1..10] OF ARRAY [1..10] OF REAL;

A slice or section is a rectangular portion of an array. Fortran 90 provides extensive facilities for slicing, as do many scripting languages, including Perl, Python, Ruby, and R. Ada provides more limited support: a slice is simply a contiguous range of elements in a one-dimensional array. In most languages, the only operations permitted on an array are selection of an element and assignment. A few languages allow arrays to be compared for equality. Ada allows one-dimensional arrays whose elements are discrete to be compared for lexicographic ordering: A < B if the first element of A that is not equal to the corresponding element of B is less than that corresponding element. Ada also allows the built-in logical operators to be applied to Boolean arrays. Fortran 90 has a very rich set of array operations: built-in operations that take entire arrays as arguments. Because Fortran uses structural type equivalence, the operands of an array operator need only have the same element type and shape. In particular, slices of the same shape can be intermixed in array operations, even if the arrays from which they were sliced have very different shapes. Any of the built-in arithmetic operators will take arrays as operands; the result is an array, of the same shape as the operands, whose elements are the result of applying the operator to corresponding elements. As a simple example, A + B is an array each of whose elements is the sum of the corresponding elements of A and B.
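As a small, purely illustrative C sketch of the contiguous row-major layout commonly used for multidimensional arrays, the element matrix[i][j] of a 10 x 10 array lives (i * 10 + j) elements past the first one:

    #include <stdio.h>

    int main(void)
    {
        double matrix[10][10];            /* lower bounds are always 0 in C */

        matrix[2][5] = 1.0;               /* element selection and assignment */

        /* row-major addressing: &matrix[i][j] is &matrix[0][0] + (i*10 + j) */
        printf("%p\n%p\n",                /* both lines print the same address */
               (void *)&matrix[2][5],
               (void *)(&matrix[0][0] + (2 * 10 + 5)));
        return 0;
    }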

 

Module - II

13. (a) What is a static chain? How is it maintained during a subroutine call? (10)

In a language with nested subroutines and static scoping, objects that lie in surrounding subroutines, and are thus neither local nor global, can be found by maintaining a static chain. Each stack frame contains a reference to the frame of the lexically surrounding subroutine. This reference is called the static link. By analogy, the saved value of the frame pointer, which will be restored on subroutine return, is called the dynamic link. The static and dynamic links may or may not be the same, depending on whether the current routine was called by its lexically surrounding routine or by some other routine nested in that surrounding routine.

Maintenance of the subroutine call stack is the responsibility of the calling sequence (the code executed by the caller immediately before and after a subroutine call) and of the prologue and epilogue of the subroutine itself. Sometimes the term "calling sequence" is used to refer to the combined operations of the caller, the prologue, and the epilogue. Perhaps the trickiest division-of-labor issue pertains to saving registers. The ideal approach is to save precisely those registers that are both in use in the caller and needed for other purposes in the callee. Because of separate compilation, however, it is difficult to determine this intersecting set. A simpler solution is for the caller to save all registers that are in use, or for the callee to save all registers that it will overwrite.

In languages with nested subroutines, at least part of the work required to maintain the static chain must be performed by the caller, rather than the callee, because this work depends on the lexical nesting depth of the caller. The standard approach is for the caller to compute the callee's static link and to pass it as an extra, hidden parameter. The following subcases arise:

1. The callee is nested (directly) inside the caller. In this case, the callee's static link should refer to the caller's frame, so the caller simply passes its own frame pointer as the callee's static link.
2. The callee is k >= 0 scopes "outward", closer to the outer level of lexical nesting. In this case, all scopes that surround the callee also surround the caller, so the caller dereferences its own static link k times and passes the result as the callee's static link (a sketch of this chain walking appears below).
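A hedged C sketch of the idea (the frame layout and names here are hypothetical; real frames are laid out by the compiler): each frame records its static link, and a frame k lexical levels out is found by dereferencing the chain k times.

    /* hypothetical activation-record layout */
    struct frame {
        struct frame *static_link;     /* frame of the lexically enclosing routine */
        struct frame *dynamic_link;    /* caller's frame (saved frame pointer)     */
        /* ... saved registers, local variables, temporaries ...                   */
    };

    /* find the frame k lexical levels out; the same walk computes the static
       link to pass to a callee that is k scopes "outward" from the caller */
    struct frame *chase_static_chain(struct frame *fp, int k)
    {
        while (k-- > 0)
            fp = fp->static_link;
        return fp;
    }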

 

(b) Write short notes on special purpose parameters of subroutines. (10)

Conformant Arrays:

The binding time for array dimensions and bounds varies greatly from language to language, ranging from compile time to elaboration time to arbitrary times during execution. In several languages the rules for parameters are looser than they are for variables. A formal array parameter whose shape is finalized at run time is called a conformant, or open, array parameter. Because it passes arrays as pointers, C allows actual parameters of different shapes to be passed to the same formal parameter, but the called routine must then take care that its subscripts stay within the bounds of the actual array.

Default (optional) parameters:

The principal use of dynamic scope is to change the default behavior of a subroutine; the same effect can be achieved with default parameters. A default parameter is one that need not necessarily be provided by the caller; if it is missing, then a pre-established default value will be used instead. One common use of default parameters is in I/O library routines. In Ada, for example, the put routine for integers has the following declaration in the Text_IO library package:

    type field is integer range 0 .. integer'last;
    type number_base is integer range 2 .. 16;
    default_width : field := integer'width;
    default_base : number_base := 10;
    procedure put(item  : in integer;
                  width : in field := default_width;
                  base  : in number_base := default_base);

Here the declaration of default_width uses the built-in type attribute width to determine the maximum number of columns required to print an integer in decimal on the current machine.

Named parameters:

In all of our discussion so far, we have been assuming that parameters are positional: the first actual parameter corresponds to the first formal parameter, the second actual to the second formal, and so on.

In some languages, including Ada, Common Lisp, Fortran 90, Modula-3, and Python, this need not be the case. These languages allow parameters to be named. Named parameters are particularly useful in conjunction with default parameters. Positional notation allows us to write put(37, 4) to print "37" in a four-column field, but it does not allow us to print in octal in a field of default width: any call (with positional notation) that specifies a base must also specify a width, explicitly, because the width parameter precedes the base in put's parameter list.

Variable Numbers of Arguments:

Lisp, Python, and C and its descendants are unusual in that they allow the user to define subroutines that take a variable number of arguments. In C, printf can be declared as follows:

    int printf(char *format, ...)
    {
        ...
    }

The ellipsis (...) in the function header is part of the language syntax. It indicates that there are additional parameters following the format, but that their types and number are unspecified. Since C and C++ are statically typed, the additional parameters are not type safe. They are type safe in Common Lisp and Python, however, thanks to dynamic typing.
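A minimal C sketch of a user-defined variable-argument subroutine using <stdarg.h> (the explicit count parameter is a convention of this example; as the text notes, the extra arguments are not type safe, so the callee must trust the caller):

    #include <stdarg.h>

    int sum(int count, ...)
    {
        va_list ap;
        int total = 0;

        va_start(ap, count);              /* start after the last named parameter */
        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);     /* caller must really have passed ints  */
        va_end(ap);
        return total;
    }

    /* usage: sum(3, 10, 20, 30) returns 60 */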

14. (a) Write short notes on

i) Streams and Monads

A major source of side effects can be found in traditional I/O, including the built-in functions read and display of Scheme: read will generally return a different value every time it is called, and multiple calls to display, though they never return a value, must occur in the proper order if the program is to be considered correct. One way to avoid these side effects is to model input and output as streams: unbounded-length lists whose elements are generated lazily. When it needs an input value, the function my_prog forces evaluation of the car of input, and passes the cdr on to the rest of the program. To drive execution, the language implementation repeatedly forces evaluation of the car of output, prints it, and repeats:

    (define driver (lambda (s)
      (if (null? s)
          '()                       ; nothing left
          (begin
            (display (car s))
            (driver (cdr s))))))
    (driver output)

Streams formed the basis of the I/O system in early versions of Haskell. Unfortunately, while they successfully encapsulate the imperative nature of interaction at a terminal, they do not work very well for graphics or random access to files. More recent versions of Haskell employ a more general concept known as a monad. A monad is an abstract data type that supports a notion of sequencing; the values of the I/O monad take actions as arguments and return actions as results.

ii) Higher-order Functions

A function is said to be a higher-order function (also called a functional form) if it takes a function as an argument or returns a function as a result. Suppose, for example, that we want to be able to "fold" the elements of a list together, using an associative binary operator:

    (define fold (lambda (f l i)
      (if (null? l)
          i                         ; i is commonly the identity element for f
          (f (car l) (fold f (cdr l) i)))))

Now (fold + '(1 2 3 4 5) 0) gives us the sum of the first five natural numbers, and (fold * '(1 2 3 4 5) 1) gives us their product.

iii) Equality Testing in Scheme

Scheme provides three different equality-testing functions. The eq? function tests whether its arguments refer to the same object; eqv? tests whether its arguments are provably semantically equivalent; equal? tests whether its arguments have the same recursive structure, with leaves that are eqv?.

    (eq? a b)       ; do a and b refer to the same object?
    (eqv? a b)      ; are a and b provably semantically equivalent?
    (equal? a b)    ; do a and b have the same recursive structure?

The intent behind the eq? predicate is to make the implementation as fast as possible while still producing useful results for many types of operands. The intent behind eqv? is to provide as intuitively appealing a result as possible, for as wide a range of types as possible. The eq? predicate behaves as one would expect for Booleans, symbols (names), and pairs (things built by cons), but it can have implementation-defined behavior on numbers, characters, and strings.

 

iv) S-expressions

A program in Scheme takes the form of a list. In technical terms, we say that Lisp and Scheme are homoiconic: self-representing. A parenthesized string of symbols (in which parentheses are balanced) is called an S-expression, regardless of whether we think of it as a program or as a list. In fact, a program is a list, and can be constructed, deconstructed, and otherwise manipulated with all the usual list functions. Just as quote can be used to inhibit the evaluation of a list that appears as an argument in a function call, Scheme provides an eval function that can be used to evaluate a list that has been created as a data structure:

    (define compose
      (lambda (f g)
        (lambda (x) (f (g x)))))
    ((compose car cdr) '(1 2 3))

    (define compose2
      (lambda (f g)
        (eval (list 'lambda '(x) (list f (list g 'x)))
              (scheme-report-environment 5))))
    ((compose2 car cdr) '(1 2 3))

In the first of these declarations, compose takes as arguments a pair of functions f and g. It returns as its result a function that takes as parameter a value x, applies g to it, then applies f, and finally returns the result. In the second declaration, compose2 performs the same function, but in a different way. The function list returns a list consisting of its arguments; in the body of compose2, this list is the unevaluated expression (lambda (x) (f (g x))). When passed to eval, this list evaluates to the desired function. The second argument of eval specifies the referencing environment in which the expression is to be evaluated.

Module - III

15. (a) Write short notes on thread implementation. (10)

The threads of a concurrent program are usually implemented on top of one or more processes provided by the operating system. At one extreme, we could use a separate OS process for every thread; at the other extreme, we could multiplex all of a program's threads on top of a single process. On a personal computer with a single address space and relatively inexpensive processes, the one-process-per-thread extreme is often acceptable. In a simple language on a uniprocessor, the all-threads-on-one-process extreme may be acceptable. Commonly, language implementations adopt an in-between approach, with a potentially large number of threads running on top of a smaller number of processes. The problem with putting every thread on a separate process is that processes (even "lightweight" ones) are simply too expensive in many operating systems: because they are implemented in the kernel, performing any operation on them requires a system call, and because they are general purpose, they provide features that most languages do not need but have to pay for anyway. At the other extreme, there are two problems with putting all threads on top of a single process: first, it precludes parallel execution on a multiprocessor; second, if the currently running thread makes a system call that blocks, then none of the program's other threads can run, because the single process is suspended by the OS.

In the common two-level organization of concurrency, similar code appears at both levels of the system: the language run-time system implements threads on top of one or more processes in much the same way that the operating system implements processes on top of one or more physical processors. A multiprocessor operating system may attempt to ensure that processes belonging to the same application run on separate processors simultaneously, in order to minimize synchronization delays. Alternatively, it may give an application exclusive use of some subset of the processors (a technique called space sharing, or processor partitioning).

Typically, user-level threads are built on top of coroutines. Recall that coroutines are a sequential control-flow mechanism, designed for implementation on top of a single OS process. The programmer can suspend the current coroutine and resume a specific alternative by calling the transfer operation; the argument to transfer is typically a pointer to the context block of the coroutine to be resumed.

To turn coroutines into threads, we can proceed in a series of three steps. First, we hide the argument to transfer by implementing a scheduler that chooses which thread to run next when the current thread yields the processor. Second, we implement a preemption mechanism that suspends the current thread automatically on a regular basis, giving other threads a chance to run. Third, we allow the data structures that describe our collection of threads to be shared by more than one OS process, possibly on separate processors, so that threads can run on any of the processes.
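A toy C sketch of the first step only (a round-robin scheduler over coroutine-style context switches), built here on the POSIX ucontext interface; the names, stack sizes, and fixed thread count are all illustrative, and a real thread package would add preemption, blocking, and proper cleanup:

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define NTHREADS   2
    #define STACK_SIZE (64 * 1024)

    static ucontext_t main_ctx, thread_ctx[NTHREADS];
    static int current = 0;

    /* "transfer" with its argument hidden: pick the next thread round-robin */
    static void yield_cpu(void)
    {
        int prev = current;
        current = (current + 1) % NTHREADS;
        swapcontext(&thread_ctx[prev], &thread_ctx[current]);
    }

    static void worker(int id)
    {
        for (int i = 0; i < 3; i++) {
            printf("thread %d, step %d\n", id, i);
            yield_cpu();
        }
        /* falling off the end resumes uc_link (main_ctx) */
    }

    int main(void)
    {
        for (int t = 0; t < NTHREADS; t++) {
            getcontext(&thread_ctx[t]);
            thread_ctx[t].uc_stack.ss_sp   = malloc(STACK_SIZE);
            thread_ctx[t].uc_stack.ss_size = STACK_SIZE;
            thread_ctx[t].uc_link = &main_ctx;      /* where to go when a worker returns */
            makecontext(&thread_ctx[t], (void (*)(void))worker, 1, t);
        }
        swapcontext(&main_ctx, &thread_ctx[0]);     /* start the first thread */
        return 0;
    }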

(b) Explain scheduler-based synchronization. (10)

The problem with busy-wait synchronization is that it consumes processor cycles, cycles that are therefore unavailable for other computation. Busy-wait synchronization makes sense only if (1) one has nothing better to do with the current processor, or (2) the expected wait time is less than the time that would be required to switch contexts to some other thread and then switch back again. To ensure acceptable performance on a wide variety of systems, most concurrent programming languages employ scheduler-based synchronization mechanisms, which switch to a different thread when the one that was running blocks. We consider the three most common forms of scheduler-based synchronization: semaphores, monitors, and conditional critical regions. In each case, the mechanism removes the waiting thread from the scheduler's ready list, returning it only when the awaited condition is true.

Semaphores

Semaphores are the oldest of the scheduler-based synchronization mechanisms. A semaphore is basically a counter with two associated operations, P and V. A thread that calls P atomically decrements the counter and then waits until it is nonnegative. A thread that calls V atomically increments the counter and wakes up a waiting thread, if any. It is generally assumed that semaphores are fair, in the sense that threads complete P operations in the same order they start them.
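A minimal sketch of P and V using POSIX counting semaphores (sem_wait blocks the caller until the count can be decremented; sem_post increments it and wakes a waiter, if any); the wrapper names are illustrative:

    #include <semaphore.h>

    sem_t s;

    void init_sem(void) { sem_init(&s, 0, 1); }  /* initial count 1: a binary semaphore */
    void P(void)        { sem_wait(&s); }        /* decrement, blocking if it would go negative */
    void V(void)        { sem_post(&s); }        /* increment and wake a waiting thread, if any */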

Monitors

Semaphores suffer from two principal problems. First, their operations are simply subroutine calls, and it is easy to leave one out. Second, unless they are hidden inside an abstraction, uses of a given semaphore tend to get scattered throughout a program, making it difficult to track them down for purposes of software maintenance.

Monitors were suggested by Dijkstra as a solution to these problems. A monitor is a module or object with operations, internal state, and a number of condition variables. Only one operation of a given monitor is allowed to be active at a given point in time; a thread that calls a busy monitor is automatically delayed until the monitor is free. On behalf of its calling thread, any operation may suspend itself by waiting on a condition variable. An operation may also signal a condition variable, in which case one of the waiting threads is resumed, usually the one that waited first.
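Languages without built-in monitors often approximate them with a mutex (for the one-operation-at-a-time rule) and condition variables (for wait and signal). A hedged pthreads sketch of a monitor-like counter, purely illustrative:

    #include <pthread.h>

    static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonzero      = PTHREAD_COND_INITIALIZER;
    static int count = 0;

    void deposit(void)                        /* a monitor "operation" */
    {
        pthread_mutex_lock(&monitor_lock);    /* only one thread active at a time */
        count++;
        pthread_cond_signal(&nonzero);        /* resume one waiting thread, if any */
        pthread_mutex_unlock(&monitor_lock);
    }

    void withdraw(void)
    {
        pthread_mutex_lock(&monitor_lock);
        while (count == 0)                    /* suspend on the condition variable */
            pthread_cond_wait(&nonzero, &monitor_lock);
        count--;
        pthread_mutex_unlock(&monitor_lock);
    }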

Conditional Critical Regions

Conditional critical regions (CCRs) are another alternative to semaphores, proposed by Brinch Hansen at about the same time as monitors. A critical region is a syntactically delimited critical section in which code is permitted to access a protected variable. A conditional critical region also specifies a Boolean condition, which must be true before control will enter the region:

    region protected_variable when Boolean_condition do
        ...
    end region

No thread can access a protected variable except within a region statement for that variable, and any thread that reaches a region statement waits until the condition is true and no other thread is currently in a region for the same variable. As with nested monitor calls, the programmer needs to worry about deadlock.

16. (a) Briefly explain the features of any one scripting language. (10)

Perl permits optional declarations, primarily as a sort of compiler-checked documentation, and it can be run in a mode (use strict 'vars') that requires declarations. In Perl all variables are global unless explicitly declared. Perl uses the prefix characters $, @, and % to indicate whether a name refers to a scalar, an array, or a hash. Perl has evolved over the years: at first there were only global variables. Locals were soon added for the sake of modularity, so that a subroutine with a variable named i would not have to worry about modifying a global i that was needed elsewhere in the code.

Any variable that is not declared is global in Perl by default. Variables declared with the local operator are dynamically scoped; variables declared with the my operator are statically scoped.

Eg:

    sub outer($) {                   # must be called with a scalar arg
        $sub_A = sub {
            print "sub_A $lex, $dyn\n";
        };
        my $lex = $_[0];             # static local, initialized to first arg
        local $dyn = $_[0];          # dynamic local, initialized to first arg
        $sub_B = sub {
            print "sub_B $lex, $dyn\n";
        };
        print "outer $lex, $dyn\n";
        $sub_A->();
        $sub_B->();
    }

    $lex = 1;  $dyn = 1;
    print "main $lex, $dyn\n";
    outer(2);
    print "main $lex, $dyn\n";

Like the "true" regular expressions of formal language theory, extended REs support concatenation, alternation, and Kleene closure; parentheses are used for grouping.

/ab(cd|ef)g*/ matches abcd, abcdg, abefg, abefgg, abcdggg, etc. Several other quantifiers (generalizations of Kleene closure) are also available:

? indicates zero or one repetitions, + indicates one or more repetitions, {n} indicates exactly n repetitions, {n,} indicates at least n repetitions, and {n,m} indicates n to m repetitions. Two zero-length assertions, ^ and $, match only at the beginning and end, respectively, of the target string. Thus while /abe/ will match abe, abet, babe, and label, /^abe/ will match only the first two of these, /abe$/ will match only the first and the third, and /^abe$/ will match only the first. Extended REs are a central part of Perl; the built-in =~ operator is used to test for matching.

Like double-quoted strings, regular expressions in Perl support variable interpolation. Any dollar sign that does not immediately precede a vertical bar, closing parenthesis, or end of string is assumed to introduce the name of a Perl variable, whose value as a string is expanded prior to passing the pattern to the regular expression evaluator.

Perl does not generally require the declaration of types for variables; it performs extensive run-time checks to make sure that values are never used in inappropriate ways. In general, Perl takes the position that programmers should check for the errors they care about, and that in the absence of such checks the program should do something reasonable:

    $a = "4";
    print $a . 3 . "\n";    # '.' is concatenation: prints 43
    print $a + 3 . "\n";    # '+' is addition: prints 7

It turns out that a global name in Perl can have multiple independent meanings: it is possible, for example, to use $foo, @foo, %foo, and &foo, with different meanings, all in the same program. To keep track of these multiple meanings, Perl interposes a level of indirection between the symbol table entry for foo and the various values foo may have. The intermediate structure is called a typeglob.

(b) What do you mean by late binding of machine code? What are its advantages and disadvantages?

A binding is an association between two things, such as a name and the thing it names. Binding time is the time at which a binding is created or, more generally, the time at which any implementation decision is made. Binding at run time is known as late binding. In computing, just-in-time (JIT) compilation, also known as dynamic translation, is a method to improve the runtime performance of computer programs based on byte code. Since byte code is interpreted, it executes more slowly than compiled machine code, unless it is actually compiled to machine code, which could be done before execution (making program loading slow) or during execution. In the latter case, which is the basis for JIT compilation, the program is stored in memory as byte code, but the code segment currently running is compiled to physical machine code just before (or as) it runs, in order to run faster. JIT compilers represent a hybrid approach, with translation occurring continuously as with interpreters, but with caching of translated code to minimize performance degradation. JIT compilation also offers other advantages over statically compiled code, such as handling of late-bound data types and the ability to enforce security guarantees. JIT builds upon two earlier ideas in run-time environments: bytecode compilation and dynamic compilation. It converts code at runtime prior to executing it natively, for example bytecode into native machine code.

An incremental compiler is used in POP-2, POP-11, Forth, some versions of Lisp (e.g., Maclisp), and at least one version of the ML programming language (Poplog ML). This requires the compiler for the programming language to be part of the runtime system. In consequence, source code can be read in at any time, from the terminal, from a file, or possibly from a data structure constructed by the running program, and translated into a machine code block or function that is then immediately available for use by the program. Because of the need for speed of compilation during interactive development and testing, the compiled code is likely not to be as heavily optimized as code produced by a standard 'batch compiler', which reads in source code and produces object files that can subsequently be linked and run.

Portability is one of the prime reasons for late binding of machine code: code in a bytecode or scripting language that is independent of the platform is mobile code. Mobile code carries a variety of risks; to execute mobile code in a risk-free environment, sandboxing needs to be done.
