Formal Methods II

CSE3305




Tute 3


Perturb
'Die': like shaking up a die.
We're shaking up where our eyes land on the search space.

LOCAL SEARCH
LOOP:
At some point p1 = (x, y, ...)
Perturb: pick a nearby point p2
Compare it with the current point: is eval(p2) < eval(p1)?
If better, move there.
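That loop can be sketched as a minimal hill climber in Python (the quadratic objective, step size, and iteration count here are made-up assumptions for illustration, not from the lecture):

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def local_search(eval_fn, start, step=0.1, iterations=1000):
    """Repeatedly perturb the current point; move whenever the neighbour is better."""
    current = start
    for _ in range(iterations):
        neighbour = current + random.uniform(-step, step)  # perturb: pick a nearby point
        if eval_fn(neighbour) < eval_fn(current):          # compare: lower is better here
            current = neighbour                            # if better, move there
    return current

# Minimise a simple quadratic; the true optimum is x = 3.
best = local_search(lambda x: (x - 3) ** 2, start=0.0)
```

Note it only ever looks at one neighbour at a time, which is exactly why it can get stuck at a local optimum.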

We're moving through the space of possible paths.
  A
 / \
B   C
 \ /
  D

The search space is
ABD
ACD

Local Optimum: of the possible solutions we can see, the one that is the best. There might be a better one elsewhere, in parts of the search space we can't see, but for the moment this one is the best within its neighbourhood. You can get stuck at a local optimum, because you cannot consider everything, so you end up missing the other, better solutions.

The book considers partial algorithms to be greedy algorithms: you assemble the solution piece by piece.
Sorting algorithms take pieces of a problem and solve them. You can completely solve this piece and that piece, then merge them. Combining solutions: divide and conquer.

Recursion: you have to keep breaking the problem down, then when you get to the end (the base case) you carry the results all the way back up.

D&C(P)
    Break P into P1, P2
    S1 = D&C(P1)
    S2 = D&C(P2)
    S = Combine(S1, S2) // you've still got work to do as you come back up
    Return S
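Merge sort is this pattern exactly; a sketch in Python, with the combine step done on the way back up:

```python
def merge_sort(p):
    if len(p) <= 1:             # base case: nothing left to break down
        return p
    mid = len(p) // 2
    s1 = merge_sort(p[:mid])    # S1 = D&C(P1)
    s2 = merge_sort(p[mid:])    # S2 = D&C(P2)
    # Combine S1, S2: there's still work to do as you come back up.
    s, i, j = [], 0, 0
    while i < len(s1) and j < len(s2):
        if s1[i] <= s2[j]:
            s.append(s1[i]); i += 1
        else:
            s.append(s2[j]); j += 1
    s.extend(s1[i:])
    s.extend(s2[j:])
    return s
```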

Local search - only a subset of each step is to be examined.

It's an issue of infeasibility (time/memory). Do local search when an exhaustive search would take too long.
But often you might prefer to settle for a local optimum even when the exhaustive approach is feasible. There's just no point doing more.

The search space is the combination of all clothes.
Dressing yourself is a local search - you're not going to look at all your shirts. When you get to a shirt that is good enough, that will probably do.
It's a partial solution.

< S, P, T > (shirt, pants, tie): ignore T and you're ignoring a whole dimension, which makes the solution partial. Ignoring some points makes the search local.

Not wearing pants is probably not a valid solution.


Chess.
It's probably going to be a local search. You have an exhaustive search for the first next move, then
the innermost eval function looks at the entire state of the board and can say what's a good state and what's a bad state.
You have an exhaustive search on the outside, which then calls an eval function that in turn calls a local search.

In Pac-Man, the graph isn't weighted.

d) It becomes more complicated with different terrain. The graph becomes weighted, and we have to use a local search.

Delegation IS D&C. Parallel computing

Doing a jigsaw puzzle. Find the edge pieces.


Tute Sheet 1


State
Water has states of Solid, Liquid, Gas. It can be obvious what things have state in the physical world.

We're talking about finite state automata set in a space. The automaton changes state; it's the machine that is moving through the space.

A graph has edges and vertices.
Whenever you have a collection of things we call it a 'set'.


State Space

We will be interested in time/space complexity mostly.
We talk about THE PROGRAM rather than THE PROGRAM ON THIS COMPUTER. We want to analyse the program.

'It takes 300 instructions for this program to run.' This cannot be used to measure complexity because the INPUT changes.

We measure complexity with O notation because
a) it discounts insignificant terms like +n or +x when N is large, and
b) it accounts for the variance between languages. Chances are that something that takes 20 instructions in one language will take something on the order of 20 instructions in any other language.

So we use O notation to measure the complexity of algorithms (programs). We also use it for space too, so we use it for measuring both space and time complexity.

Constraints
Limits on the solution.
Hard constraints cannot be broken; if you break one, the solution is wrong. Soft constraints can be broken and the solution can still be correct.
Optimal: the best!
Feasible: is it possible to do it within these constraints? What can we physically manage? Constraints as in time / money. For example, 100-bit encryption can be broken with a very simple algorithm which exists; it just takes a couple of thousand years. It's not IMPOSSIBLE but INFEASIBLE.

Search Problem
We're searching for a solution in a search problem: going through all the possible solutions and finding the right one. Everything could be framed as a search problem. Trying to find which photos to print is a search problem.
Heuristic
An inaccurate way of choosing what to do next: "I think it's most likely to be here, so I'll look here first."
Web search does it for you, pulling out what it thinks you need from all those millions of pages (with ranking). Pundits do it on the horses.

Natural numbers include zero. Positive numbers do not.


Supplementary Notes 1


Homomorphism and isomorphism

Supplementary notes 1.

List of 7 Definitions for the class.
Probably providing the building blocks for the bigger pieces that will come in this class.


Lecture 2


The texts are available at the Hargrave. The main text is about $100.

Check the online list of books for an idea of things to read if you want.

So that's problem solving... [the prev lecture]

Greedy search is the simplest kind of heuristic search.

We have the Travelling Salesman Problem (TSP)
Constraints
You want to minimise distance.
You only visit each city once.

You can make either of the constraints soft or hard.
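As a sketch, the standard greedy (nearest-neighbour) heuristic for the TSP keeps "visit each city once" as a hard constraint and greedily chases "minimise distance"; the distance matrix here is made up for illustration:

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: always move to the closest unvisited city."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[here][c])  # greedy step
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four cities with made-up symmetric distances.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
tour = nearest_neighbour_tour(dist)
```

Being greedy, it satisfies the hard constraint but gives no guarantee the total distance is minimal.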

SAT stands for satisfiability.
Given a Boolean expression, is there an assignment of variables which makes the expression true?
Can you plug in values to make the evaluation of the expression true?

In order to solve this problem, you don't leap through the SEARCH SPACE but go through it bit by bit, flipping itty bitty bits.
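That bit-flipping idea can be sketched as a local search in the style of GSAT. The tiny CNF formula here is made up: each clause is a list of literals, where a positive integer i means variable i and a negative one means its negation.

```python
import random

def flip_search(clauses, n_vars, max_flips=1000):
    """Local search for SAT: flip one variable at a time, keep flips that don't hurt."""
    def n_satisfied(assign):
        # A clause is satisfied if any literal in it is true under the assignment.
        return sum(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
    assign = [random.choice([True, False]) for _ in range(n_vars)]
    for _ in range(max_flips):
        if n_satisfied(assign) == len(clauses):
            return assign                        # every clause satisfied
        i = random.randrange(n_vars)             # perturb: flip one itty bitty bit
        neighbour = assign[:]
        neighbour[i] = not neighbour[i]
        if n_satisfied(neighbour) >= n_satisfied(assign):
            assign = neighbour                   # keep it if it's no worse
    return assign if n_satisfied(assign) == len(clauses) else None

random.seed(0)  # fixed seed so the sketch is repeatable
# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3) -- a made-up satisfiable formula
model = flip_search([[1, -2], [2, 3], [-1, -3]], n_vars=3)
```

The count of satisfied clauses is the eval function here, and each flip is one perturbation.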

If you were going to start a search tree it would look like

You can change an expression into this form easily (polynomial time)

Then you do a few tricks and hey presto you've got something to put in your eval function for this problem ;-)

NLP
That's the messy mathematical one, but I think I actually kind of like it. I was looking at it and imagining it. Sweet, I think I'm finally 'getting' math.

Next Lecture. Ahem.

WHY analyse Algorithms?

You need to know how fast it's going to go, and how much space it's going to take.

It's part of the science of computer science.


There are different common orders for algorithms.
Constant: can't really do much with this. Return 3. Bleh.
Logarithmic
Linear
Log linear
Polynomial: generally speaking, this is about the limit of useful / reasonable in terms of time and space.
Exponential
Superexponential

So we usually want to know which one our algorithm falls into.
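To see why the class matters more than the constants, just plug n = 1000 into each order (a quick illustrative calculation):

```python
import math

n = 1000
steps_for = {
    "constant": 1,
    "logarithmic": math.log2(n),          # ~10
    "linear": n,                          # 1,000
    "log linear": n * math.log2(n),       # ~10,000
    "polynomial (n^2)": n ** 2,           # 1,000,000
    "exponential (2^n)": 2 ** n,          # a ~302-digit number
    "superexponential (n!)": math.factorial(n),
}
```

Everything up to polynomial stays in the millions; the last two leave the realm of the feasible entirely.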

When we have to look at everything, we have something like O(n).

We can also think of the average case.
U[1, n] (the target is uniformly distributed across the list).
Intuitively it's going to be somewhere in the middle, so about n/2 comparisons on average. We can prove it mathematically but we're not going to bother.

What about an ordered list?
Someone's gone to all the trouble of ordering it. With the same algorithm, the worst case won't change: still linear. But in some cases the average will change, because you can stop as soon as you pass the point where the target would have been.
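A sketch of both scans: the plain linear search, and the early exit an ordered list allows (which is what can improve the average case):

```python
def linear_search(lst, target):
    """Scan left to right; worst case looks at every element, sorted or not."""
    for i, x in enumerate(lst):
        if x == target:
            return i
    return -1

def linear_search_sorted(lst, target):
    """Same scan on an ordered list, but we can give up once we pass the target."""
    for i, x in enumerate(lst):
        if x == target:
            return i
        if x > target:      # everything after here is bigger too
            return -1
    return -1
```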

"Logs come from Trees"
If you can look down one branch, can have log. It's also called 'divide an conquer'
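Binary search is the classic case: each comparison lets you look down one branch and throw the other half away, hence O(log n). A sketch:

```python
def binary_search(lst, target):
    """Each step discards half of what remains: O(log n) comparisons on a sorted list."""
    lo, hi = 0, len(lst) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if lst[mid] == target:
            return mid
        if lst[mid] < target:
            lo = mid + 1    # look down the right branch
        else:
            hi = mid - 1    # look down the left branch
    return -1
```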

When the algorithms are easy, the analysis is easy.

So here's an algorithm to analyse algorithms.


You've found your problem, you've modelled it, and made your algorithm to solve it... then you want to know if it's any good.

We measure process time, not calendar time, and you make sure only the one process is running. You track what the input sizes are, and then you come up with a crude kind of graph mapping size to time. Later we'll use this to think about big O notation etc.
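A minimal sketch of that measurement loop, using Python's time.process_time (process time, not calendar time); the choice of sorted as the program under test is just an illustrative assumption:

```python
import time

def measure(fn, inputs):
    """Map input size -> process time for one run of fn on each input."""
    timings = {}
    for data in inputs:
        start = time.process_time()
        fn(data)
        timings[len(data)] = time.process_time() - start
    return timings

# Time the built-in sort on reverse-ordered lists of growing size.
timings = measure(sorted, [list(range(n, 0, -1)) for n in (100, 1000, 10000)])
```

Plotting size against time gives the crude graph the notes describe, which you can then compare against candidate orders of growth.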

Empirical Program Testing.
There are problems where the analysis is far too difficult, so this is the only way to do it.


The best exact solutions are exponential.


Lecture 1: Introduction (a heavy one)


The goal of this subject is to enhance our abilities to solve problems algorithmically.

Specifically:

  • problem solving

  • controlling space / time

  • heuristic search methods

  • stochastic search (probabilities)

  • stochastic simulation

  • limits of computation (what CAN be solved?)

  • formal complexity analysis


So WHAT IS COMPUTER SCIENCE? What is real computer science?

Science:

  • Discovering the nature of the world

  • emphasis on empirical methods

  • Science is about discovery

So there are these other subjects that float around the core subjects in CS, like AI, or soft eng. They don't deal with the real 'scientific' stuff. This is one opinion. So AI is one of the dispensable subjects.

One of the applications of CS.

So what is at the core? Computation. Turing machines. All that stuff.

So is computer science really a science? OR is it just like 'Hygiene Science' or 'Secretarial Science'?

Yes it is, because we can set out to discover what IS computable, and what the limits of computers are.

Just because we created something doesn't necessarily mean that we know what it does. Do you know what everything in a computer program you have created does and why? What can we do with computers?

We can also create applications through engineering. So computer science is also about applying computers to practical problem solving.

We're going to emphasise search as the algorithmic side of problem solving.

Problem => Model => Solution

Here's where the lecture hit its 'interesting peak'.

Def 1:

You have a problem when the present state of a system is not equal to its desired (goal) state (MF, p. 1)

So the first step is to model the problem.

Say you were going to model a star that was about to go nova. You could create model objects and properties / relationships for every particle, but you would be waiting forever for the dang thing to compute. The time complexity is absurd. So we have to decrease the resolution of the model somewhat. Now, there are a lot of things happening in the real version that don't get modeled in your computational model. You then manipulate that lower res computational model to create your solution.

This is what they mean by 'the details don't matter, they've been mostly omitted anyway!'.

Thus, in rough English, Def 2:

A model M of P is:

An Abstract representation of P

Abstract: e.g., mathematical, algorithmic

Abstractions lose (hopefully: irrelevant) details

States and operations of M parallel states and changes of P

Manipulating M (searching the state space of M) allows us to search for a goal state of M

Def 3:

A solution for a problem in M is a sequence of operations from an initial state to a goal state.

Thus problem solving is

  1. Find a problem (identify)

  2. Develop a model

  3. Search the model's state until a solution is found

This was illustrated in class with the example of finding the route to another city.

But we need to allow feasible searches and solutions.

OK, so this means it has to run in polynomial time (and space) or less, when we think in general like computer scientists.

Or we can think more generally and say that it is feasible when the solution meets the problem constraints.

State Space Complexity is simply the number of possible states a model has. Sometimes if repeated states are allowed, the search space complexity is infinite.
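For instance, if a model's state is n independent boolean variables, it has 2^n possible states, which is why state spaces blow up so fast. A tiny illustration:

```python
def state_space_size(n_boolean_vars):
    """Each independent boolean variable doubles the number of possible states."""
    return 2 ** n_boolean_vars
```

Ten variables already give over a thousand states; a hundred give more states than you could ever enumerate.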

Complexity is sensitive to the choice of implementation. In other words, if you choose badly, you're going to have an awful program that hogs all your resources. It's way too complex.

I'll stop


here

