How to run the engine on the GPU?

I'm studying a simulator called Simian (http://pujyam.github.io/simian/), and I am basing my work mainly on this article: https://ieeexplore.ieee.org/document/7408239/

In that article, one of whose authors is the engine's developer, it is stated that the main loop of the program, the "event scheduler", can be made to run on the GPU. I would like some guidance on how to do this...

Here is the main engine loop:

while globalMinLeft < self.endTime:
    epoch = globalMinLeft + self.minDelay

    self.minSent = self.infTime
    while len(self.eventQueue) > 0 and self.eventQueue[0][0] < epoch:
        (time, event) = heapq.heappop(self.eventQueue) #Next event
        self.now = time #Advance time

        #Simulate event
        entity = self.entities[event["rx"]][event["rxId"]]
        service = getattr(entity, event["name"])
        service(event["data"], event["tx"], event["txId"]) #Receive

        numEvents = numEvents + 1

    if self.size > 1:
        globalMinSent = self.MPI.allreduce(self.minSent, self.MPI.MIN) #Synchronize minSent
        while True: #Busy wait for incoming messages; synchronize
            while self.MPI.iprobe(): #Outer loop needed since, per the MPI standard, MPI_Iprobe can give false negatives
                remoteEvent = self.MPI.recvAnySize()
                heapq.heappush(self.eventQueue, (remoteEvent["time"], remoteEvent))
            minLeft = self.infTime
            if len(self.eventQueue) > 0: minLeft = self.eventQueue[0][0]
            globalMinLeft = self.MPI.allreduce(minLeft, self.MPI.MIN) #Synchronize minLeft
            if globalMinLeft <= globalMinSent: break #Global queue is not ahead in time of globalMinSent
    else:
        minLeft = self.infTime
        if len(self.eventQueue) > 0: minLeft = self.eventQueue[0][0]
        globalMinLeft = min(self.minSent, minLeft)

if self.size > 1:
    self.MPI.barrier()
    totalEvents = self.MPI.allreduce(numEvents, self.MPI.SUM)
else:
    totalEvents = numEvents

if self.rank == 0:
    elapsedTime = timeLib.clock() - startTime
    print "SIMULATION COMPLETED IN: " + str(elapsedTime) + " SECONDS"
    print "SIMULATED EVENTS: " + str(totalEvents)
    print "EVENTS PER SECOND: " + str(totalEvents/elapsedTime)
    print "==========================================="
  • Getting a simulation, whatever it may be, to run on the GPU can be complex, and it is in fact the topic of the second paper you cited. Answering that here would be work equivalent to redoing the paper, and that without a well-specified goal in view.

No answers
